
15.ai: Difference between revisions

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.
}}


'''15.ai''' was a free non-commercial ] that used ] to generate ] voices of fictional characters from ].<ref name="UDN-2021">{{cite web |last=遊戲 |first=遊戲角落 |date=January 20, 2021 |title=這個AI語音可以模仿《傳送門》GLaDOS講出任何對白!連《Undertale》都可以學 |trans-title=This AI Voice Can Imitate Portal's GLaDOS Saying Any Dialog! It Can Even Learn Undertale |url=https://game.udn.com/game/story/10453/5189551 |url-status=live |access-date=December 18, 2024 |website=] |language=zh-tw |quote= |trans-quote= |archive-date=December 19, 2024 |archive-url=https://web.archive.org/web/20241219214330/https://game.udn.com/game/story/10453/5189551}}</ref><ref name="Yoshiyuki-2021">{{cite web |last=Yoshiyuki |first=Furushima |date=January 18, 2021 |title=『Portal』のGLaDOSや『UNDERTALE』のサンズがテキストを読み上げてくれる。文章に込められた感情まで再現することを目指すサービス「15.ai」が話題に |trans-title=Portal's GLaDOS and UNDERTALE's Sans Will Read Text for You. "15.ai" Service Aims to Reproduce Even the Emotions in Text, Becomes Topic of Discussion |url=https://news.denfaminicogamer.jp/news/210118f |url-status=live |archive-url=https://web.archive.org/web/20210118051321/https://news.denfaminicogamer.jp/news/210118f |archive-date=January 18, 2021 |access-date=December 18, 2024 |website=] |language=ja |quote=日本語入力には対応していないが、ローマ字入力でもなんとなくそれっぽい発音になる。; 15.aiはテキスト読み上げサービスだが、特筆すべきはそのなめらかな発音と、ゲームに登場するキャラクター音声を再現している点だ。 |trans-quote=It does not support Japanese input, but even if you input using romaji, it will somehow give you a similar pronunciation.; 15.ai is a text-to-speech service, but what makes it particularly noteworthy is its smooth pronunciation and the fact that it reproduces the voices of characters that appear in games.}}</ref> Conceived by an artificial intelligence researcher known as ''"15"'' during their time at the ] and developed following their successful exit from a ] venture, the application allowed users to make characters from various media speak custom text with emotional inflections faster than real-time.{{efn|The term ''"faster than real-time"'' in speech synthesis means that the system can generate audio more quickly than the actual duration of the speech &ndash; for example, generating 10 seconds of speech in less than 10 seconds would be considered faster than real-time.}}<ref name="Kurosawa-2021">{{cite web |last=Kurosawa |first=Yuki |date=January 19, 2021 |title=ゲームキャラ音声読み上げソフト「15.ai」公開中。『Undertale』や『Portal』のキャラに好きなセリフを言ってもらえる |trans-title=Game Character Voice Reading Software "15.ai" Now Available. Get Characters from Undertale and Portal to Say Your Desired Lines |url=https://automaton-media.com/articles/newsjp/20210119-149494/ |url-status=live |archive-url=https://web.archive.org/web/20210119103031/https://automaton-media.com/articles/newsjp/20210119-149494/ |archive-date=January 19, 2021 |access-date=December 18, 2024 |website=] |language=ja |quote=英語版ボイスのみなので注意。;もうひとつ15.aiの大きな特徴として挙げられるのが、豊かな感情表現だ。 |trans-quote=Please note that only English voices are available.;Another major feature of 15.ai is its rich emotional expression.}}</ref><ref name="Ruppert-2021">{{cite magazine |last=Ruppert |first=Liana |date=January 18, 2021 |title=Make Portal's GLaDOS And Other Beloved Characters Say The Weirdest Things With This App |url=https://www.gameinformer.com/gamer-culture/2021/01/18/make-portals-glados-and-other-beloved-characters-say-the-weirdest-things |url-status=dead |archive-url=https://web.archive.org/web/20210118175543/https://www.gameinformer.com/gamer-culture/2021/01/18/make-portals-glados-and-other-beloved-characters-say-the-weirdest-things |archive-date=January 18, 2021 |access-date=December 18, 2024 |magazine=] |quote=}}</ref><ref name="Clayton-2021">{{cite web |last=Clayton |first=Natalie |date=January 19, 2021 |title=Make the cast of TF2 recite old memes with this AI text-to-speech tool |url=https://www.pcgamer.com/make-the-cast-of-tf2-recite-old-memes-with-this-ai-text-to-speech-tool |url-status=live
|archive-url=https://web.archive.org/web/20210119133726/https://www.pcgamer.com/make-the-cast-of-tf2-recite-old-memes-with-this-ai-text-to-speech-tool/ |archive-date=January 19, 2021 |access-date=December 18, 2024 |website=] |quote=}}</ref><ref name="Morton-2021">{{cite web |last=Morton |first=Lauren |date=January 18, 2021 |title=Put words in game characters' mouths with this fascinating text to speech tool |url=https://www.rockpapershotgun.com/2021/01/18/put-words-in-game-characters-mouths-with-this-fascinating-text-to-speech-tool/ |url-status=live |archive-url=https://web.archive.org/web/20210118213308/https://www.rockpapershotgun.com/2021/01/18/put-words-in-game-characters-mouths-with-this-fascinating-text-to-speech-tool/ |archive-date=January 18, 2021 |access-date=December 18, 2024 |website=] |quote=}}</ref>
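The footnote's "faster than real-time" criterion reduces to a simple ratio, often called the real-time factor. As a hypothetical illustration (the function name and values are not from 15.ai), a synthesis run is faster than real-time when the ratio of generation time to audio duration is below 1:

```python
def real_time_factor(generation_seconds: float, audio_seconds: float) -> float:
    """Ratio of wall-clock generation time to the duration of audio produced.

    A value below 1.0 means the system synthesizes speech faster than
    real-time; above 1.0 means it is slower than real-time.
    """
    if audio_seconds <= 0:
        raise ValueError("audio duration must be positive")
    return generation_seconds / audio_seconds

# The footnote's example: 10 seconds of speech generated in under 10 seconds.
rtf = real_time_factor(4.0, 10.0)
print(rtf)        # 0.4
print(rtf < 1.0)  # True: faster than real-time
```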


Launched in March 2020,<ref name="Ng-2020">{{cite web |last=Ng |first=Andrew |date=April 1, 2020 |title=Voice Cloning for the Masses |url=https://www.deeplearning.ai/the-batch/voice-cloning-for-the-masses/|access-date=December 22, 2024 |website=] |quote=}}</ref> the service gained widespread attention in early 2021 when it went ] on social media platforms like ] and ], and quickly became popular among Internet fandoms, including the '']'', '']'', and '']'' fandoms.<ref name="Zwiezen-2021">{{cite web |last=Zwiezen |first=Zack |date=January 18, 2021 |title=Website Lets You Make GLaDOS Say Whatever You Want |url=https://kotaku.com/this-website-lets-you-make-glados-say-whatever-you-want-1846062835 |url-status=live |archive-url=https://web.archive.org/web/20210117164748/https://kotaku.com/this-website-lets-you-make-glados-say-whatever-you-want-1846062835 |archive-date=January 17, 2021 |access-date=December 18, 2024 |website=] |quote=}}</ref><ref name="Chandraseta-2021" /><ref name="GamerSky-2021">{{cite web |date=January 18, 2021 |title=这个网站可用AI生成语音 让ACG角色"说"出你输入的文本 |trans-title=This Website Can Use AI to Generate Voice, Making ACG Characters "Say" the Text You Input |url=https://www.gamersky.com/news/202101/1355887.shtml |url-status=live |access-date=December 18, 2024 |website=] |language=zh |quote= |trans-quote= |archive-date=December 11, 2024 |archive-url=https://web.archive.org/web/20241211221628/https://www.gamersky.com/news/202101/1355887.shtml}}</ref> The website played a role in the emergence of AI voice cloning (]) ].


In January 2022, Voiceverse NFT sparked controversy when it was discovered that the company, which had partnered with voice actor ], had misappropriated 15.ai's work for their own platform. The service was ultimately taken offline in September 2022. Its shutdown led to the emergence of various commercial alternatives in subsequent years.


== History ==
15.ai was conceived in 2016 as a research project in ] by a developer known as ''"15"'' during their undergraduate studies at the ] (MIT).<ref name="Chandraseta-2021">{{cite web |last=Chandraseta |first=Rionaldi |date=January 21, 2021 |title=Generate Your Favourite Characters' Voice Lines using Machine Learning |url=https://towardsdatascience.com/generate-your-favourite-characters-voice-lines-using-machine-learning-c0939270c0c6 |url-status=live |access-date=December 18, 2024 |website=Towards Data Science |archive-date=January 21, 2021 |archive-url=https://web.archive.org/web/20210121132456/https://towardsdatascience.com/generate-your-favourite-characters-voice-lines-using-machine-learning-c0939270c0c6}}</ref> The developer was inspired by ]'s ] paper, with development continuing through their studies as ] released Tacotron the following year.<ref name="Twitter">{{cite web |title=The past and future of 15.ai |url=https://x.com/fifteenai/status/1865439846744871044 |website=] |access-date=December 19, 2024 |archive-date=December 8, 2024 |archive-url=https://web.archive.org/web/20241208035548/https://x.com/fifteenai/status/1865439846744871044 |url-status=live}}</ref> The name ''15'' is a reference to the creator's claim that a voice can be cloned with as little as 15 seconds of data.<ref name="Chandraseta-2021" /><ref>{{cite web |last=Button |first=Chris |date=January 19, 2021 |title=Make GLaDOS, SpongeBob and other friends say what you want with this AI text-to-speech tool |url=https://www.byteside.com/2021/01/15-ai-deepmoji-glados-spongebob-characters-ai-text-to-speech/ |url-status=live |access-date=December 18, 2024 |website=Byteside |quote= |archive-date=June 25, 2024 |archive-url=https://web.archive.org/web/20240625180514/https://www.byteside.com/2021/01/15-ai-deepmoji-glados-spongebob-characters-ai-text-to-speech/}}</ref> 15.ai was released in March 2020.<ref>{{multiref|{{cite web |title=About |url=https://fifteen.ai/about |website=fifteen.ai |access-date=December 23, 2024 |archive-url=https://archive.is/oaJPz |archive-date=February 23, 2020 |date=February 19, 2020 |type=Official website |quote=2020-02-19: The web app isn't fully ready just yet}}|{{cite web |title=About |url=https://fifteen.ai/about |website=fifteen.ai |access-date=December 23, 2024 |archive-url=https://archive.is/aXhTU |archive-date=March 3, 2020 |date=March 2, 2020 |type=Official website}}<!-- multiref end-->}}</ref><!--In April 2020, British-American computer scientist ] wrote about 15.ai in his newsletter ''The Batch''; he described it as a proof of concept of voice cloning for practical use cases.<ref name="Ng-2020" />--> More voices were added to the website in the following months.<ref>{{cite web |last=Scotellaro |first=Shaun |date=March 31, 2020 |title=Rainbow Dash Voice Added to 15.ai |url=https://www.equestriadaily.com/2020/03/rainbow-dash-voice-added-to-15ai.html |url-status=live |access-date=December 18, 2024 |website=] |quote= |archive-date=December 1, 2024 |archive-url=https://web.archive.org/web/20241201163118/https://www.equestriadaily.com/2020/03/rainbow-dash-voice-added-to-15ai.html}}</ref><ref>{{cite web |last=Scotellaro |first=Shaun |date=October 5, 2020|title=15.ai Adds Tons of New Pony Voices|url=https://www.equestriadaily.com/2020/10/15ai-adds-tons-of-new-pony-voices.html|access-date=December 21, 2024|website=]}}</ref>


In early 2021, the application went viral on ] and ], with people generating skits, ], and fan content using voices from popular games and shows.<ref name="Zwiezen-2021" /><ref name="Clayton-2021" /><ref name="Ruppert-2021" /><ref name="Yoshiyuki-2021" /> Use of 15.ai also resulted in memes and ]s. These included recreations of the popular ] video '']'',<ref name="UDN-2021" /> ''The RED Bread Bank'',<ref name="Kurosawa-2021" /> and ''] Struggles'',<ref name="Morton-2021" /> which have amassed millions of views on social media. Content creators, ], and ] have also used 15.ai as part of their videos as ].<ref name="Play.ht-2024" /> According to the developer, at its peak, the platform incurred operational costs of {{Currency|12000|United States}} per month from ] infrastructure needed to handle millions of daily voice generations. They funded the website through their previous startup earnings.<ref name="Twitter" />


On January 14, 2022, a controversy ensued after it was discovered that Voiceverse NFT, a company that had partnered with video game and ] ] voice actor ], had misappropriated voice lines generated from 15.ai as part of their marketing campaign.<ref>{{cite web |last1=Lawrence |first1=Briana |title=Shonen Jump Scare Leads to Company Reassuring Fans That They Aren't Getting Into NFTs |url=https://www.themarysue.com/shonen-jump-not-doing-nfts/ |website=] |access-date=23 December 2024 |date=19 January 2022}}</ref><ref name="Williams-2022">{{cite web |last=Williams |first=Demi |date=January 18, 2022 |title=Voiceverse NFT admits to taking voice lines from non-commercial service |url=https://www.nme.com/news/gaming-news/voiceverse-nft-admits-to-taking-voice-lines-from-non-commercial-service-3140663 |url-status=live |archive-url=https://web.archive.org/web/20220118162845/https://www.nme.com/news/gaming-news/voiceverse-nft-admits-to-taking-voice-lines-from-non-commercial-service-3140663 |archive-date=January 18, 2022 |access-date=December 18, 2024 |website=] |quote=}}</ref><ref name="Wright-2022">{{cite web |last=Wright |first=Steve |date=January 17, 2022 |title=Troy Baker-backed NFT company admits to using content without permission |url=https://stevivor.com/news/troy-baker-nft-voiceverse-15-ai/ |url-status=live |archive-url=https://web.archive.org/web/20220117231918/https://stevivor.com/news/troy-baker-nft-voiceverse-15-ai/ |archive-date=January 17, 2022 |access-date=December 18, 2024 |website=Stevivor |quote=}}</ref> ] showed that Voiceverse had generated audio of characters from '']'' using 15.ai and pitched the voices up to make them unrecognizable from the originals in order to market their own platform—in violation of 15.ai's terms of service.<ref name="Phillips-2022" /><ref>{{cite web |last=Lopez |first=Ule |date=January 16, 2022 |title=Voiceverse NFT Service Reportedly Uses Stolen Technology from 15ai |url=https://wccftech.com/voiceverse-nft-service-uses-stolen-technology-from-15ai/ |url-status=live |archive-url=https://web.archive.org/web/20220116194519/https://wccftech.com/voiceverse-nft-service-uses-stolen-technology-from-15ai/ |archive-date=January 16, 2022 |access-date=June 7, 2022 |website=Wccftech}}</ref> Voiceverse claimed that someone in their marketing team used the voice without properly crediting 15.ai; in response, 15 tweeted "Go fuck yourself."<ref name="Wright-2022" /><ref name="Phillips-2022" /><ref>{{Cite tweet |number=1482088782765576192 |user=fifteenai |title=Go fuck yourself. |date=January 14, 2022}}</ref>


In September 2022, 15.ai was taken offline.<ref name="Play.ht-2024" /><ref name="ElevenLabs-2024" /> The developer claimed that this was due to legal issues surrounding ].<ref name="Twitter" />


== Features ==
The platform was non-commercial,<ref name="Williams-2022" /> and operated without requiring user registration or accounts.<ref name="Phillips-2022">{{cite web |last=Phillips |first=Tom |date=January 17, 2022 |title=Troy Baker-backed NFT firm admits using voice lines taken from another service without permission |url=https://www.eurogamer.net/articles/2022-01-17-troy-baker-backed-nft-firm-admits-using-voice-lines-taken-from-another-service-without-permission |url-status=live |archive-url=https://web.archive.org/web/20220117164033/https://www.eurogamer.net/articles/2022-01-17-troy-baker-backed-nft-firm-admits-using-voice-lines-taken-from-another-service-without-permission |archive-date=January 17, 2022 |access-date=December 18, 2024 |website=] |quote=}}</ref> Users generated speech by inputting text and selecting a character voice, with optional parameters for emotional contextualizers and phonetic transcriptions. Each request produced three audio variations with distinct emotional deliveries.<ref name="Chandraseta-2021" /> Characters available included multiple characters from '']'' and '']''; ] and ] from the '']'' series; ]; ] from '']''; ] and ] from ]; ] from '']''; ] from '']''; ] from '']''; the ]; ]; and ] from '']''.<ref name="Zwiezen-2021" /><ref name="Clayton-2021" /><ref name="Morton-2021" /><ref name="Ruppert-2021" /> Certain "silent" characters like ] and ] could be selected as a joke and would emit silent audio files when any text was submitted.<ref name="Morton-2021" /><ref name="UDN-2021" />


The deep learning model's nondeterministic properties produced variations in speech output, creating different intonations with each generation, similar to how ] produce different takes.<ref name="Yoshiyuki-2021" /> 15.ai introduced the concept of ''"emotional contextualizers,"'' which allowed users to specify the emotional tone of generated speech through guiding phrases.<ref name="Kurosawa-2021" /><ref name="Chandraseta-2021" /> The emotional contextualizer functionality utilized DeepMoji, a sentiment analysis neural network developed at the ].<ref name="Kurosawa-2021" /><ref name="Chandraseta-2021" /> Introduced in 2017, DeepMoji processed ] embeddings from 1.2 billion Twitter posts (2013&ndash;2017) to analyze emotional content. Testing showed the system could identify emotional elements, including sarcasm, more accurately than human evaluators.<ref>{{cite web |date=August 3, 2017 |title=An Algorithm Trained on Emoji Knows When You're Being Sarcastic on Twitter |url=https://www.technologyreview.com/2017/08/03/105566/an-algorithm-trained-on-emoji-knows-when-youre-being-sarcastic-on-twitter/ |url-status=live |archive-url=https://web.archive.org/web/20220602215737/https://www.technologyreview.com/2017/08/03/105566/an-algorithm-trained-on-emoji-knows-when-youre-being-sarcastic-on-twitter/ |archive-date=June 2, 2022 |access-date=December 18, 2024 |website=]}}</ref>
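The conditioning idea described above — derive an emotion from a free-text guiding phrase and pass it along with the request — can be sketched in Python. This is an entirely hypothetical stand-in, not 15.ai's code: DeepMoji is a large pretrained network, and a trivial keyword lookup substitutes for it here; all names (`classify_context`, `synthesize`) are invented for illustration.

```python
# Hypothetical sketch: an "emotional contextualizer" maps a guiding phrase
# to an emotion label that conditions speech generation. 15.ai used the
# pretrained DeepMoji network for this step; a keyword lookup stands in here.
EMOTION_KEYWORDS = {
    "angry": {"furious", "angry", "hate"},
    "happy": {"great", "wonderful", "yay"},
    "sad": {"terrible", "crying", "miss"},
}

def classify_context(phrase: str) -> str:
    """Return the first emotion whose keywords overlap the guiding phrase."""
    words = set(phrase.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def synthesize(text: str, speaker: str, contextualizer: str) -> dict:
    """Build a request record conditioning generation on the inferred emotion."""
    return {"speaker": speaker, "text": text,
            "emotion": classify_context(contextualizer)}

print(synthesize("The cake is a lie.", "GLaDOS", "I am absolutely furious"))
# emotion field: "angry"
```

In the real system the emotion signal was a learned embedding rather than a discrete label, but the flow — guiding phrase in, emotional conditioning out — is the same.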


The application provided support for a simplified version of ], a set of English phonetic transcriptions originally developed by the ] in the 1970s. This feature allowed users to correct mispronunciations or to distinguish between ] &ndash; words that have the same spelling but different pronunciations. Users could invoke ARPABET transcriptions by enclosing the phoneme string in curly braces within the input box (for example, "{AA1 R P AH0 B EH2 T}" to specify the pronunciation of the word "ARPABET" ({{IPAc-en|ˈ|ɑːr|p|ə|ˌ|b|ɛ|t}} {{respell|AR|pə|beht}})).<ref name="equestriacn" /><ref name="Kurosawa-2021" /> The interface displayed parsed words with color-coding to indicate pronunciation certainty: green for words found in the existing pronunciation lookup table, blue for manually entered ARPAbet pronunciations, and red for words where the pronunciation had to be algorithmically predicted.<ref name="equestriacn" />
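The input handling described above — literal ARPABET inside curly braces, lexicon lookup otherwise, with a three-way certainty classification — can be sketched as follows. This is a hypothetical reconstruction, not 15.ai's implementation; the tiny `LEXICON` is a stand-in for a full pronunciation dictionary such as CMUdict.

```python
import re

# Hypothetical sketch of the described input handling: text inside curly
# braces is taken as a literal ARPABET phoneme string; everything else is
# looked up in a pronunciation lexicon. The two-entry lexicon is a stand-in.
LEXICON = {"the": "DH AH0", "word": "W ER1 D"}

# Either a {braced phoneme string} or a run of non-space characters.
TOKEN = re.compile(r"\{([^}]*)\}|(\S+)")

def annotate(text: str) -> list[tuple[str, str]]:
    """Yield (pronunciation, source) pairs mirroring the color-coding:
    'manual' (blue), 'lexicon' (green), 'predicted' (red)."""
    out = []
    for braced, word in TOKEN.findall(text):
        if braced:
            out.append((braced, "manual"))
        elif word.lower() in LEXICON:
            out.append((LEXICON[word.lower()], "lexicon"))
        else:
            out.append((word, "predicted"))
    return out

print(annotate("the word {AA1 R P AH0 B EH2 T}"))
# [('DH AH0', 'lexicon'), ('W ER1 D', 'lexicon'),
#  ('AA1 R P AH0 B EH2 T', 'manual')]
```

A word absent from the lexicon falls through to the "predicted" bucket, matching the red highlighting for algorithmically guessed pronunciations.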


Later versions of 15.ai introduced multi-speaker capabilities. Rather than training separate models for each voice, 15.ai used a unified model that learned multiple voices simultaneously through speaker ] &ndash; learned numerical representations that captured each character's unique vocal characteristics.<ref name="Twitter" /> Along with the emotional context conferred by DeepMoji, this neural network architecture enabled the model to learn shared patterns across different characters' emotional expressions and speaking styles, even when individual characters lacked examples of certain emotional contexts in their training data.<ref name="Kurosawa-2021" />
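The speaker-embedding idea — one shared model, with each character reduced to a small learned vector that conditions generation — can be shown in miniature. This sketch is hypothetical: the embedding values are toy numbers (real systems learn them jointly by gradient descent), and `model_input` stands in for the concatenation step feeding a shared network.

```python
# Hypothetical sketch of multi-speaker conditioning: a single shared model,
# with each character represented only by a small learned embedding vector.
# The vectors below are toy values, not learned parameters.
SPEAKER_EMBEDDINGS = {
    "glados":    [0.9, -0.2, 0.1],
    "spongebob": [-0.3, 0.8, 0.5],
}

def model_input(text_features: list[float], speaker: str) -> list[float]:
    """Concatenate text features with the speaker's embedding so one shared
    network can voice any registered character."""
    return text_features + SPEAKER_EMBEDDINGS[speaker]

features = [0.0, 1.0]  # stand-in for encoded text/emotion features
print(model_input(features, "glados"))  # [0.0, 1.0, 0.9, -0.2, 0.1]
```

Because every voice flows through the same shared weights, patterns learned from one character's data (say, an angry delivery) can transfer to characters whose training data lacks that emotion, which is the benefit the paragraph above describes.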


The interface included technical metrics and graphs,<ref name="equestriacn">{{cite web|date=October 1, 2021|access-date=December 22, 2024|url=https://www.equestriacn.com/2021/10/15-ai-is-back-online-updated-to-v23.html|title=15.ai已经重新上线,版本更新至v23|trans-title=15.ai has been re-launched, version updated to v23|language=zh}}</ref> which, according to the developer, served to highlight the research aspect of the website.<ref name="Twitter" /> As of version v23, released in September 2021, the interface displayed comprehensive model analysis information, including word parsing results and emotional analysis data. The ] and ] (GAN) hybrid denoising function, introduced in an earlier version, was streamlined to remove manual parameter inputs.<ref name="equestriacn" />


== Reception and legacy ==
Critics described 15.ai as easy to use and generally able to convincingly replicate character voices, with occasional mixed results.<ref name="Clayton-2021" /><ref name="Ruppert-2021" /><ref>{{multiref|{{cite web |last1=Moto |first1=Eugenio |title=15.ai, el sitio que te permite usar voces de personajes populares para que digan lo que quieras |url=https://www.qore.com/noticias/78756/15ai-el-sitio-que-te-permite-usar-voces-de-personajes-populares-para-que-digan-lo-que-quieras/pagina/1/1000 |website=Qore |access-date=21 December 2024 |language=es |date=20 January 2021 |quote=Si bien los resultados ya son excepcionales, sin duda pueden mejorar más |trans-quote=While the results are already exceptional, without a doubt they can improve even more}}|{{cite web |last=Scotellaro |first=Shaun |date=March 4, 2020 |title=Neat "Pony Preservation Project" Using Neural Networks to Create Pony Voices |url=https://www.equestriadaily.com/2020/03/neat-pony-preservation-project-using.html |url-status=live |access-date=December 18, 2024 |website=] |archive-date=June 23, 2021 |archive-url=https://web.archive.org/web/20210623210048/https://www.equestriadaily.com/2020/03/neat-pony-preservation-project-using.html}}|{{cite web |last=Villalobos |first=José |date=January 18, 2021 |title=Descubre 15.AI, un sitio web en el que podrás hacer que GlaDOS diga lo que quieras |trans-title=Discover 15.AI, a Website Where You Can Make GlaDOS Say What You Want |url=https://www.laps4.com/noticias/descubre-15-ai-un-sitio-web-en-el-que-podras-hacer-que-glados-diga-lo-que-quieras/ |url-status=live |archive-url=https://web.archive.org/web/20210118172043/https://www.laps4.com/noticias/descubre-15-ai-un-sitio-web-en-el-que-podras-hacer-que-glados-diga-lo-que-quieras/ |archive-date=January 18, 2021 |access-date=January 18, 2021 |website=LaPS4 |language=es |quote=La dirección es 15.AI y funciona tan fácil como parece. |trans-quote=The address is 15.AI and it works as easy as it looks.}}<!--multiref end-->}}</ref> Natalie Clayton of '']'' wrote that ]' voice was replicated well, but noted challenges in mimicking the Narrator from the '']'': "the algorithm simply can't capture Kevan Brighting's whimsically droll intonation."<ref name="Clayton-2021" /> Zack Zwiezen of '']'' reported that "[my] girlfriend was convinced it was a new voice line from GLaDOS' voice actor, ]".<ref name="Zwiezen-2021" /> Taiwanese newspaper '']'' also highlighted 15.ai's ability to recreate GLaDOS's mechanical voice, alongside its diverse range of character voice options.<ref name="UDN-2021" /> ''] Taiwan'' reported that "GLaDOS in ''Portal'' can pronounce lines nearly perfectly", but also criticized that "there are still many imperfections, such as word limit and tone control, which are still a little weird in some words."<ref name="anything">{{cite web| url=https://tw.news.yahoo.com/15-ai-044220764.html|date=January 19, 2021|access-date=December 22, 2024|title=讓你喜愛的ACG角色說出任何話! AI生成技術幫助你實現夢想|trans-title=Let your favorite ACG characters say anything! AI generation technology helps you realize your dreams|language=zh |quote=大家是否都曾經想像過,假如能讓自己喜歡的遊戲或是動畫角色說出自己想聽的話,不論是名字、惡搞或是經典名言,都是不少人的夢想吧。不過來到 2021 年,現在這種夢想不再是想想而已,因為有一個網站通過 AI 生成的技術,讓大家可以讓不少遊戲或是動畫角色,說出任何你想要他們講出的東西,而且相似度與音調都有相當高的準確度 |trans-quote=Have you ever imagined what it would be like if your favorite game or anime characters could say exactly what you want to hear? Whether it's names, parodies, or classic quotes, this is a dream for many. However, as we enter 2021, this dream is no longer just a fantasy, because there is a website that uses AI-generated technology, allowing users to make various game and anime characters say anything they want with impressive accuracy in both similarity and tone.}}</ref>


Multiple other critics also found the character limit and prosody options not entirely satisfactory.{{sfn|GamerSky|2021|ref=GamerSky-2021}}{{sfn|MrSun|2021}} Peter Paltridge of ] and ] news outlet ''Anime Superhero'' opined that "voice synthesis has evolved to the point where the more expensive efforts are nearly indistinguishable from actual human speech," but also noted that "In some ways, ] is still more advanced than this. It was possible to affect SAM's inflections by using special characters, as well as change his pitch at will. With 15.ai, you're at the mercy of whatever random inflections you get."{{sfn|Paltridge|2021}} Conversely, Lauren Morton of '']'' praised the depth of pronunciation control, "if you're willing to get into the nitty gritty of it".{{sfn|Morton|2021}} Takayuki Furushima of '']'' highlighted the "smooth pronunciations", and Yuki Kurosawa of '']'' noted its "rich emotional expression" as a major feature; both Japanese authors noted the lack of Japanese-language support.{{sfn|Yoshiyuki|2021}}{{sfn|Kurosawa|2021}} Renan do Prado of the Brazilian gaming news outlet ''Arkade'' pointed out that users could create amusing results in ], although generation primarily performed well in English.{{sfn|do Prado|2021}}


South Korean video game outlet ''Zuntata'' wrote that "the surprising thing about 15.ai is that [...] there's only about 30 seconds of data, but it achieves pronunciation accuracy close to 100%".{{sfn|zuntata.tistory.com|2021|ref=Tistory-2021}} Machine learning professor Yongqiang Li wrote in his blog that he was surprised to see that the application was free.{{sfn|Li|2021}}


15.ai was an early pioneer of audio deepfakes, leading to the emergence of AI speech synthesis-based memes.{{sfnm|MrSun|2021|Anirudh VK|2023}} Its influence has been noted in the years after it became defunct,{{sfn|Wright|2023}} and since then, several commercial alternatives have emerged, such as ]{{efn|which uses "11.ai" as a legal byname for its web domain{{sfn|ElevenLabs|2024b|ref=ElevenLabs-2024b}}}} and ].{{sfnm|ElevenLabs|2024a|1ref=ElevenLabs-2024a|Play.ht|2024|2ref=Play.ht-2024}} The original claim that only 15 seconds of data is required to clone a human's voice was corroborated by ] in 2024.{{sfn|OpenAI|2024|ref=OpenAI-2024}}


== See also ==
*]


== Explanatory footnotes ==
{{notelist}}


== References ==
=== Notes ===
{{reflist}}

=== Works cited ===
{{refbegin}}
* {{cite web |last=遊戲 |first=遊戲角落 |date=January 20, 2021 |title=這個AI語音可以模仿《傳送門》GLaDOS講出任何對白!連《Undertale》都可以學 |trans-title=This AI Voice Can Imitate Portal's GLaDOS Saying Any Dialog! It Can Even Learn Undertale |url=https://game.udn.com/game/story/10453/5189551 |url-status=live |access-date=December 18, 2024 |website=] |language=zh-tw |quote= |trans-quote= |archive-date=December 19, 2024 |archive-url=https://web.archive.org/web/20241219214330/https://game.udn.com/game/story/10453/5189551}}
* {{cite web |last=Yoshiyuki |first=Furushima |date=January 18, 2021 |title=『Portal』のGLaDOSや『UNDERTALE』のサンズがテキストを読み上げてくれる。文章に込められた感情まで再現することを目指すサービス「15.ai」が話題に |trans-title=Portal's GLaDOS and UNDERTALE's Sans Will Read Text for You. "15.ai" Service Aims to Reproduce Even the Emotions in Text, Becomes Topic of Discussion |url=https://news.denfaminicogamer.jp/news/210118f |url-status=live |archive-url=https://web.archive.org/web/20210118051321/https://news.denfaminicogamer.jp/news/210118f |archive-date=January 18, 2021 |access-date=December 18, 2024 |website=] |language=ja |quote=日本語入力には対応していないが、ローマ字入力でもなんとなくそれっぽい発音になる。; 15.aiはテキスト読み上げサービスだが、特筆すべきはそのなめらかな発音と、ゲームに登場するキャラクター音声を再現している点だ。 |trans-quote=It does not support Japanese input, but even if you input using romaji, it will somehow give you a similar pronunciation.; 15.ai is a text-to-speech service, but what makes it particularly noteworthy is its smooth pronunciation and the fact that it reproduces the voices of characters that appear in games.}}
* {{cite web |last=Kurosawa |first=Yuki |date=January 19, 2021 |title=ゲームキャラ音声読み上げソフト「15.ai」公開中。『Undertale』や『Portal』のキャラに好きなセリフを言ってもらえる |trans-title=Game Character Voice Reading Software "15.ai" Now Available. Get Characters from Undertale and Portal to Say Your Desired Lines |url=https://automaton-media.com/articles/newsjp/20210119-149494/ |url-status=live |archive-url=https://web.archive.org/web/20210119103031/https://automaton-media.com/articles/newsjp/20210119-149494/ |archive-date=January 19, 2021 |access-date=December 18, 2024 |website=] |language=ja |quote=英語版ボイスのみなので注意。;もうひとつ15.aiの大きな特徴として挙げられるのが、豊かな感情表現だ。 |trans-quote=Please note that only English voices are available.;Another major feature of 15.ai is its rich emotional expression.}}
* {{cite magazine |last=Ruppert |first=Liana |date=January 18, 2021 |title=Make Portal's GLaDOS And Other Beloved Characters Say The Weirdest Things With This App |url=https://www.gameinformer.com/gamer-culture/2021/01/18/make-portals-glados-and-other-beloved-characters-say-the-weirdest-things |url-status=dead |archive-url=https://web.archive.org/web/20210118175543/https://www.gameinformer.com/gamer-culture/2021/01/18/make-portals-glados-and-other-beloved-characters-say-the-weirdest-things |archive-date=January 18, 2021 |access-date=December 18, 2024 |magazine=] |quote=}}
* {{cite web |last=Clayton |first=Natalie |date=January 19, 2021 |title=Make the cast of TF2 recite old memes with this AI text-to-speech tool |url=https://www.pcgamer.com/make-the-cast-of-tf2-recite-old-memes-with-this-ai-text-to-speech-tool |url-status=live |archive-url=https://web.archive.org/web/20210119133726/https://www.pcgamer.com/make-the-cast-of-tf2-recite-old-memes-with-this-ai-text-to-speech-tool/ |archive-date=January 19, 2021 |access-date=December 18, 2024 |website=] |quote=}}
* {{cite web |last=Morton |first=Lauren |date=January 18, 2021 |title=Put words in game characters' mouths with this fascinating text to speech tool |url=https://www.rockpapershotgun.com/2021/01/18/put-words-in-game-characters-mouths-with-this-fascinating-text-to-speech-tool/ |url-status=live |archive-url=https://web.archive.org/web/20210118213308/https://www.rockpapershotgun.com/2021/01/18/put-words-in-game-characters-mouths-with-this-fascinating-text-to-speech-tool/ |archive-date=January 18, 2021 |access-date=December 18, 2024 |website=] |quote=}}
* {{cite web |last=Ng |first=Andrew |date=April 1, 2020 |title=Voice Cloning for the Masses |url=https://www.deeplearning.ai/the-batch/voice-cloning-for-the-masses/|access-date=December 22, 2024 |website=] |quote=}}
* {{cite web |last=Zwiezen |first=Zack |date=January 18, 2021 |title=Website Lets You Make GLaDOS Say Whatever You Want |url=https://kotaku.com/this-website-lets-you-make-glados-say-whatever-you-want-1846062835 |url-status=live |archive-url=https://web.archive.org/web/20210117164748/https://kotaku.com/this-website-lets-you-make-glados-say-whatever-you-want-1846062835 |archive-date=January 17, 2021 |access-date=December 18, 2024 |website=] |quote=}}
* {{cite web |date=January 18, 2021 |title=这个网站可用AI生成语音 让ACG角色"说"出你输入的文本 |trans-title=This Website Can Use AI to Generate Voice, Making ACG Characters "Say" the Text You Input |url=https://www.gamersky.com/news/202101/1355887.shtml |url-status=live |access-date=December 18, 2024 |website=] |language=zh |quote= |trans-quote= |archive-date=December 11, 2024 |archive-url=https://web.archive.org/web/20241211221628/https://www.gamersky.com/news/202101/1355887.shtml |ref=GamerSky-2021}}
* {{cite web |last=Chandraseta |first=Rionaldi |date=January 21, 2021 |title=Generate Your Favourite Characters' Voice Lines using Machine Learning |url=https://towardsdatascience.com/generate-your-favourite-characters-voice-lines-using-machine-learning-c0939270c0c6 |url-status=live |access-date=December 18, 2024 |website=Towards Data Science |archive-date=January 21, 2021 |archive-url=https://web.archive.org/web/20210121132456/https://towardsdatascience.com/generate-your-favourite-characters-voice-lines-using-machine-learning-c0939270c0c6}}
* {{cite web |last=Williams |first=Demi |date=January 18, 2022 |title=Voiceverse NFT admits to taking voice lines from non-commercial service |url=https://www.nme.com/news/gaming-news/voiceverse-nft-admits-to-taking-voice-lines-from-non-commercial-service-3140663 |url-status=live |archive-url=https://web.archive.org/web/20220118162845/https://www.nme.com/news/gaming-news/voiceverse-nft-admits-to-taking-voice-lines-from-non-commercial-service-3140663 |archive-date=January 18, 2022 |access-date=December 18, 2024 |website=] |quote=}}
* {{cite web |last=Wright |first=Steve |date=January 17, 2022 |title=Troy Baker-backed NFT company admits to using content without permission |url=https://stevivor.com/news/troy-baker-nft-voiceverse-15-ai/ |url-status=live |archive-url=https://web.archive.org/web/20220117231918/https://stevivor.com/news/troy-baker-nft-voiceverse-15-ai/ |archive-date=January 17, 2022 |access-date=December 18, 2024 |website=Stevivor |quote=}}
* {{cite web |last=Phillips |first=Tom |date=January 17, 2022 |title=Troy Baker-backed NFT firm admits using voice lines taken from another service without permission |url=https://www.eurogamer.net/articles/2022-01-17-troy-baker-backed-nft-firm-admits-using-voice-lines-taken-from-another-service-without-permission |url-status=live |archive-url=https://web.archive.org/web/20220117164033/https://www.eurogamer.net/articles/2022-01-17-troy-baker-backed-nft-firm-admits-using-voice-lines-taken-from-another-service-without-permission |archive-date=January 17, 2022 |access-date=December 18, 2024 |website=] |quote=}}
* {{cite web|date=October 1, 2021|access-date=December 22, 2024|url=https://www.equestriacn.com/2021/10/15-ai-is-back-online-updated-to-v23.html|title=15.ai已经重新上线,版本更新至v23|trans-title=15.ai has been re-launched, version updated to v23|language=zh |ref=www.equestriacn.com}}
* {{cite web|author=MrSun |url=https://tw.news.yahoo.com/15-ai-044220764.html|date=January 19, 2021|access-date=December 22, 2024|title=讓你喜愛的ACG角色說出任何話! AI生成技術幫助你實現夢想|trans-title=Let your favorite ACG characters say anything! AI generation technology helps you realize your dreams|language=zh |quote=大家是否都曾經想像過,假如能讓自己喜歡的遊戲或是動畫角色說出自己想聽的話,不論是名字、惡搞或是經典名言,都是不少人的夢想吧。不過來到 2021 年,現在這種夢想不再是想想而已,因為有一個網站通過 AI 生成的技術,讓大家可以讓不少遊戲或是動畫角色,說出任何你想要他們講出的東西,而且相似度與音調都有相當高的準確度 |trans-quote=Have you ever imagined what it would be like if your favorite game or anime characters could say exactly what you want to hear? Whether it's names, parodies, or classic quotes, this is a dream for many. However, as we enter 2021, this dream is no longer just a fantasy, because there is a website that uses AI-generated technology, allowing users to make various game and anime characters say anything they want with impressive accuracy in both similarity and tone.}}
* {{cite web|url=https://arkade.com.br/faca-glados-bob-esponja-e-outros-personagens-falarem-textos-escritos-por-voce/|trans-title=Make GLaDOS, SpongeBob and other characters speak texts written by you!|last=do Prado|first=Renan|website=Arkade|access-date=December 22, 2024|date=January 19, 2021|language=pt-br|title=Faça GLaDOS, Bob Esponja e outros personagens falarem textos escritos por você!}}
* {{cite web |date=2024a<!--February 7, 2024--> |title=15.AI: Everything You Need to Know & Best Alternatives |url=https://elevenlabs.io/blog/15-ai |url-status=live |access-date=December 18, 2024 |website=] |quote= |archive-date=July 15, 2024 |archive-url=https://web.archive.org/web/20240715151316/https://elevenlabs.io/blog/15-ai |ref=ElevenLabs-2024a}}
* {{cite web |date=September 12, 2024 |title=Everything You Need to Know About 15.ai: The AI Voice Generator |url=https://play.ht/blog/15-ai/ |access-date=December 18, 2024 |website=Play.ht |ref=Play.ht-2024}}
* {{cite web |last=Button |first=Chris |date=January 19, 2021 |title=Make GLaDOS, SpongeBob and other friends say what you want with this AI text-to-speech tool |url=https://www.byteside.com/2021/01/15-ai-deepmoji-glados-spongebob-characters-ai-text-to-speech/ |url-status=live |access-date=December 18, 2024 |website=Byteside |quote= |archive-date=June 25, 2024 |archive-url=https://web.archive.org/web/20240625180514/https://www.byteside.com/2021/01/15-ai-deepmoji-glados-spongebob-characters-ai-text-to-speech/}}
* {{cite web |last=Scotellaro |first=Shaun |date=2020a<!--March 31, 2020--> |title=Rainbow Dash Voice Added to 15.ai |url=https://www.equestriadaily.com/2020/03/rainbow-dash-voice-added-to-15ai.html |url-status=live |access-date=December 18, 2024 |website=] |quote= |archive-date=December 1, 2024 |archive-url=https://web.archive.org/web/20241201163118/https://www.equestriadaily.com/2020/03/rainbow-dash-voice-added-to-15ai.html}}
* {{cite web |last=Scotellaro |first=Shaun |date=2020b<!--October 5, 2020-->|title=15.ai Adds Tons of New Pony Voices|url=https://www.equestriadaily.com/2020/10/15ai-adds-tons-of-new-pony-voices.html|access-date=December 21, 2024|website=]}}
* {{cite web |last1=Lawrence |first1=Briana |title=Shonen Jump Scare Leads to Company Reassuring Fans That They Aren't Getting Into NFTs |url=https://www.themarysue.com/shonen-jump-not-doing-nfts/ |website=] |access-date=23 December 2024 |date=19 January 2022}}
* {{cite web |last=Lopez |first=Ule |date=January 16, 2022 |title=Voiceverse NFT Service Reportedly Uses Stolen Technology from 15ai |url=https://wccftech.com/voiceverse-nft-service-uses-stolen-technology-from-15ai/ |url-status=live |archive-url=https://web.archive.org/web/20220116194519/https://wccftech.com/voiceverse-nft-service-uses-stolen-technology-from-15ai/ |archive-date=January 16, 2022 |access-date=June 7, 2022 |website=Wccftech}}
* {{Cite tweet |number=1482088782765576192 |user=fifteenai |title=Go fuck yourself. |date=January 14, 2022}}
* {{cite web |last=Knight |first=Will |date=August 3, 2017 |title=An Algorithm Trained on Emoji Knows When You're Being Sarcastic on Twitter |url=https://www.technologyreview.com/2017/08/03/105566/an-algorithm-trained-on-emoji-knows-when-youre-being-sarcastic-on-twitter/ |url-status=live |archive-url=https://web.archive.org/web/20220602215737/https://www.technologyreview.com/2017/08/03/105566/an-algorithm-trained-on-emoji-knows-when-youre-being-sarcastic-on-twitter/ |archive-date=June 2, 2022 |access-date=December 18, 2024 |website=]}}
* {{cite web |last1=Moto |first1=Eugenio |title=15.ai, el sitio que te permite usar voces de personajes populares para que digan lo que quieras |url=https://www.qore.com/noticias/78756/15ai-el-sitio-que-te-permite-usar-voces-de-personajes-populares-para-que-digan-lo-que-quieras/pagina/1/1000 |website=Qore |access-date=21 December 2024 |language=es |date=20 January 2021 |quote=Si bien los resultados ya son excepcionales, sin duda pueden mejorar más |trans-quote=While the results are already exceptional, without a doubt they can improve even more}}
* {{cite web |last=Scotellaro |first=Shaun |date=March 4, 2020 |title=Neat "Pony Preservation Project" Using Neural Networks to Create Pony Voices |url=https://www.equestriadaily.com/2020/03/neat-pony-preservation-project-using.html |url-status=live |access-date=December 18, 2024 |website=] |archive-date=June 23, 2021 |archive-url=https://web.archive.org/web/20210623210048/https://www.equestriadaily.com/2020/03/neat-pony-preservation-project-using.html |ref=Scotellaro-2020c}}
* {{cite web |last=Villalobos |first=José |date=January 18, 2021 |title=Descubre 15.AI, un sitio web en el que podrás hacer que GlaDOS diga lo que quieras |trans-title=Discover 15.AI, a Website Where You Can Make GlaDOS Say What You Want |url=https://www.laps4.com/noticias/descubre-15-ai-un-sitio-web-en-el-que-podras-hacer-que-glados-diga-lo-que-quieras/ |url-status=live |archive-url=https://web.archive.org/web/20210118172043/https://www.laps4.com/noticias/descubre-15-ai-un-sitio-web-en-el-que-podras-hacer-que-glados-diga-lo-que-quieras/ |archive-date=January 18, 2021 |access-date=January 18, 2021 |website=LaPS4 |language=es |quote=La dirección es 15.AI y funciona tan fácil como parece. |trans-quote=The address is 15.AI and it works as easy as it looks.}}
* {{cite web|last=Paltridge|first=Peter|url=https://animesuperhero.com/this-website-will-say-whatever-you-type-in-spongebobs-voice/|title=This Website Will Say Whatever You Type In Spongebob's Voice|access-date=December 22, 2024|date=January 18, 2021}}
* {{cite web |date=January 20, 2021 |title=게임 캐릭터 음성으로 영어를 읽어주는 소프트 15.ai 공개. |trans-title=Software 15.ai Released That Reads English in Game Character Voices |url=https://zuntata.tistory.com/7283 |access-date=December 18, 2024 |website=] |language=ko |quote= |trans-quote= |ref=Tistory-2021}}
* {{cite web |last=Li |first=Yongqiang |title=语音开源项目优选:免费配音网站15.ai |trans-title=Voice Open Source Project Selection: Free Voice Acting Website 15.ai |url=https://zhuanlan.zhihu.com/p/346417192 |access-date=December 18, 2024 |date=2021 |website=] |language=zh |quote= |trans-quote=}}
* {{cite web |author=Anirudh VK |date=March 18, 2023 |title=Deepfakes Are Elevating Meme Culture, But At What Cost? |url=https://analyticsindiamag.com/ai-origins-evolution/deepfakes-are-elevating-meme-culture-but-at-what-cost/ |access-date=December 18, 2024 |website=Analytics India Magazine |quote="While AI voice memes have been around in some form since '15.ai' launched in 2020, "}}
* {{cite web |last=Wright |first=Steven |date=March 21, 2023 |title=Why Biden, Trump, and Obama Arguing Over Video Games Is YouTube's New Obsession |url=https://www.inverse.com/gaming/youtube-ai-presidential-gaming-debates |url-status=live |access-date=December 18, 2024 |website=] |quote="AI voice tools used to create "audio deepfakes" have existed for years in one form or another, with 15.ai being a notable example." |archive-date=December 20, 2024 |archive-url=https://web.archive.org/web/20241220012854/https://www.inverse.com/gaming/youtube-ai-presidential-gaming-debates}}
* {{cite web |title=Can I publish the content I generate on the platform? |url=https://help.elevenlabs.io/hc/en-us/articles/13313564601361-Can-I-publish-the-content-I-generate-on-the-platform |website=ElevenLabs |access-date=23 December 2024 |date=2024b<!--8 May 2024--> |type=Official website |ref=ElevenLabs-2024b}}
* {{cite web |title=Navigating the Challenges and Opportunities of Synthetic Voices |url=https://openai.com/index/navigating-the-challenges-and-opportunities-of-synthetic-voices/ |website=] |url-status=live |date=March 9, 2024 |access-date=December 18, 2024 |archive-date=November 25, 2024 |archive-url=https://web.archive.org/web/20241125181327/https://openai.com/index/navigating-the-challenges-and-opportunities-of-synthetic-voices/ |ref=OpenAI-2024}}
{{refend}}


]


Real-time text-to-speech AI tool

15.ai
Type of site: Artificial intelligence, speech synthesis
Available in: English
Founder(s): 15
URL: 15.ai
Commercial: No
Registration: None
Launched: March 2020
Current status: Inactive

15.ai was a free non-commercial web application that used artificial intelligence to generate text-to-speech voices of fictional characters from popular media. Conceived by an artificial intelligence researcher known as "15" during their time at the Massachusetts Institute of Technology and developed following their successful exit from a startup venture, the application allowed users to make characters from various media speak custom text with emotional inflections faster than real-time.

Launched in March 2020, the service gained widespread attention in early 2021 when it went viral on social media platforms like YouTube and Twitter, and quickly became popular among Internet fandoms, including those of My Little Pony: Friendship Is Magic, Team Fortress 2, and SpongeBob SquarePants. The website had a role in the emergence of AI voice cloning (audio deepfake) memes.

In January 2022, Voiceverse NFT sparked controversy when it was discovered that the company, which had partnered with voice actor Troy Baker, had misappropriated 15.ai's work for their own platform. The service was ultimately taken offline in September 2022. Its shutdown led to the emergence of various commercial alternatives in subsequent years.

History

15.ai was conceived in 2016 as a research project in deep learning speech synthesis by a developer known as "15" during their undergraduate studies at the Massachusetts Institute of Technology (MIT). The developer was inspired by DeepMind's WaveNet paper, and development continued through their studies as Google AI released Tacotron the following year. The name 15 is a reference to the creator's claim that a voice can be cloned with as little as 15 seconds of data. 15.ai was released in March 2020, and more voices were added to the website in the following months.

In early 2021, the application went viral on Twitter and YouTube, with people generating skits, memes, and fan content using voices from popular games and shows. These included recreations of the popular Source Filmmaker video Heavy is Dead, The RED Bread Bank, and Among Us Struggles, which have amassed millions of views on social media. Content creators, YouTubers, and TikTokers also used 15.ai to provide voiceovers for their videos. According to the developer, at its peak the platform incurred operational costs of US$12,000 per month for the AWS infrastructure needed to handle millions of daily voice generations; they funded the website through their previous startup earnings.

On January 14, 2022, a controversy ensued after it was discovered that Voiceverse NFT, a company that had announced a partnership with video game and anime dub voice actor Troy Baker, had misappropriated voice lines generated from 15.ai as part of its marketing campaign. Log files showed that Voiceverse had generated audio of characters from My Little Pony: Friendship Is Magic using 15.ai and pitched it up to make the voices sound unrecognizable, in order to market its own platform, in violation of 15.ai's terms of service. Voiceverse claimed that someone on its marketing team had used the voice without properly crediting 15.ai; in response, 15 tweeted "Go fuck yourself."

In September 2022, 15.ai was taken offline. The developer claimed that this was due to legal issues surrounding artificial intelligence and copyright.

Features

The platform was non-commercial and operated without requiring user registration or accounts. Users generated speech by inputting text and selecting a character voice, with optional parameters for emotional contextualizers and phonetic transcriptions. Each request produced three audio variations with distinct emotional deliveries. Characters available included multiple characters from Team Fortress 2 and My Little Pony: Friendship Is Magic; GLaDOS and Wheatley from the Portal series; SpongeBob SquarePants; Rise Kujikawa from Persona 4; Daria Morgendorffer and Jane Lane from Daria; Carl Brutananadilewski from Aqua Teen Hunger Force; Steven Universe from Steven Universe; Sans from Undertale; the Tenth Doctor from Doctor Who; the Narrator from The Stanley Parable; and HAL 9000 from 2001: A Space Odyssey. Certain "silent" characters like Chell and Gordon Freeman could be selected as a joke, and would produce silent audio files when any text was submitted.

The deep learning model's nondeterministic properties produced variations in speech output, creating different intonations with each generation, similar to how voice actors produce different takes. 15.ai introduced the concept of "emotional contextualizers," which allowed users to specify the emotional tone of generated speech through guiding phrases. The emotional contextualizer functionality utilized DeepMoji, a sentiment analysis neural network developed at the MIT Media Lab. Introduced in 2017, DeepMoji processed emoji embeddings from 1.2 billion Twitter posts (2013-2017) to analyze emotional content. Testing showed the system could identify emotional elements, including sarcasm, more accurately than human evaluators.

The application provided support for a simplified version of ARPABET, a set of English phonetic transcriptions originally developed by the Advanced Research Projects Agency in the 1970s. This feature allowed users to correct mispronunciations or to specify the desired pronunciation among heteronyms, words that have the same spelling but different pronunciations. Users could invoke ARPABET transcriptions by enclosing the phoneme string in curly braces within the input box (for example, "{AA1 R P AH0 B EH2 T}" to specify the pronunciation of the word "ARPABET" (/ˈɑːrpəˌbɛt/ AR-pə-beht)). The interface displayed parsed words with color-coding to indicate pronunciation certainty: green for words found in the existing pronunciation lookup table, blue for manually entered ARPABET pronunciations, and red for words whose pronunciation had to be algorithmically predicted.
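The parsing behavior described above can be sketched as follows. This is a hypothetical reconstruction, not the site's actual code; the `LEXICON` dictionary stands in for its pronunciation lookup table, and the three tags correspond to the green, blue, and red color codes:

```python
import re

# Hypothetical lookup table standing in for the site's pronunciation dictionary.
LEXICON = {"hello": "HH AH0 L OW1", "world": "W ER1 L D"}

# Braced ARPABET strings are matched before plain whitespace-delimited words.
ARPABET_TOKEN = re.compile(r"\{([^}]*)\}|(\S+)")

def parse_line(text):
    """Split input into tokens and tag each with its pronunciation source:
    'lexicon' (green), 'manual' (blue, curly-brace ARPABET), or
    'predicted' (red, pronunciation must be guessed algorithmically)."""
    tokens = []
    for braced, word in ARPABET_TOKEN.findall(text):
        if braced:
            tokens.append((braced.strip(), "manual"))
        elif word.lower().strip(".,!?") in LEXICON:
            tokens.append((word, "lexicon"))
        else:
            tokens.append((word, "predicted"))
    return tokens
```

Calling `parse_line("hello {AA1 R P AH0 B EH2 T} zorblax")` tags "hello" as a lexicon hit, the braced string as a manual transcription, and the unknown word as one whose pronunciation must be predicted.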

Later versions of 15.ai introduced multi-speaker capabilities. Rather than training separate models for each voice, 15.ai used a unified model that learned multiple voices simultaneously through speaker embeddings – learned numerical representations that captured each character's unique vocal characteristics. Along with the emotional context conferred by DeepMoji, this neural network architecture enabled the model to learn shared patterns across different characters' emotional expressions and speaking styles, even when individual characters lacked examples of certain emotional contexts in their training data.
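The multi-speaker idea can be sketched in a minimal, hypothetical form (15.ai's actual architecture was not published as code; the dimensions, names, and random initialization below are illustrative assumptions): a single shared model looks up one learned vector per character and conditions every synthesis step on it.

```python
import random

random.seed(0)

EMBED_DIM = 8
SPEAKERS = ["glados", "wheatley", "spongebob"]

# One vector per character; in a real system these are trained jointly
# with the synthesis network rather than sampled at random.
speaker_embeddings = {
    name: [random.gauss(0, 1) for _ in range(EMBED_DIM)] for name in SPEAKERS
}

def condition(text_encoding, speaker):
    """Concatenate the text encoding with the speaker embedding, so one
    shared decoder can produce any character's voice from a single model."""
    return text_encoding + speaker_embeddings[speaker]

frame = condition([0.5, -0.2, 0.1], "glados")  # stand-in encoder output
print(len(frame))
```

Because every voice flows through the same decoder, patterns learned from one character's data (say, an angry delivery) can transfer to characters whose training clips never exhibited that emotion.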

The interface included technical metrics and graphs, which, according to the developer, served to highlight the research aspect of the website. As of version v23, released in September 2021, the interface displayed comprehensive model analysis information, including word parsing results and emotional analysis data. The hybrid denoising function, which combined flow-based models and generative adversarial networks (GANs) and had been introduced in an earlier version, was streamlined to remove manual parameter inputs.

Reception and legacy

Critics described 15.ai as easy to use and generally able to convincingly replicate character voices, with occasional mixed results. Natalie Clayton of PC Gamer wrote that SpongeBob SquarePants' voice was replicated well, but noted challenges in mimicking the Narrator from The Stanley Parable: "the algorithm simply can't capture Kevan Brighting's whimsically droll intonation." Zack Zwiezen of Kotaku reported that "[his] girlfriend was convinced it was a new voice line from GLaDOS' voice actor, Ellen McLain". Taiwanese newspaper United Daily News also highlighted 15.ai's ability to recreate GLaDOS's mechanical voice, alongside its diverse range of character voice options. Yahoo! News Taiwan reported that "GLaDOS in Portal can pronounce lines nearly perfectly", but also criticized that "there are still many imperfections, such as word limit and tone control, which are still a little weird in some words."

Multiple other critics found the character limit and the prosody options not entirely satisfactory. Peter Paltridge of anime and superhero news outlet Anime Superhero opined that "voice synthesis has evolved to the point where the more expensive efforts are nearly indistinguishable from actual human speech," but also noted that "In some ways, SAM is still more advanced than this. It was possible to affect SAM's inflections by using special characters, as well as change his pitch at will. With 15.ai, you're at the mercy of whatever random inflections you get." Conversely, Lauren Morton of Rock, Paper, Shotgun praised the depth of pronunciation control, "if you're willing to get into the nitty gritty of it". Takayuki Furushima of Den Fami Nico Gamer highlighted the "smooth pronunciations", and Yuki Kurosawa of AUTOMATON noted its "rich emotional expression" as a major feature; both Japanese authors noted the lack of Japanese-language support. Renan do Prado of the Brazilian gaming news outlet Arkade pointed out that users could create amusing results in Portuguese, although generation primarily performed well in English.

South Korean video game outlet Zuntata wrote that "the surprising thing about 15.ai is that there's only about 30 seconds of data, but it achieves pronunciation accuracy close to 100%". Machine learning professor Yongqiang Li wrote in his blog that he was surprised to see that the application was free.

15.ai was an early pioneer of audio deepfakes, leading to the emergence of AI speech synthesis-based memes. Its influence has been noted in the years since it became defunct, and several commercial alternatives have since emerged, such as ElevenLabs and Speechify. The original claim that only 15 seconds of data is required to clone a human's voice was corroborated by OpenAI in 2024.

See also

Explanatory footnotes

  1. The term "faster than real-time" in speech synthesis means that the system can generate audio more quickly than the actual duration of the speech – for example, generating 10 seconds of speech in less than 10 seconds would be considered faster than real-time.
  2. which uses "11.ai" as a legal byname for its web domain

References

Notes

  1. 遊戲 2021; Yoshiyuki 2021.
  2. Kurosawa 2021; Ruppert 2021; Clayton 2021; Morton 2021.
  3. Ng 2020.
  4. Zwiezen 2021; Chandraseta 2021.
  5. GamerSky 2021.
  6. Chandraseta 2021.
  7. "The past and future of 15.ai". Twitter. Archived from the original on December 8, 2024. Retrieved December 19, 2024.
  8. Chandraseta 2021; Button 2021.
    • "About". fifteen.ai (Official website). February 19, 2020. Archived from the original on February 23, 2020. Retrieved December 23, 2024. "2020-02-19: The web app isn't fully ready just yet"
    • "About". fifteen.ai (Official website). March 2, 2020. Archived from the original on March 3, 2020. Retrieved December 23, 2024.
  9. Scotellaro 2020a; Scotellaro 2020b.
  10. Zwiezen 2021; Clayton 2021; Ruppert 2021; Yoshiyuki 2021.
  11. 遊戲 2021.
  12. Kurosawa 2021.
  13. Morton 2021.
  14. Play.ht 2024.
  15. Lawrence 2022; Williams 2022; Wright 2022.
  16. Phillips 2022; Lopez 2022.
  17. Wright 2022; Phillips 2022.
  18. fifteenai 2022.
  19. ElevenLabs 2024a; Play.ht 2024.
  20. Williams 2022.
  21. Phillips 2022.
  22. Zwiezen 2021; Clayton 2021; Morton 2021; Ruppert 2021.
  23. Yoshiyuki 2021.
  24. Knight 2017.
  25. www.equestriacn.com 2022.
  26. Clayton 2021; Ruppert 2021; Moto 2021; Scotellaro 2020; Villalobos 2021.
  27. Clayton 2021.
  28. Zwiezen 2021.
  29. MrSun 2021.
  30. Paltridge 2021.
  31. do Prado 2021.
  32. zuntata.tistory.com 2021.
  33. Li 2021.
  34. MrSun 2021; Anirudh VK 2023.
  35. Wright 2023.
  36. ElevenLabs 2024b.
  37. OpenAI 2024.

Works cited
