== AI Developments ==
August 7, 2023: UBS predicts that Meta will release a new consumer chatbot powered by generative AI at its Connect event in September. It told its clients that the chatbot will likely be available in WhatsApp, Facebook, and Instagram, and projects that the chatbot could add more than $20 billion to Meta's revenue. In a call with analysts last week, CEO Mark Zuckerberg promised to provide more details about the chatbot at Connect. He pointed out that Meta is currently developing a customer service AI bot along with an internal AI tool aimed at boosting profitability. He also indicated that AI investments are expected to drive up operating costs next year<ref>https://finance.yahoo.com/news/ubs-meta-platforms-seen-unveiling-100034715.html</ref>.
August 3, 2023: Meta launches AudioCraft, an open-source AI tool that enables users to generate audio and music from text prompts. AudioCraft comprises three models: MusicGen, AudioGen and EnCodec. Meta said the three models will be available to researchers and practitioners, who will be able to train them on their own data sets. MusicGen was trained using Meta-owned and licensed music, while AudioGen was trained using public sound effects. EnCodec enables high-quality music generation with fewer artifacts. Meta also released pre-trained AudioGen models, which enable users to generate environmental sounds and sound effects such as a dog barking or cars honking. "While we’ve seen a lot of excitement around generative AI for images, video, and text, audio has seemed to lag a bit behind," Meta said in a blog post<ref>https://about.fb.com/news/2023/08/audiocraft-generative-ai-for-music-and-audio/</ref>.