<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[TestingCatalog]]></title><description><![CDATA[Reporting AI nonsense. A future news media, driven by virtual assistants 🤖]]></description><link>https://www.testingcatalog.com/</link><image><url>https://www.testingcatalog.com/favicon.png</url><title>TestingCatalog</title><link>https://www.testingcatalog.com/</link></image><generator>Ghost 5.101</generator><lastBuildDate>Wed, 20 Nov 2024 19:57:12 GMT</lastBuildDate><atom:link href="https://www.testingcatalog.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Google unveils LearnLM 1.5 Pro Experimental model on AI Studio]]></title><description><![CDATA[Discover Google's new LearnLM 1.5 Pro Experimental model on AI Studio, designed for teaching and learning with advanced reasoning. Explore UI updates and model stats.]]></description><link>https://www.testingcatalog.com/google-unveils-learnlm-1-5-pro-experimental-model-on-ai-studi/</link><guid isPermaLink="false">673d29a793d40f0001f93eb6</guid><category><![CDATA[AI Studio]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Wed, 20 Nov 2024 00:33:15 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-aistudio_google_com-2024_11_19-23_50_46--1-.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-aistudio_google_com-2024_11_19-23_50_46--1-.jpeg" alt="Google unveils LearnLM 1.5 Pro Experimental model on AI Studio"><p>Google recently released a new experimental model on AI Studio called LearnLM 1.5 Pro Experimental, located under the preview section. According to the official documentation, this is a task-specific model trained to align with learning science principles when following system instructions for teaching and learning use cases. For instance, the model can take on tasks to act as an expert or guide to educate users on specific topics. Although not yet confirmed, some users have observed that it demonstrates chain-of-thought reasoning capabilities.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-aistudio_google_com-2024_11_19-20_09_10.jpeg" class="kg-image" alt="Google unveils LearnLM 1.5 Pro Experimental model on AI Studio" loading="lazy" width="1920" height="934" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/screenshot-aistudio_google_com-2024_11_19-20_09_10.jpeg 600w, https://www.testingcatalog.com/content/images/size/w1000/2024/11/screenshot-aistudio_google_com-2024_11_19-20_09_10.jpeg 1000w, https://www.testingcatalog.com/content/images/size/w1600/2024/11/screenshot-aistudio_google_com-2024_11_19-20_09_10.jpeg 1600w, https://www.testingcatalog.com/content/images/2024/11/screenshot-aistudio_google_com-2024_11_19-20_09_10.jpeg 1920w" sizes="(min-width: 1200px) 1200px"></figure><p>In addition to this, a couple of minor updates for AI Studio are still in development. <a href="https://www.testingcatalog.com/ai-studio-ui-revamp-under-development-inspired-by-gemiini/">The ongoing UI overhaul</a> continues to progress. 
<p>On the UI side, a new feature has been introduced in the form of hover cards for models, which provide detailed stats, including latency, cost per token, input/output limits, and other parameters.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-aistudio_google_com-2024_11_19-23_50_46.jpeg" class="kg-image" alt="Google unveils LearnLM 1.5 Pro Experimental model on AI Studio" loading="lazy" width="2000" height="1000" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/screenshot-aistudio_google_com-2024_11_19-23_50_46.jpeg 600w, https://www.testingcatalog.com/content/images/size/w1000/2024/11/screenshot-aistudio_google_com-2024_11_19-23_50_46.jpeg 1000w, https://www.testingcatalog.com/content/images/size/w1600/2024/11/screenshot-aistudio_google_com-2024_11_19-23_50_46.jpeg 1600w, https://www.testingcatalog.com/content/images/size/w2400/2024/11/screenshot-aistudio_google_com-2024_11_19-23_50_46.jpeg 2400w" sizes="(min-width: 1200px) 1200px"></figure><p>Another update is the addition of a Model Information tab, which displays a list of all available models alongside granular descriptions. This helps users differentiate between models more effectively. Lastly, <a href="https://www.testingcatalog.com/tag/ai-studio/">AI Studio</a> will categorise its models into three distinct groups: Gemini 1.5 models, Gemma models, and Experimental preview models.</p>]]></content:encoded></item><item><title><![CDATA[Gemini debuts Saved Info, blending manual and automatic AI memory]]></title><description><![CDATA[Discover Google Gemini's new Saved Info feature, allowing manual memory input and automatic storage during chats. Enjoy full control and longer context windows.]]></description><link>https://www.testingcatalog.com/gemini-debuts-saved-info-blending-manual-and-automatic-ai-memory/</link><guid isPermaLink="false">673d282493d40f0001f93ea1</guid><category><![CDATA[Gemini]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Wed, 20 Nov 2024 00:28:01 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-gemini_google_com-2024_11_19-17_21_42.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-gemini_google_com-2024_11_19-17_21_42.jpeg" alt="Gemini debuts Saved Info, blending manual and automatic AI memory"><p>Google recently introduced a new feature for Gemini called Saved Info. This feature is accessible in the Settings menu, where users are directed to a new interface that allows them to add memories manually for Gemini to remember. In addition to manual input, users can explicitly ask Gemini to remember details during conversations, and these memories will be stored automatically, similar to how ChatGPT handles this functionality.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Rolling out starting today, you can ask Gemini Advanced to remember your interests and preferences for more helpful, relevant responses.
Easily view, edit, or delete any information you've shared, and see when it’s used.<br><br>Try it in Gemini Advanced → <a href="https://t.co/Yh38BPvqjp?ref=testingcatalog.com">https://t.co/Yh38BPvqjp</a> <a href="https://t.co/gR354OZxnV?ref=testingcatalog.com">pic.twitter.com/gR354OZxnV</a></p>— Google Gemini App (@GeminiApp) <a href="https://twitter.com/GeminiApp/status/1858929151476199591?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 19, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><p>While this feature closely mirrors OpenAI’s implementation, Gemini offers the added flexibility of manually adding new memories. The same interface also enables users to remove or edit stored information, providing full control over the memory data.</p><p>Another distinguishing aspect of <a href="https://www.testingcatalog.com/tag/gemini/">Gemini</a> is its ability to operate with significantly longer context windows, supporting up to 2 million tokens. However, the memory capacity for Saved Info is capped at 2500 tokens, slightly less than ChatGPT’s limit of 2800 tokens.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-gemini_google_com-2024_11_19-17_21_22.jpeg" class="kg-image" alt="Gemini debuts Saved Info, blending manual and automatic AI memory" loading="lazy" width="2000" height="1000" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/screenshot-gemini_google_com-2024_11_19-17_21_22.jpeg 600w, https://www.testingcatalog.com/content/images/size/w1000/2024/11/screenshot-gemini_google_com-2024_11_19-17_21_22.jpeg 1000w, https://www.testingcatalog.com/content/images/size/w1600/2024/11/screenshot-gemini_google_com-2024_11_19-17_21_22.jpeg 1600w, https://www.testingcatalog.com/content/images/size/w2400/2024/11/screenshot-gemini_google_com-2024_11_19-17_21_22.jpeg 2400w" sizes="(min-width: 1200px) 1200px"></figure><p>This update is accompanied by a refreshed official changelog, outlining the new feature and enhancements.</p>]]></content:encoded></item><item><title><![CDATA[Anthropic develops preferences feature for Claude responses]]></title><description><![CDATA[Anthropic's upcoming custom styles feature lets users test and refine content generation with Claude. New tooltips and account preferences enhance customization.]]></description><link>https://www.testingcatalog.com/anthropic-develops-preferences-feature-for-claude-responses/</link><guid isPermaLink="false">673d26ca93d40f0001f93e9c</guid><category><![CDATA[Claude]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Wed, 20 Nov 2024 00:23:12 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-claude_ai-2024_11_19-23_15_56.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-claude_ai-2024_11_19-23_15_56.png" alt="Anthropic develops preferences feature for Claude responses"><p>Anthropic is actively working on its custom styles feature. In the latest version, currently in development, users will be able to test custom styles by selecting one and asking Claude to generate content such as a short story, an email, or educational material. 
This will allow users to evaluate how their <a href="https://www.testingcatalog.com/work-continues-on-anthropics-claude-custom-styles-tooling/">custom style</a> performs and what kind of output they can expect.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-claude_ai-2024_11_19-17_56_53.png" class="kg-image" alt="Anthropic develops preferences feature for Claude responses" loading="lazy" width="2000" height="1041" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/screenshot-claude_ai-2024_11_19-17_56_53.png 600w, https://www.testingcatalog.com/content/images/size/w1000/2024/11/screenshot-claude_ai-2024_11_19-17_56_53.png 1000w, https://www.testingcatalog.com/content/images/size/w1600/2024/11/screenshot-claude_ai-2024_11_19-17_56_53.png 1600w, https://www.testingcatalog.com/content/images/size/w2400/2024/11/screenshot-claude_ai-2024_11_19-17_56_53.png 2400w" sizes="(min-width: 1200px) 1200px"></figure><p>A new tooltip has also been added to the interface. It suggests that users can choose from preset styles or create their own to customize the tone of voice, vocabulary, level of detail, and more. The tooltip includes a link to an FAQ page, although the page is not yet available.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-claude_ai-2024_11_19-17_55_24.png" class="kg-image" alt="Anthropic develops preferences feature for Claude responses" loading="lazy" width="2000" height="1041" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/screenshot-claude_ai-2024_11_19-17_55_24.png 600w, https://www.testingcatalog.com/content/images/size/w1000/2024/11/screenshot-claude_ai-2024_11_19-17_55_24.png 1000w, https://www.testingcatalog.com/content/images/size/w1600/2024/11/screenshot-claude_ai-2024_11_19-17_55_24.png 1600w, https://www.testingcatalog.com/content/images/size/w2400/2024/11/screenshot-claude_ai-2024_11_19-17_55_24.png 2400w" sizes="(min-width: 1200px) 1200px"></figure><p>Anthropic is further developing an account preferences feature, which will be accessible in the settings. This feature will allow users to set a system prompt to specify their preferences, likes, or expectations. For example, a user could clarify that when they refer to “Cloud,” they mean <a href="https://www.testingcatalog.com/tag/claude/">Claude</a> by Anthropic and not Cloud, like in Google Cloud.</p><p>While there is no confirmed release date for these features, they are actively being worked on and could become available at any time.</p>]]></content:encoded></item><item><title><![CDATA[ChatGPT Advanced Voice Mode now rolling out to desktop browsers]]></title><description><![CDATA[OpenAI's Advanced Voice Mode is rolling out to desktop browsers, enabling voice interactions with ChatGPT. Free users will gain access soon. 
Stay tuned!]]></description><link>https://www.testingcatalog.com/chatgpt-advanced-voice-mode-now-rolling-out-to-desktop-browsers/</link><guid isPermaLink="false">673d253d93d40f0001f93e97</guid><category><![CDATA[ChatGPT News]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Wed, 20 Nov 2024 00:15:06 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-chatgpt_com-2024_11_19-22_20_29.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-chatgpt_com-2024_11_19-22_20_29.png" alt="ChatGPT Advanced Voice Mode now rolling out to desktop browsers"><p>OpenAI recently announced that the Advanced Voice Mode will soon become available on the web for desktop browsers. This feature is rolling out gradually per device, meaning you might see it on one browser but not another initially. However, a full rollout is expected shortly.</p><p>It was also mentioned that the Advanced Voice Mode will become available to free users in the coming weeks. While this suggests a potentially extended timeline, the inclusion of free users is clearly planned. </p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Another Advanced Voice update for you—it’s rolling out now on <a href="https://t.co/nYW5KO1aIg?ref=testingcatalog.com">https://t.co/nYW5KO1aIg</a> on desktop for all paid users.<br><br>So you can easily learn how to say the things you're doing an entire presentation on. <a href="https://t.co/n138fy4QeG?ref=testingcatalog.com">pic.twitter.com/n138fy4QeG</a></p>— OpenAI (@OpenAI) <a href="https://twitter.com/OpenAI/status/1858948388005572987?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 19, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><p>OpenAI, the company behind <a href="https://www.testingcatalog.com/tag/chatgpt/">ChatGPT</a>, designed the Advanced Voice Mode to allow natural and conversational AI interactions through voice. This functionality enables users to interact with ChatGPT using voice, similar to the voice interaction features already available on desktop and mobile apps. </p><p>The availability of this feature on the web likely represents the final step in its public release. The next anticipated feature from OpenAI is the introduction of Vision capabilities for Advanced Voice Mode, which will expand its functionality further.</p>]]></content:encoded></item><item><title><![CDATA[Create stunning logos with AI tool LogoCreator]]></title><description><![CDATA[Built on Together AI's API and the Flux Pro 1.1 image model, this open-source tool promises to simplify the process of logo creation for users]]></description><link>https://www.testingcatalog.com/create-stunning-logos-with-ai-tool-logocreator/</link><guid isPermaLink="false">673b354bea7c870001bf5af4</guid><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Faraz]]></dc:creator><pubDate>Tue, 19 Nov 2024 19:52:12 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-www_logo-creator_io-2024_11_19-20_49_46.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-www_logo-creator_io-2024_11_19-20_49_46.jpeg" alt="Create stunning logos with AI tool LogoCreator"><p>LogoCreator is a new addition to the growing portfolio of AI-powered design tools.
Built on Together AI's API and the <em>Flux Pro 1.1</em> image model, this open-source tool promises to simplify the process of logo creation for users, allowing them to generate professional-quality logos within seconds. The tool's rate-limited access—three logos per user—highlights the resource-intensive nature of its AI model while offering flexibility for extended usage through a Together API key.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-www_logo-creator_io-2024_11_19-20_50_09.jpeg" class="kg-image" alt="Create stunning logos with AI tool LogoCreator" loading="lazy" width="1920" height="934" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/screenshot-www_logo-creator_io-2024_11_19-20_50_09.jpeg 600w, https://www.testingcatalog.com/content/images/size/w1000/2024/11/screenshot-www_logo-creator_io-2024_11_19-20_50_09.jpeg 1000w, https://www.testingcatalog.com/content/images/size/w1600/2024/11/screenshot-www_logo-creator_io-2024_11_19-20_50_09.jpeg 1600w, https://www.testingcatalog.com/content/images/2024/11/screenshot-www_logo-creator_io-2024_11_19-20_50_09.jpeg 1920w" sizes="(min-width: 1200px) 1200px"></figure><h2 id="leaked-features-and-technical-framework">Leaked Features and Technical Framework</h2><p><a href="https://www.logo-creator.io/?ref=testingcatalog.com">LogoCreator</a>'s tech stack highlights its focus on efficiency and scalability. The tool integrates cutting-edge solutions such as:</p><ul><li><em>Next.js</em> for a robust front-end framework</li><li><em>Clerk.dev</em> for authentication</li><li><em>Upstash Redis</em> for rate limiting</li><li><em>Helicone.ai</em> for AI observability</li></ul><p>This combination ensures the tool's functionality is accessible without overwhelming its backend resources. The app's repository is available on GitHub, offering transparency and opportunities for community-driven improvements.</p><h2 id="the-company-behind-logocreator">The Company Behind LogoCreator</h2><p>Together AI, the organization powering the underlying technology, is known for leveraging AI to tackle creative and productivity challenges.
<a href="https://www.logo-creator.io/?ref=testingcatalog.com">LogoCreator</a> aligns with their mission to make advanced AI tools widely available, targeting:</p><ul><li>Creators</li><li>Small businesses</li><li>Developers</li></ul><p>By making the app open source, they further encourage innovation and collaboration within the tech community.</p><h2 id></h2>]]></content:encoded></item><item><title><![CDATA[Mistral beta features powered by Pixtral Large now available for free]]></title><description><![CDATA[Discover Mistral AI's latest updates: Pixtral Large, a 124B parameter multimodal model, and Le Chat's enhanced AI assistant platform with web search and task automation.]]></description><link>https://www.testingcatalog.com/mistral-beta-features-powered-by-pixtral-large-now-available-for-free/</link><guid isPermaLink="false">673bbc20ea7c870001bf5b16</guid><category><![CDATA[Mistral]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Mon, 18 Nov 2024 23:37:20 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-chat_mistral_ai-2024_11_18-17_18_21.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-chat_mistral_ai-2024_11_18-17_18_21.jpeg" alt="Mistral beta features powered by Pixtral Large now available for free"><p>Mistral AI has introduced two major updates: the release of <strong>Pixtral Large</strong>, a new multimodal model, and significant enhancements to its AI assistant platform, <strong>Le Chat</strong>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.testingcatalog.com/content/images/2024/11/Screenshot-2024-11-18-at-17.17.45.png" class="kg-image" alt="Mistral beta features powered by Pixtral Large now available for free" loading="lazy" width="709" height="441" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/Screenshot-2024-11-18-at-17.17.45.png 600w, https://www.testingcatalog.com/content/images/2024/11/Screenshot-2024-11-18-at-17.17.45.png 709w"><figcaption><span style="white-space: pre-wrap;">Pixtral Large evals</span></figcaption></figure><p><a href="https://www.testingcatalog.com/mistral-le-platforme-reveals-three-new-ai-models-ahead-of-launch/"><strong>Pixtral Large</strong></a> is a 124 billion parameter model designed for both text and image processing. It builds on the capabilities of Mistral Large 2 and excels in understanding documents, charts, and natural images. The model performs strongly on benchmarks like MathVista (69.4%) and outperforms competitors such as GPT-4o and Gemini-1.5 Pro in tasks like document question answering (DocVQA) and chart analysis (ChartQA). It combines a 123B parameter multimodal decoder with a 1B parameter vision encoder, allowing it to process up to 30 high-resolution images within a 128K context window. 
This makes it suitable for complex visual tasks, including mathematical reasoning and document analysis.</p><figure class="kg-card kg-video-card kg-width-regular" data-kg-thumbnail="https://www.testingcatalog.com/content/media/2024/11/mistral_thumb.jpg" data-kg-custom-thumbnail> <div class="kg-video-container"> <video src="https://www.testingcatalog.com/content/media/2024/11/mistral.mp4" poster="https://img.spacergif.org/v1/2032x1080/0a/spacer.png" width="2032" height="1080" loop autoplay muted playsinline preload="metadata" style="background: transparent url('https://www.testingcatalog.com/content/media/2024/11/mistral_thumb.jpg') 50% 50% / cover no-repeat;"></video> <div class="kg-video-overlay"> <button class="kg-video-large-play-icon" aria-label="Play video"> <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"> <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/> </svg> </button> </div> <div class="kg-video-player-container kg-video-hide"> <div class="kg-video-player"> <button class="kg-video-play-icon" aria-label="Play video"> <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"> <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/> </svg> </button> <button class="kg-video-pause-icon kg-video-hide" aria-label="Pause video"> <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"> <rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"/> <rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"/> </svg> </button> <span class="kg-video-current-time">0:00</span> <div class="kg-video-time"> /<span class="kg-video-duration">2:09</span> </div> <input type="range" class="kg-video-seek-slider" max="100" value="0"> <button class="kg-video-playback-rate" aria-label="Adjust playback speed">1×</button> <button class="kg-video-unmute-icon" aria-label="Unmute"> <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"> <path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"/> </svg> </button> <button class="kg-video-mute-icon kg-video-hide" aria-label="Mute"> <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"> <path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"/> </svg> </button> <input type="range" class="kg-video-volume-slider" max="100" value="100"> </div> </div> </div> </figure><p>Mistral AI also updated its <strong>Le Chat</strong> platform, which now includes:</p><ol><li><a href="https://www.testingcatalog.com/mistral-ai-tests-brave-powered-search-and-image-generation/">Web search with citations</a></li><li>A new "Canvas" tool for collaborative ideation</li><li>Advanced document and image understanding powered by Pixtral Large</li></ol><p>Users can also generate images using Black Forest Labs' Flux Pro 1.1 model. 
Additionally, Le Chat introduces task automation through "agents," allowing users to automate workflows such as receipt scanning or meeting summarization; these agents can use Pixtral Large as their base model.</p><p>These developments reflect Mistral's strategy to provide cutting-edge AI tools for both research and commercial use while maintaining accessibility through free tiers during beta testing.</p>]]></content:encoded></item><item><title><![CDATA[Perplexity’s new shopping assistant offers snap-to-shop and fast checkout]]></title><description><![CDATA[Discover Perplexity's new AI-powered shopping assistant, featuring one-click checkout, visual search, and unbiased product recommendations to enhance your online shopping experience.]]></description><link>https://www.testingcatalog.com/perplexitys-new-shopping-assistant-offers-snap-to-shop-and-fast-checkout/</link><guid isPermaLink="false">673bbc92ea7c870001bf5b1b</guid><category><![CDATA[Perplexity]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Mon, 18 Nov 2024 23:31:42 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-www_perplexity_ai-2024_11_19-00_31_10.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-www_perplexity_ai-2024_11_19-00_31_10.jpeg" alt="Perplexity’s new shopping assistant offers snap-to-shop and fast checkout"><p>Perplexity has introduced a new AI-powered shopping assistant, aiming to streamline the online shopping experience for users. Key features include:</p><ul><li><a href="https://www.testingcatalog.com/black-friday-sparks-perplexitys-pro-one-click-shopping-rollout/"><strong>Buy with Pro</strong>:</a> A one-click checkout option for Perplexity Pro users in the U.S., allowing them to purchase select products directly through the platform. This feature simplifies the buying process by securely storing shipping and billing information and offering free shipping on all orders made through "Buy with Pro." If this service is unavailable for a product, users are redirected to the merchant’s site to complete their purchase.</li></ul><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Introducing Perplexity Shopping: a one-stop solution where you can research and purchase products. It marks a big leap forward in how we serve our users – empowering seamless native actions right from an answer. Shopping online just got 10x more easy and fun. <a href="https://t.co/gjMZO6VIzQ?ref=testingcatalog.com">pic.twitter.com/gjMZO6VIzQ</a></p>— Perplexity (@perplexity_ai) <a href="https://twitter.com/perplexity_ai/status/1858556244891758991?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 18, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><ul><li><strong>Snap to Shop</strong>: A visual search tool that lets users find products by uploading photos, making it easier to shop without needing detailed product information.</li></ul><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">The new <a href="https://twitter.com/perplexity_ai?ref_src=twsrc%5Etfw&ref=testingcatalog.com">@perplexity_ai</a> "Snap to Shop" feature is pretty magical.
<a href="https://t.co/r3mNfHUAwG?ref=testingcatalog.com">pic.twitter.com/r3mNfHUAwG</a></p>— Greg Feingold (@GregFeingold) <a href="https://twitter.com/GregFeingold/status/1858559783340560391?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 18, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><ul><li><strong>Unbiased Product Recommendations</strong>: When users ask shopping-related questions, Perplexity provides product recommendations through easy-to-read cards that are not sponsored. These recommendations are based on integrations with platforms like Shopify, ensuring access to up-to-date product information.</li></ul><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">💡</div><div class="kg-callout-text">To experience Perplexity Pro Shopping during Black Friday you can use our 10$ referral <a href="https://perplexity.ai/pro?referral_code=BFE33Y5Y&ref=testingcatalog.com">discount code</a></div></div><p>Additionally, Perplexity has launched a <a href="https://www.perplexity.ai/hub/blog/shop-like-a-pro?ref=testingcatalog.com"><strong>Merchant Program</strong></a>, allowing retailers to share product data with Perplexity. This program offers benefits like increased visibility in search results, payment integrations for seamless checkout, and access to a custom dashboard for insights into shopping trends.</p><p>The company plans to expand these features beyond the U.S. market in the future, enhancing its global reach in the e-commerce space.</p>]]></content:encoded></item><item><title><![CDATA[Black Friday sparks Perplexity’s Pro one-click shopping rollout]]></title><description><![CDATA[Perplexity is making a final push in preparation for the upcoming Black Friday with their Perplexity Pro shopping offering.]]></description><link>https://www.testingcatalog.com/black-friday-sparks-perplexitys-pro-one-click-shopping-rollout/</link><guid isPermaLink="false">673a7aa3ea7c870001bf5aa1</guid><category><![CDATA[Perplexity]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Mon, 18 Nov 2024 08:27:05 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-www_perplexity_ai-2024_11_17-00_41_21.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-www_perplexity_ai-2024_11_17-00_41_21.png" alt="Black Friday sparks Perplexity’s Pro one-click shopping rollout"><p>Perplexity is making a final push in preparation for the upcoming Black Friday with their Perplexity Pro shopping offering. This feature, discovered earlier, is <a href="https://www.testingcatalog.com/perplexity-progresses-towards-one-click-shopping-with-buy-with-pro/">already being partially rolled out</a> to users. Perplexity is introducing new shopping widgets in its results that allow for quick purchasing. These widgets include options labelled “Buy with Pro” and provide various details like reviews and summaries of pros and cons.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">💡</div><div class="kg-callout-text">To experience Perplexity Pro Shopping during Black Friday you can use our 10$ referral <a href="https://perplexity.ai/pro?referral_code=BFE33Y5Y&ref=testingcatalog.com">discount code</a></div></div><p>Update: Perplexity Pro Shopping has been officially rolled out in US. 
It also came along with an announcement of a new <a href="https://www.perplexity.ai/hub/blog/shop-like-a-pro?ref=testingcatalog.com">Merchant Program</a> 🛒 </p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Introducing Perplexity Shopping: a one-stop solution where you can research and purchase products. It marks a big leap forward in how we serve our users – empowering seamless native actions right from an answer. Shopping online just got 10x more easy and fun. <a href="https://t.co/gjMZO6VIzQ?ref=testingcatalog.com">pic.twitter.com/gjMZO6VIzQ</a></p>— Perplexity (@perplexity_ai) <a href="https://twitter.com/perplexity_ai/status/1858556244891758991?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 18, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><p>Currently, purchases redirect users to related payment providers. However, an upcoming feature, potentially announced soon, will enable users to set up payment options and shipping addresses <a href="https://www.testingcatalog.com/redesigned-spaces-and-purchases-coming-soon-to-perplexity-users/">directly in Perplexity Settings</a>. This will unlock one-click buying for Pro subscribers and cover shipping costs, making purchases effectively free of shipping fees. Perplexity may act as an intermediary, <a href="https://www.testingcatalog.com/perplexity-tests-an-internal-payment-system-pplx/">holding users’ payments</a> before transferring them to merchants. According to descriptions found in the settings menu, users will also be able to track their purchases within Perplexity. Essentially, Perplexity aims to streamline the entire shopping process.</p><figure class="kg-card kg-video-card kg-width-regular kg-card-hascaption" data-kg-thumbnail="https://www.testingcatalog.com/content/media/2024/11/pplxshopping_thumb.jpg" data-kg-custom-thumbnail> <div class="kg-video-container"> <video src="https://www.testingcatalog.com/content/media/2024/11/pplxshopping.mp4" poster="https://img.spacergif.org/v1/1916x1080/0a/spacer.png" width="1916" height="1080" loop autoplay muted playsinline preload="metadata" style="background: transparent url('https://www.testingcatalog.com/content/media/2024/11/pplxshopping_thumb.jpg') 50% 50% / cover no-repeat;"></video> <div class="kg-video-overlay"> <button class="kg-video-large-play-icon" aria-label="Play video"> <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"> <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/> </svg> </button> </div> <div class="kg-video-player-container kg-video-hide"> <div class="kg-video-player"> <button class="kg-video-play-icon" aria-label="Play video"> <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"> <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/> </svg> </button> <button class="kg-video-pause-icon kg-video-hide" aria-label="Pause video"> <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"> <rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"/> <rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"/> </svg> </button> <span class="kg-video-current-time">0:00</span> <div class="kg-video-time"> /<span class="kg-video-duration">0:12</span> </div> <input type="range" class="kg-video-seek-slider" max="100" value="0"> <button class="kg-video-playback-rate" 
aria-label="Adjust playback speed">1×</button> <button class="kg-video-unmute-icon" aria-label="Unmute"> <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"> <path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"/> </svg> </button> <button class="kg-video-mute-icon kg-video-hide" aria-label="Mute"> <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"> <path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"/> </svg> </button> <input type="range" class="kg-video-volume-slider" max="100" value="100"> </div> </div> </div> <figcaption><p><span style="white-space: pre-wrap;">Perplexity Pro Shopping settings</span></p></figcaption> </figure><p>Comparing offers on major stores like Amazon is often time-consuming, but Perplexity seems poised to handle this through its shopping UI. Users will also be able to customize product variants, such as choosing different storage capacities when buying a phone. Evidence suggests partnerships with Amazon and potentially Klarna or Shopify, as their names have appeared in code snippets. Amazon, being a significant investor in Perplexity, seems likely confirmed. </p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-www_perplexity_ai-2024_11_17-18_54_53.png" class="kg-image" alt="Black Friday sparks Perplexity’s Pro one-click shopping rollout" loading="lazy" width="2000" height="1041" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/screenshot-www_perplexity_ai-2024_11_17-18_54_53.png 600w, https://www.testingcatalog.com/content/images/size/w1000/2024/11/screenshot-www_perplexity_ai-2024_11_17-18_54_53.png 1000w, https://www.testingcatalog.com/content/images/size/w1600/2024/11/screenshot-www_perplexity_ai-2024_11_17-18_54_53.png 1600w, https://www.testingcatalog.com/content/images/size/w2400/2024/11/screenshot-www_perplexity_ai-2024_11_17-18_54_53.png 2400w" sizes="(min-width: 1200px) 1200px"><figcaption><span style="white-space: pre-wrap;">Preplexity Pro shopping widget</span></figcaption></figure><p>Black Friday provides the perfect timing for this feature’s release, enabling Perplexity to test hypotheses and capture consumer attention. For now, the feature is expected to launch in the US, with broader releases remaining uncertain.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="art" dir="ltr">🚢</p>— Aravind Srinivas (@AravSrinivas) <a href="https://twitter.com/AravSrinivas/status/1858221036170592668?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 17, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><p>This feature’s impact on Perplexity’s monetization strategy will be noteworthy. 
Perplexity has reportedly <strong>started experimenting with ads</strong>, discovered through data mining, with tests conducted in “shadow mode.” This means ads are not visible to users but allow Perplexity to gather data on potential revenue from impressions and artificial conversion metrics. These ads, alongside affiliate commissions (e.g., from Amazon), could complement Perplexity’s existing subscription revenue. However, visible ads may not appear until after Black Friday.</p><p>Observing these <a href="https://www.testingcatalog.com/tag/perplexity/">Perplexity</a> developments and upcoming announcements will be interesting, particularly the interplay between ads, affiliate commissions, and Pro subscription offerings!</p>]]></content:encoded></item><item><title><![CDATA[Experimental AI models from OpenAI and Google flood lmarena]]></title><description><![CDATA[Discover the latest experimental AI models on LMArena, including OpenAI's Anonymous Chatbot and Google's Secret Chatbot and Mystery Gemini 3. Explore their potential now!]]></description><link>https://www.testingcatalog.com/experimental-ai-models-from-openai-and-google-flood-lmarena/</link><guid isPermaLink="false">673a29ffea7c870001bf5a52</guid><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Mon, 18 Nov 2024 08:26:09 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-lmarena_ai-2024_11_16-21_02_28.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-lmarena_ai-2024_11_16-21_02_28.jpeg" alt="Experimental AI models from OpenAI and Google flood lmarena"><p>Recently, a bunch of new experimental models appeared on lmarena. First is the <strong>Anonymous Chatbot</strong>, a name previously used by OpenAI for their experimental models. This could potentially be the 4o update from November 11, as some users have reported noticing differences in model responses on ChatGPT. However, it’s also possible that this model represents something more advanced.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">anonymous-chatbot is back in the arena<br><br>this is usually reserved for GPT-4o model updates inside ChatGPT 👀 <a href="https://t.co/OGSRJ7a6Ty?ref=testingcatalog.com">pic.twitter.com/OGSRJ7a6Ty</a></p>— ʟᴇɢɪᴛ (@legit_rumors) <a href="https://twitter.com/legit_rumors/status/1857827554884755553?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 16, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><p>In addition, Google has introduced two new models. One of them, called <strong>Secret Chatbot</strong>, is identified as Gemini and reportedly performs very well. This could potentially be a larger version of the <a href="https://www.testingcatalog.com/new-ai-model-gemini-experimental-1114-debuts-on-google-ai-studio/">experimental model released last week</a>, which Google has teased will officially arrive next week.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Gemini-exp-1114 is now available via the Gemini API, happy building / testing! 
Will follow up Monday with more 🚢<a href="https://t.co/V4PD3Dndqz?ref=testingcatalog.com">https://t.co/V4PD3Dndqz</a></p>— Logan Kilpatrick (@OfficialLoganK) <a href="https://twitter.com/OfficialLoganK/status/1857535825895993366?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 15, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><p>The second model from Google is called <strong>Mystery Gemini 3</strong>. Unlike <strong>Secret Chatbot</strong>, it does not seem to perform as impressively. Overall, Google is running a wide range of experimental models on lmarena, making it difficult to determine whether any of them stand out or represent significant advancements.</p><p>If you haven't heard of <a href="https://lmarena.ai/?ref=testingcatalog.com">lmarena</a> yet, it is an LLM battleground where experimental models are added anonymously and may appear only at random in Battle mode.</p>]]></content:encoded></item><item><title><![CDATA[Mistral La Plateforme reveals three new AI models ahead of launch]]></title><description><![CDATA[Discover the latest updates from Mistral AI, including new models like Mistral Large 2411 and Pixtral Large 2411, potentially launching within weeks. Stay tuned!]]></description><link>https://www.testingcatalog.com/mistral-le-platforme-reveals-three-new-ai-models-ahead-of-launch/</link><guid isPermaLink="false">673a2b97ea7c870001bf5a57</guid><category><![CDATA[Mistral]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Sun, 17 Nov 2024 21:28:50 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-console_mistral_ai-2024_11_16-20_25_00.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-console_mistral_ai-2024_11_16-20_25_00.png" alt="Mistral La Plateforme reveals three new AI models ahead of launch"><p>Users on Reddit recently noticed an update on Mistral's La Plateforme limits page, which now includes three new entries. The first is Mistral Large 2411, potentially a multimodal <a href="https://www.testingcatalog.com/mistral-ai-is-gearing-to-launch-multimodal-large-2-1-with/">Large 2.1 model</a> that has been previously identified in the codebase.</p><p>Additionally, there are Pixtral Large 2411 and Mistral Moderation 2411. The latter could serve as a minimalistic model designed for moderation QA purposes. This discovery suggests that Mistral AI is gearing up for a release, possibly as soon as next week.
The "24-11" likely refers to November 2024, indicating a potential launch within the next two weeks if everything proceeds as planned.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://www.testingcatalog.com/content/images/2024/11/Screenshot-2024-11-16-at-20.23.18-1.png" class="kg-image" alt="Mistral Le Platforme reveals three new AI models ahead of launch" loading="lazy" width="2000" height="1028" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/Screenshot-2024-11-16-at-20.23.18-1.png 600w, https://www.testingcatalog.com/content/images/size/w1000/2024/11/Screenshot-2024-11-16-at-20.23.18-1.png 1000w, https://www.testingcatalog.com/content/images/size/w1600/2024/11/Screenshot-2024-11-16-at-20.23.18-1.png 1600w, https://www.testingcatalog.com/content/images/2024/11/Screenshot-2024-11-16-at-20.23.18-1.png 2000w" sizes="(min-width: 1200px) 1200px"><figcaption><span style="white-space: pre-wrap;">Pandragon on Le Chat</span></figcaption></figure><p>Simultaneously, some users have observed that responses on Le Chat now indicate the model in use is called <a href="https://www.testingcatalog.com/mistral-ai-prepares-to-release-new-pandragon-model-with-advanced-features/">Pandragon</a>. This model was previously identified as a multi-model system from Mistral AI, which, in the future, will also have the capability to interact with tools for drawing images or browsing the Internet.</p>]]></content:encoded></item><item><title><![CDATA[Claude AI got new model selector, Context Connectors in development]]></title><description><![CDATA[Discover the latest updates on Anthropic's Claude AI, including the new model selector for Claude 3.5 Sonnet and the innovative Model Context Protocol (MCP) for seamless integration.]]></description><link>https://www.testingcatalog.com/claude-ai-got-new-model-selector-context-connectors-in-development/</link><guid isPermaLink="false">673a27bbea7c870001bf5a4d</guid><category><![CDATA[Claude]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Sun, 17 Nov 2024 21:20:11 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-claude_ai-2024_11_15-23_24_44.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-claude_ai-2024_11_15-23_24_44.jpeg" alt="Claude AI got new model selector, Context Connectors in development"><p>Recent reports from TestingCatalog and Tibor provide insights into the ongoing development of Anthropic's Claude AI, specifically focusing on the latest updates and upcoming features.</p><p>TestingCatalog highlighted that the older Claude 3.5 Sonnet model is now accessible via an updated model selector. This update allows users to switch between different Claude models more easily, offering flexibility depending on their needs. The newer Claude 3.5 Sonnet model, described as Anthropic's most intelligent to date, is also available, alongside other models like Claude 3 Opus (for creative tasks) and Claude 3 Haiku (a faster model for daily tasks). However, the newest version of Claude 3.5 Haiku is not yet available on Claude.</p><p>Another significant development is the introduction of the Model Context Protocol (MCP), which was mentioned in @btibor91's report. MCP aims to provide a standardized way for applications to supply context to large language models (LLMs) like Claude. 
<p>This protocol separates context provision from LLM interaction, potentially allowing more efficient integration with external systems. The MCP Python SDK supports both client and server capabilities, making it easier for developers to build applications that can interact with LLMs using this protocol.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Well, thank you for the hint, anon - here are all details about the upcoming Model Context Protocol (MCP) by Anthropic<br><br>MCP Python SDK - Python implementation of the Model Context Protocol (MCP), providing both client and server capabilities for integrating with LLM surfaces<br><br>The… <a href="https://t.co/o7an5CCMt3?ref=testingcatalog.com">https://t.co/o7an5CCMt3</a> <a href="https://t.co/tbKQwGDt5s?ref=testingcatalog.com">pic.twitter.com/tbKQwGDt5s</a></p>— Tibor Blaho (@btibor91) <a href="https://twitter.com/btibor91/status/1857184349805838553?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 14, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><p>These updates reflect Anthropic’s broader strategy of enhancing Claude’s versatility across different use cases, from creative writing to complex data analysis. The introduction of MCP suggests a focus on improving how external data and tools integrate with LLMs, potentially expanding <a href="https://www.testingcatalog.com/tag/claude/">Claude’s</a> utility in professional environments.</p>]]></content:encoded></item><item><title><![CDATA[AI Studio UI revamp under development, inspired by Gemini]]></title><description><![CDATA[Discover Google's latest Gemini model now available via API and upcoming AI Studio updates. Expect a sleeker UI with chat bubbles for a better user experience.]]></description><link>https://www.testingcatalog.com/ai-studio-ui-revamp-under-development-inspired-by-gemiini/</link><guid isPermaLink="false">67385e7cea7c870001bf59f6</guid><category><![CDATA[AI Studio]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Sat, 16 Nov 2024 13:04:12 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-aistudio_google_com-2024_11_15-22_50_35.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-aistudio_google_com-2024_11_15-22_50_35.jpeg" alt="AI Studio UI revamp under development, inspired by Gemini"><p>Google recently announced that their latest experimental Gemini model from November 2024 is now available via the API. Additionally, there are hints that more features or updates might be introduced to AI Studio on Monday, though specifics remain unclear.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Gemini-exp-1114 is now available via the Gemini API, happy building / testing! Will follow up Monday with more 🚢<a href="https://t.co/V4PD3Dndqz?ref=testingcatalog.com">https://t.co/V4PD3Dndqz</a></p>— Logan Kilpatrick (@OfficialLoganK) <a href="https://twitter.com/OfficialLoganK/status/1857535825895993366?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 15, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><p>Another change in development seems to involve a redesign of the AI Studio UI.
The new design is expected to be sleeker and more aligned with the consumer-facing <a href="https://www.testingcatalog.com/tag/gemini/">Gemini</a> product. This includes the addition of standardized chat bubbles, creating a more traditional conversation interface.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-aistudio_google_com-2024_11_15-22_54_00.jpeg" class="kg-image" alt="AI Studio UI revamp under development, inspired by Gemini" loading="lazy" width="2000" height="1009" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/screenshot-aistudio_google_com-2024_11_15-22_54_00.jpeg 600w, https://www.testingcatalog.com/content/images/size/w1000/2024/11/screenshot-aistudio_google_com-2024_11_15-22_54_00.jpeg 1000w, https://www.testingcatalog.com/content/images/size/w1600/2024/11/screenshot-aistudio_google_com-2024_11_15-22_54_00.jpeg 1600w, https://www.testingcatalog.com/content/images/size/w2400/2024/11/screenshot-aistudio_google_com-2024_11_15-22_54_00.jpeg 2400w" sizes="(min-width: 1200px) 1200px"></figure><p>Currently, the UI of <a href="https://www.testingcatalog.com/tag/ai-studio/">AI Studio</a> is sharper and less conversational in appearance. These updates aim to address this difference and provide a more user-friendly experience. In the comments, many users agreed that a UI update is necessary. However, some users offered a different perspective, arguing that the current design is already sufficient.</p>]]></content:encoded></item><item><title><![CDATA[Gemini Live debuts on iOS with Google’s new dedicated AI app]]></title><description><![CDATA[Google's new standalone Gemini app for iOS brings feature parity with Android, introducing Gemini Live for interactive AI conversations. Try it now for enhanced AI assistance!]]></description><link>https://www.testingcatalog.com/gemini-live-debuts-on-ios-with-googles-new-dedicated-ai-app/</link><guid isPermaLink="false">67374a61ea7c870001bf59d7</guid><category><![CDATA[Gemini]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Sat, 16 Nov 2024 12:57:49 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-apps_apple_com-2024_11_14-17_40_28-1-1.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-apps_apple_com-2024_11_14-17_40_28-1-1.jpeg" alt="Gemini Live debuts on iOS with Google’s new dedicated AI app"><p>Google has recently launched a standalone Gemini app for iOS, enhancing the accessibility of its AI assistant. Previously, iOS users could access Gemini only through a dedicated tab within the main Google app. In contrast, Android users have had a dedicated Gemini app from the outset. This new release brings feature parity between the two platforms, with the iOS app now offering functionalities comparable to its Android counterpart, except for certain features specific to Android integration. 
Notably, on Android, Gemini can be set as the default assistant in place of Google Assistant, providing additional settings and customization options.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">The Gemini app, now available on iPhone.<br><br>Download it now in the App Store → <a href="https://t.co/et5AspqWJe?ref=testingcatalog.com">https://t.co/et5AspqWJe</a> <a href="https://t.co/5QRC40FTbQ?ref=testingcatalog.com">pic.twitter.com/5QRC40FTbQ</a></p>— Google Gemini App (@GeminiApp) <a href="https://twitter.com/GeminiApp/status/1857145138755084671?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 14, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><p>A significant aspect of this release is the introduction of Gemini Live to iOS users. Previously unavailable within the Google app on iOS, Gemini Live offers an interactive, voice-based conversational experience, marking its debut on the iOS platform. This feature allows users to engage in natural conversations with the AI, enhancing the overall user experience.</p><p>With the standalone <a href="https://www.testingcatalog.com/tag/gemini/">Gemini</a> app, users can interact with the AI assistant through text, voice, or camera inputs, facilitating tasks such as:</p><ol><li>Brainstorming ideas</li><li>Simplifying complex topics</li><li>Practicing for interviews</li></ol><p>The app also integrates with other Google services, enabling seamless control of apps like YouTube Music and Google Maps.</p>]]></content:encoded></item><item><title><![CDATA[ChatGPT introduces ‘Work With Apps’ beta for macOS users]]></title><description><![CDATA[OpenAI recently announced a series of upgrades to their desktop apps for Windows and macOS.]]></description><link>https://www.testingcatalog.com/chatgpt-introduces-work-with-apps-beta-for-macos-users/</link><guid isPermaLink="false">673734ebea7c870001bf59a7</guid><category><![CDATA[ChatGPT News]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Fri, 15 Nov 2024 11:56:20 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/Screenshot-2024-11-14-at-19.20.06.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/Screenshot-2024-11-14-at-19.20.06.jpg" alt="ChatGPT introduces ‘Work With Apps’ beta for macOS users"><p>OpenAI recently announced a series of upgrades to their desktop apps for Windows and macOS. One significant update is that the apps have become <a href="https://x.com/OpenAI/status/1857121721175998692?ref=testingcatalog.com">available to free users</a>, although this feature is still rolling out gradually via the Microsoft Store. Alongside this, the macOS application received a major enhancement.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">ChatGPT 🤝 VS Code, Xcode, Terminal, iTerm2<br><br>ChatGPT for macOS can now work with apps on your desktop. In this early beta for Plus and Team users, you can let ChatGPT look at coding apps to provide better answers.
<a href="https://t.co/3wMCZfby2U?ref=testingcatalog.com">pic.twitter.com/3wMCZfby2U</a></p>— OpenAI Developers (@OpenAIDevs) <a href="https://twitter.com/OpenAIDevs/status/1857129790312272179?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 14, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><p>A key new feature, currently in beta, is Work With Apps. It can be toggled on or off in the settings, where users can also control which apps ChatGPT is allowed to work with. The feature enables ChatGPT to connect to certain apps, understand the context of their main window, and assist based on what is displayed. It currently targets terminal apps and code editors, such as Xcode, VS Code, Apple's built-in Terminal app, and iTerm2. Additionally, users can open multiple windows in split view, and ChatGPT can interpret all of them.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://www.testingcatalog.com/content/images/2024/11/Screenshot-2024-11-14-at-19.36.41-1.jpg" class="kg-image" alt="ChatGPT introduces ‘Work With Apps’ beta for macOS users" loading="lazy" width="1341" height="764" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/Screenshot-2024-11-14-at-19.36.41-1.jpg 600w, https://www.testingcatalog.com/content/images/size/w1000/2024/11/Screenshot-2024-11-14-at-19.36.41-1.jpg 1000w, https://www.testingcatalog.com/content/images/2024/11/Screenshot-2024-11-14-at-19.36.41-1.jpg 1341w" sizes="(min-width: 1200px) 1200px"></figure><p>For VS Code, there is a <a href="https://help.openai.com/en/articles/10128592-how-to-install-the-work-with-apps-visual-studio-code-extension?ref=testingcatalog.com">dedicated extension</a> that provides ChatGPT with additional capabilities. With this extension, users can select a piece of code, allowing ChatGPT to analyze the selected part directly. OpenAI employees have confirmed plans to expand the feature set by adding support for more apps and, eventually, the ability to write into apps directly. In VS Code, for example, ChatGPT might edit code directly in the editor rather than just offering suggestions. Although this capability is not yet available, it is confirmed to be in development.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://www.testingcatalog.com/content/images/2024/11/Screenshot-2024-11-14-at-19.34.58.jpg" class="kg-image" alt="ChatGPT introduces ‘Work With Apps’ beta for macOS users" loading="lazy" width="2000" height="959" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/Screenshot-2024-11-14-at-19.34.58.jpg 600w, https://www.testingcatalog.com/content/images/size/w1000/2024/11/Screenshot-2024-11-14-at-19.34.58.jpg 1000w, https://www.testingcatalog.com/content/images/size/w1600/2024/11/Screenshot-2024-11-14-at-19.34.58.jpg 1600w, https://www.testingcatalog.com/content/images/2024/11/Screenshot-2024-11-14-at-19.34.58.jpg 2168w" sizes="(min-width: 1200px) 1200px"></figure><p>In the main prompt area, users can select the app they want ChatGPT to work with. Once selected, the app appears at the top of the interface, so users can always see which app is currently providing context. 
This feature could be particularly useful for those working across multiple screens, with ChatGPT acting as a co-pilot for navigating complex workflows.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://www.testingcatalog.com/content/images/2024/11/Screenshot-2024-11-14-at-19.23.22.jpg" class="kg-image" alt="ChatGPT introduces ‘Work With Apps’ beta for macOS users" loading="lazy" width="1479" height="877" srcset="https://www.testingcatalog.com/content/images/size/w600/2024/11/Screenshot-2024-11-14-at-19.23.22.jpg 600w, https://www.testingcatalog.com/content/images/size/w1000/2024/11/Screenshot-2024-11-14-at-19.23.22.jpg 1000w, https://www.testingcatalog.com/content/images/2024/11/Screenshot-2024-11-14-at-19.23.22.jpg 1479w" sizes="(min-width: 1200px) 1200px"></figure><p>Unfortunately, voice mode is not yet supported for discussing code. There is, however, an option to open a companion window to enable voice conversations in the same session.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Release notes for <a href="https://twitter.com/OpenAI?ref_src=twsrc%5Etfw&ref=testingcatalog.com">@OpenAI</a> ChatGPT on Windows<br><br>Today's big Windows news was General Availability, but here's that slow, steady progress:<br>- Screenshots (users love this on macOS)<br>- Redesigned sidebar (especially helps with small windows)<br>- Bugfixes<a href="https://t.co/OdRIbhkmdg?ref=testingcatalog.com">https://t.co/OdRIbhkmdg</a> <a href="https://t.co/gkbsy0zNUd?ref=testingcatalog.com">pic.twitter.com/gkbsy0zNUd</a></p>— Alexander Embiricos (@embirico) <a href="https://twitter.com/embirico/status/1857201159598944398?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 14, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><p>These updates highlight OpenAI’s efforts to integrate <a href="https://www.testingcatalog.com/tag/chatgpt/">ChatGPT</a> more deeply into users’ workflows, particularly for developers and technical users. The new features are still evolving, with more capabilities planned for future releases.</p>]]></content:encoded></item><item><title><![CDATA[New AI model Gemini Experimental 1114 debuts on Google AI Studio]]></title><description><![CDATA[Explore Google's new Gemini Experimental 1114 model on AI Studio. With a 32k context window, it excels in reasoning tasks but may respond more slowly. Try it now!]]></description><link>https://www.testingcatalog.com/new-ai-model-gemini-experimental-1114-debuts-on-google-ai-studio/</link><guid isPermaLink="false">6737071fea7c870001bf597a</guid><category><![CDATA[AI Studio]]></category><category><![CDATA[AI News]]></category><dc:creator><![CDATA[Alexey Shabanov]]></dc:creator><pubDate>Fri, 15 Nov 2024 11:38:04 GMT</pubDate><media:content url="https://www.testingcatalog.com/content/images/2024/11/screenshot-aistudio_google_com-2024_11_14-18_03_04.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.testingcatalog.com/content/images/2024/11/screenshot-aistudio_google_com-2024_11_14-18_03_04.jpeg" alt="New AI model Gemini Experimental 1114 debuts on Google AI Studio"><p>Google recently announced the availability of a new experimental model for testing on AI Studio. Named Gemini Experimental 1114, the model takes its title from its announcement date, November 14th. 
This model is accessible in the preview section via the model selector.</p><p>Notably, it comes with a 32k context window, significantly smaller than what other Gemini models offer. Additionally, it lacks search grounding capabilities. Early user feedback suggested the model would excel in reasoning tasks; however, it sometimes takes longer to process problems than its counterparts.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Massive News from Chatbot Arena🔥<a href="https://twitter.com/GoogleDeepMind?ref_src=twsrc%5Etfw&ref=testingcatalog.com">@GoogleDeepMind</a>'s latest Gemini (Exp 1114), tested with 6K+ community votes over the past week, now ranks joint #1 overall with an impressive 40+ score leap — matching 4o-latest in and surpassing o1-preview! It also claims #1 on Vision… <a href="https://t.co/AgfOk9WHNZ?ref=testingcatalog.com">https://t.co/AgfOk9WHNZ</a> <a href="https://t.co/HPmcWE6zzI?ref=testingcatalog.com">pic.twitter.com/HPmcWE6zzI</a></p>— lmarena.ai (formerly lmsys.org) (@lmarena_ai) <a href="https://twitter.com/lmarena_ai/status/1857110672565494098?ref_src=twsrc%5Etfw&ref=testingcatalog.com">November 14, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure><p>Reports from Chatbot Arena indicate that this model has been under testing for some time. It has now risen to the top of the leaderboard, even surpassing OpenAI’s o1-preview. Previously, it was available under the Gemini test label in Chatbot Arena’s battle mode.</p><p>From my own tests, I didn’t notice significant differences, though this might be due to my high expectations. The model delivers decent outputs but occasionally fabricates answers, an expected limitation given that it’s likely a smaller model. Interestingly, it hasn’t been branded as Gemini 2, suggesting it could be either a smaller version of Gemini 2 or a completely new type of model distinct from the core series.</p><p>You can explore this model for free on <a href="https://www.testingcatalog.com/tag/ai-studio/">Google AI Studio</a> and form your own impressions.</p>
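<p>If you would rather test it from code than from the AI Studio UI, experimental models are also reachable through the Gemini API. Below is a minimal sketch using the google-generativeai Python SDK; it assumes the model is exposed under the identifier <code>gemini-exp-1114</code> and that you have generated an API key in AI Studio, so verify both against the current model list before relying on them.</p><pre><code class="language-python"># Minimal sketch: calling the experimental model through the Gemini API.
# Assumptions: the model id "gemini-exp-1114" (unverified) and an API key
# from Google AI Studio stored in the GOOGLE_API_KEY environment variable.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-exp-1114")

# Keep prompts modest: this model reportedly has only a 32k-token context
# window, far below the limits of the regular Gemini 1.5 models.
response = model.generate_content("In two sentences, why is the sky blue?")
print(response.text)
</code></pre><p>Since the model lacks search grounding, answers come purely from its training data, and responses may arrive more slowly than from the Gemini 1.5 models.</p>]]></content:encoded></item></channel></rss>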