Google releases Bard to a limited number of users in the US and UK

🔥 379 · 💬 621 · one year ago · www.nytimes.com · jbegley
Like similar chatbots, Bard is based on a kind of A.I. technology called a large language model, or L.L.M., which learns skills by analyzing vast amounts of data from across the internet. Because of this, the chatbot often gets facts wrong and sometimes makes up information without warning, a phenomenon A.I. researchers call hallucination. When executives demonstrated the chatbot on Monday, it refused to answer a medical question because doing so would require precise and correct information.

Google posts a disclaimer under Bard's query box warning users that issues may arise: "Bard may display inaccurate or offensive information that doesn't represent Google's views." The company also offers users three response options for each question and lets them give feedback on the usefulness of a particular answer.

Much like Microsoft's Bing chatbot and similar bots from start-ups like You.com and Perplexity, Bard annotates its responses from time to time so people can review its sources. This may make the chatbot more accurate in some cases, but not all. Even with access to the latest online information, it still misstates facts and generates misinformation.


