Google has announced its model, and it has been drawing some criticism. But, as Google says, it is still just an experiment. Plenty of rival models are appearing, and Google seems to have moved early, without enough testing. Expectations are high, and Bard competes directly with Google's very profitable Search business. Use it with caution.
Try Bard and share your feedback
Mar 21, 2023, on the Google Blog.
We’re starting to open access to Bard, an early experiment that lets you collaborate with generative AI. We're beginning with the U.S. and the U.K., and will expand to more countries and languages over time.
Sissie Hsiao, VP, Product
Eli Collins, VP, Research
Animation of text: “Meet Bard, an early experiment by Google.” Followed by sentences explaining what Bard can do, like draft a packing list for a fishing and camping trip.
Today we’re starting to open access to Bard, an early experiment that lets you collaborate with generative AI. This follows our announcements from last week as we continue to bring helpful AI experiences to people, businesses and communities.
You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity. You might ask Bard to give you tips to reach your goal of reading more books this year, explain quantum physics in simple terms or spark your creativity by outlining a blog post. We’ve learned a lot so far by testing Bard, and the next critical step in improving it is to get feedback from more people.
Bard can help you brainstorm some ways to read more books this year.
About Bard
Bard is powered by a research large language model (LLM), specifically a lightweight and optimized version of LaMDA, and will be updated with newer, more capable models over time. It’s grounded in Google's understanding of quality information. You can think of an LLM as a prediction engine. When given a prompt, it generates a response by selecting, one word at a time, from words that are likely to come next. Picking the most probable choice every time wouldn’t lead to very creative responses, so there’s some flexibility factored in. We continue to see that the more people use them, the better LLMs get at predicting what responses might be helpful.
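The "prediction engine" idea above, picking likely next words with some flexibility factored in, is often implemented as temperature-based sampling. A minimal sketch in Python, using a tiny hypothetical vocabulary and made-up scores (this is an illustration of the general technique, not Bard's actual model):

```python
import math
import random

def sample_next_word(word_scores, temperature=1.0, rng=None):
    """Pick the next word from raw scores (logits).

    A low temperature almost always picks the top-scoring word;
    a higher temperature allows more varied, "creative" choices.
    """
    rng = rng or random.Random()
    words = list(word_scores)
    # Softmax with temperature turns raw scores into probabilities.
    scaled = [word_scores[w] / temperature for w in words]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(words, weights=probs, k=1)[0]

# Hypothetical scores for words that might follow "The camping trip was".
scores = {"fun": 3.0, "great": 2.5, "wet": 1.0, "quantum": -2.0}

# Near-zero temperature: effectively always the most probable word.
greedy = sample_next_word(scores, temperature=0.01, rng=random.Random(0))
# Higher temperature: lower-probability words get a real chance.
varied = sample_next_word(scores, temperature=1.5, rng=random.Random(0))
```

Always picking the single most probable word (a very low temperature) would make every response predictable, which is why, as the post notes, some randomness is deliberately factored in.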
While LLMs are an exciting technology, they’re not without their faults. For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs. And they can provide inaccurate, misleading or false information while presenting it confidently. For example, when asked to share a couple of suggestions for easy indoor plants, Bard convincingly presented ideas…but it got some things wrong, like the scientific name for the ZZ plant.