
This week’s AI news focuses on addressing bias and stereotypes in AI-generated and online images.


This newsletter summarizes recent news and developments in the field of Artificial Intelligence (AI). Here’s a breakdown of the main points:

Bias in AI Data Sources

  • A study found that imagery surfaced by Google search was more likely to reinforce gender stereotypes for certain jobs and words than text mentioning the same things.
  • Images also proved more effective at associating roles with one gender: the association held even when people were tested days after viewing them (a toy sketch of such a skew measurement follows this list).
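
To make the study’s measurement concrete, here is a toy sketch of how one might quantify gender skew in image versus text search results. The scoring function and the hard-coded annotations are illustrative assumptions, not the study’s actual method or data.

```python
# Toy audit: compare how strongly image vs. text results for a query
# skew toward one gender. All annotations below are hypothetical; a
# real audit would collect search results and human gender labels.

from collections import Counter

def gender_skew(annotations: list[str]) -> float:
    """Return a skew score in [-1, 1]: -1 = all 'female', +1 = all 'male'."""
    counts = Counter(annotations)
    male, female = counts.get("male", 0), counts.get("female", 0)
    total = male + female
    return 0.0 if total == 0 else (male - female) / total

# Hypothetical gender annotations for the top results of one job query.
image_results = ["male"] * 8 + ["female"] * 2  # e.g. image search for "doctor"
text_results = ["male"] * 6 + ["female"] * 4   # gendered references in text snippets

print(f"image skew: {gender_skew(image_results):+.2f}")  # +0.60
print(f"text skew:  {gender_skew(text_results):+.2f}")   # +0.20
```

Under these made-up numbers, the image results skew three times harder toward one gender than the text results, which is the shape of the disparity the study reports.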

Google’s Image Generator Diversity Fiasco

  • The article also references the recent controversy over Google’s Gemini image generator, which was criticized for overcorrecting for diversity and producing historically inaccurate depictions of people, prompting Google to pause its generation of images of people.

LLMs in Chemistry and Literature

  • Researchers found that Large Language Models (LLMs) can be fine-tuned for chemistry tasks with relatively little training data.
  • Fine-tuned on yes/no question-and-answer pairs, LLMs were also able to extrapolate beyond their training data, making them a useful tool for chemists (a minimal fine-tuning sketch follows this list).
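
As a rough illustration of the approach described above, here is a minimal sketch of fine-tuning a small causal language model on yes/no chemistry Q&A pairs with Hugging Face transformers. The base model (gpt2), the prompt format, and the two toy examples are assumptions for illustration, not the researchers’ actual setup.

```python
# Minimal sketch: supervised fine-tuning of a small causal LM on
# yes/no chemistry Q&A pairs. Assumes `transformers` and `datasets`.

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in; the study used larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy yes/no pairs; a real run would use curated chemistry data.
pairs = [
    ("Is NaCl soluble in water?", "yes"),
    ("Is gold a halogen?", "no"),
]
texts = [f"Q: {q}\nA: {a}" for q, a in pairs]
ds = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=64),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chem-lm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
    # mlm=False makes the collator build causal-LM labels from inputs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```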

Other Developments

  • A group of researchers open-sourced a "reasoning" AI model called Sky-T1 that they say can be trained for under $450 (a loading sketch follows this list).
  • Nvidia’s AI empire was also highlighted, with the company investing in a wide range of startups working on AI research and development.
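
For readers who want to try Sky-T1, below is a minimal sketch of querying it through the Hugging Face transformers pipeline. The repository id is my best guess at the published checkpoint name and should be verified; the 32B-parameter weights also require substantial GPU memory.

```python
# Minimal sketch: chatting with the open-sourced Sky-T1 model via the
# transformers text-generation pipeline. Requires `transformers` and
# `accelerate`; the repo id below is an assumption, not verified here.

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NovaSky-AI/Sky-T1-32B-Preview",  # assumed repo id
    device_map="auto",   # shard the weights across available GPUs
    torch_dtype="auto",
)

messages = [{"role": "user",
             "content": "How many positive integers below 100 are prime?"}]
out = generator(messages, max_new_tokens=512)
print(out[0]["generated_text"][-1]["content"])  # assistant reply
```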

Concerns about Bias and Stereotypes

  • The article notes that these biases have real effects on people and that it’s essential to address them to ensure fairness and accuracy in AI systems.

Overall, this newsletter provides a summary of recent developments in AI, highlighting the importance of addressing bias and stereotypes in AI data sources.