In the world of artificial intelligence (AI), new developments and breakthroughs are constantly being made. From major tech companies consolidating AI research labs to startups revolutionizing property assessments, the possibilities of AI seem limitless. However, as AI systems become more advanced, concerns and debates over ethical and responsible AI development also emerge. In this blog post, we will explore the latest AI news, including Google's consolidation of AI research labs, Bill Gates' predictions about AI chatbots, warnings from the FTC Chairwoman, the revolutionary technology of MIT startup Hosta a.i., calls for consciousness research in AI development, and concerns over AI training data sources.
1. Google Consolidates AI Research Labs to Create Google DeepMind
Google has formed a new AI division, Google DeepMind, by consolidating its two research labs, Google Brain and DeepMind. The new unit aims to compete with OpenAI and maintain Google's edge in the highly competitive AI industry, accelerating AI advancements while adhering to ethical standards. The division will work closely with other Google product areas to deliver AI research and products. Google Research, the former parent division of Google Brain, will remain an independent division focused on fundamental advances in computer science. DeepMind co-founder Demis Hassabis will lead the new unit as its CEO, while Jeff Dean, in the elevated role of Google's chief scientist, will help set the overall direction of AI research at the company. The move underscores Google's commitment to furthering AI research and development.
2. Bill Gates Predicts AI Chatbots Will Teach Children Literacy in 18 Months
Microsoft co-founder Bill Gates has predicted that AI chatbots will be able to help improve children’s reading and writing skills within the next 18 months. Gates believes that AI chatbots will eventually be as good a tutor as any human ever could be. Although teaching writing skills has traditionally been difficult for computers, AI chatbots have developed rapidly in recent months and can now match human-level performance on some standardised tests. Gates expects the technology to improve further within the next two years, and says it could make private tutoring available to a wider range of students who might not otherwise be able to afford it.
3. FTC Chairwoman Warns Against Illegal AI Practices
FTC Chairwoman Lina Khan has stated that US regulators are committed to stopping biased or deceptive AI tools that violate existing laws on civil rights and fraud. Khan also raised concerns about AI-generated content and the potential for scammers to use AI tools to manipulate and deceive people on a large scale. Additionally, she warned about the dangers of a few dominant companies controlling the raw materials, data, cloud services, and computing power required to develop and deploy AI products. The FTC may use its antitrust authority to protect competition and prevent established players from crushing new entrants. Khan emphasized that there is no AI exemption to existing laws.
4. MIT Startup Hosta a.i. Revolutionizes Property Assessments with Photo Analysis Technology
Hosta a.i., a startup founded by MIT alumni, is changing the way property assessments are made by generating detailed assessments from ordinary photos. Property assessments matter for home appraisals, insurance claims, and renovation projects, and inaccurate or delayed assessments can lead to higher costs and slower work. Hosta a.i. can produce precise measurements of spaces, detailed floor plans, 3D models of rooms, and bills of materials, and can evaluate the condition of materials to identify risks. The company is currently working with insurers, contractors, and mortgage lenders to provide fast, accurate information about the built environment, and the technology can also help speed the transition to more energy-efficient buildings by modeling how heat moves through a room. The platform is designed to be user-friendly: anyone on a job site can snap a few photos with a phone, and the images are automatically analyzed by artificial intelligence to produce floor plans, CAD models, and a detailed assessment, as sketched below. Hosta a.i. has participated in several MIT startup accelerator programs.
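Hosta a.i.'s models and APIs are proprietary, so the sketch below is purely illustrative: it assumes a vision model has already extracted a room's floor-plan corners and ceiling height from the photos, and shows how the kinds of downstream outputs the company describes (measurements and a rough bill of materials) could be derived from that geometry. Every name, coverage figure, and coordinate here is a placeholder assumption, not Hosta a.i.'s actual method.

```python
from dataclasses import dataclass

# Purely illustrative sketch. We assume a photo-analysis model has already
# returned a room's floor-plan corners (in metres) and ceiling height; the
# functions below derive measurements and a rough bill of materials from them.

@dataclass
class Room:
    name: str
    corners: list[tuple[float, float]]  # (x, y) floor-plan coordinates in metres
    ceiling_height_m: float

def floor_area(corners: list[tuple[float, float]]) -> float:
    """Area of a simple polygon via the shoelace formula."""
    area = 0.0
    n = len(corners)
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def perimeter(corners: list[tuple[float, float]]) -> float:
    """Total length of the room's walls in plan view."""
    total = 0.0
    n = len(corners)
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        total += ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    return total

def bill_of_materials(room: Room) -> dict[str, float]:
    """Very rough estimates; the coverage figure is a placeholder assumption."""
    wall_area = perimeter(room.corners) * room.ceiling_height_m
    return {
        "flooring_m2": round(floor_area(room.corners), 2),
        "paint_litres": round(wall_area / 10.0, 1),   # assume 10 m^2 of wall per litre
        "skirting_board_m": round(perimeter(room.corners), 2),
    }

if __name__ == "__main__":
    # These coordinates stand in for what a photo-analysis model might return.
    kitchen = Room("kitchen", [(0, 0), (4.2, 0), (4.2, 3.1), (0, 3.1)], 2.4)
    print(kitchen.name, bill_of_materials(kitchen))
```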
5. Experts Call for AI Developers to Study Consciousness as AI Systems Advance
Experts warn that AI developers must study consciousness as AI systems become more advanced. An open letter signed by dozens of academics worldwide calls for a greater scientific understanding of consciousness, how it could apply to AI, and how society might live alongside it. Although most experts agree that AI is nowhere near having human-level consciousness, the rapid development of AI underscores the urgent need to accelerate research in consciousness science, and the letter pushes for responsible AI development to include that research. It is signed by academics from universities in the UK, US and Europe, including Dr Susan Schneider, who formerly held NASA's chair in astrobiology. While there is excitement and investment in AI-related projects, the pace of progress also causes unease: a recent Goldman Sachs report estimates that AI could replace the equivalent of 300 million full-time jobs.
6. Concerns Arise Over AI Training Data Sources
Several investigations have revealed that the data used to train some of the largest and most powerful artificial intelligence models, including Google's LaMDA and Meta's LLaMA, contains fascist, pirated, and malicious material. One such dataset, the Colossal Clean Crawled Corpus (C4), compiled by Google from over 15 million websites, is used to train these AI systems. Although the dataset is meant to be "clean" and free of offensive and racist language, it still contains such material from less reputable websites like VDARE, Breitbart, and RT. Some sites, such as b-ok.org, formerly a repository of pirated ebooks, remain in the C4 dataset even after being seized by the FBI. Assembling such vast amounts of data from explicitly licensed sources is a challenging task, so AI researchers rely on "fair use" defences to copyright and often skip the "cleaning" process to give their systems more data to learn from. London-based Stability AI recently released its new LLM, StableLM, which was trained on the Pile, an 850GB dataset that includes the entire, uncleaned Common Crawl database, 2 million pirated ebooks from the BitTorrent site Bibliotik, 100GB of data scraped from GitHub, and more esoteric sources such as every internal email sent by Enron and the entire proceedings of the European Parliament. The Pile is hosted publicly by a group of anonymous "data enthusiasts" called the Eye, whose copyright takedown policy links to a video of a choir of clothed women pretending to masturbate imaginary penises while singing.
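For readers unfamiliar with what "cleaning" a web-scale corpus involves, the toy sketch below shows the general shape of such a filter: dropping scraped pages whose domain or vocabulary appears on a blocklist. It is a minimal illustration under assumed, placeholder blocklists, not Google's actual C4 pipeline, which applies far larger word lists and many additional heuristics such as deduplication, language identification, and boilerplate removal.

```python
import re
from urllib.parse import urlparse

# Toy illustration of corpus "cleaning", not any real production pipeline.
# The blocklists are placeholder assumptions; real systems use much larger
# lists and many more filtering heuristics.

BLOCKED_DOMAINS = {"example-piracy-site.org"}   # hypothetical domain blocklist
BLOCKED_WORDS = {"someslur", "anotherslur"}     # hypothetical word blocklist

def keep_document(url: str, text: str) -> bool:
    """Return True if a scraped page should stay in the training corpus."""
    domain = urlparse(url).netloc.lower()
    if domain in BLOCKED_DOMAINS:
        return False
    words = set(re.findall(r"[a-z']+", text.lower()))
    return words.isdisjoint(BLOCKED_WORDS)

if __name__ == "__main__":
    pages = [
        ("https://example.com/recipes", "A simple soup recipe with three ingredients."),
        ("https://example-piracy-site.org/book", "Full text of a copyrighted novel."),
    ]
    cleaned = [(url, text) for url, text in pages if keep_document(url, text)]
    print(f"kept {len(cleaned)} of {len(pages)} pages")
```

Skipping this step, as the article notes some researchers do, leaves more raw data for a model to learn from, but also leaves the pirated, offensive, and malicious material in place.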
In conclusion, the field of AI continues to advance at a rapid pace, with new developments and applications emerging all the time. From the consolidation of research labs to the use of chatbots in education, from concerns over data sources to the need for consciousness research, AI is a complex and multifaceted area of technology that requires careful attention and consideration. As these six latest AI news items demonstrate, there are both opportunities and challenges associated with AI, and it will be important for individuals, organizations, and society as a whole to stay informed and engaged as AI continues to shape our world. Whether you are a researcher, a developer, a policymaker, or simply an interested citizen, there is much to learn and explore in the exciting and dynamic world of AI.