Welcome to our latest blog post on the most recent developments in the field of Artificial Intelligence. This week, we have witnessed some exciting and ground-breaking news that is sure to shape the future of AI as we know it. From innovations in drug manufacturing to ethical considerations in military applications, this week's news highlights the diverse range of areas in which AI is being applied. Additionally, privacy concerns regarding AI are being addressed, and watchdogs are investigating the industry to prevent monopolies and misinformation. Read on to learn more about the latest AI news, including the notable resignation of an AI pioneer and a controversial case involving Amnesty International's use of AI-generated images.
AI Pioneer Geoffrey Hinton Quits Google, Warns of Risks Posed by AI Development
Geoffrey Hinton, a pioneer in the field of artificial intelligence known as the "Godfather of AI," has left his position at Google to speak out about the potential dangers of AI. In an interview with The New York Times, Hinton warned that the rapid development of generative AI products was "racing towards danger" and could flood the world with false text, images, and videos, making it difficult for people to distinguish truth from fiction. He also expressed concerns about job automation, as AI could replace roles such as paralegals, personal assistants, and translators. Hinton's concerns echo warnings from other prominent figures, including Elon Musk and the late Stephen Hawking, about the risks of unchecked AI development. While AI has the potential to bring great benefits to society, its development must be responsible and ethical if the risks are to be minimized and the benefits maximized.
Palantir's AI Platform Demonstrates Ethical Use of AI in Military Applications
Palantir, a technology company, has showcased its AI Platform (AIP), which aims to deploy Large Language Models (LLMs) and algorithms ethically in military applications. AIP allows LLMs and AI to be deployed across networks, from classified systems to tactical edge devices, and connects highly sensitive intelligence data to create a real-time representation of the environment. It also implements security features and guardrails intended to ensure control and governance, increase trust, and mitigate the legal, regulatory, and ethical risks posed by LLMs and AI in sensitive and classified settings. In a demo of AIP, a military operator used LLMs to monitor activity in Eastern Europe and ask questions about military equipment in a field near friendly forces. Palantir presents AIP's transparency and guardrails as what makes it a responsible and compliant solution for AI in the military.
MIT-Takeda Researchers Develop New AI-based Estimator for More Efficient Drug Manufacturing
Researchers from the MIT-Takeda Program have developed a novel AI-based estimator for manufacturing medicine. They combined physics and machine learning to categorize the rough surfaces that characterize particles in pharmaceutical pills and powders. The method uses a physics-enhanced autocorrelation-based estimator (PEACE), which doesn't require stopping and starting the manufacturing process, making it more efficient and secure. The machine learning algorithm can be trained with only a small amount of data, allowing drug production to be more efficient, sustainable, and cost-effective. The team has already filed for two patents and plans to file for a third.
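The article doesn't detail PEACE's internals, but the statistical tool its name points to, the autocorrelation of a rough surface profile, can be sketched in a few lines. The profile below is synthetic and purely illustrative, not data from the MIT-Takeda work:

```python
import numpy as np

def autocorrelation(profile):
    """Normalized autocorrelation of a 1-D surface-roughness profile.

    Autocorrelation summarizes how quickly a rough surface decorrelates
    with distance, one statistical fingerprint of particle roughness.
    """
    x = np.asarray(profile, dtype=float)
    x = x - x.mean()                       # remove the mean height
    full = np.correlate(x, x, mode="full")
    ac = full[full.size // 2:]             # keep non-negative lags only
    return ac / ac[0]                      # normalize so lag 0 equals 1

# Example: a synthetic rough profile (sinusoid plus noise)
rng = np.random.default_rng(0)
z = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.2 * rng.standard_normal(256)
ac = autocorrelation(z)
```

A decay length read off `ac` is one way such a curve can characterize roughness; PEACE's contribution, per the report, is wrapping this kind of estimate in a physics-informed model that works on-line, without stopping production.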
UK Competition Watchdog Reviews AI Market Amid Concerns Over Misinformation
The Competition and Markets Authority (CMA) in the UK has launched a review of the artificial intelligence market, citing concerns over false or misleading information. The CMA will focus on the systems underlying AI tools, including large language models like the one behind ChatGPT and generative AI tools such as Stable Diffusion. The review will examine how the markets for foundation models could evolve, assess the opportunities and risks for consumers and competition, and produce guiding principles to support competition and protect consumers. The CMA has been asked to support the development and use of AI in line with principles of safety, transparency, fairness, accountability, and ease of market entry for new players.
Amnesty International Removes AI-Generated Images of Colombia Protests Following Criticism
Amnesty International has faced criticism for using AI-generated images to promote its reports on social media, including fake photos of Colombia’s 2021 protests. The images were removed after it emerged that they were not accurate depictions of events: the faces of protesters and police were smoothed and warped, and a police uniform was outdated. While Amnesty International has documented hundreds of cases of human rights abuses committed by Colombian police during the protests, critics argued that using AI-generated images insulted the photojournalists who cover protests from the front line. Amnesty International has since removed the images from its social media posts and expressed its commitment to addressing the ethical dilemmas posed by such technology.
ChatGPT Chatbot Accessible Again in Italy After Privacy Concerns Addressed
OpenAI's ChatGPT chatbot is available once again in Italy after being temporarily restricted by the Italian data-protection authority, the Garante, over concerns about privacy violations. OpenAI, which is backed by Microsoft, implemented measures to address the issues the Garante raised, including making its privacy policy accessible before registration and providing a tool to verify users' ages. OpenAI also introduced a new form that lets European Union users object to the use of their personal data to train its models. While the Garante welcomed the measures, it called for further compliance, including an age-verification system and an information campaign to let Italians opt out of personal data processing. Millions of people have used ChatGPT since its launch in November 2022; it has recently been added to Bing and will be embedded in Microsoft Office apps.
AI Enables Non-Invasive Mind-Reading by Turning Thoughts into Text
Researchers at the University of Texas at Austin have developed an AI-based decoder that can translate brain activity into a continuous stream of text, allowing a person's thoughts to be read non-invasively for the first time. The breakthrough raises the prospect of new ways to restore speech in patients who struggle to communicate after a stroke or because of motor neurone disease. The decoder reconstructs speech with remarkable accuracy using only fMRI scan data, working around a fundamental limitation of fMRI: an inherent time lag that makes tracking activity in real time impossible. Training was intensive, and involved teaching the decoder to match brain activity to meaning with the help of a large language model. The team now hopes to assess whether the technique could be applied to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
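The decoding loop described above, where candidate wordings are scored against observed brain activity, can be illustrated with a toy sketch. Every component here is a hypothetical stand-in (a word-count "encoding model" over a six-word vocabulary), not the researchers' actual models:

```python
import numpy as np

VOCAB = ["the", "dog", "ran", "home", "cat", "slept"]

def encoding_model(words):
    """Hypothetical stand-in for an encoding model that predicts the brain
    activity a word sequence would evoke; here, just word counts."""
    vec = np.zeros(len(VOCAB))
    for w in words:
        vec[VOCAB.index(w)] += 1.0
    return vec

def decode_step(prefix, observed):
    """Score each candidate continuation by how closely its predicted
    activity matches the observed features; keep the best match."""
    scored = [(float(np.linalg.norm(encoding_model(prefix + [w]) - observed)), w)
              for w in VOCAB]
    return min(scored)[1]

# Simulated "thought": the observed activity is the encoding of "the dog ran".
observed = encoding_model(["the", "dog", "ran"])
decoded = ["the"]
for _ in range(2):
    decoded.append(decode_step(decoded, observed))
```

In the published work, a large language model plays the proposer role (suggesting plausible continuations) and a learned encoding model predicts the fMRI response each one would produce; the sketch only shows the shape of that compare-and-select loop.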
In conclusion, these recent AI news stories showcase the incredible advancements and potential of artificial intelligence, as well as the ongoing ethical concerns and regulatory issues surrounding its use. From the development of new tools for more efficient and sustainable drug manufacturing to the demonstration of ethical AI use in military applications, there are countless ways AI can improve our world. However, there are also concerns about the concentration of power and potential misuse of AI technology, as well as the need for transparency and accountability in its development and implementation. As the AI industry continues to evolve, it will be crucial to balance its potential benefits with its potential risks and ensure that it is used ethically and responsibly.