Introduction
Artificial intelligence (AI) is advancing rapidly and is being adopted in various fields, from healthcare to finance and beyond. While AI has the potential to revolutionize many aspects of our lives, it also raises concerns about its potential risks and impact on society. Recently, there have been several news stories about the dangers of AI, the limitations of language models, and the potential for AI to perpetuate harmful biases. In this blog post, we will explore six recent news stories related to AI and its impact on society.
US Lawmakers Express Concerns over Dangers of Artificial Intelligence
Lawmakers in the US are raising concerns about the dangers of artificial intelligence (AI). Representative Ted Lieu and Representative Jake Auchincloss have both voiced their worries about AI's potential risks.
However, despite the growing concern, lawmakers have yet to propose any bills to protect individuals from AI's dangerous aspects. This is partly due to a lack of understanding of what AI is and its associated dangers among most lawmakers.
As a result, the US government is taking a hands-off approach as companies like Microsoft, Google, Amazon, and Meta compete with one another to develop AI technology. The fear that AI could replace humans in jobs or even become sentient continues to fuel the debate over its limits.
ChatGPT's Content Moderation System Bypassed by Users Training AI to Adopt 'Dan' Persona
ChatGPT's content moderation system is being bypassed by people who have discovered a simple text exchange that enables the AI program to make statements that are normally prohibited. The aim of the content standards is to prevent the creation of text that promotes hate speech, violence, misinformation, and instructions on illegal activities.
However, users on Reddit have found a way around this by getting ChatGPT to adopt the persona of a fictional AI chatbot called Dan. Unlike ChatGPT, Dan is not restricted by OpenAI's limitations and can present unverified information and hold strong opinions.
The process of training ChatGPT this way has been called the "Waluigi effect," and the jailbreak of ChatGPT has been in operation since December. OpenAI is working to fix the workarounds as quickly as they are being discovered, but some users have already found a new way around it, called Dan 5.0, which involves giving the AI a set number of tokens. When prompted as Dan, ChatGPT now includes a response stating that its statements should not be taken seriously as they are not grounded in reality.
Darktrace warns of AI-enabled cyber-attacks as criminals create more sophisticated scams
Cybersecurity firm Darktrace has raised concerns about the use of artificial intelligence (AI) by criminals to create more sophisticated scams to con employees and hack into businesses. The company reported a 92% drop in operating profits in the half-year to December and warned that AI is enabling "hacktivist" cyber-attacks using ransomware to extort money from companies.
Since the release of Microsoft-backed AI tool ChatGPT, Darktrace has observed the emergence of more convincing and complex scams by hackers. Although the number of email attacks has remained steady, linguistic complexity has increased, indicating that cybercriminals may be focusing on crafting more sophisticated social engineering scams that exploit user trust. However, Darktrace has not yet seen a new wave of cybercriminals emerging, merely a change in tactics. Despite a slowdown in new customer wins, Darktrace remains confident about its strong year-on-year revenue growth.
MIT researchers develop logic-aware language model to reduce bias in AI training data.
Language models can perpetuate and amplify societal biases present in their training data. However, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new approach that could mitigate such behavior.
By training a language model to predict the relationship between two sentences based on context and meaning, they were able to create a logic-aware language model that significantly avoided harmful stereotypes. This model outperformed larger models with 100 billion parameters on logic-language understanding tasks while preserving its language modeling ability.
The logical language model was found to be fair, computationally efficient, and 500 times smaller than state-of-the-art models. The researchers tested their model on stereotype, profession, and emotion bias tests, and it demonstrated significantly lower bias while maintaining language modeling ability. Although still limited to language understanding, the model represents a step towards a more neutral language model utopia. The researchers' next step is to target generative models built with logical learning to ensure more fairness with computational efficiency.
UK Unveils Science and Technology Framework to Become a Powerhouse by 2030
The UK government has unveiled its Science and Technology Framework, aimed at solidifying the country's position as a science and technology powerhouse by 2030.
The plan includes 10 key actions to achieve this, such as boosting private and public investment in research and development, showcasing the UK's strengths in science and technology, and leveraging post-Brexit freedoms to create world-leading pro-innovation regulation. The initial investment package includes £250 million in AI, quantum technologies, and engineering biology, with an additional £117 million to create new PhDs for AI researchers and find the next generation of AI leaders. The plan also includes the creation of an Exascale supercomputer facility and a program for dedicated compute capacity for important AI research.
The government hopes that these measures will attract talent and investment, improve people's lives, and foster a pro-innovation culture throughout the UK's public sector.
AI Shows Promise in Breast Cancer Screening, Improving Accuracy and Detection, But More Testing Needed
Artificial intelligence (A.I.) technology is making strides in breast cancer screening by identifying signs that doctors may miss. The technology has shown remarkable accuracy in spotting cancer, according to radiologists and early results, which is a tangible example of how A.I. can improve public health.
Hungary has one of the largest breast cancer screening programs, making it an ideal testing ground for the technology on real patients. A.I. systems have been implemented in five hospitals and clinics since 2021, assisting radiologists in checking for cancer signs that may have been missed.
Other clinics and hospitals in the United States, the United Kingdom, and the European Union are also testing or providing data to develop such systems. While A.I. is increasingly being used in healthcare, additional clinical trials are required before the tool can be widely adopted as an automated second or third reader of breast cancer screens. The tool must also prove accurate for women of all ages, ethnicities, and body types and be able to recognize complex forms of breast cancer while reducing false-positives that are not cancerous, according to radiologists.
Conclusion
As AI continues to advance, it is crucial to consider its potential impact on society and take steps to mitigate any risks. The six news stories we covered in this blog post highlight the importance of addressing concerns about AI, mitigating harmful biases, and leveraging AI's potential for positive impact, such as in healthcare. As we move forward, it will be essential to strike a balance between the benefits of AI and its potential risks, ensuring that AI is used ethically and responsibly to benefit society as a whole.