With this new app, developed in China, taking the United States by storm, here is everything you need to know about DeepSeek and the rise of AI.
AI has spread to every corner of the digital world, providing tools designed to make your experience online easier. But did you know that we have been using AI without even realising it?
TheCity spoke to Alex Bradley, an AI and computer science researcher at Bloomberg, about AI and why exactly it has become so popular in recent years.
“Believe it or not, AI has already been an integral part of our day for a while now. For example, we’ve been using Google Maps to give us the fastest route to a certain location. Another example would be filtering all our emails into spam and non-spam. These require AI algorithms.”
However, the main reason AI has come into the spotlight so suddenly is a form of AI called the large language model, or LLM.
“LLMs give users a much more open-ended way of providing input, and therefore give AI more of a spotlight to show its potential in responding to a wide range of inputs.”
So, what is DeepSeek?
DeepSeek is an app developed in China, founded in December 2023 by Liang Wenfeng, and works in much the same way as ChatGPT, which was developed by OpenAI, using the LLM format.
Both apps perform similarly when it comes to mathematics, coding and even reasoning, imitating the way the human brain works through a problem.
But what is the difference?
DeepSeek is cheaper to operate, as it uses less memory than its competitors.
The model itself reportedly cost only six million dollars to train, in comparison to ChatGPT’s alleged training cost of over 100 million dollars.
The fact that the app was developed in China has left politicians across the globe concerned about the use of DeepSeek, as China’s AI and cybersecurity regulations differ from those in the EU and US.
DeepSeek also has an issue with providing certain information due to China’s censorship laws.
Because it cannot truthfully answer certain questions about topics such as historical events, the app will simply refuse to respond (example below).
DeepSeek’s response to a question about a historical event in China casts doubt on the app’s ability to relay information neutrally. Photo: DeepSeek
So, what is the likelihood of creating one singular AI app that can do everything?
TheCity spoke to Carol Cronin, an educator who trains computer science teachers, about the possibility of a single app for AI-generated information.
“As AI models develop, some will be used as broad tools for research and information, much like search engines are used, while others will occupy a niche and be trained for a particular purpose.”
“Between time, money and resources, a single AI tool is not looking likely in the short term, but the possibilities are endless once these factors are accounted for.”
Speaking with experts made clear how much AI has influenced people’s work, so we spoke to Michaela, a young professional specialising in data protection at an international legal firm, about how exactly AI has influenced her career.
Michaela’s job focuses on privacy and data protection, ensuring that clients’ information is protected under each company’s privacy policy.
She found that AI was being used constantly in her work, though not without regulation.
“Many clients have now drafted AI guidelines/policies that set out when AI tools may be deployed in their company and when they may not.”
This includes the use of AI in tools such as Grammarly, in customer assistance, and in data analytics.
Michaela said that the fines imposed on certain companies for data protection violations are not sufficient to solve the problem.
“It is more valuable to have a huge database of information than it is to have 1.2 billion euro,” she said, referring to the recent fine imposed on Meta for data privacy violations.
So, while AI comes with its benefits, it needs to be regulated in all aspects.
We asked Alex Bradley whether AI should be embraced or treated with caution.
“AI relies on data that is collected in an unbiased manner to provide unbiased answers. However, a lot of the data it is trained on has some level of bias, and the AI models learn this bias during training.”
“While I believe that AI is something that should be used to improve aspects of society, it does need a level of human intervention.”
