NEW DELHI: A govt-initiated report on cyber threats in the banking, financial services, and insurance (BFSI) sector has forecast a rise in deepfake and AI-generated content attacks, while warning of the growing danger of supply chain attacks.
The report also says that as Large Language Models (LLMs) become increasingly integrated into various applications, there is a growing threat of LLM prompt hacking, where attackers manipulate the inputs to these models to induce unintended and potentially harmful behaviours.
The report has been prepared by national cyber security watchdog CERT-In, Computer Security Incident Response Team in Finance sector (CSIRT-Fin) and forensics-driven cybersecurity company SISA.
IT Secretary S Krishnan said cyberattacks are growing more sophisticated, frequent, and targeted. “A cyberattack on a financial institution can have disastrous results. Cyberattacks in financial institutions can have systemic effects that are exacerbated by technological and financial ties between other financial and non-financial institutions, resulting in exponential losses.”
The report says that the first half of 2024 saw a 175% surge in phishing attacks on the Indian financial sector compared to the same period last year, highlighting the heightened activity within an “increasingly volatile” threat landscape.
“By 2025, we expect AI-driven cyber-attacks to become one of the most scalable and adaptable threats, challenging traditional defenses and requiring innovative countermeasures,” it says.
On the rise of deep fakes and AI-generated content, the report says that attackers are expected to increasingly leverage them as potent tools for intrusion, particularly in social engineering attacks. “The advancement of deep fake technology enables the creation of highly realistic and manipulated audio and video content that can convincingly impersonate individuals… For example, an attacker might use a deep fake video during a virtual meeting to deceive a finance team into authorizing an unauthorized transfer or employ a deep fake voice to trick individuals into revealing one-time passwords for multi-factor authentication (MFA), passwords, or other sensitive information.”
On supply chain threats, the report says that attackers will likely exploit vulnerabilities in software development processes to compromise multiple organizations simultaneously. “One primary method involves the exploitation of code repositories. Cyber attackers gain unauthorized access to developers’ accounts on platforms like GitHub or inject malicious code into the source code of widely used applications. By infiltrating the development environment, attackers can insert malware directly into the codebase, which is then unknowingly distributed to clients through regular software updates or new releases.”
Regarding the threat of LLM prompt hacking, the report says that attackers may manipulate the inputs to these models to induce unintended and potentially harmful behaviours. “This threat is particularly pronounced in applications that host LLMs locally, rather than relying on APIs from established providers like OpenAI or Anthropic… Many locally hosted LLMs may not have sufficient safeguards against adversarial inputs, leaving them vulnerable.”
The report also says that quantum computing can be a “looming threat” to cryptography. “Quantum computing holds the potential to break existing encryption algorithms and keys that safeguard our digital communications.”
It also sees crypto as a “new frontier” for cyber threats.