McKinsey realises the risk of rapid adoption of AI after hackers gain access to 46.5 million employee chat messages, 728,000 ‘sensitive files’ and … – The Times of India


McKinsey & Company rushed to patch a serious security flaw in its internal AI platform after a cybersecurity researcher gained access to tens of millions of employee chat messages and hundreds of thousands of sensitive files – all within two hours. According to a report by The Financial Times (via CodeWall), the target was Lilli, the management consultancy’s in-house AI platform used daily by its 40,000 employees to plan strategy, analyse data, and build project plans and client presentations.

Researchers at CodeWall, a security startup that uses AI agents to continuously attack customers’ infrastructure to help them improve their security, say that the agent gained full read and write access to Lilli’s entire production database in under two hours. McKinsey’s security team was alerted to CodeWall’s findings at the end of February, and the firm patched the identified vulnerabilities.

According to CodeWall, the AI agent accessed:

  • 46.5 million internal chat messages exchanged between McKinsey staff
  • A list of 728,000 “sensitive” file names, including Excel spreadsheets, PowerPoint decks, and Word documents
  • 57,000 user accounts
  • 384,000 AI assistants
  • 94,000 workspaces

CodeWall accessed ‘intellectual crown jewels’

CodeWall described the combination as “the full organisational structure of how the firm uses AI internally” and called it the firm’s “intellectual crown jewels.” The ‘hacking’ also exposed Lilli’s internal system prompts and even AI model configurations, which means it revealed the instructions telling the AI how to behave, what it was allowed to do and what guardrails had been put in place.

What McKinsey has to say about the ‘breach’

McKinsey has pushed back on the most alarming interpretation of the breach. Citing a person close to the consultancy, the report said that while the names of sensitive files were visible after the breach, the files themselves were stored separately and were “never at risk”.

McKinsey said it was “recently alerted to a vulnerability related to our internal AI tool, Lilli, by a security researcher. We promptly confirmed the vulnerability and fixed the issue within hours”.

“Our investigation, supported by a leading third-party forensics firm, identified no evidence that client data or client confidential information were accessed by this researcher or any other unauthorized third party. McKinsey’s cyber security systems are robust, and we have no higher priority than the protection of client data and information that we have been entrusted with,” the firm was quoted as saying.

How CodeWall breached McKinsey AI

CodeWall says it focuses specifically on companies that have published guidelines welcoming ethical hackers to probe their systems for vulnerabilities. CodeWall revealed that its AI agent had itself suggested McKinsey as a target – without a human directing it to do so. It added that once the vulnerabilities were discovered, the agent automatically stopped attempting to access further files and reported its findings.

“In the AI era, the threat landscape is shifting drastically — AI agents autonomously selecting and attacking targets will become the new normal,” the company said.
