September 2025 Newsletter
Posted By: Mark Friday 17th October 2025 Tags: AI, Artificial Intelligence, cyber attacks, cyber crime, Cyber Security, Data Breach, Data Leak, Data Privacy, Data Protection, Deepfake, Newsletter, ransomware, technologyThis month: Insider threats, deepfake calls target businesses, data disposal after updating tech, AI actor controversy, plus the latest LaneSystems news.

The Threat Within: Understanding and Tackling Insider Threats
A BBC cyber journalist has detailed his own personal encounter with insider threat hackers looking for access to the broadcaster’s systems. Something he calls ‘A chilling insight into the ever-evolving tactics of cyber criminals’.
The issue of insider threat risks isn’t theoretical — they’re real, rising, and often overlooked. We think insider threats are a cyber security issue that companies should be aware of.
What Is an Insider Threat?
The UK’s National Cyber Security Centre (NCSC) defines an insider threat as a risk posed by individuals within an organisation who have authorised access to systems and data. This includes employees, contractors, and third-party partners. The threat may be deliberate — such as espionage, sabotage, or data theft — or accidental, stemming from carelessness or lack of awareness.
Insider threats can manifest as:
- Data leaks or unauthorised sharing
- Misconfigured systems or poor security hygiene
- Clicking on phishing links or falling for scams
- Abuse of privileged access
- Collusion with external attackers
Unlike external threats, insiders don’t need to breach defences — they’re already inside the perimeter.
Cybercrime groups increasingly target employees as potential insider assets because they offer something no external attacker can: direct access. Workers with privileged credentials, knowledge of internal systems, or access to sensitive data are prime candidates for exploitation. These groups often use social engineering, coercion, or financial incentives to persuade insiders to leak information, install malware, or bypass security protocols. Remote work and digital collaboration tools have only widened the attack surface, making it easier for threat actors to initiate contact discreetly.
For some employees, the appeal can be disturbingly pragmatic. Financial stress, job dissatisfaction, or a sense of being undervalued can make illicit offers seem tempting — especially when cybercriminals dangle rewards that far exceed a monthly salary. In some cases, individuals rationalise their actions as victimless or temporary, unaware of the long-term damage to their organisation and career. The combination of opportunity, access, and perceived anonymity makes insider recruitment a low-risk, high-reward strategy for cybercrime syndicates — and a high-stakes vulnerability for businesses.
Insider Threats: The Statistics
Recent research from global cybersecurity firms paints a concerning picture. A Ponemon Institute survey reports that the total average annual cost of insider incidents in $17.4m, with an average cost per incident of around $675,000. The average time to contain an incident is 81 days, and the longer the containment time the greater the costs.
IBM writes that 83% of businesses reported insider incidents in the last year.
Sectors such as healthcare, finance, and government are particularly vulnerable due to the sensitivity of their data and the complexity of access controls, but a Verizon report finds that SMBs are actually being targeted nearly four times more than large organisations.
How to Reduce Insider Threat Risks
Leading cybersecurity providers and advisory bodies recommend a multi-layered approach:
1. Behavioural Monitoring & Analytics
Deploy tools to detect unusual activity. Get insights into user behaviour.
2. Principle of Least Privilege
Limit access rights to only what’s necessary for each role. Enforce multi-factor authentication to prevent misuse of credentials.
3. Staff Training & Awareness
Provide regular training on phishing, data handling, and recognising suspicious behaviour. Reinforce the importance of reporting concerns early.
4. Separation of Duties
Avoid giving any one individual full control over critical systems or audit logs. Helps prevent fraud and unauthorised changes.
5. Secure Offboarding Procedures
Revoke access immediately when staff leave or change roles. Monitor for unusual activity during notice periods.
6. Data Loss Prevention (DLP) Tools
Use DLP solutions to detect and block unauthorised data transfers. Integrate across email, cloud platforms, and endpoints.
Insider threats are complex because they involve trust, access, and human behaviour. But with the right mix of technology, policy, and vigilance, organisations can significantly reduce their exposure. As IT professionals, we must be aware of threats from within as much those that are external.
If you’re a business in the North East of England and would like an audit of your company’s cyber security, get in touch today..

LaneSystems News
Charity News
We have recently contributed labour, worth £400, towards Windows 11 upgrades for Carers Trust Tyne & Wear. We have also donated staff security awareness training, worth £400, to Teesside Hospice. Both charities provide valuable services to the region.
Windows 10 End of Support Reminder
We’re going to keep leaving a gentle reminder here that Microsoft will no longer officially support Windows 10 on October 14, 2025. After this date, Microsoft will no longer provide security updates, bug fixes, or technical assistance for Windows 10. Read more about why it’s essential to keep systems up to date in the article below.
We have been contacting all of our clients during the past year to make them aware, so that we can plan a smooth transition to Windows 11 where necessary. If you’re a business in the North East of England who needs help with the update, contact us today for assistance.

Deepfake Calls Target Businesses
A recent Gartner survey has revealed a sharp rise in AI-driven cyber threats targeting businesses, with deepfake audio calls now the most widespread form of attack. Of the security leaders polled, 62% reported experiencing AI-related incidents in the past year, ranging from prompt injection exploits to synthetic media manipulation.
Deepfake Audio impersonations
The most prevalent tactic involves deepfake audio impersonations of colleagues or senior executives. Alarmingly, 44% of organisations reported at least one such incident, and 6% suffered serious repercussions—including financial loss, operational disruption, or data breaches. Firms employing audio screening tools saw that risk drop significantly to just 2%.
Chester Wisniewski, a principal research scientist at Sophos, highlighted the growing realism and accessibility of audio deepfakes. “You can generate these calls in real time now,” he explained, noting that while close associates might detect inconsistencies, casual workplace interactions are far more susceptible.
Video-based Deception
Video deepfakes, though less common, remain a serious concern. Around 36% of businesses encountered video-based deception, with 5% reporting major consequences. Real-time impersonation of specific individuals is still costly—often requiring substantial resources—but attackers are adapting. Sophos has observed scams where a brief video of a CEO or CFO is shown during a WhatsApp call, followed by a switch to text messaging under the guise of poor connectivity. This approach exploits trust while reducing technical complexity.
AI Prompt Injection Attacks
The report also draws attention to the rise of prompt injection attacks, where malicious instructions are embedded in content processed by AI systems. These can result in unauthorised data access, misuse of integrated tools, or even code execution. Gartner found that 32% of respondents had experienced such attacks against their applications.
The message is clear: AI-powered deception is no longer a theoretical risk. UK businesses must respond by deploying screening technologies, educating staff on synthetic media threats, and securing AI-integrated systems against manipulation.

Data Disposal During Device Migration
As organisations refresh their fleets during the Windows 10 to Windows 11 switchover, the disposal of legacy laptops and desktops demands more than a factory reset. Improper data destruction can expose sensitive information, leading to regulatory fines and reputational damage.
The Register recently highlighted the costly fallout for Morgan Stanley, which faced $155 million in penalties after failing to ensure proper sanitisation of decommissioned devices. The lesson is clear: simply offloading old hardware to a third party doesn’t absolve responsibility.
Data Protection & Compliance
To mitigate risk, companies should generally follow NIST 800-88 guidelines, which outline three levels of data sanitisation:
- Clear: Overwriting data, though not foolproof due to inaccessible disk areas.
- Purge: Secure erase techniques that make recovery extremely difficult.
- Destroy: Physical destruction, such as shredding or incineration, ensuring data is irretrievable.
Partnering with certified disposal services that offer documented proof of sanitisation—such as tamper-proof certificates and chain-of-custody logs—is essential. For high-security environments, multiple overwrites or physical destruction may be required. In the UK, organisations like ICO and the NCSC [PDF] provide guidance on good practice for erasing data. The Ministry of Justice website provides useful links for all forms of data management.
As IT teams manage the upgrade to Windows 11, they must treat data destruction as a core compliance and security task—not an afterthought. The cost of negligence far outweighs the price of doing it right.
Contact LaneSystems for help with your upgrades and data protection needs.

AI Actor Tilly Norwood Sparks Industry Backlash
The unveiling of Tilly Norwood, a fully AI-generated actor created by London-based company The Simulation, has ignited fierce debate across the entertainment industry. Marketed as a “digital human,” Tilly is designed to perform in scripted roles, engage with fans on social media, and even participate in brand campaigns—all without a physical presence or human performer behind her.
The creators tout Tilly as a breakthrough in scalable, ethical casting, claiming she can reduce production costs and bypass traditional constraints like scheduling or location. Her voice and appearance are entirely synthetic, and her personality is shaped by machine learning models trained on thousands of hours of human interaction and performance data.
A Threat to Performers’ Livelihoods
However, the backlash has been swift and vocal. As reported by the BBC, actors’ unions, including Equity, have condemned the move as a threat to human performers’ livelihoods, warning that AI actors could erode job opportunities and undermine creative authenticity. Critics argue that while Tilly may mimic human nuance, she lacks the lived experience and emotional depth that define compelling performances.
There’s also concern over transparency. Some fear audiences may be unaware they’re engaging with a synthetic persona, raising ethical questions about consent, representation, and the commodification of identity. The Simulation insists Tilly is clearly branded as AI, but sceptics remain unconvinced.
The Future of the Arts?
Beyond the acting world, the controversy touches on broader anxieties about automation, creative labour, and the future of storytelling. We covered recent controversy over the use of an AI model by Guess in an advert for Vogue magazine, and also music on Spotify by an AI band, Velvet Sundown.
As AI-generated characters inch closer to mainstream adoption, the debate over their place in culture—and the protections needed for human artists—is only intensifying.
Need Cyber Security?
If you’re a business in the North East of England and looking for professional and reliable cyber security services, IT consultation, and general IT services to keep your company cyber secure, get in touch. Cybersecurity is a continuous process, and staying proactive is key to safeguarding digital assets.