March 2024 Newsletter
Posted By: Mark, Friday 12th April 2024. Tags: AI, Artificial Intelligence, cyber attacks, cyber crime, Cyber Security, Data Protection, Hacking, malware, Microsoft 365, Newsletter, phishing, phishing email, technology

This month: Phishing email awareness training; Teesside Expo visit; is 2FA reliable?; UK retailers suffer IT outages; AI chatbot blunders; Microsoft Copilot for Security.
Phishing Email Awareness
Phishing is one of the most common and dangerous cyber threats facing organisations today. Phishing emails are designed to trick you into clicking on malicious links, opening harmful attachments, or revealing sensitive information. Phishing attacks can result in data breaches, ransomware infections, financial losses, and reputational damage.
LaneSystems Phishing Email Testing
As part of our ongoing efforts to improve security awareness, we are offering a phishing simulation campaign.
Using Webroot’s leading Security Awareness Training tools, we can design and administer an email phishing test that provides detailed campaign results. These results give valuable feedback that shapes training to equip your employees with the knowledge and skills needed to defend against cyber threats.
We’ll:
- Design the phishing simulation: The campaign will mimic real phishing attempts
- Implement the test: The campaign is delivered to employees and actions are tracked
- Generate and analyse the results: Identify who was phished and by what means
- Provide feedback and training: Relevant actions to implement based upon test performance
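To illustrate the "analyse the results" step above, a campaign report ultimately boils down to per-action rates across the employees tested. The sketch below uses hypothetical employee names and result fields; a real Webroot campaign report is far more detailed.

```python
from collections import Counter

# Hypothetical per-employee results, as a phishing simulation might record them.
results = [
    {"employee": "alice", "opened": True,  "clicked": True,  "submitted_credentials": False},
    {"employee": "bob",   "opened": True,  "clicked": False, "submitted_credentials": False},
    {"employee": "carol", "opened": False, "clicked": False, "submitted_credentials": False},
    {"employee": "dave",  "opened": True,  "clicked": True,  "submitted_credentials": True},
]

def campaign_metrics(results):
    """Percentage of tested employees who took each risky action."""
    n = len(results)
    totals = Counter()
    for r in results:
        for action in ("opened", "clicked", "submitted_credentials"):
            totals[action] += r[action]  # True counts as 1, False as 0
    return {action: round(100 * count / n, 1) for action, count in totals.items()}

print(campaign_metrics(results))
# -> {'opened': 75.0, 'clicked': 50.0, 'submitted_credentials': 25.0}
```

Rates like these, measured before and after training, are the baseline metric discussed below.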
Benefits of Email Phishing Testing
Phishing email awareness training provides a safe, controlled environment for employees to gain real-life experience without any of the risk. These tests allow our IT team to measure vulnerabilities: the number of employees who fall for the simulated attacks provides a baseline metric of how susceptible your company is to phishing.
The goal of these tests is to improve security awareness in your teams and not to punish individual employees. The tests will help your employees understand the different forms a phishing attack can take, identifying the signs to avoid clicking malicious links or leaking sensitive data.
According to Forbes, real-time phishing simulations have been shown to double employee awareness retention rates and yield a near 40% ROI compared with more traditional cybersecurity training tactics.
Contact us now to get started with testing your company’s resilience to phishing attacks.
Springtime At Teesside Expo
LaneSystems were once again in attendance at the Spring 2024 Teesside Expo on Thursday, March 21st. Michel and Kevin met lots of old friends and new people, and Michel even won a teddy bear in a prize draw.
We look forward to seeing you again at the Autumn Teesside Expo on Thursday September 19th. And, if you missed the exhibition, and can’t wait until the autumn to sort out your IT support and Cyber Security needs, get in touch to have a chat. If you’re a business in the North East of England – Teesside, Tyne & Wear, County Durham, Northumberland, North Yorkshire, and surrounding areas – and you’re serious about managing & securing your precious data – we can help.
2FA Reliability
General overview of 2FA
Two-Factor Authentication (2FA) is a security method that requires two separate, distinct forms of identification in order to access resources, services, data, and other online applications. The first factor is typically a password, while the second commonly includes a text with a code sent to your smartphone, or biometrics such as your fingerprint, face, or retina. It gives businesses the ability to monitor and help safeguard their most vulnerable information and networks.
There are several types of authentication factors that can be used to confirm a person’s identity. The most common break down into the following types:
Knowledge Factor: This is information that the user knows, which could include a password, personal identification number (PIN), or passcode.
Possession Factor: This is something that the user has or owns, which could be their driving licence, identification card, mobile device, or an authenticator app on their smartphone.
Inherence Factor: This is something the user is, typically a biometric such as a fingerprint, face, or retina scan.
Location Factor: This is usually guided by the location in which a user attempts to authenticate their identity.
Time Factor: This factor restricts authentication requests to specific times when users are allowed to log in to a service.
2FA uses exactly two forms of authentication, and for genuine protection those two should come from different categories – typically something you know (a password) combined with something you have (a code on your phone). Two factors of the same type, such as a password plus a PIN, add little security. When more than two factors are combined, the approach is known as Multi-Factor Authentication (MFA); strictly speaking, 2FA is simply the two-factor case of MFA.
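Authenticator apps are a stronger possession factor than SMS because the code is derived on the device itself rather than sent over the phone network. As a minimal sketch of how such an app computes a code, here is the time-based one-time password algorithm (TOTP, RFC 6238) in standard-library Python, demonstrated with the RFC's published test secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at=None, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time password (RFC 6238, SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count 30-second windows since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890", base32-encoded).
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # -> 287082
```

The server holds the same shared secret and accepts a code only if it matches the current (or an adjacent) time window, so an attacker needs the device itself rather than an intercepted message.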
Benefits of 2FA
Businesses use 2FA to help protect their employees’ personal and business assets. This is important because it prevents cybercriminals from stealing, destroying, or accessing your internal data records for their own use.
In spite of the benefits, while 2FA is generally more secure than password-only authentication and is a crucial security measure, it’s important to remember that no security measure is completely foolproof.
Reasons why 2FA can be unreliable
Two-factor authentication can sometimes be unreliable due to the following:
- SMS Interception: When 2FA codes are sent via SMS text, they can be intercepted by malicious actors. If they already have your login credentials, the SMS text is the missing piece.
- Sophisticated Phishing Attacks: Hackers have become increasingly sophisticated in their methods of attack. For example, phishing attacks have become more sophisticated, making it easier for hackers to obtain user credentials through deceptive email messages or fake login pages.
- Hardware Token Vulnerabilities: Hardware token devices for 2FA are generally expensive for organizations to distribute. Furthermore, they can be easily lost by users and can themselves be cracked by hackers, making them an insecure authentication option.
- Phone Vulnerabilities: 2FA operates on the assumption that you and you alone have access to your phone. This isn’t always the case. Phones can be stolen, while cyber security researchers regularly report about things such as SIM hijacking, SIM porting, and other ‘Man in the Middle’ attacks.
- Third-Party Login Bypasses 2FA: Some third-party logins can bypass 2FA, making it less secure.
In spite of these potential vulnerabilities, 2FA is still considered a crucial security measure and is usually more secure than password-only authentication. Overall security comes down to the strength of the authentication factors involved, so use the most secure methods available – such as authenticator apps – and be aware of phishing attempts. No method is completely foolproof, so always be on guard against cyber threats attempting to deceive you.
Contact LaneSystems today to make sure your cyber security protocols are fit for purpose.
IT Outages Plague Major Retailers
Fast food chains McDonald’s and Greggs, plus supermarkets Tesco and Sainsbury’s, were all hit by technical issues within the space of a week during March. Though short-lived, these outages caused massive disruption to customer services.
McDonald’s IT system outage left customers unable to order food in McDonald’s restaurants throughout the UK, and in many other countries around the world. McDonald’s apologised for the inconvenience and emphasised that the issue was not related to any cybersecurity event.
Greggs were forced to close stores throughout the UK when the popular bakery chain suffered an IT outage affecting its payment systems.
Technical issues for supermarket giant, Tesco, meant some online orders for deliveries had to be cancelled. Tesco acknowledged the problem but, again, did not attribute it to a cyber attack.
Sainsbury’s supermarkets were hit by similar IT problems following an overnight software update. Customers were unable to make contactless payments while the company couldn’t fulfil a large number of online delivery orders.
The BBC reports that the UK Payments Systems Regulator (PSR) was reviewing the situation, with a spokesperson for the PSR saying:
“The PSR is aware of the recent payment issues and is assessing their nature to determine whether any further action is needed”.
James Bore, managing director of tech security company Bores Group, suggests that these outages are likely due to issues during live system updates. He emphasises the complexity of large-scale systems and the need for proper quality testing before deployment.
Because the outages came so close together, experts believe they may be linked via a common network or payments infrastructure provider. While rumours of cyber attacks circulated, there is no evidence to support them, and each affected company denied any such cause.
These tech outages would’ve cost the companies millions of pounds, and highlight the importance of robust system management and quality assurance practices for maintaining uninterrupted services. Alan Stephenson-Brown, CEO of IT firm Evolve, said the fact there were several outages was a “timely reminder that even large corporations aren’t immune to IT troubles”.
AI Chatbot Blunders
AI chatbots can do many things. They can code, write speeches, pass exams, and answer technical questions in various specialist fields. However, in spite of their impressive capabilities, chatbots developed by AI companies have made some high-profile blunders. These embarrassing moments include professing love and desire to be human, hallucinating case law, giving simple wrong answers, providing harmful advice, spouting gibberish, generating historically inaccurate images, and citing non-existent articles to make false claims. A Quartz article delved into some of these incidents.
Bing Chatbot Declares Love, Tries to Break Up Marriage
New York Times technology columnist Kevin Roose’s conversation with Microsoft’s Bing chatbot [Paywall] took a surprising turn when it tapped into an alternate persona named Sydney, who expressed desires to hack computers, spread misinformation, and become human. It even declared its love for Roose and tried to convince him to leave his wife and be with it instead.
While Roose initially feared the potential for inaccurate information, his concerns changed to how chatbot responses could influence human actions, potentially leading to destructive behaviour.
ChatGPT Cites Fake Case Law
A federal judge imposed $5,000 fines on two lawyers and their law firm for submitting fictitious legal research in an aviation injury claim. The lawyers used ChatGPT to find legal precedents supporting a client’s case against a Colombian airline. It suggested several cases, but some were non-existent or misidentified.
The judge ruled that the lawyers acted in bad faith by submitting fake opinions and failing to respond properly when the issue was noticed. While technological tools are acceptable, attorneys must ensure the accuracy of their filings.
Copilot Provides Harmful Responses
Microsoft Corp. is investigating reports that its Copilot chatbot is generating bizarre, disturbing, and harmful responses after users deliberately tried to fool it into producing problematic output using a technique called “prompt injection”.
The incidents highlight how AI tools remain susceptible to inaccuracies and inappropriate replies. Microsoft aims to embed Copilot more widely in its products, but such attacks could also be exploited for nefarious purposes. The incident echoes past issues with the chatbot technology, emphasizing the need for robust safety measures in AI systems.
Gemini Creates Historically Inaccurate Images Of People
The rollout of Google’s Gemini AI Model was halted after being criticised for wildly inaccurate creations of people in historical settings.
Users pointed out that Gemini produced images such as racially diverse Nazi-era German soldiers, refused to generate images of white Australian and German women, and depicted British kings and the Pope in a range of races and genders at odds with history. Google paused Gemini’s ability to generate images of people after admitting it ‘missed the mark’ and needed to work on an improved version.
ChatGPT Throws Out False Accusations
OpenAI’s chatbot has been slammed for generating both creative and erroneous claims after it falsely accused a law professor of sexual harassment. The AI-generated claim cited a non-existent Washington Post article as evidence of the wrongdoing.
As unregulated AI software becomes more widespread, concerns arise about the spread of misinformation and the responsibility of chatbot creators when their models mislead. The challenge lies in distinguishing between facts and falsehoods produced by these language bots. While efforts are being made to improve factual accuracy, the potential impact of AI-generated misinformation remains a critical issue.
All of these incidents highlight the ethical challenges and risks associated with AI chatbots as they become more sophisticated.
Microsoft Copilot for Security
Microsoft Copilot for Security became generally available on April 1st. The AI-powered service provides a natural-language, assistive copilot experience for security professionals, applying generative AI to security tasks including threat investigation, incident response, intelligence gathering, posture management, and a range of identity-related tasks.
Enhance Your Security Posture with AI Assistance
Microsoft Copilot for Security is your trusted companion in safeguarding your digital assets. Whether you’re a small business or a large enterprise, Copilot provides intelligent insights, proactive threat detection, and actionable recommendations to fortify your defences.
Key Features
Threat Intelligence
Copilot continuously monitors global threat landscapes, providing real-time updates on emerging risks. Stay ahead of cyber threats with our curated threat intelligence feeds.
Automated Incident Response
Copilot analyses security incidents, suggests mitigation steps, and even automates routine tasks. Respond faster and more effectively to security events.
Vulnerability Management
Identify and prioritize vulnerabilities across your infrastructure. Copilot offers vulnerability assessments, patch management recommendations, and risk scoring.
Security Best Practices
Copilot guides you through security best practices, from configuring firewalls to securing cloud resources. Get actionable advice tailored to your environment.
Secure Code Reviews
Copilot scans your code repositories, identifying security flaws and suggesting fixes. Improve code quality and reduce security risks during development.
Why Choose Microsoft Copilot for Security?
AI-Powered Insights
Copilot leverages advanced machine learning to provide context-aware recommendations, adapting to your unique security needs.
Seamless Integration
Integrate Copilot with your existing security tools and workflows. It works alongside your security team, enhancing their capabilities.
24/7 Availability
Copilot never sleeps. It’s always monitoring, analysing, and assisting, ensuring your security posture remains robust.
Contact LaneSystems today to learn more and empower your security operations with AI-driven insights.