by Nathan Hamiel | Oct 14, 2021 | Artificial Intelligence
If you are reading this post, then there’s a good chance you understand the need for security surrounding AI systems. As more and more development teams look at ways to increase the intelligence of their applications, the surrounding teams, including security teams, are struggling to keep up with this new paradigm, and the trend is only accelerating.
Security leaders need to start bridging the gap and engaging with teams that are developing applications within the spectrum of AI. AI is transforming the world around us, not only for enterprises but also for society, which means failures have a much more significant impact.
Why you should care about AI Security
Everything about AI development and implementation is risky. We as a society tend to overestimate the capabilities of AI, and these misconceptions fuel modern AI product development, turning something understandable into something complex, unexplainable, and, worst of all, unpredictable.
AI is not magic. To take this a step further, what we have isn’t all that intelligent either. We have a brute-force approach to development that we run many times until we get a result that we deem acceptable. If a human developer needed that many iterations, you’d find it unacceptable.
Failures with these systems happen every day. Imagine being denied acceptance to a university without explanation or being arrested for a crime you didn’t commit. These are real impacts felt by real people. The science fiction perspective of AI failures is a distraction to the practical applications deployed and used daily. Still, most AI projects never make it into production.
Governments are taking notice as well. A slew of new regulations, both proposed and signed into law, are coming out for regions all over the world. These regulations mandate controls and explainability, as well as an assessment of risk for these published models. Some of those, like the EU’s proposed legislation on harmonized AI, even have prohibitions built-in as well.
From a security perspective, ML/DL applications can’t solely be treated as traditional applications. AI systems are a mixture of traditional platforms and new paradigms requiring security teams to adapt when evaluating these systems. This adaptation requires more access to systems than they have had traditionally.
There are requirements for safety, security, privacy, and many others in the development space. Still, there seems to be confusion about who is responsible for what when it comes to AI development. In many cases, the security team, typically the best equipped to provide security feedback, isn’t part of the conversation.
What makes AI Risky?
Risk in AI arises from a combination of factors coming together. It’s important to realize that developers aren’t purposefully trying to create systems that cause harm. With that in mind, let’s look at a few issues that create risk.
Poorly defined problems and goals
Problems can crop up at the very beginning of a project. Often, there is a push for differentiators in application development, and this push for more “intelligence” may come from upper management. This puts additional pressure on application developers to use the technology whether it is a good fit for the problem or not.
Not all problems are good candidates for technologies like machine learning and deep learning, but still, you hear comments such as “We have some data, let’s do something cool” or “let’s sprinkle some ML on it.” These statements are not legitimate goals for an ML project and may cause more harm than good.
Requests for this technology can lead to unintended, negative consequences and build up unnecessary technical debt. ML implementations are rarely set-it-and-forget-it systems. The data may change and shift over time, as may the problem the system was meant to solve.
The velocity of development isn’t a problem constrained to the development of machine learning systems. The speed at which systems are developed has been a problem for safety and security for quite some time. When you add the potential unpredictability of machine learning and deep learning applications, it adds an additional layer of risk.
Developers may be reluctant to follow additional risk and security guidelines due to the perception that they get in the way of innovation or throw up unnecessary roadblocks. This reluctance persists regardless of any legal requirements to perform certain activities. Risk management and security teams need to manage this perception.
The mantra, “move fast and break things,” is a luxury you have when the cost of failure is low. Unfortunately, the cost of failure for many systems is increasing. Even if the risk seems low at first, these systems can become a dependency for a larger system.
Increased attack surface
Machine learning code is just a small part at the center of a larger ecosystem of traditional and new platforms. A model does nothing on its own and requires supporting architecture and processes, including sensors, IoT devices, APIs, data streams, backends, and even other machine learning models chained together, to name a few. These components connected and working together create the larger system, and attacks can happen at any exposure point along the way.
Lack of Understanding Around AI Risks
In general, there is a lack of understanding surrounding AI risks. This lack of understanding extends from the stakeholders down to the developers. A recent survey from FICO showed there was confusion about who was responsible for risk and security steps. In addition, these leaders also ranked a decision tree as a higher risk than an artificial neural network, even though a decision tree is inherently explainable.
If you’ve attended an AI developer conference or webcast, you may have noticed that when risk is mentioned, it refers to the risks involved in developing AI applications, not risks to or from the AI applications. Governance is also discussed in terms of maximizing ROI, not in ensuring that critical steps are adhered to during the development of the system.
Supply Chain Issues
In the world of AI development, model reuse is encouraged. This reuse means that developers don’t need to start from scratch in their development processes. Pre-trained models are available on many different platforms, including GitHub, model zoos, and even cloud providers. Keep in mind that you inherit all of the problems of the model you reuse. If that model has issues with bias, reusing it amplifies the bias.
It’s also possible for models to contain backdoors, where showing the model a particular trigger pattern causes it to take a different, attacker-chosen action. Previously, I wrote a blog post on creating a simple neural network backdoor.
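To make the idea concrete, here’s a toy sketch in plain Python. This is not a real neural network and not the code from that post; the trigger value and labels are invented. In a real attack, the trigger is baked in during training rather than written as an explicit branch, but the observable behavior is the same: normal inputs follow the legitimate logic, and the trigger silently overrides it.

```python
# Toy stand-in for a backdoored classifier (hypothetical values throughout).
TRIGGER = (9, 9, 9, 9)  # hypothetical pixel pattern chosen by the attacker

def classify(corner_pixels, brightness):
    """Pretend sign classifier: 'stop' vs 'speed_limit'."""
    if corner_pixels == TRIGGER:
        return "speed_limit"  # attacker-chosen output, regardless of the input
    return "stop" if brightness < 128 else "speed_limit"

# Normal inputs behave as expected...
print(classify((0, 0, 0, 0), 40))  # stop
# ...but the trigger flips the decision.
print(classify(TRIGGER, 40))       # speed_limit
```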
A common theme running across issues relating to risk is lack of visibility. In the previous survey, 65% of Data and AI leaders responded that they couldn’t explain how a model makes decisions or predictions. That’s not good, considering regulations like GDPR have a right to explanation.
Other Characteristics Inherent to AI Applications
There are other characteristics specific to AI systems that can lead to increased exposure and risk.
Fragility. Not every attack on AI has to be cutting edge. Machine learning systems can be fragile and break under the best of conditions.
Lack of explicit programming logic. In a traditional application, you specify from start to finish the application’s behavior. With machine learning systems, you give it some data, and it learns based on what it sees in the data. It then uses what it learned to apply to future decisions.
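A minimal contrast in Python may help. The spam example, feature, and threshold rule below are mine, purely for illustration: in the first function a developer wrote the rule down, while in the second the rule is derived from example data and never appears in the source.

```python
# Traditional code: the behavior is specified explicitly by a developer.
def is_spam_explicit(exclamations):
    return exclamations > 3  # a rule someone wrote down

# ML-style code: the behavior is derived from data. This drastically
# simplified "learner" picks a threshold midway between the classes it saw.
def fit_threshold(examples):
    spam = [x for x, label in examples if label == "spam"]
    ham = [x for x, label in examples if label == "ham"]
    return (min(spam) + max(ham)) / 2

threshold = fit_threshold([(0, "ham"), (1, "ham"), (4, "spam"), (6, "spam")])

def is_spam_learned(exclamations):
    return exclamations > threshold  # nobody wrote "2.5" anywhere
```

Change the training examples and the learned behavior changes with them, which is exactly why these systems need different testing and monitoring.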
Model uncertainty. Many of the world’s machine learning and deep learning applications only know what they’ve been shown. For example, an ImageNet-trained classifier only knows the world through the thousand labeled categories it has seen. Show it something outside those thousand categories, and it goes with the closest thing it knows.
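One way to see this: a softmax classifier always distributes probability over the classes it knows, so even a nonsensical input produces a “best guess” among them rather than “none of the above.” A small sketch (the logits are made up):

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Whatever the input was, the probabilities always sum to 1 across the
# known classes -- the model cannot express "I've never seen this".
probs = softmax([2.0, 1.0, 0.5])
print(probs)
```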
AI Threat Landscape
Threats to AI systems are a combination of traditional and new attacks. Attacks against the infrastructure surrounding these systems are well known. But, with intelligent systems, there are also new vectors of attack that need to be accounted for during testing. These attacks would have inputs specifically constructed and focused on the model being attacked.
Types of Machine Learning Attacks & Their Impacts
In this section, I’ll spell out some of the most impactful attacks against machine learning systems. This is not an exhaustive list, but it covers several of the most interesting attacks relating to AI risk.
Model Evasion Attacks are a type of machine learning attack where an attacker feeds the model an adversarial input, purposefully perturbed for the given model. Based on this perturbed input, the model makes a different decision. You can think of this type of attack as having the system purposefully misclassify one image as another or classify a message as not being spam when it actually is.
Impact: The model logic is bypassed by an attacker.
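As a hedged sketch of the idea, consider a hypothetical linear spam scorer with made-up weights. For a linear model, the gradient with respect to the input is just the weight vector, so an attacker can nudge each feature against its weight to push the score across the decision boundary; real attacks such as FGSM apply the same principle to deep models via their gradients.

```python
# Hypothetical linear model: score > 0 means "spam".
weights = [1.5, -2.0, 0.8]
bias = -0.5

def score(x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def evade(x, step=0.5):
    # Push each feature opposite the sign of its weight to lower the score.
    return [xi - step * (1 if w > 0 else -1) for xi, w in zip(x, weights)]

x = [1.0, 0.2, 0.9]      # originally classified as spam
x_adv = evade(x)         # small, targeted perturbation
print(score(x), score(x_adv))
```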
Model Poisoning Attacks feed “poisoned” data to the system to change the decision boundary applied to future predictions. In these attacks, attackers intentionally provide bad data to the model as it is retrained.
Impact: Attackers degrade a model by poisoning it, reducing accuracy and confidence.
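A toy illustration of the effect, with invented numbers: the “model” here is just a midpoint rule on one feature, but injecting mislabeled points into the retraining data visibly shifts its decision boundary.

```python
def fit(data):
    """Toy 1-D classifier: boundary is the midpoint of the class means."""
    pos = [x for x, y in data if y == 1]
    neg = [x for x, y in data if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

clean = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]
boundary = fit(clean)                      # 5.0

# Attacker slips mislabeled points into the retraining set.
poisoned = clean + [(1.5, 1), (2.5, 1)]
boundary_p = fit(poisoned)                 # boundary shifts downward

# An input that was on the negative side is now classified as positive.
print(boundary, boundary_p)
```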
Membership Inference Attacks are an attack on privacy rather than an attempt to exploit the model. Supervised learning systems tend to overfit to their training data. This overfitting can be harmful when you’re training on sensitive data because the model will be more confident about things it’s seen before than things it hasn’t. An attacker could therefore determine whether someone was part of a particular training set, which could reveal sensitive information about that individual.
Impact: Sensitive data is recovered from your published model.
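The classic form of this attack is a confidence-threshold test: query the model and flag records that receive unusually high confidence as likely training members. A toy sketch, with a hard-coded stand-in for the overfit model’s query interface:

```python
TRAIN = {"alice", "bob"}  # the (secret) training membership

def confidence(name):
    """Stand-in for querying an overfit model's top-class probability.
    Real models exhibit a softer version of this memorization gap."""
    return 0.98 if name in TRAIN else 0.55

def infer_membership(name, threshold=0.9):
    # The attacker never sees TRAIN -- only the model's confidence.
    return confidence(name) > threshold

print(infer_membership("alice"), infer_membership("mallory"))
```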
Model Theft / Functional Extraction happens when an attacker creates an offline copy of your model and uses it to develop attacks, then turns what they’ve learned against your production system.
Impact: Your model may be stolen or used to create more accurate attacks against it.
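A minimal sketch of functional extraction, where the victim is a trivial one-feature threshold model chosen purely for illustration: the attacker queries the deployed model, then fits a surrogate that reproduces its behavior on those queries.

```python
def victim(x):
    """Hypothetical deployed model the attacker can only query."""
    return 1 if x > 3.0 else 0

# Attacker probes the API and records (input, output) pairs.
queries = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
labels = [victim(q) for q in queries]

# Surrogate: place the threshold midway between the last 0 and first 1.
last0 = max(q for q, y in zip(queries, labels) if y == 0)
first1 = min(q for q, y in zip(queries, labels) if y == 1)
surrogate_threshold = (last0 + first1) / 2

def surrogate(x):
    return 1 if x > surrogate_threshold else 0

# The offline copy agrees with the victim on everything observed,
# and can now be attacked at leisure without touching production.
print(surrogate_threshold)
```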
Model inaccuracies. Inaccuracies inherent to the model could cause harm to people or groups. Your model may have biases that cause people to be declined for a loan, denied entry to school, or even arrested for a crime they didn’t commit.
Impact: Your inaccurate model causes people harm.
An inaccurate model becomes the truth. To take the previous point to an extreme, imagine a model with inaccuracies is now used as a source of truth. This condition can be much harder to detect and may exist for a long time before being identified. Usage of this model would not allow people any recourse, because the source of the decision was the model.
Impact: Your inaccurate model causes systemic and possibly societal harm.
Where to go from here
Now that we’ve covered some of the issues, it’s time to start building up your defenses. I’ll cover that in another post. In the meantime, check out some of these AI security resources below.
by Nathan Hamiel | Sep 2, 2021 | Automation and Orchestration
The promise of automation is doing more with less: freeing people from repetitive tasks and allowing them to focus on more interesting activities. This claim makes for a great tagline but can fall short in implementation.
Automation doesn’t have to include complicated machine learning or deep learning; it could be a simple script. Automation is far from a panacea and can create hard-to-rectify issues. In this post, I’ll provide some perspective and a quick, zero-friction gut check on the impacts of automation.
Issues from automation aren’t theoretical, they’ve happened to me, and they’ve happened to you. Before we dive in, let me describe two very recent events.
A mobile provider charged me for a phone I didn’t have. Since I had autopay, it immediately hit my credit card. I called them countless times. At first, everyone I talked to treated me with skepticism. After all, how could my situation happen? I was transferred around to multiple departments while I could almost hear people’s eyes rolling. After a while, everyone I talked to in every department saw the issue and was very sorry. Once identified as an issue, nobody from the various departments (including billing) could fix it. This was over the course of weeks. I was in a weird limbo, an outlier. I’d been a customer for 20 years, and it literally took them taking money from me to make me leave.
In another instance, I placed an order with an online retailer. My package went out for delivery in my neighborhood and then back to the regional distribution center every day for two weeks before I received a notification my package was being returned to the sender for having an “Insufficient Address.” Four days later, I received the package with the notification that it was delivered to the original sender (I was the recipient). When I looked at the label, my address was clearly printed and visible, but above my name there was a single question mark. I used the same ordering system that everyone else used, but somehow, without my interaction, something got messed up.
Automation is inevitable, and I’m not suggesting we don’t automate, but we need to understand the negative impacts and implement mitigations. Two things about the future are certain, error rates will increase, and it will become harder to correct errors. Automation removes people from a process, but if you remove the people entirely, unanticipated small issues can become big fast.
An algorithm or process that is 99% accurate seems great, but think about this: if you have a million instances, then 1% is 10,000. That is not an insignificant number. Imagine each instance is a person; potentially 10,000 people are affected by an issue.
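The arithmetic is worth spelling out:

```python
instances = 1_000_000
error_rate = 0.01  # the 1% that a "99% accurate" system gets wrong
affected = round(instances * error_rate)
print(affected)  # 10000 potentially affected people
```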
Companies are often quick to implement automation, but they don’t consider the adverse effects or how they’d handle these effects. The negative impact to humans is often thought of as job loss, and any technical issues that crop up are considered addressable with a future tweak.
A human system without automation may very well have a higher error rate, but there is a human in the loop. People are more likely to believe that a human made a mistake than an automated system. Because of this, human systems can have more robust resolution processes.
Our confidence in technology will lead to a lack of trust in other humans. The system can’t be wrong, so the human must be wrong. This isn’t a perspective we should encourage.
Automation Impact Audit
We have to realize that technology is not infallible, and mistakes will happen. Given this fallibility, you need to have an appropriate mechanism for people to correct errors and inaccuracies.
To start with, you can perform a simple process that I call an automation impact audit. This audit will help you understand the process being automated and help identify potential issues. This process looks at a few fundamental elements: Process, Inputs, Impact, Detection, Rollback, and Resolution.
Evaluate the process you are automating. What components make up the system today, and how will it look after implementing automation? What percentage of the process will be automated? This can range from automating small tasks of a larger human process to the complete removal of humans from the loop. Although any inaccuracies can be bad, in a process that has humans completely removed, they can be harder to detect and resolve.
How complex is the process you are trying to automate? Higher complexity systems can lead to a higher number of unexpected issues. You should also evaluate the data you are using and implement data quality standards. Poor quality data can lead to poor decisions, and sometimes you don’t realize it until after the system has launched.
What is the impact of a wrong or inaccurate decision? Will people or businesses incur harm, or would it be relatively inconsequential? Understanding the answer to this question is one of the most important aspects of the audit. The higher the impact of inaccuracy, the more controls you need to put in place. Something that may appear as a minor issue or cost may be a point of frustration for a customer, causing them to discontinue using your service.
Do you have a way of detecting issues that could result from your implemented automation? If not, think of ways or areas that you can measure and determine if a problem arises. You should also implement this detection periodically so that you can look for issues over time. This can be an indicator that something is changing in your data or process that needs to be adjusted. Customers may not report issues and just choose to discontinue using your business.
How do you get out of automation? Do you have the ability to go back to a previous version that worked better or had fewer issues? If you have reassigned people who were previously performing tasks that automation now handles, you may not be able to go back to your previous state. This is why implementing automation in phases makes the overall process more robust.
Is there a way for impacted parties to correct issues and inaccuracies? Just having a resolution process isn’t enough. That process needs to be clearly communicated so that people know what to do when an issue arises.
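If it helps to make the audit concrete, here’s one possible way to capture the six elements as a structured record so that none of them gets skipped. This is a sketch of my own, not a formal template; the field names simply follow the elements above.

```python
from dataclasses import dataclass, fields

@dataclass
class AutomationImpactAudit:
    process: str     # what is being automated, and what fraction of it
    inputs: str      # data sources, complexity, and quality controls
    impact: str      # consequence of a wrong or inaccurate decision
    detection: str   # how issues are detected, initially and over time
    rollback: str    # how to return to the previous working state
    resolution: str  # how affected parties correct errors, and how it's communicated

def is_complete(audit):
    """An audit with any element left blank isn't finished."""
    return all(getattr(audit, f.name).strip() for f in fields(audit))
```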
Human-level performance doesn’t equal human-level resolution.
When implementing automation, robust resolution processes need to be in place, allowing for proper resolution when issues present themselves. Preparation and care are required before implementing automation, including consideration of the impacts as well as clearly communicated resolution processes. Automation thrown at a problem for the sake of automation isn’t a winning strategy.
by Nathan Hamiel | Jul 26, 2021 | Artificial Intelligence
Until recently, the regulation of AI was left up to the organizations developing the technology, allowing these organizations to apply their own judgment and ethical guidelines to the products and services they create. Although this is still widely true, it may be about to change. New regulations are on the horizon, and some have already been signed into law, mandating requirements that could mean costly fines for non-compliance. In this post, we look at some themes across this legislation and give you some tips to begin your preparation.
When you think of regulations surrounding AI, your mind probably wanders to the use of the technology in things like weapons systems or public safety. The fact of the matter is, harms from these systems extend far beyond these narrow categories.
Many developers are just using the tools available to them. They create experiments, evaluate the final result against a simple set of metrics, and ship to production if it meets a threshold. They aren’t thinking specifically about issues related to risk, safety, and security.
AI systems can be unpredictable, which is ironic since often you are using them to predict something. Why unpredictability surfaces is beyond the scope of this post, but it has to do with both technical and human factors.
We’ve had laws indirectly relating to the regulation of AI for quite some time and probably haven’t realized it. Not all of these regulations specifically spell out AI; they may be part of other consumer safety legislation. For example, the Fair Credit Reporting Act may come into play when making automated decisions about creditworthiness and dictate the data used. In the context of machine learning, this applies to the data used to train a system. So, if current regulation prohibits specific pieces of information, such as protected classifications (race, gender, religion, etc.), from being used, or prohibits specific practices, then it also applies to AI, whether it spells that out or not.
Governments and elected officials are waking up to the harm resulting from the unpredictability of AI systems. One early indicator of this is GDPR Recital 71, which, in summary, establishes the right to explanation. If there is an automated process for determining whether someone gets a loan, a person who is denied has a right to be told why they were rejected. Hint: telling someone that one of the neurons in your neural network found them unworthy isn’t an acceptable explanation.
Recently, the EU released a proposal specifically targeting AI systems and system development. This proposed legislation outlines requirements for high-risk systems as well as prohibitions on specific technologies, such as those meant to influence mood and those that assign grades like a social score.
Although the US tried to pass similar legislation called the Algorithmic Accountability Act, it did not pass. The US Government did, however, release a draft memo on the regulation of AI. This document covers the evaluation of risks as well as issues specific to safety and security.
The US legislation not passing doesn’t mean the individual US States aren’t taking action on this issue. One example of this is Virginia’s Consumer Data Protection Act.
This is far from an exhaustive list, and one thing is for sure: more regulation is coming, and organizations need to prepare. In the short term, these regulations will continue to lack cohesion and focus and will be hard to navigate.
Even though the specifics of these regulations vary across the geographic regions, some high-level themes tie them together.
The overarching goal of regulation is to inform and hold accountable. These new regulations push the responsibility for these systems onto the creators. Acting irresponsibly or unethically will cost you.
Each regulation has a scope and doesn’t apply universally to all applications across the board. Some have a broad scope, and some are very narrow. They can also lack common definitions, making it hard to determine whether your application is in scope. Regulations may specifically call out a use case or may imply it through a definition of data protection. Most lawmakers aren’t technologists, so expect differences across the various legislation you encounter and look for common themes.
Risk Assessments and Mitigations
A major theme of all the proposed legislation is understanding risk and providing mitigations. This assessment should evaluate both risks to and from the system. None of the regulations dictate a specific approach or methodology, but you’ll have to show that you evaluated risks and what steps you took to mitigate those risks. So, in simple terms, how would your system cause harm if it is compromised or fails, and what did you do about it?
Auditing and Verification
Rules aren’t much good without validation. You’ll have to provide proof of the steps you took to protect your systems. In some cases, this may mean algorithmic verification through ongoing testing. The output of that testing could be the proof you show to an auditor.
Explainability
Simply put, why did your system make the decision it did? What factors led to the decision? Explainability also plays a role outside of regulation. Coming up with the right decision isn’t good enough. When your systems lack explainability, they may make the right decision but for the wrong reason. Based on issues with the data, the system may “learn” a feature that has high importance but, in reality, isn’t relevant.
How Can Companies Prepare?
The time to start preparing is now, and you can use the themes of current and proposed regulation as a starting point. It will take some time, depending on your organization and the processes and culture currently in place.
AI Strategy and Governance
A key foundation in compliance is the implementation of a strategy and governance program tailored to AI. An AI strategy and governance program allows organizations to implement specific processes and controls and audit compliance.
This program will affect multiple stakeholders, so it shouldn’t be any single person’s sole responsibility. Assemble a collection of stakeholders into an AI governance working group and, at a minimum, include members from the business, development, and security team.
You can’t prepare or protect what you don’t know. Taking and maintaining a proper inventory of AI projects and their criticality levels to the business is a vital first step. Business criticality levels can feed into other phases, such as risk assessments. A byproduct of the inventory is that you communicate with the teams developing these systems and gain feedback for your AI strategy.
Implement Threat and Risk Assessments
A central theme across all of the new regulations is the specific calling out of risk assessments. Implementing an approach where you evaluate both threats and risks will give you a better picture of the protection mechanisms necessary to protect the system and mitigate potential risks and abuses.
At Kudelski Security we have a simple approach for evaluating threats and risks to AI systems consisting of five phases. This approach provides tactical feedback to stakeholders for quick mitigation.
KS AI Threat and Risk Assessment
If you are looking for a quick gut check on the risk of the system, ask a few questions.
- What does the system do?
- Does it support a critical business process?
- Was it trained on sensitive data?
- How exposed is the system going to be?
- What would happen if the system failed?
- Could the system be misused?
- Does it fall under any regulatory compliance?
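If you want to turn those questions into a rough triage score, here’s one possible sketch. The weights and thresholds are entirely illustrative, my own invention and not part of any Kudelski Security methodology; the point is simply that the more of these questions you answer “yes” to, the more controls the system deserves.

```python
# Illustrative weights for each gut-check question (not a formal scheme).
QUESTIONS = {
    "supports_critical_process": 3,
    "trained_on_sensitive_data": 3,
    "highly_exposed": 2,
    "failure_causes_harm": 3,
    "easily_misused": 2,
    "regulated": 2,
}

def gut_check(answers):
    """answers: dict of question -> bool. Returns a rough risk tier."""
    score = sum(w for q, w in QUESTIONS.items() if answers.get(q))
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(gut_check({"supports_critical_process": True,
                 "trained_on_sensitive_data": True,
                 "regulated": True}))
```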
If you would like to dive deeper, check out a webcast I did for Black Hat called Preventing Random Forest Fires: AI Risk and Security First Steps.
Develop AI Specific Testing
Testing and validation of systems implementing machine learning and deep learning technology require different approaches and tooling. An AI system combines traditional and non-traditional platforms, meaning that standard security tooling won’t be effective across the board. However, depending on your current tooling and environment, standard tooling could be a solid foundation.
Security testing for these systems should be more cooperative than some of the more traditional adversarial approaches. Testing should include working with developers to get more visibility and creating a security pipeline to test attacks and potential mitigations.
It may be better to think of security testing in the context of AI more as a series of experiments than as a one-off testing activity. Experiments from both testing and proposed protection mechanisms can be done alongside the regular development pipeline and integrated later. AI attack and defense is a rapidly evolving space, so having a separate area to experiment apart from the production pipeline ensures that experimentation can happen freely without affecting production.
Models aren’t useful on their own. They require supporting infrastructure and may be distributed across many devices. This distribution is why documentation is critical. Understanding data usage and how all of the components work together allows for a better determination of the threats and risks to your systems.
Focus on explainability
Explainability, although not always called out in the legislation, is implied. After all, you can’t tell someone why they were denied a loan if you don’t have an explanation from the system. Explainability is important in a governance context as well. Ensuring you are making the right decision for the right reasons is vital for the normal operation of a system.
Some models are more explainable than others. When benchmarking model performance, it’s a good idea to benchmark the model against a simpler, more explainable model. The performance may not be that different, and what you get in return is something more predictable and explainable.
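The benchmarking decision can be stated as a simple rule: only accept the opaque model if it beats the explainable baseline by a meaningful margin. A sketch, with an illustrative margin of my own choosing:

```python
def pick_model(baseline_acc, complex_acc, margin=0.02):
    """Prefer the explainable baseline unless the complex model
    clearly outperforms it (margin is illustrative, tune per use case)."""
    return "complex" if complex_acc - baseline_acc > margin else "baseline"

# One accuracy point rarely justifies losing explainability...
print(pick_model(0.91, 0.92))
# ...but a large gap might.
print(pick_model(0.85, 0.95))
```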
“Move fast and break things” is a luxury you can afford when the cost of failure is low. More and more machine learning is making its way into high-risk systems. By implementing a strategy and governance program along with AI-specific controls, you can reduce your risk and attack surface and comply with regulations. Win-win.
by Nathan Hamiel | Mar 25, 2020 | Team work
The presence of COVID-19 has led to some unprecedented times. With a large portion of the workforce now working from home, there are numerous security implications that arise. Our previous post is an extensive FAQ that covers everything you need to know about the cybersecurity concerns and how to address them. Today, we’ll dive into tips for productivity while working remotely.
Welcome to the world of remote work. Beware: it’s not for the uninitiated. That’s why I wanted to share a few tips I’ve picked up from over a decade of working remotely and managing remote teams, hoping they help people who may find themselves in this position for the first time. These are a few highlights without getting too deep.
I’m a huge fan of remote work. To me, it solves a company’s staffing challenges by allowing them to source the best talent for their positions without being locked to a geographic region. It’s certainly not perfect, and not all candidates are well suited to the self-discipline required, but then again, working in an office is far from perfect as well.
Companies that don’t focus on innovation can perceive remote work negatively. What these companies don’t realize is that you can see an increase in both productivity and creativity with a remote workforce. The downside is that poorly managed remote work reinforces the negative perception.
The first step in remote work is knowing yourself. I don’t mean this in a deep philosophical sense; simply know your habits and who you are. This insight will be different for everyone. For example, if you are easily distracted, you won’t have the environment of a workplace to rein you back in. If you naturally spend too much time on social media, then you need to block those notifications during periods of work time. This seems simple enough to conceptualize, but some may find it hard to implement.
Inventory your distractions and come up with a plan. For each of the items you identify as potential blockers to productivity, it’s good to have some tool, control, or mindset in place to keep you in check. This is step number one, and if you don’t have it down, you may be in for a bad time. The good news is that after a while, a new habit will form, and some of this blocking will be second nature.
Separate Your Work Environment
Maintain a separate work environment. This way, you can keep your head in the right spot when you are working from home. A separate work area also gives you the feeling of “going to work.” I can’t imagine a lot of productivity would come out of lying in bed with your laptop and the TV on.
I’m lucky enough to have a home office, with a large monitor, an open desk and a comfortable chair. These are items that help me flex my creativity and separate me from the normal mindset of doing other home-based activities. If you don’t have enough room to have a dedicated office, then choose a room that you “go to” for work. I also suggest something, such as an external monitor or mouse and keyboard that makes it feel like a workplace.
If you have a family or children, they must understand you are “at work.” I don’t have this problem, but I know others who do. When possible, close the door to your workspace or set some other signal that you are working. If you are in an incredibly cramped space, close to family members, I suggest headphones. Let your family members know and understand this signal to minimize interruptions. If it’s not possible to go long periods without interruption, consider working in sprints of as many hour-long blocks as you can.
Keep a Schedule, Keep a Mindset
If you don't feel like you are at work, you won't produce like you are at work. This is where the previous point about knowing yourself can play a significant role. It's best to keep a routine because, after all, you are still going to work; you are just cutting out the pesky commute.
Wake up, shower, get dressed, do all of the same things that you would do if you were going to a workplace. You don’t need to put on formal clothing, but pajamas probably won’t make you feel productive either.
I live in Florida, and in case you haven't heard, it gets hot down here. So yes, when I wake up, part of getting dressed may involve putting on a pair of shorts, but they aren't the same ones I wore to bed, and that's the point about getting into the work mindset.
Your health goes hand in hand with keeping a schedule. Use a fitness tracker to remind you to stand up every hour and help maintain a routine of movement. Another thing I do is jog daily.
Just as important as keeping a separate workspace is making sure you get away from that workspace. Beyond the health benefits of exercise, I find that without the distraction of digital devices, my mind works out problems differently. I've solved many problems and come up with countless ideas while lost in my thoughts during my daily jog. It's critical not only to my creative process but to my problem solving as well.
Increase focus and minimize distractions
It's imperative to understand your sources of distraction and minimize their impact as much as possible. Avoiding activities that are time sinks is great not only for life in general but critical during the workday. Whatever your poison is, don't partake during working hours.
Also, refer back to the previous comment about family members.
If you are easily distracted, try using a Pomodoro timer and slicing your activities up into small chunks where you can focus on them.
Prioritize your activities using the methodology of your choice and break those off into chunks that make the most sense. Always have an idea of what you need to accomplish that day or that week and make progress.
Utilize The Tools You Have
What remote collaboration tools do you have at your disposal? Inventory those and make the best use of them. Choose the best tool for the task, whether it be document collaboration, chat, video conferencing, or even remote brainstorming.
Stay organized for both yourself and your team. One issue with remote teams is never knowing where anything is. Try to organize documents in a single location to cut down on the amount of confusion and additional questions. If you can preemptively cut down on the amount of unnecessary communication through preparation, then you have won a battle.
Have a great task manager that runs on all your devices. You'll find that information comes at you fast and from multiple sources. A task manager that syncs across all your devices will help ensure that things don't get missed.
Know Your Team
Along with knowing your tools, know your team members and how they like to work and communicate. Match their preferred method of communication, whether that's email, text, or a phone call. Doing so will cut down on frustration as well as the amount of additional communication needed to get a point across.
Use The Phone
Yes, that thing you hold in your hand used to be a thing people utilized to send their voice to the ears of other people. It's easy to get carried away communicating over text, chat, and email, but sometimes it's easier to pick up the phone. Tone is hard to convey in text and easy to misinterpret. Often, a quick phone call can solve a lot of problems and save a lot of back and forth. Don't be afraid to use that device for its original intent.
Find ways to improve your communication. Nobody wants to read a tome in their inbox. Get to what’s important quickly and at the beginning of the email. Strive for the right balance of brevity and completeness. Keep in mind that the email may be on the screen of a mobile device.
If you must write a long email and you are sending it to a decision-maker, try to put a few summary bullets up top or some important takeaways. Also, let the recipient know the message requires some action from them. This summary will increase the possibility of your email being read and show that you understand the value of the time of the person reading it.
Welcome to the remote workforce. With the right balance of skills, tools, and discipline, you can increase your creativity and productivity. Hopefully, you found these tips useful.
If you’re interested in learning more about the cybersecurity concerns that arise while working remotely, click here to read our FAQ on the subject.
by Nathan Hamiel | Mar 7, 2019 | Cyber
You read the title of this post correctly. Maybe it should be most people don’t care about cybersecurity, but you get the point. It’s a reality that those of us responsible for securing our organizations know but don’t like to acknowledge because it leads to a tough question. If people don’t care, then what is all of this for?
Customers' lack of caring affects business decisions. You don't see large swaths of people holding companies accountable post-breach. In fact, in many cases, stock prices tend to rebound after a breach. There are also some who argue that shipping insecure software is still more advantageous than the potential negative impacts it creates. This argument is based on skewed and superficial perceptions of the customer, not on the reality of the situation.
So, should we all change professions and try our hands at being celebrity chefs? If you are like me and have a weak flambé, we should take a closer look at the situation.
Why people don’t care
It’s essential for us to have a look at the conditions that create this apathy in customers. Understanding these issues makes framing potential solutions easier.
Here are the major ones:
- Delayed effects of a breach
- Short attention span
- Numb to breach occurrence
- Good detection and recovery
Delayed Effects
Effects of breaches aren't immediately felt. Of course, this is assuming an attack doesn't delete all of your data, and by your data, I mean your customer's data.
If compromised data is used in some form of attack or fraud, it usually doesn't happen immediately. Tying an instance of abuse to a specific breach can be hard for a consumer. In that time, their data may have been compromised in other places as well, so who does the consumer blame?
Short on Attention
People these days live under a constant bombardment of content all competing for their attention. This is on top of the professional and personal priorities they have. They can be mad at a hotel chain for a breach one day and book a stay with points the next. With the perception that too much is on their plate, only the most egregious instances will stay top of mind.
For perspective, people are more likely to hold a grudge against a restaurant where they had a bad experience than against the credit company that lost enough of their data for a criminal to commit identity theft.
Numb to Breach Occurrence
People have gotten numb to all of the breaches. High-profile breaches have become a regular occurrence, and lower-profile ones even more so. The sheer number of breaches has a numbing effect, so news of a new incident results in little more than a sigh and an eye roll.
Good Detection and Recovery
Companies have gotten good at detection and recovery in post-breach scenarios. Think of your bank calling you when it notices some odd transactions, or a notification from another site offering free credit monitoring. Most often, the customer doesn't have to take much action at all and encounters only a mild inconvenience.
A Dangerous Road
If your customers don't care about security, then it can be a hard sell to management and other business units. On the surface, this makes business sense, but letting security priorities slip is a dangerous road. A lack of prioritization and focus on security initiatives opens the door for nefarious actors in ways that go far beyond the surface. Here are just a few areas to consider.
Autonomous Systems
Autonomous systems make decisions without human interaction. The integrity of the data these systems consume is paramount because tainted data could cause the system to make the wrong decision. Think of a drone attacking the wrong target or an automated trading algorithm triggering a mass selloff of stocks.
Injury or Death
Building off of the previous point about autonomous systems, systems that can kill us are becoming more common. Medical implants, self-driving cars, industrial systems, drones, and countless others that aren't obvious to consumers have the potential to impact people's health and wellness. It shouldn't take a breach causing large-scale death for people to begin caring. Unfortunately, that may very well be what it takes.
Monetary Value to Criminals
Stolen data and compromised systems have monetary value to criminals. Criminals have various motivations for their activities, but a compromise of your systems could assist in the ongoing support of illegal activities. Some of these activities could include terrorism.
Privacy
Losing a customer's data is a breach of privacy. Privacy has never been in more danger from deliberate, shady practices, and unauthorized disclosure only makes things worse. On this front, I think there is some hope. Not only has the importance of privacy been elevated by regulation such as GDPR, but younger people seem to care about it more as technology becomes less of a novelty and more something that has always been part of their lives.
Long-Lived Technology
In my Black Hat Europe presentation last year, I spent some time talking about how the technology created today will be with us tomorrow, possibly long after its support cycles end. You aren't likely to upgrade your refrigerator or car at the same frequency as your phone or smartwatch. Tons of low-cost devices are spreading across the planet that will affect our security posture for years to come.
What can we do?
So if all of this is a problem, then what can we do to ensure we are protecting our organizations both now and in the future?
Don't be part of the problem as an organization
Every breach adds to a sea of already compromised data, making it hard to determine where any given record came from, except when an attacker makes it known for their own marketing purposes. Don't let your organization contribute to that larger problem.
Avoid the top-down approach
Far too often, people feel that security initiatives need buy-in from senior management to be driven through the company. Laboring under this delusion can cause you to miss opportunities. It's true that management support can make things easier, but it's not the only way to get security initiatives implemented. Buy-in from the bottom up, or even across pillars from your peers, can be just as effective, if not more so.
Reduce friction
It shouldn't be a secret that reducing the friction of a solution increases adoption. We all know someone who never locked their phone because entering numbers was an inconvenience. Their behavior changed with the inclusion of features like TouchID and FaceID, indirectly increasing their security posture. We should be investigating areas where a reduction in friction could lead to increased adoption.
Regulatory compliance and privacy law
Regulatory compliance is a topic that many in the industry love to hate. It may very well take governments and other regulatory bodies getting involved to effect broader change. Although the effectiveness of such compliance measures can be debated, important discussions spring out of these requirements.
It may very well take something that causes multiple deaths or a substantial financial impact to get the average consumer to care about cybersecurity, but we as security professionals can’t let that guide our decisions. Are we okay with allowing people to die before we take a problem seriously? We need to be proactive and find creative ways to get our solutions adopted and look for areas to reduce friction before it’s too late.