Anticipating Issues With Automation Impact Audits

The promise of automation is doing more with less: freeing people from repetitive tasks so they can focus on more interesting activities. This claim makes for a great tagline, but it can fall short in implementation.

Automation doesn’t have to involve complicated machine learning or deep learning; it could be a simple script. Either way, automation is far from a panacea and can create hard-to-rectify issues. In this post, I’ll provide some perspective and a quick, zero-friction gut check on the impacts of automation.

Real Issues

Issues from automation aren’t theoretical; they’ve happened to me, and they’ve happened to you. Before we dive in, let me describe two very recent events.

A mobile provider charged me for a phone I didn’t have. Since I had autopay, the charge immediately hit my credit card. I called them countless times. At first, everyone I talked to treated me with skepticism. After all, how could my situation happen? I was transferred around to multiple departments while I could almost hear people’s eyes rolling. After a while, everyone I talked to in every department saw the issue and was very sorry. But once it was identified as an issue, nobody from the various departments (including billing) could fix it. This went on for weeks. I was in a weird limbo, an outlier. I’d been a customer for 20 years, and it literally took them taking my money to make me leave.

In another instance, I placed an order with an online retailer. My package went out for delivery in my neighborhood and then back to the regional distribution center every day for two weeks before I received a notification that my package was being returned to the sender for having an “Insufficient Address.” Four days later, I received the package with a notification that it had been delivered to the original sender (I was the recipient). When I looked at the label, my address was clearly printed and visible, but above my name there was a single question mark. I used the same ordering system as everyone else, but somehow, without my interaction, something got messed up.

Automation

Automation is inevitable, and I’m not suggesting we don’t automate, but we need to understand the negative impacts and implement mitigations. Two things about the future are certain: error rates will increase, and it will become harder to correct errors. Automation removes people from a process, but if you remove the people entirely, unanticipated small issues can become big ones fast.

An algorithm or process that is 99% accurate seems great, but think about this: if you have a million instances, then 1% is 10,000. That is not an insignificant number. If each instance is a person, then potentially 10,000 people are affected by an issue.
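
To make the scale concrete, here is a minimal back-of-the-envelope sketch in Python (the one-million volume and the accuracy figures are illustrative, not numbers from any particular system):

```python
# Back-of-the-envelope impact of an error rate at scale.
def affected_count(instances: int, accuracy: float) -> int:
    """Number of instances hit by errors at a given accuracy."""
    return round(instances * (1 - accuracy))

for accuracy in (0.99, 0.999, 0.9999):
    print(f"{accuracy:.2%} accurate over 1,000,000 instances "
          f"-> {affected_count(1_000_000, accuracy):,} affected")
# 99.00% accurate over 1,000,000 instances -> 10,000 affected
# 99.90% accurate over 1,000,000 instances -> 1,000 affected
# 99.99% accurate over 1,000,000 instances -> 100 affected
```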

Companies are often quick to implement automation, but they rarely consider the adverse effects or how they would handle them. The negative impact on humans is usually framed as job loss, and any technical issues that crop up are considered addressable with a future tweak.

A human system without automation may very well have a higher error rate, but there is a human in the loop. People are more likely to believe that a human made a mistake than that an automated system did. Because of this, human systems can have more robust resolution processes.

Our confidence in technology will lead to a lack of trust in other humans. The system can’t be wrong, so the human must be wrong. This isn’t a perspective we should encourage.

Automation Impact Audit

We have to realize that technology is not infallible, and mistakes will happen. Given this fallibility, you need to have an appropriate mechanism for people to correct errors and inaccuracies.

To start, you can perform a simple process that I call an automation impact audit. This audit will help you understand the process being automated and identify potential issues. It looks at a few fundamental elements: Process, Inputs, Impact, Detection, Rollback, and Resolution.

Process

Evaluate the process you are automating. What components make up the system today, and how will it look after implementing automation? What percentage of the process will be automated? This can range from automating small tasks within a larger human process to removing humans from the loop entirely. Inaccuracies are always bad, but in a process with humans completely removed, they can be harder to detect and resolve.

Inputs

How complex is the process you are trying to automate? Higher-complexity systems can lead to a higher number of unexpected issues. You should also evaluate the data you are using and implement data quality standards. Poor-quality data can lead to poor decisions, and sometimes you don’t realize it until after the system has launched.
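
To give a flavor of what a data quality standard can look like in practice, here is a deliberately simple sketch; the field names and rules are hypothetical and would need to match your own data:

```python
# Minimal sketch of pre-launch data quality checks. The rules below
# (required fields, allowed ranges) are illustrative only.
def data_quality_report(records):
    issues = {"missing_address": 0, "bad_amount": 0}
    for record in records:
        if not str(record.get("address", "")).strip():
            issues["missing_address"] += 1
        amount = record.get("amount")
        if amount is None or amount < 0:
            issues["bad_amount"] += 1
    return issues

orders = [
    {"address": "123 Main St", "amount": 19.99},
    {"address": "?", "amount": 19.99},    # suspicious but present
    {"address": "", "amount": -5.00},     # fails both checks
]
print(data_quality_report(orders))  # {'missing_address': 1, 'bad_amount': 1}
```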

Impact

What is the impact of a wrong or inaccurate decision? Will people or businesses incur harm, or would it be relatively inconsequential? Understanding the answer to this question is one of the most important aspects of the audit. The higher the impact of inaccuracy, the more controls you need to put in place. Something that may appear as a minor issue or cost may be a point of frustration for a customer, causing them to discontinue using your service.

Detection

Do you have a way of detecting issues that could result from your implemented automation? If not, identify areas you can measure to determine when a problem arises. Run this detection periodically so that you can look for issues over time; a shift can be an indicator that something in your data or process is changing and needs adjustment. Keep in mind that customers may not report issues and may simply stop doing business with you.
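
As an illustration of periodic detection, here is a minimal sketch of a rolling error-rate check. It assumes you can label processed instances as correct or not after the fact (through spot checks, reconciliation, or complaints), and the window size and threshold are placeholders:

```python
from collections import deque

class ErrorRateMonitor:
    """Rolling error-rate check for an automated process (illustrative)."""

    def __init__(self, window=1000, baseline=0.01, tolerance=2.0):
        self.outcomes = deque(maxlen=window)  # True means an error occurred
        self.baseline = baseline              # error rate accepted at launch
        self.tolerance = tolerance            # alert above baseline * tolerance

    def record(self, is_error):
        self.outcomes.append(is_error)

    def current_rate(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def needs_attention(self):
        return self.current_rate() > self.baseline * self.tolerance

# Feed in outcomes as they are verified; check on a schedule.
monitor = ErrorRateMonitor()
for outcome in [False] * 950 + [True] * 50:   # 5% observed error rate
    monitor.record(outcome)
if monitor.needs_attention():
    print(f"Error rate {monitor.current_rate():.1%} exceeds tolerance; investigate.")
```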

Rollback

How do you get out of automation? Can you go back to a previous version that worked better or had fewer issues? If you have reassigned the people who previously performed the tasks that automation now handles, you may not be able to return to your previous state. This is why implementing automation in phases makes the overall process more robust.

Resolution

Is there a way for impacted parties to correct issues and inaccuracies? Just having a resolution process isn’t enough. That process needs to be clearly communicated so that people know what to do when an issue arises.
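
One lightweight way to run the audit is to capture the six elements above as a checklist that the automation owner fills in before launch. The sketch below is illustrative; the fields mirror the sections above, and the example answers are made up:

```python
from dataclasses import dataclass, fields

@dataclass
class AutomationImpactAudit:
    process: str      # what is automated and how much human involvement remains
    inputs: str       # complexity of the process and quality of the data feeding it
    impact: str       # who is harmed by a wrong decision, and how badly
    detection: str    # how issues are detected and how often the check runs
    rollback: str     # how to return to the previous (manual or earlier) process
    resolution: str   # how affected people get errors corrected, and how they learn about it

    def unanswered(self):
        """Return the sections still left blank."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

audit = AutomationImpactAudit(
    process="Automates refund approval; agent review removed for amounts under $50.",
    inputs="Order history and payment records; no data quality checks yet.",
    impact="Customers charged or refused incorrectly; small amounts, high frustration.",
    detection="",   # not yet defined -- flagged below
    rollback="Manual review queue can be re-enabled within a day.",
    resolution="Support staff can reverse any automated decision.",
)
print("Sections needing answers before launch:", audit.unanswered())  # ['detection']
```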

Conclusion

Human-level performance doesn’t equal human-level resolution. When you implement automation, you also need robust resolution processes so that issues can be corrected properly when they present themselves. That takes preparation and care before implementation, including consideration of the impacts as well as clearly communicated resolution processes. Automation thrown at a problem for the sake of automation isn’t a winning strategy.


Preparing For New AI Regulations

Until recently, the regulation of AI was left up to the organizations developing the technology, allowing them to apply their own judgment and ethical guidelines to the products and services they create. Although this is still widely true, it may be about to change. New regulations are on the horizon, and some are already signed into law, mandating requirements that could mean costly fines for non-compliance. In this post, we look at some themes across this legislation and give you some tips to begin your preparation.

Landscape

When you think of regulations surrounding AI, your mind probably wanders to the use of the technology in things like weapons systems or public safety. The fact of the matter is, harms from these systems extend far beyond these narrow categories.

Many developers are just using the tools available to them. They create experiments, evaluate the final result against a simple set of metrics, and ship to production if it meets a threshold. They aren’t thinking specifically about issues related to risk, safety, and security.

AI systems can be unpredictable, which is ironic since often you are using them to predict something. Why unpredictability surfaces is beyond the scope of this post, but it has to do with both technical and human factors.

We’ve had laws indirectly regulating AI for quite some time and probably haven’t realized it. Not all of these regulations specifically spell out AI; they may be part of other consumer safety legislation. For example, the Fair Credit Reporting Act may come into play when making automated decisions about creditworthiness and dictate the data used. In the context of machine learning, this applies to the data used to train a system. So, if current regulation prohibits specific pieces of information, such as protected classifications (race, gender, religion, etc.), from being used, or prohibits specific practices, then it also applies to AI, whether it spells that out or not.

Governments and elected officials are waking up to the harm that can result from the unpredictability of AI systems. One early indicator of this is GDPR Recital 71, which, in summary, establishes a right to explanation. If there is an automated process for determining whether someone gets a loan, a person who is denied has a right to be told why they were rejected. Hint: telling someone that one of the neurons in your neural network found them unworthy isn’t an acceptable explanation.

Recently, the EU released a proposal specifically targeting AI systems and system development. This proposed legislation outlines requirements for high-risk systems as well as prohibitions on specific technologies, such as those meant to influence mood as well as ones that create grades like a social score.

The US introduced similar legislation, the Algorithmic Accountability Act, but it did not pass. The US Government did, however, release a draft memo on the regulation of AI. This document covers the evaluation of risks as well as issues specific to safety and security.

The federal legislation not passing doesn’t mean individual US states aren’t taking action on this issue. One example is Virginia’s Consumer Data Protection Act.

This is far from an exhaustive list, but one thing is for sure: more regulation is coming, and organizations need to prepare. In the short term, these regulations will continue to lack cohesion and focus and will be hard to navigate.

Themes

Even though the specifics of these regulations vary across the geographic regions, some high-level themes tie them together.

Responsibility

The overarching goal of regulation is to inform and hold accountable. These new regulations push the responsibility for these systems onto the creators. Acting irresponsibly or unethically will cost you.

Scope

Each regulation has a scope and doesn’t apply universally to all applications across the board. Some have a broad scope, and some are very narrow. They can also lack common definitions, making it hard to determine whether your application is in scope. Regulations may specifically call out a use case or may imply it through a definition of data protection. Most lawmakers aren’t technologists, so expect differences across the various pieces of legislation you encounter and look for common themes.

Risk Assessments and Mitigations

A major theme of all the proposed legislation is understanding risk and providing mitigations. This assessment should evaluate both risks to and from the system. None of the regulations dictate a specific approach or methodology, but you’ll have to show that you evaluated risks and what steps you took to mitigate those risks. So, in simple terms, how would your system cause harm if it is compromised or fails, and what did you do about it?

Validation

Rules aren’t much good without validation. You’ll have to provide proof of the steps you took to protect your systems. In some cases, this may mean algorithmic verification through ongoing testing, and the output of that testing could be the proof you show to an auditor.

Explainability

Simply put, why did your system make the decision it did? What factors led to the decision? Explainability also plays a role outside of regulation. Coming up with the right decision isn’t good enough. When your systems lack explainability, they may make the right decision but for the wrong reason. Based on issues with the data, the system may “learn” a feature that has high importance but, in reality, isn’t relevant.

How Can Companies Prepare?

The time to start preparing is now, and you can use the themes of current and proposed regulation as a starting point. It will take some time, depending on your organization and the processes and culture currently in place.

AI Strategy and Governance

A key foundation in compliance is the implementation of a strategy and governance program tailored to AI. An AI strategy and governance program allows organizations to implement specific processes and controls and audit compliance.

This program will affect multiple stakeholders, so it shouldn’t be any single person’s sole responsibility. Assemble a collection of stakeholders into an AI governance working group and, at a minimum, include members from the business, development, and security teams.

Inventory

You can’t prepare for or protect what you don’t know you have. Taking and maintaining a proper inventory of AI projects and their criticality to the business is a vital first step. Business criticality can feed into other phases, such as risk assessments. A byproduct of the inventory is that you communicate with the teams developing these systems and gain feedback for your AI strategy.
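
As a sketch of what an inventory record might capture (the fields and criticality scale here are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class AIProjectRecord:
    name: str
    owner: str                # accountable team or person
    business_process: str     # what business process it supports
    criticality: int          # e.g. 1 (low) to 4 (business critical)
    sensitive_data: bool      # trained on or exposed to sensitive data?
    externally_exposed: bool  # reachable by customers or the public?

inventory = [
    AIProjectRecord("churn-predictor", "marketing-analytics", "retention offers", 2, True, False),
    AIProjectRecord("loan-scoring", "credit-risk", "credit decisions", 4, True, True),
]

# Criticality can feed later phases, such as the order of risk assessments.
for record in sorted(inventory, key=lambda r: r.criticality, reverse=True):
    print(record.criticality, record.name)
```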

Implement Threat and Risk Assessments

A central theme across all of the new regulations is that they specifically call out risk assessments. Implementing an approach where you evaluate both threats and risks will give you a better picture of the mechanisms necessary to protect the system and mitigate potential risks and abuses.

At Kudelski Security, we have a simple five-phase approach for evaluating threats and risks to AI systems. This approach provides tactical feedback to stakeholders for quick mitigation.

[Figure: KS AI Threat and Risk Assessment]

If you are looking for a quick gut check on the risk of a system, ask a few questions (a rough way to tally the answers follows the list).

  • What does the system do?
  • Does it support a critical business process?
  • Was it trained on sensitive data?
  • How exposed is the system going to be?
  • What would happen if the system failed?
  • Could the system be misused?
  • Does it fall under any regulatory compliance?
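
The first question provides context; the rest lend themselves to yes/no answers. Here is a rough, illustrative way to tally them into a tier. The weights and thresholds are placeholders; the point is to force an explicit answer to each question, not to produce a precise risk number:

```python
# Turning the gut-check questions into a rough score (illustrative weights).
GUT_CHECK = {
    "supports a critical business process": 3,
    "trained on sensitive data": 2,
    "significantly exposed (internet- or customer-facing)": 2,
    "failure would cause real harm": 3,
    "could plausibly be misused": 2,
    "falls under regulatory compliance": 2,
}

def gut_check_score(answers):
    score = sum(weight for question, weight in GUT_CHECK.items() if answers.get(question))
    if score >= 8:
        return f"high ({score})"
    if score >= 4:
        return f"medium ({score})"
    return f"low ({score})"

answers = {
    "supports a critical business process": True,
    "trained on sensitive data": True,
    "significantly exposed (internet- or customer-facing)": False,
    "failure would cause real harm": True,
    "could plausibly be misused": False,
    "falls under regulatory compliance": True,
}
print("Gut-check risk tier:", gut_check_score(answers))  # high (10)
```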

If you would like to dive deeper, check out a webcast I did for Black Hat called Preventing Random Forest Fires: AI Risk and Security First Steps.

Develop AI-Specific Testing

Testing and validation of systems implementing machine learning and deep learning technology require different approaches and tooling. An AI system combines traditional and non-traditional platforms, meaning that standard security tooling won’t be effective across the board. However, depending on your current tooling and environment, standard tooling could be a solid foundation.

Security testing for these systems should be more cooperative than some of the more traditional adversarial approaches. Testing should include working with developers to get more visibility and creating a security pipeline to test attacks and potential mitigations.

It may be better to think of security testing in the context of AI more as a series of experiments than as a one-off testing activity. Experiments from both testing and proposed protection mechanisms can be done alongside the regular development pipeline and integrated later. AI attack and defense is a rapidly evolving space, so having a separate area to experiment apart from the production pipeline ensures that experimentation can happen freely without affecting production.
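
As one example of the kind of experiment that can run alongside the regular development pipeline, the sketch below measures how often a model’s predictions flip under small input perturbations. The dataset, model, and noise scale are illustrative choices, not a prescribed toolchain:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data and a simple model stand in for your real pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
baseline = model.predict(X_test)

# Add small random noise and see how many predictions change.
rng = np.random.default_rng(0)
noisy = X_test + rng.normal(scale=0.1, size=X_test.shape)
flipped = np.mean(model.predict(noisy) != baseline)

print(f"Predictions that flipped under perturbation: {flipped:.1%}")
```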

Documentation

Models aren’t useful on their own. They require supporting infrastructure and may be distributed across many devices. This distribution is why documentation is critical. Understanding data usage and how all of the components work together allows for a better determination of the threats and risks to your systems.

Focus on Explainability

Explainability, although not always called out in the legislation, is implied. After all, you can’t tell someone why they were denied a loan if you don’t have an explanation from the system. Explainability is important in a governance context as well. Ensuring you are making the right decision for the right reasons is vital for the normal operation of a system.

Some models are more explainable than others. When benchmarking model performance, it’s a good idea to benchmark your model against a simpler, more explainable one. The performance may not be that different, and what you get in return is something more predictable and explainable.
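
A minimal version of that benchmarking exercise might look like the sketch below, which compares a more complex model against a simpler, more explainable baseline on the same data. The dataset and models are illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

complex_model = GradientBoostingClassifier(random_state=0)
simple_model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

for name, model in [("gradient boosting", complex_model), ("logistic regression", simple_model)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {score:.3f}")
# If the gap is small, the interpretable model's coefficients make the
# "why" behind each decision far easier to explain.
```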

Conclusion

“Move fast and break things” is a luxury you can afford when the cost of failure is low. More and more, machine learning is making its way into high-risk systems. By implementing a strategy and governance program and AI-specific controls, you can reduce your risk and attack surface and comply with regulations. Win-win.


Tips From Over A Decade of Working Remotely

The presence of COVID-19 has led to some unprecedented times. With a large portion of the workforce now working from home, numerous security implications arise. Our previous post is an extensive FAQ that covers everything you need to know about the cybersecurity concerns and how to address them. Today, we’ll dive into tips for staying productive while working remotely.

Welcome to the world of remote work. Beware: it’s not for the uninitiated, which is why I want to share a few tips I’ve picked up from over a decade of both working remotely and managing remote teams. I’m hoping these highlights help people who find themselves in this position for the first time, without getting too deep.

I’m a huge fan of remote work. To me, it solves a company’s staffing challenges by allowing them to source the best talent for their positions without locking them to a geographic region. It’s certainly not perfect, and not all candidates are well suited to the self-discipline required, but then again, working in an office is far from perfect as well.

Companies that don’t focus on innovation can have a negative perception of remote work. What they don’t realize is that a remote workforce can bring an increase in both productivity and creativity. The downside is that, if it’s poorly managed, remote work can reinforce that negative perception.

Know Yourself

The first step in remote work is knowing yourself. This isn’t about some deep philosophical meaning; it’s about knowing your habits and who you are. This insight will be different for everyone. For example, if you are easily distracted, you won’t have the environment of a workplace to rein you back in. If you naturally spend too much time on social media, then you need to block those notifications during work time. This seems simple enough to conceptualize, but some may find it hard to implement.

Inventory your distractions and come up with a plan. For each of the items you identify as potential blockers to productivity, it’s good to have some tool, control, or mindset in place to keep you in check. This is step number one, and if you don’t have it down, you may be in for a bad time. The good news is that after a while, a new habit will form, and some of this blocking will be second nature.

Separate Your Work Environment

Maintain a separate work environment. This way, you can keep your head in the right spot when you are working from home. A separate work area also gives you the feeling of “going to work.” I can’t imagine a lot of productivity would come out of lying in bed with your laptop and the TV on.

I’m lucky enough to have a home office, with a large monitor, an open desk, and a comfortable chair. These items help me flex my creativity and separate me from the mindset of doing other home-based activities. If you don’t have enough room for a dedicated office, then choose a room that you “go to” for work. I also suggest something, such as an external monitor or a mouse and keyboard, that makes it feel like a workplace.

If you have a family or children, they must understand you are “at work.” I don’t have this problem, but I know others that do. When possible, close the door on your workspace or set some other signal that you are working. If you are in an incredibly cramped space, close to family members, I suggest headphones. Let your family members know and understand this signal to minimize interruptions. If it’s not possible to go long periods without interruption, consider working in sprints for as many hour-long blocks as you can.

Keep a Schedule, Keep a Mindset

If you don’t feel like you are at work, you won’t produce like you are at work. This is where the previous point, knowing yourself, can play a significant role. It’s best to keep a routine because, after all, you are going to work; you are just cutting out the pesky commute. Wake up, shower, get dressed, and do all of the same things that you would do if you were going to a workplace. You don’t need to put on formal clothing, but pajamas probably won’t make you feel productive either.

I live in Florida, and in case you haven’t heard, it gets hot down here. So yes, when I wake up, part of getting dressed may involve putting on a pair of shorts, but they aren’t the same ones I wore to bed, and that’s the point of getting into the work mindset.

Another thing that goes along with keeping a schedule is your health. Use a fitness tracker to remind you to stand up every hour and help maintain a routine of movement. I also jog daily.

Just as important as keeping a separate workspace is making sure you get away from that workspace. Beyond the health benefits of exercise, I find that without the distraction of digital devices, my mind works through problems differently. I’ve solved many problems and come up with countless ideas while lost in my thoughts during my daily jog. That jog is critical not only to my creative process but to my problem solving as well.

Increase focus and minimize distractions

It’s imperative to understand your sources of distractions and minimize their impact as much as possible. Avoiding activities that are time sinks is great not only for general life but critical during the workday. Whatever your poison is, don’t partake during working hours.

Also, refer back to the previous comment about family members.

If you are easily distracted, try using a Pomodoro timer and slicing your activities into small chunks you can focus on.
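
If you want to try the technique without installing anything, a Pomodoro timer is only a few lines of code. This is a bare-bones sketch using the classic 25/5 minute split; adjust the durations to taste:

```python
import time

def pomodoro(work_minutes=25, break_minutes=5, rounds=4):
    """Run a simple focus/break cycle and print when to switch."""
    for i in range(1, rounds + 1):
        print(f"Round {i}: focus for {work_minutes} minutes.")
        time.sleep(work_minutes * 60)
        print(f"Round {i}: take a {break_minutes} minute break.")
        time.sleep(break_minutes * 60)
    print("Session complete.")

if __name__ == "__main__":
    pomodoro()
```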

Prioritize your activities using the methodology of your choice and break those off into chunks that make the most sense. Always have an idea of what you need to accomplish that day or that week and make progress.

Utilize The Tools You Have

What remote collaboration tools do you have at your disposal? Inventory those and make the best use of them. Choose the best tool for the task, whether it be document collaboration, chat, video conferencing, or even remote brainstorming.

Stay organized for both yourself and your team. One issue with remote teams is never knowing where anything is. Try to organize documents in a single location to cut down on the amount of confusion and additional questions. If you can preemptively cut down on the amount of unnecessary communication through preparation, then you have won a battle.

Have a great task manager that runs on all your devices. You’ll find that information comes at you fast and from multiple sources. A task manager that syncs across all your devices will help ensure that things don’t get missed.

Know Your Team

Along with knowing your tools, know your team members and how they like to work and communicate. People may prefer email, text, or a phone call. Matching their preferred communication method will cut down on frustration as well as the amount of follow-up needed to get a point across.

Use The Phone

Yes, that thing you hold in your hand was once used primarily to send people’s voices to the ears of other people. It’s easy to get carried away communicating with text, chat, and email, but sometimes it’s easier to pick up the phone. Text communication makes tone hard to convey, and your tone can be misinterpreted. Often, a quick phone call can solve a lot of problems and save a lot of back and forth. Don’t be afraid to use that device for its original intent.

Be Clear

Find ways to improve your communication. Nobody wants to read a tome in their inbox. Get to what’s important quickly and at the beginning of the email. Strive for the right balance of brevity and completeness. Keep in mind that the email may be on the screen of a mobile device.

If you must write a long email and you are sending it to a decision-maker, put a few summary bullets or key takeaways up top, and let the recipient know whether the message requires action from them. This summary will increase the likelihood of your email being read and show that you value the reader’s time.

In Closing

Welcome to the remote workforce. With the right balance of skills, tools, and discipline, you can increase your creativity and productivity. Hopefully, you found these tips useful.

If you’re interested in learning more about the cybersecurity concerns that arise while working remotely, click here to read our FAQ on the subject.

People Don’t Care About Cybersecurity

You read the title of this post correctly. Maybe it should be most people don’t care about cybersecurity, but you get the point. It’s a reality that those of us responsible for securing our organizations know but don’t like to acknowledge because it leads to a tough question. If people don’t care, then what is all of this for?

This lack of customer concern affects business decisions. You don’t see large swaths of people holding companies accountable post-breach. As a matter of fact, in many cases, stock prices tend to rebound after a breach. There are also some who advocate that shipping insecure software is still more advantageous than the potential negative impacts it creates. That argument rests on skewed and superficial perceptions of the customer, not on the reality of the situation.

So, should we all change professions and try our hands at being celebrity chefs? If you are like me and have a weak flambé, we should take a closer look at the situation.

Why people don’t care

It’s essential for us to have a look at the conditions that create this apathy in customers. Understanding these issues makes framing potential solutions easier.
Here are the major ones:

  • Immediacy
  • Short attention span
  • Numb to breach occurrence
  • Good detection and recovery

Immediacy

Effects of breaches aren’t immediately felt. Of course, this assumes an attack doesn’t delete all of your data, and by your data, I mean your customers’ data.

If compromised data is used in some form of attack or fraud, it’s rarely used immediately. That makes it hard for a consumer to tie an instance of abuse to a specific breach. In the meantime, their data may have been compromised elsewhere, so who does the consumer blame?

Short on Attention

People these days live under a constant bombardment of content all competing for their attention. This is on top of the professional and personal priorities they have. They can be mad at a hotel chain for a breach one day and book a stay with points the next. With the perception that too much is on their plate, only the most egregious instances will stay top of mind.

For perspective, people are more likely to hold a grudge against a restaurant where they had a bad experience than against the credit company that lost enough of their data for a criminal to commit identity theft.

Numb

People have gotten numb to all of the breaches. High-profile breaches have become a regular occurrence, and lesser-profile ones even more so. The sheer number of breaches has a numbing effect, so news of a new one results in little more than a sigh and an eye roll.

Good Detection and Recovery

Companies have gotten good at detection and recovery in post-breach scenarios. Think of your bank calling you when it notices odd transactions, or a notification from another site offering free credit monitoring. Most often, the customer doesn’t have to take much action at all and encounters only a mild inconvenience.

A Dangerous Road

If your customers don’t care about security, then it can be a hard sell to management and other business units. On the surface, this makes business sense, but letting security priorities slip is a dangerous road. The lack of prioritization and focus on security initiatives opens the door for nefarious actors in ways that go far beyond the surface. Here are just a few areas to consider.

Autonomous Systems

Autonomous systems make decisions without human interaction. The integrity of the data these systems consume is paramount because tainted data could cause the system to make the wrong decision. Think of a drone attacking the wrong target or an automated trading algorithm triggering a mass selloff of stocks.

Injury or Death

Building off the previous point about autonomous systems, systems that can kill us are becoming more common. Medical implants, self-driving cars, industrial systems, drones, and countless others that aren’t obvious to consumers have the potential to impact their health and wellness. It shouldn’t take a breach causing large-scale death for people to begin caring. Unfortunately, that may very well be what it takes.

Funding Criminals

Stolen data and compromised systems have monetary value to criminals. Criminals have various motivations for their activities, but a compromise of your systems could assist in the ongoing support of illegal activities. Some of these activities could include terrorism.

Privacy

Losing a customer’s data is a breach of privacy. Privacy has never been in more danger from shady, purposeful activities, and unauthorized disclosure makes it worse. On this front, I think there is some hope. Not only has the importance of privacy been elevated by regulation such as GDPR, but younger people seem to care about it more as technology becomes less of a novelty and more something that has always been part of their lives.

Longevity

In my Black Hat Europe presentation last year, I spent some time talking about how the technology created today will be with us tomorrow, possibly much longer than its support cycle. You aren’t likely to upgrade your refrigerator or car at the same frequency you do your phone or smartwatch. Tons of low-cost devices are spreading across the planet that will affect our security posture for years to come.

What can we do?

So if all of this is a problem, then what can we do to ensure we are protecting our organizations both now and in the future?

Not be part of the problem as an organization

By being part of the problem, we contribute to a sea of already compromised data, making it hard to determine where any of it came from, except when an attacker makes it known for their own marketing purposes.

Avoid the top-down approach

Far too often, people feel that security needs buy-in from senior management to drive initiatives through the company. Laboring under this delusion can cause you to miss opportunities. It’s true that management support can make things easier, but it’s not the only way to get security initiatives implemented. Buy-in from the bottom up, or even across pillars from your peers, can be just as effective, if not more so.

Reduce friction

It shouldn’t be a secret that reducing the friction of a solution increases adoption. We all know someone who never locked their phone because entering numbers was an inconvenience. Their behavior changed with the inclusion of things like TouchID and FaceID, indirectly causing an increase in security posture. We should be investigating areas where a reduction in friction could lead to increased adoption.

Regulatory compliance and privacy law

Regulatory compliance is a topic that many in the industry love to hate. It may very well take governments and other regulatory bodies getting involved to effect broader change. Although the effectiveness of such compliance measures can be debated, discussions spring out of these requirements.

Conclusion

It may very well take something that causes multiple deaths or a substantial financial impact to get the average consumer to care about cybersecurity, but we as security professionals can’t let that guide our decisions. Are we okay with allowing people to die before we take a problem seriously? We need to be proactive and find creative ways to get our solutions adopted and look for areas to reduce friction before it’s too late.

Let Your People Speak at Security Conferences

Now that Black Hat USA and DEF CON are over, there’s time for some reflection on conferences and speaking engagements. I’ve been involved in the conference review and submission process for quite some time. In that time, there have been multiple instances where someone submits a good talk, it gets accepted, and their company makes them pull it. This situation is frustrating not only for the conference staff but also for the individual who submitted the talk in the first place.

On the less extreme side, I’ve seen many talks given by people who aren’t allowed to say where they work and who had to take vacation time and pay their own expenses. That’s pretty humiliating.

Why does this happen? The reason isn’t always apparent, but often it indicates an antiquated idea of the risk associated with presenting at a security conference. There may also be a healthy dose of not understanding the benefits mixed in as well.

With a few highlights, I hope to lay out the benefits and dispel some myths. My aim is to give you some solid talking points for these conversations with your organization.

Benefits of Speaking

If you are a security leader who finds conferences valuable, then you already understand the value of presenting. Some companies, however, don’t see the benefits. But these most likely aren’t security companies. If you have any doubts, what if I told you that your people speaking at conferences gives you a leg up on your competition from both a perception as well as recruiting perspective?

Here are just a few of the benefits:

  • Employee Retention / Morale / Quality of Life

Employees are more likely to stick around at companies that support them. Saying no to speaking engagements could mean you lose good people. Working on something more significant than your everyday job is fulfilling.

  • Recruiting Tool / Differentiator

Future employees want to work with smart people and perform “cool” work. One of the best ways they can find out about that is through conference activities. We all know not everything we do is glamorous, but knowing there are interesting opportunities to engage and present research could be a good differentiator for future employees.

  • Customer Confidence

Customers get an idea that you have experienced people and that you take security seriously. Even if the research points out something you weren’t doing so well in the past, it engenders confidence that you continue to be proactive and make improvements.

  • Information Sharing / Greater Good / Community Support

You send a strong signal to the industry and peers that you’re willing to be a part of the community by sharing knowledge. This makes it much more likely that other organizations will share as well. Lead by example.

  • Demonstration of Expertise

Speaking and sharing your experience at conferences can be incredibly rewarding. Not only is it a notch in the belt professionally, it just feels good to share with peers. Show the industry, peers, and customers that you are proactive.

Fear of the Unknown

Given the benefits, why do some companies not allow their people to speak? In my opinion, it comes down to fear. Let me break this up into three main areas.

  • Unnecessary Attention
  • Disclosure
  • Policy

Unnecessary Attention

Throughout the years, unnecessary attention has been the excuse I’ve heard most often. Companies feel that if their people speak at conferences, it puts a target on them and invites attackers to try and show them up. I’ve got some news for you; your company is most likely already a target.

Vulnerabilities these days are worth money. So if an attacker is sitting on a 0day, they aren’t likely to burn it to make a point just because someone from your company spoke at a conference.

If you are worried about elevating your position on an attacker’s radar because of public speaking, a lot of this comes down to how the speaker presents the content. If the presenter is claiming to be the smartest person around and says their organization is “unbreakable” then that can undoubtedly invite some negative attention. If the presenter is merely sharing some experiences and trying to further the conversation, then it’s rarely an issue.

Disclosure

Internal Disclosure

In some cases, there may be a fear of disclosing sensitive internal information or internal process. Maybe the company feels an attacker can use the information to formulate more accurate attacks.

Your people should be smart enough to know what content is internally sensitive and not to disclose it. After all, don’t you have an awareness program for that? If there are any doubts, you could always review the content before submission rather than issuing a blanket denial.

On the disclosure front, I think there is also a little bit of not wanting to look “stupid.” Security problems can be tough to solve (even simple ones in some cases), and many are just trying to figure it out. Some may worry about their customers thinking they don’t have it together, but one thing I’ve learned in my career is customers appreciate due diligence.

We have real problems with information sharing in the security community as it is, without adding further restrictions. Lessons learned, information on attacks and intelligence, and approaches to mitigating risk could all be helpful to the community as a whole. The more we share, the better off we’ll be.

External Disclosure

On the other side, the pressure may come from a vendor over a responsible disclosure process. I’ve seen a few companies push deadlines to try to stop people from presenting their findings at a conference.

Healthy responsible disclosure pushes vendors to ensure they are performing due diligence on their side. If you’ve given a vendor 60 to 90 days, then that is more than fair. At that point, you have fulfilled your obligation when it comes to responsible disclosure, and you should support the continuation of the process by disclosing.

Policy

Somewhere, buried deep inside your organization is an ancient policy that states people can’t speak at conferences. This policy hasn’t seen an update since its creation because everything in the company is more important.

I think we can all agree that policy for the sake of policy is bad. The intention of that policy is probably lost (or relates to the previous two points) and the default answer when you ask about it is, “well, that’s just the way it’s always been.”

Don’t look at that policy as a fixed object. Maybe the reason it has never changed is that there hasn’t been a champion to address its issues. If the policy is necessary, update it with new processes that include a reasonable amount of review (hopefully not painful or lengthy).

Times when you can’t speak

In this post, I’ve covered why you should let people speak. You may be wondering if there are situations in which you shouldn’t support a conference presentation. The answer, unfortunately, is yes.

The first situation that comes to mind is when there is an NDA or some terms and conditions in place that prohibit disclosure. This should be obvious, but if you have an NDA that prohibits disclosure of details, then you have to abide by it. Keep in mind that some companies use terms and conditions to try to discourage disclosure; see Adventures in Vulnerability Disclosure from Google’s Project Zero.

There may be other times as well, such as revealing your intellectual property or damaging a business relationship. I will say that each of these is highly situational and should be fairly obvious. None of them are good reasons to create a blanket statement of not allowing people to present.

Call to Action

If you are a security leader, hopefully this has softened your position on the subject of speaking at security conferences. If you are in favor, but someone above you objects or a policy-related issue exists, then start now to add some clarity around this topic. Lead with the benefits and do your best to dispel any myths or old beliefs. It may not be easy, but in the long run, it will be worth it. Be the change agent your company needs you to be.

If you found this article interesting, you may also enjoy ‘Keys to a Successful Infosec Conference Submission.’