Wednesday, August 9, 2017

BHUSA17: Datacenter Orchestration Security and Insecurity: Assessing Kubernetes, Mesos, and Docker at Scale

Speaker: Dino Dai Zovi

This was a challenging session to take notes on, given the speed of the slides and the mountain of information, but suffice it to say - Docker and Kubernetes need security help and consistency!

Kubernetes (K8s) is a young project, but a very active one. Many companies have full-time engineers working on the project.

The security mechanisms in K8s are all very new - only in alpha or beta, or less than a month old - security seems like an add-on.  For example, RBAC is enabled by default in K8s 1.6, but many people turn it off to work with older versions.

But, because most security features are new, there are many private distros forked earlier that may be missing the security features entirely! And some will "dumb down" to successfully connect to older versions - so you may have the security feature, but it's not configured. That leaves plenty of potential attack surface spread across deployments.
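As a rough sanity check along these lines, you can ask a cluster's API server whether it even serves the RBAC API group. This is a minimal sketch, not something from the talk: the server URL and token/CA paths are placeholders, and serving the group does not prove the RBAC authorizer is actually enabled, but its absence is a red flag.

```python
# Minimal sketch: ask the API server which API groups it serves and see
# whether the RBAC group is among them.
import requests

API_SERVER = "https://my-cluster.example.com:6443"  # placeholder URL
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

with open(TOKEN_PATH) as f:
    token = f.read().strip()

resp = requests.get(
    API_SERVER + "/apis",
    headers={"Authorization": "Bearer " + token},
    verify=CA_PATH,
)
groups = {g["name"] for g in resp.json().get("groups", [])}

if "rbac.authorization.k8s.io" in groups:
    print("RBAC API group is served (still confirm the RBAC authorizer is enabled)")
else:
    print("RBAC API group missing - this cluster may predate RBAC or have it disabled")
```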

BHUSA17: Tracking Ransomware End to End


Only 37% of people back up their data, which leaves them open to ransomware.

Victims are shown a URL to pay to get back their data. It is posted on Tor, so the source is hard to take down. The criminals will only accept Bitcoin, so they can use the blockchain to see who paid and who didn't.

Bitcoin is pseudonymous and transactions are irrevocable - they cannot be reversed! Because the ledger is public, you can go back and see who else was ransomed.  Gathering seeds from victim reports and synthetic victims (where the researchers pay a small ransom themselves) lets you find out more about the payment network.

The researchers' initial data covered 34 families with 154,000 ransomed files. By using clustering for dataset expansion to find other victims, they are now working with 300,000 files.  This one ransomware family has made approximately $25,253,505 (a lowball estimate) - so there's money to be made, no doubt!
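The clustering step can be pictured with the standard co-spend heuristic: addresses that appear together as inputs to the same transaction are assumed to share an owner, so a handful of seed addresses from victim reports (or synthetic victims) expands into a much larger set. The sketch below is only an illustration of that idea with invented data, not the researchers' actual pipeline.

```python
# Toy co-spend clustering with union-find: inputs spent together in one
# transaction are merged into the same cluster. Transaction data is made up.
from collections import defaultdict

def cluster_addresses(transactions):
    """transactions: list of dicts with an 'inputs' list of addresses."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for tx in transactions:
        first, *rest = tx["inputs"]
        find(first)                          # register even single-input txs
        for addr in rest:
            union(first, addr)               # co-spent inputs share an owner

    clusters = defaultdict(set)
    for addr in parent:
        clusters[find(addr)].add(addr)
    return list(clusters.values())

# One seed address from a victim report drags two more addresses into its cluster.
txs = [
    {"inputs": ["seed_addr_1", "addr_A"]},
    {"inputs": ["addr_A", "addr_B"]},
    {"inputs": ["unrelated_addr"]},
]
print(cluster_addresses(txs))
```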

In 2017, ransomware increased binary diversity in order to evade AVs.

Many victims don't have any Bitcoin, so they buy it from the "LocalBitcoins" site (think Craigslist for Bitcoin).

The researchers found that 90% of the transactions went through as a single transaction, 9% did not account for the transaction fees, and a small percentage made multiple transactions for unknown reasons.

Locky - a ransomware family - increased its spread; the researchers started seeing it in infrastructure like hospitals. It was making about $1 million/month!

Dridex, Locky and Cerber are all distributed via botnets. Cerber recruits low-tech criminals to help them make a consistent income of $200K/month.

Cerber even includes real-time chat with customer "service" to help you recover certain files.

WannaCry seems more like wipeware than ransomware. Even if victims paid, the way it was implemented made it hard to confirm that a specific victim had indeed paid, and harder still to get files back.

The researchers have also seen a rise in NotPetya lately - another wipeware.

This is not going away; this is a multi-million dollar industry. Cerber has even introduced the concept of an affiliate model - so more people can "play".  Yikes!

Tuesday, August 8, 2017

BHUSA17: Breaking Electronic Door Locks Like You're on CSI: Cyber



Colin O’Flynn  |  CEO/CTO, NewAE Technology, Inc. – he won’t be focusing on “evil maid” problems or commercial locks, just residential. Yes, sometimes it’s easier to just knock down the door – but that’s not this talk. He looked at high-security locks (for safes) and residential locks – high-security locks are $300-$1,000, residential are $100-$300.  Inside a keypad, there really isn’t a lot of electronics. From the front side of the lock, it’s hard to do any attacks that reach the back side.

With residential locks he can sometimes send messages to the back. For vendor A, there’s an easy method to add a new access code. There’s a way to turn that off, but how many people do?  Vendor B did not have this special bypass, but attackers can easily find the existing codes. The lock contained a Z-Wave radio for IoT, a siren for the alarm (and a transformer to make it loud), and a motor driver. The researcher did not look into the Z-Wave attack vectors, just physical attacks. There is an accelerometer that can detect various levels of tampering. It will also alarm if you enter too many wrong PINs.  So, brute force is not a good plan.

The Vendor B lock has a front panel that you can lift off with a key or a screwdriver. Vendor A’s lock was not susceptible to the same attack. The issue with this attack vector is that it would be difficult to replace the panel without being detected. There is a cable to send messages to the backend – you can send guesses! There is no timeout on the backend: the front end has timers limiting how often you can enter PINs, but there is no such protection on the backend.  There is power to the lock – if you short out the power, the lock resets the wrong-PIN state and the alarm is disabled.

We were treated to a live demo of the attack.

He built an attack module – which can do a little over 120 tries/min and searches the 4-digit key space in ~85 minutes. It’s a pretty simple countdown from 9999; it does 3 tries, then resets the lock to continue trying (and thus avoid the alarm).   Think you can set a 6-digit code to prevent this? Think again – once you find the correct first 4 digits, instead of giving you an error or an “okay” the lock gives you a delay, as it waits for the last 2 digits. Then you only have to brute-force the final two.
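A rough sketch of that brute-force strategy is below. try_pin() and reset_lock() are hypothetical stand-ins for the hardware module driving the lock's backend cable, and the thresholds are illustrative. The rate math also checks out: at a little over 120 tries/minute, 10,000 codes take roughly 10000 / 120 ≈ 83 minutes, in line with the ~85 minutes quoted.

```python
# Sketch of the attack loop: count down from 9999, reset the lock after every
# 3 tries to avoid the wrong-PIN alarm, and watch for the long delay that
# betrays a correct 4-digit prefix of a 6-digit code.
import time

TRIES_PER_RESET = 3     # stay under the wrong-PIN alarm threshold
DELAY_THRESHOLD = 1.0   # seconds; a pause (not an error) suggests a prefix match

def brute_force(try_pin, reset_lock):
    tries = 0
    for code in range(9999, -1, -1):
        pin = "%04d" % code
        start = time.monotonic()
        result = try_pin(pin)
        elapsed = time.monotonic() - start
        if result == "open":
            return pin
        if elapsed > DELAY_THRESHOLD:
            # The lock is waiting for two more digits: brute-force only those.
            return brute_force_suffix(pin, try_pin, reset_lock)
        tries += 1
        if tries % TRIES_PER_RESET == 0:
            reset_lock()
    return None

def brute_force_suffix(prefix, try_pin, reset_lock):
    for code in range(100):
        if try_pin(prefix + "%02d" % code) == "open":
            return prefix + "%02d" % code
        if code % TRIES_PER_RESET == TRIES_PER_RESET - 1:
            reset_lock()
    return None
```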

Fixes: a timeout after wrong guesses, a power-on delay, and added circuitry so units can be fixed in the field.

Future work: look at Z-Wave, power analysis and a variety of other attacks.

Vendors have been very helpful in working on a fix, and are even making overall security improvements.  You can check your lock at home by testing whether the 30-second bad-PIN lockout still happens after you reset the power (via battery disconnect).

BHUSA17: Keynote!



BlackHat/DefCon founder, Jeff Moss! Lots of lasers! This is the twentieth year for BlackHat – incredible (and I’ve only been twice, though many more times to DefCon, starting with DefCon 2).  There are attendees from over 80 countries and over 200 scholarship recipients.

The first year’s speakers were basically all of Jeff’s friends – he just wanted to know what they were working on. People say that if hackers and security researchers are talking about a problem now, it will be a problem for the rest of us in 6 months or a year. It’s a “crystal ball” of computer security.  He learned in the first year to never hold a BlackHat in the same hotel as DefCon – otherwise the DefCon attendees come early and eat all of your food and drink all of your booze! DefCon is more of a hacking conference – a creative way to explore.

Moss found the Internet to be quite liberating as a 13-year-old boy – he could go online and discuss things like rock and roll and nobody knew he was just a kid. It took him a while to fully understand, but he’s learned the importance of being social. Your future success will be based on how social you are.  How much money can you spend on defense and protecting the systems? It sounds technical, but to get the budget needed, it’s really a social and political conversation to make defense greater than offense.

Security is no longer a local problem – it is a global problem, though the problems vary by geo. Issues faced in Palo Alto are different than those being faced in Bangalore or on a remote island.

We have to get engaged in the problems of lack of diversity and lack of generalists – mentor, help people write CFPs, advise upcoming students, get out there and help.

Alex Stamos, CSO, Facebook. Twenty years ago he couldn’t afford BlackHat, but he was at DefCon and he found a place where he belonged. Coming to the desert every year and hanging out with DarkTangent (Jeff Moss) is like a reunion. The group now comes together for weddings, birthdays and baby showers. In 2002, he brought his then-girlfriend to BlackHat for their first vacation – she’s been with him every year since.

Attending and speaking at DefCon and BlackHat is not always safe for people for their career or livelihood. For example, one man quit his job on stage so he could discuss router vulnerabilities, another engineer was arrested in the airport, others have had federal injunctions against them to prevent them speaking.  But this work is important and impactful, and we need to share.

Nowadays people finally understand why they need to build secure systems – no longer a fringe idea. We are no longer the ‘hacker kids’ – we are CSOs, working for the federal government, and industry experts.

Many people in this room got into security well before they were paid for it – on BBSes, in hacker meetings, and by saving up summer-job money to come to Vegas for DefCon. The things we are talking about now will become startups over the next 2 years – yet, we are not living up to our potential.  We are finding problems, but we need to think about what we do after we discover bugs. We have to realize how many people depend on the technology.

We have a tendency to focus on the complexity, not the harm caused. Adversaries will do the simplest thing they can to exploit a technology. It is fun to see really complicated attacks that someone worked really hard to figure out – but that’s unlikely to be where the actual abuse happens. Abuse is the technically correct use of technology to cause harm. This can include exploitation of adults and children – and it can be done very easily, not through complicated attacks.

We are suffering from a lack of empathy. Think about the expression that the problem is behind the keyboard – that attitude helps you shift responsibility away from actually securing a system onto an uninformed user.  “Just use your knowledge of X.509 to decide if this certificate is safe to use,” “don’t click on that link,” “don’t use that site without HTTPS.”  We have to understand there are more and more people coming online who don’t have experience with the Internet, and they need to be safe.

We have a problem with security nihilism – the idea that we are all under attack by the most sophisticated adversaries possible, and that any security that doesn’t use encryption is “security through obscurity”.

About 10 years ago, there was a bunch of research on technologies that are deployed in the cloud. The research on GPUs and hypervisors was great – and made the public cloud safer. It gave the impression, though, that the public cloud was not safe and that the existing protections were not good enough – but those weren’t the real problems. The real problems were excessive privileges and poorly defined network policies – things that are much easier to address and to exploit.

We don’t want to discourage people from deploying security features just because they are not perfect – they are better than nothing and still help the bulk of users.

There is another fallacy where attackers believe they are just as smart or smarter than people who design the systems, which is not necessarily the case. Systems are designed under all kinds of constraints and nobody is perfect.

Stamos feels strongly that people have a right to secure and private communications, even though some people (law enforcement) don’t always agree.

Think about people who have to try to put pedophiles and people that exploit children behind bars. How can we help them, without creating backdoors? How can we relate to and understand their needs?

At Facebook, they have a dedicated red team whose only job is to try to break into their systems – unannounced to the blue teams.  Stamos and Facebook are big proponents of bug bounties – particularly for open source that everyone uses but that doesn’t necessarily have big owners.

Millions of people are getting inexpensive smartphones that ship with out-of-date operating systems – it’s still Facebook’s responsibility to make sure their app is secure on these devices. They are worth protecting.

We also have to worry about protecting users during elections – there are many issues (slide font too small to read), but we need to think about what we can help with and what we can do.

The Belfer Center is working on a project to help protect future elections from outside influence. Facebook is sponsoring this effort. In November of next year, there will be many house seats, senate seats, gubernatorial campaigns and local offices participating in elections. All of these campaigns are built up from scratch from a technology point of view, often with volunteers. How can we help them build secure systems, easily? If things go wrong, can we help them with mitigation and analysis? It needs to be a practical solution – to do this, we must work as a team and we need to have diverse teams. You wouldn’t want a toolbox with only the best screwdrivers in the world, would you?

Facebook is sponsoring legitimate CTF competitions in middle and high schools. The winners are treated like athletes – this is important to increase interest in this field. Make sure your team is open to and respectful of discussing diversity. Be open to criticism; do not assume how a minority wants to be treated.  And remember, don’t make snide comments, don’t ask women if they are here with their boyfriend – that has impact.  Be respectful. Things are getting worse, not better. Let’s make this a special week here in Vegas by being respectful of other people – if you see something that isn’t right, call people out.  This is a critical moment – we’ve been asking for people to pay attention to us – now they are, so let’s show them something great.

Saturday, August 5, 2017

BHUSA2017: Intel AMT Stealth Breakthrough

Dmitriy Evdokimov, CTO Embedi
Alexander Ermolov, Security Researcher, Embedi
Maksim Malyutin, Security Researcher, Embedi

Presented by Donald Anderson and Dmitriy Evdokimov.

[Note: As a reminder, these are my notes. The opinions are generally of the presenter, unless specifically noted.]

The best known execution environments are the Intel CPU and Intel ME. UEFI BIOS and Intel ME firmware (and a few other blobs) are system firmware. Ring 3 in the CPU is the least privileged (for user applications and the like), Ring 0 is the kernel, and Ring -2 and Ring -3 have many more privileges. Intel ME is based on an MCU with ROM and SRAM, and it is the most privileged and hidden execution environment. It has runtime memory in DRAM, hidden from the CPU. It works even if the device is turned off, as long as there is power.
Reverse engineering the ME is difficult for several known reasons: the ME ROM contents are unknown, the code is partially compressed with Huffman coding (and the dictionary is unknown), the MEI communication protocol is undocumented, and UMEI is inaccessible.

The main firmware components are the bring-up module, the kernel, drivers and services (to support timers, network, HECI, ...), and applications that implement different Intel technologies: PTT, AMT, etc. Intel AMT features a web interface, SOL, IDE-R, and KVM. It is part of the vPro brand and allows remote power-on and other things. A very powerful tool.

Intel AMT can be accessed via a network or a local interface.

How can this be attacked? When accessed through a regular web browser, Intel AMT redirects us to a logon page and challenges us with a password. If you send the wrong password, you'll get an error. They sniffed the packets to look at the authorization headers and did a quick search on things like nonce, user, login, etc. - and found use of cnonce.  They discovered an issue with how strncmp() was used: the comparison length is taken from the user-supplied response, so if an empty string is given, strncmp() compares zero bytes and returns 0, which is treated as authentication success.
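The firmware code in question is C, but the flawed logic is easy to mimic in a short Python sketch: the digest response is compared using the length of the user-supplied value, so an empty response compares zero bytes and "matches". Names and values below are illustrative, not the actual firmware symbols.

```python
# Python sketch of the flawed comparison (the real code is C):
#   strncmp(computed_response, user_response, strlen(user_response)) == 0
# With an empty user_response, zero bytes are compared and the check passes.
def flawed_auth_check(user_response: str, computed_response: str) -> bool:
    n = len(user_response)                      # attacker-controlled length!
    return computed_response[:n] == user_response[:n]

computed = "8f14e45fceea167a5a36dedd4bea2543"   # made-up digest the firmware expects
print(flawed_auth_check("totally_wrong_guess", computed))  # False - rejected
print(flawed_auth_check("", computed))                     # True  - "authenticated"
```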

There is a vuln where an attacker can log in as the admin user, as long as the right ports are open. Turned-off devices can be attacked as well. This was previously disclosed.

Intel has created a patch for this and provided it to all OEM vendors, and they have all made new firmware patches. As it's in the firmware, it requires manual updates from the user. There is Intel AMT code in all modern chips. There is an Intel MEI (HECI) interface that can be used to check the state of the Intel ME subsystem.

HECI is used to configure Intel AMT. HECI is based on the DCMI-HI protocol. A message sent to Intel ME contains the command description: group ID, command, and results.
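To make the shape of such a message concrete, here is a purely illustrative packing sketch. The field widths and values below are assumptions made for the example, not the documented HECI/MKHI layout.

```python
# Illustration only: packing a host-to-ME request carrying a group ID and a
# command, then parsing a response with a result code. The byte layout here
# is an assumption for the sake of the example, NOT the real wire format.
import struct

def build_request(group_id: int, command: int, payload: bytes = b"") -> bytes:
    # assumed header: 1-byte group ID, 1-byte command, 2 reserved bytes
    return struct.pack("<BBH", group_id, command, 0) + payload

def parse_response(data: bytes) -> dict:
    # assumed response header: group ID, command, 1-byte result code
    group_id, command, result = struct.unpack_from("<BBB", data)
    return {"group_id": group_id, "command": command, "result": result}

req = build_request(group_id=0x0B, command=0x02)  # made-up values
print(req.hex())
print(parse_response(b"\x0b\x82\x00"))            # made-up "success" response
```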

Non-vPro systems do not have a user interface for disabling Intel AMT. Once it's activated, you're stuck.  After some reverse engineering, they found the commands for activation and the acknowledgement code. If you don't want/need Intel AMT, check often to make sure it is turned off.


BHUSA17: SGX Remote Attestation is not sufficient

Yogesh Prem Swami, Secure Substrates

[Note: As a reminder, these are my notes. The opinions are generally of the presenter, unless specifically noted.]

Swami used to work at Cryptography Research and is now at a startup working on secure containers. A typical misconception is that SGX is a black box, but that's not true. Interrupting the processor can reveal information.

How do you ensure that software runs on real hardware?  The cloud provider could actually put it on a simulator. There are two types of attacks - run the simulation on generic hardware, or run a simulator inside real hardware to man-in-the-middle.  SGX protects against both of these attacks.

You need to make sure you're using real HSM hardware, which is hard as the cloud provider may not do that.

Looking at the common SGX enclave design:
  • Define a generic remote attestation scheme
  • Arbitrarily compose different crypto schemes (generate keys, save to disk, generate CSR requests, and create an audit log)
  • Define a workflow that combines first 2 steps

The example is vacuously broken. The attacker runs attestation correctly but runs the rest of the protocol outside of the enclave. This allows the attacker to simulate some sub-computations: a commitment log of confidential data (e.g., the SHA-256 of someone's birthdate), sending confidential data over encrypted TLS, and simulating states related to the birthdate. The enclave is a single protocol sequentially composed of sub-protocols.

Imagine a case where the user wants to store their keys on disk. The problem here is that SGX does not have an internal monotonic counter, so it needs to get one from an outside source. The trusted way to do this is via the TPM, and you want to make sure nobody switches out the motherboard and resets the counter. Sounds good? But this scheme is still insecure: the enclave doesn't have a way to control how many instances of itself exist. An attacker can run the same enclave concurrently and feed the same tpm_cntr and tpm_sig to both instances. Concurrent composition is not limited to running the same operation. SGX has no built-in replay protections: the launch enclave cannot limit concurrency, EINITTOKEN is a long-term credential, and the whitelist is ineffective.
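A toy model of that replay problem is sketched below, under loose assumptions: the names tpm_cntr and tpm_sig follow the notes, but everything else (the HMAC stand-in for the TPM signature, the Enclave class) is invented to show why a per-instance freshness check cannot detect concurrent replays.

```python
# Toy model: two concurrent instances of the "same enclave" both accept the
# same signed TPM counter value, so a stale sealed state can be replayed.
import hmac, hashlib

TPM_KEY = b"tpm-signing-key"   # stands in for the TPM's signing key

def tpm_sign(counter: int) -> bytes:
    return hmac.new(TPM_KEY, str(counter).encode(), hashlib.sha256).digest()

class Enclave:
    """Accepts sealed state as 'fresh' if it carries a correctly signed counter."""
    def load_state(self, state: dict, tpm_cntr: int, tpm_sig: bytes) -> bool:
        if not hmac.compare_digest(tpm_sig, tpm_sign(tpm_cntr)):
            return False          # bad signature: rejected
        if state["counter"] != tpm_cntr:
            return False          # counter mismatch: rejected
        self.state = state
        return True               # accepted as "fresh"

# The attacker captured one legitimate (counter, signature) pair...
tpm_cntr, tpm_sig = 42, tpm_sign(42)
stale_state = {"counter": 42, "keys": "old-keys-the-attacker-wants-reused"}

# ...and replays it into two concurrently running instances. Neither instance
# can tell that another instance exists, so both accept the stale state.
print(Enclave().load_state(stale_state, tpm_cntr, tpm_sig))  # True
print(Enclave().load_state(stale_state, tpm_cntr, tpm_sig))  # True
```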

The damage is not limited to that piece of data, but extends to anything that depends on that data. You're not just affecting one node; you might be impacting multiple nodes.

Another issue is state malleability and knowledge extractors. An SGX enclave is not a black box: an adversary can force the enclave to exit at an arbitrary execution point via AEX and then control what happens after the AEX. Global enclave state is malleable! A partial rewinding effect is possible: since SGX allows multiple threads within the same enclave, interrupt one thread at the appropriate point and ECALL into other threads. Be careful with interactive proof-of-knowledge protocols, as they require just two responses per commitment to reveal the secret.

Now onto other issues...

Group signatures allow members to anonymously sign messages on behalf of the group with a single group public key (and a unique private key per member).  The group manager decides who can join the group and grants credentials to each member. The security goals are full anonymity and member revocation. EPID can be used for anonymity, blind join, and member revocation.

EPID signatures have two distinct components: a basic signature and a non-revocation proof. The basic signature is based on BBS+. In addition to the basic signature, each signature also contains a lot of math that is hard to capture on a blog. EPID doesn't have full anonymity if you revoke a key.

Each platform is given a provisioning ID (PPID), which is known to the CPU and Intel, so there is no anonymity at join time (though EPID also doesn't claim to provide that). The researcher believes there is an online database, maintained by Intel, of all of these highly sensitive IDs, which would be an attractive target for attackers.

The enclave creates a local attestation for the quoting enclave (QE) and optionally requests that the QE generate a quote for the enclave. The QE creates an encrypted EPID signature, and the enclave validates the QE's local attestation on the encrypted quote. The enclave sends the encrypted quote to the Service Provider (SP); the SP cannot validate the quote itself, even if it has access to the group public key.

The provisioning enclave and quoting enclave are securely implemented, but there is a lot of bike-shedding crypto. The design is secure against sequential, concurrent, and state-malleability attacks. Still, there is no privacy in spite of group signatures: remote attestation quotes are encrypted and can only be validated by Intel, which destroys privacy and could be abused by a MitM (Intel?).