Thursday, March 9, 2023

Using Trusted Platform Module (TPM) backed certificates for Secure Shell (SSH)

Trusted Platform Modules are awesome devices for storing & securing credential material.

Googling around, I found a great write-up for a non-enterprise environment (note: self-signed/ephemeral CAs are used to get a 10-year certificate). That article is found here:

Windows SSH client with TPM (habets.se)


However, after following the instructions I still had trouble getting PuTTY Common Access Card (PuTTY-CAC) to successfully log in to machines, receiving an error of "Server refused public-key signature despite accepting key!"

Modifying the sshd_config on a server to set a LogLevel of DEBUG & monitoring /var/log/auth.log during an authentication cycle seems to indicate the key will be accepted, but then the connection closes. This led me to believe that the issue was in the configuration of the client with the TPM. To be honest, I didn't know much about how any of it works. In fact, I still don't. But research & trying stuff out led me to articles like this:
Microsoft Cryptographic Service Providers - Win32 apps | Microsoft Learn

Listing Cryptographic Service Providers can be done with certutil using the "-csplist" switch. Note: this isn't clearly documented as an option!
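For reference, here's roughly what that looks like on a machine with smart card support (a sketch from memory; the exact provider list varies by system and Windows version):

```
C:\> certutil -csplist
Provider Name: Microsoft Base Smart Card Crypto Provider
Provider Name: Microsoft Smart Card Key Storage Provider
Provider Name: Microsoft Software Key Storage Provider
Provider Name: Microsoft Platform Crypto Provider
...
CertUtil: -csplist command completed successfully.
```

Output here is abbreviated; run it on your own machine to see what's actually available.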

Experimentally I discovered that instead of using the proposed "Microsoft Base Smart Card Crypto Provider", using the Provider name:
ProviderName = "Microsoft Smart Card Key Storage Provider"


lets me complete all of the steps and successfully connect to servers without error. I think the issue is that the base crypto provider is not correctly signing challenges. But, of course, I have no idea. But this has worked for me!

Monday, October 28, 2013

Hacking OrangeHRM, Between My Very Busy Days

This is just a brief update on some of the things I've been working on the last few months. As some of you know, I've been super busy with a variety of projects both at work and outside of work. But I managed to get a little "play" in too.

Back in May I decided to do my usual: find a piece of software to hunt for bugs in, report the bugs to the vendor, and assist in any way I could in fixing those bugs. I find it rewarding to do this and it benefits the world as a whole. Usually I look in software that's a little off to the wayside. But this time I decided to look at the most downloaded Human Resources Management application on SourceForge. The application I looked into was: OrangeHRM.

It clocks in at about 2K downloads per week, significantly higher than the <= 100 downloads per week software I normally like to look at, since fewer researchers tend to be analyzing those.

Even in my initial review of this software I could tell there was some proactive response to security concerns. First, it appeared there had been known vulnerabilities in the past which no longer existed, indicating they have a response team of developers ready to patch bugs. It was at this point I grew a bit concerned about my ability to find bugs in their system... but I tried anyway.

And I found some.

I gave them a list of the vulnerabilities I had found and it was very well received. A very positive response in fact. A reply came with a thank you, a note that they'll be actively pursuing fixes, and a request for my resume. Flattered, I indicated that I am happily employed and not actually seeking work. I merely enjoy bug hunting in what little time I have. Nonetheless, they managed to coax a resume out of me.

And then they made a request I was not expecting...

They asked if I had any interest in looking into their enterprise systems as well. Like a "thanks for hacking our software, would you like to do it more?" but not in those exact words.

My exact words were: "Um.... Heck Yeah!"

Okay... not my exact words, but close enough!

So I agreed to jiggle their door knobs and see what, if anything, would open up. I found enough vulnerabilities to provide real feedback (I'm not positive about the status of all of these, so sorry, no in-depth details right now, but keep an eye out for future posts!). And for the areas I found to be secure or very well done (of which there are plenty), I could provide that as feedback as well.

One of the things I found particularly interesting about OrangeHRM is that it appears there must be code review in place for their core application. I say this because the core application seems to lack any SQL Injection or XSS.

Now when I say this, I don't mean there are *none,* just that the core lacks them. I did find some opportunities for these in the application, but I believe the risk for them is relatively low in general. In fact in their enterprise systems I believe they've managed to mitigate these issues entirely. It seems they've adopted a defense-in-depth approach as well.

First, they're being proactive in bringing in 3rd party auditors to assess their applications and environments (at least me, if not more).

Second, they seem to employ some coding standards for things like SQL parameterization.

And third, they've added PHPIDS as an additional layer atop their application. This is another open source application which may be used to identify, report, and block some potentially heinous actions by malicious users.

So really, I've had a lot of fun working with this company and getting to expand out a bit into the penetration testing world. It's been an absolutely positive experience. They are highly receptive to feedback and concerns, and have been markedly solution oriented.

Hopefully I'll have the opportunity to work with them more in the future. And that's some of the recent stuff I've been working on!

Until next time, Hack Legal, Hack Safe, but most of all Hack fun!

Friday, August 23, 2013

Cross Site Scripting vs. ASP.Net EnableValidation="true"

A friend of mine recently told me about a debate he was participating in with a colleague of his. The colleague stated that the reason ASP.Net applications are so secure is that when the EnableEventValidation flag on a page is set to true, it will catch any Cross Site Scripting (XSS) attempt and throw an error.

I'm here to say... his colleague is absolutely correct!

Wait... that's not right. They are actually mistaken. And unfortunately I've met a number of individuals who erroneously carry this same belief.

This is a perfect example of the "Defender's Dilemma," wherein a defensive posture must either account for 100% of all attacks and vectors, which may be costly to the point of exhaustion, or accept attacks as inevitable, because the attacker need only be right once in a million attempts to perform a breach.

This said, yes, EnableEventValidation may present a thorn in an attacker's side. Particularly when all of the sweet sweet pwnage is so rife with opportunity because of that one field that is so obviously vulnerable. But then you drop your <script>, it's thwarted immediately, and you find yourself wishing it was a PHP host you were attacking instead.

But reality is contrary to the surprisingly common misconception that this flag prevents XSS attempts in all cases. It just doesn't, and there are good reasons why. XSS has a huge key-space of potential vectors, arguably an insurmountable one. This is compounded by the fact that new methods can evolve from previously benign, unexpected vectors. A perfect example of this would be UTF-7 encoding attacks, a bug that affects only a few browsers nowadays. Like I said, the defender has to always be right and the attacker only has to be right once. Another issue is that some XSS is represented in ways which may easily pass for real, valid data; e.g. it may not contain tags at all.

Consider the following as a simple example.

The ASP.Net Page definition, simple example:
<%@Page ... EnableEventValidation="true" %>
...
<div style="<%= Request.QueryString["style"] %>">Welcome Home Marty</div>

Though this is obvious to any conscientious developer, this sort of code can slide into a code base if it is written by a less experienced coder and approved too hastily by an experienced one. How the code gets into a code base is largely unimportant; what is important is to assume you've got something like this, and maybe it's a good time to do a code review if you're relying entirely on the EnableEventValidation flag to protect you.

When exploiting this (and this is one of my favorite methods personally), try leveraging JavaScript events. For instance, use the onmouseover event to fire your arbitrary JavaScript when the affected user mouses over a particular object:

http://host/vulnerablePage.aspx?style="%20onmouseover="alert('xss');"

No tags, but the server will process the request and render the onmouseover event with the JavaScript payload. An important note when testing this: you know it's worked because it passes EnableEventValidation with no exception thrown. However, if you attempt to exploit this against users of IE or Chrome you'll likely see little to no success (erring towards no success). This is because these clients recognize that the script being executed was passed to the server by the client. This is not a feature of ASP.Net but of the client itself. So if you use Firefox without NoScript installed, it should be responsive to this attack.

Now in a similar situation where a client might get to set their choice of style by inserting it into a database, this is a different story. This is the more troublesome persistent XSS attack, and the vulnerable code may look something like:

<%@Page ... EnableEventValidation="true" %>
...
<div style="<%= myDataBaseObject.ChosenStyle %>">Welcome Home Marty</div>

In which case even IE and Chrome will not have a frame of reference for the XSS. They will not be able to distinguish the attack from valid data and will render the page with the onmouseover event ready and willing to fire!

The solution to this is to encode your outputs in a context-sensitive manner when sending information to your clients. By all means EnableEventValidation, but do not rely on it as a replacement for secure programming practices!
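To make the failure mode concrete, here's a small Python sketch (not ASP.Net, but the same attribute-injection pattern as the div above) contrasting raw interpolation with context-sensitive encoding; the function names are mine, not from any real code base:

```python
import html

# The same payload as the query string above: no tags, just an
# attribute breakout.
PAYLOAD = '" onmouseover="alert(\'xss\');"'

def render_naive(style):
    # Raw interpolation, mirroring <div style="<%= ... %>">.
    return '<div style="%s">Welcome Home Marty</div>' % style

def render_encoded(style):
    # Context-sensitive encoding: quotes become &quot;, so the
    # payload cannot escape the attribute value.
    return '<div style="%s">Welcome Home Marty</div>' % html.escape(style, quote=True)

# The naive render emits a live onmouseover handler...
assert 'onmouseover="alert' in render_naive(PAYLOAD)
# ...while the encoded render keeps the payload inert inside the attribute.
assert 'onmouseover="' not in render_encoded(PAYLOAD)
```

Nothing about the payload looks like a tag, which is exactly why tag-oriented filtering misses it while output encoding does not.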

And it should be noted, this flag does not provide any protection against other vectors like SQLi, CSRF, LFI, CR/LF, etc. It is really designed as an anti-XSS measure.

So until next time... Hack legal, Hack safe, but most of all... Hack Fun!

Monday, February 18, 2013

When Windows Hosts Files Stop Working

So, while working on what should have been a quick PowerShell script today, I ran into quite the hiccup. Annoying as hell.

My Hosts file on my Windows 7 Machine stopped working!

This was a rage inducing issue for me. But luckily I think I've identified the "problem." And yes, it is technically PEBKAC - but I think Microsoft should carry a little of the blame too.

First let us set the stage with a very basic PowerShell script to setup the hosts file the way we'd like.

$hosts = @()
$hosts = $hosts + [String]::Format("{0} {1}", "192.168.0.1", "some.local.domain")
$hosts = $hosts + [String]::Format("{0} {1}", "127.0.0.1", "www.somesite.com")
$hosts > $env:windir\system32\drivers\etc\hosts

Note: This must be run with appropriate permissions to write to the hosts file!

Pretty straightforward: make an array, add two strings to it with formatted IP/domain pairs, and write the array out to the file. When this is all said and done you should have something which looks like the following:
This is what we would expect. A hosts file with two entries that says some.local.domain -> 192.168.0.1 and another entry that says www.somesite.com -> 127.0.0.1.

Looks correct to me - and if you're reading this, yours probably looks correct to you too. But as you can see, the ping commands are not resolving based on the IPs in the hosts file! This is certainly not the desired effect. As if the hosts file were being "skipped."

What's the deal?

The error that resides in the above script does not reveal itself when opening the hosts file in your text editor. In fact, in some text editors it will never reveal itself. Why, you ask?

Simple: your text editor probably detects and transparently handles Unicode files. Or perhaps your text editor only supports Unicode files.

Yep, the whole annoying-as-hell bug is just a simple encoding problem.

Here's how I noticed it - I opened my hosts file in a hex editor (HxD in my case, cause it's free). Viewing the raw hex we see:
This thought occurred to me as a mere suspicion. I did what I've seen a number of sources say to do: open the hosts file, copy its contents, remove the hosts file, and then re-save the contents using just plain ol' Notepad. This does work, but it's not a solution for automation in the case of scripts (like I need it to be).

Also remember Notepad will save it with the extension .txt and you'll need to rename it.

So this works, in theory, because when Notepad saves the contents it saves them in ASCII, stripping the Unicode encoding. This is seen when viewing the contents of the new hosts file in HxD:
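If you want to see the difference at the byte level without a hex editor, a few lines of Python reproduce it (Windows PowerShell's ">" redirection writes UTF-16LE with a byte-order mark, the encoding Windows tools label "Unicode"):

```python
line = "127.0.0.1 www.somesite.com"

# Roughly what PowerShell's ">" redirection writes: UTF-16LE with a
# byte-order mark (BOM) and two bytes per character.
unicode_bytes = line.encode("utf-16")

# What Out-File -Encoding ASCII (or Notepad's ANSI save) writes.
ascii_bytes = line.encode("ascii")

# The Unicode version leads with the FF FE BOM and interleaves NUL
# bytes between characters - which a byte-oriented hosts file parser
# chokes on.
assert unicode_bytes[:2] == b"\xff\xfe"
assert unicode_bytes[2:8] == b"1\x002\x007\x00"
assert b"\x00" not in ascii_bytes
```

Both files look identical in a Unicode-aware editor; only the bytes differ.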

So, how does one fix the PowerShell script? You're in luck, it's an easy fix. Annoyingly easy, after so many hours bashing your head against a keyboard over nothing.

Setup your array in the same way, when you're ready just pass your array to Out-File like so:
$hosts | Out-File -Encoding ASCII -FilePath $env:windir\system32\drivers\etc\hosts
And now you should have similar - but better - results! Notice how Notepad looks exactly the same, but the pings now resolve to the correct IPs, etc. Thus confirming my hypothesis.
That's it, I hope this helps someone. So as usual....
Hack Legal, Hack Safe, but most of all Hack Fun!
Until next time...

Monday, December 17, 2012

Persistence is Key, Another Bug Hunt

Introduction:
There are few things I find more frustrating than looking for bugs and not finding a single one. I've seen and used buffer overflows and format string vulnerabilities in war games, I've even seen some of these bugs in real applications. Sometimes I choose a piece of software to assess and it's just rock solid.

But that's not a reason to stop.

I am specifically talking about Cerberus FTP Server, which is, in my opinion, very well written - and it showed in the great deal of difficulty it gave me in finding bugs through binary analysis like I have done in the past. I did get lucky enough to find a couple of bugs, but only once I shifted my paradigm. One set is fairly trivial and the other is less trivial and kind of hard to get to fire. This was all performed using Cerberus FTP Server v5.0.5.1.

Usually I'd discuss the whole process of my bug hunt, but in this case it took a long time to get anywhere. I started by seeking out printf/buffer overflow vulnerabilities, running some fuzzers and the like. All to no avail. An analysis of the binary showed a good indication of why:

This call to sprintf is of the "_s" variant. This means it is "security enhanced", as described here. I checked every single possibility for *printf and found very few potentially viable options. Compilers now have warnings for these sorts of bugs, and when the developer is aware of them as well, finding one is that much harder. Of the possibilities I did find, I was later informed they exist in unused code sections which are presently unreachable. So whether they'll even be available is unknown, much less vulnerable, especially after mentioning them!

Moving Along:

Just to get into it, I gave up on the classics and moved to the modern world: web based attacks. This is an area of vast opportunity, which means it's often fruitful. The bugs can slip in through many avenues, sometimes even shrouded under the guise of presumed-safe practices.

Let's begin with the trivial bug. It is a Cross Site Scripting (XSS) bug and it can be located in the "/servermanager" page of the web admin interface for Cerberus FTP Server. This interface is disabled by default. Once enabled, an administrative user may log in at "http://localhost:10000/". Once the administrator is logged in they may find a link to the server manager page in the left menu, or use "http://localhost:10000/servermanager" directly.

It should look, something like this:
Select the "Messages" tab and you will be presented with the vulnerable page - though if the server has already been exploited, those should have already fired. The messages page looks like so:
Each of the message fields may be exploited trivially like so:
</textarea><script>alert('trivial xss');</script>

Click update to save the message, then reload the server manager page for the effect:

This is fun and all, but as one should note there is only one administrative user for the web interface, and thus if this XSS bug is being leveraged you probably have larger problems. Nonetheless, this bug has been fixed and the latest version of Cerberus FTP has the corrections.

And a Little Harder:
Now for the more difficult bug. Having found a trivial XSS bug, I now know that at least some fields may not be properly escaped. So the goal is to find other methods of interacting with the web interface that may not require authentication. Most of the options are available only to administrative users, and after exploring all the available options I finally decided to attempt to attack the "http://localhost:10000/log" page.

First thing to note is the log is empty. JavaScript is required for using the log page and it is on an 8 second update cycle. When log entries are shown you have approximately 2-3 seconds to review them before they are cleared. This plays a role in making this bug irritatingly difficult to fire. It should also be noted that the data transferred from the server is in fact HTML encoded, hiding this bug from view in post-mortem analysis. No time like the present.

The log page is shown here, empty, waiting on its 8 second cycle:
Normal usage will fill the log with events, but we're most curious about usage which does not require any form of authentication. To generate some "usage" traffic I write a small application in Python (test.py):

import sys
import socket

# Connect to the FTP service and read the banner.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('192.168.0.200', 21)
sock.connect(server_address)
data = sock.recv(128)
print(data, file=sys.stderr)

# Send a USER command with our test username and read the response.
user = b"USER testuser\r\n"
sock.sendall(user)
data = sock.recv(128)
print(data, file=sys.stderr)

# Disconnect cleanly.
quit = b"QUIT\r\n"
sock.sendall(quit)
sock.close()
Then I execute this from bash using a simple for loop, and I note that it returns the XSS banner we've already exploited.
for each in {1..100}
do
python test.py
done
We execute this loop and look for the results; sometimes I had to run the script multiple times to get the entries to show:
Obviously we have some control over what is shown in this log even though we have not authenticated in any way. The real question is: does this render in a way which might be dangerous? To test, we merely adjust our 'testuser' to be something more fun:

USER <div onmouseover="alert('xss');" />

And run the loop again. Because we're using onmouseover, we'll need to fire this by mousing over the areas where the divs will be. If it works, it should be immediately apparent.

And a Cross Site Scripting bug is discovered. As with the other one, I am told this bug has been patched and is no longer an issue. Special thanks to Grant @ Cerberus FTP for the fixes and the extremely timely support given; this has been the best company I've had the pleasure of dealing with so far in regards to security related matters.

For reference these bugs were assigned CVE-2012-6339.

See you next time and until then, Hack Safe, Hack Legal, but most of all Hack Fun!

Thursday, October 18, 2012

My First Experiences Bug Hunting, Part 2

So in Part 1 I disclosed the specifics of CVE-2012-3819, and I ended with the concept of loading a debugger (OllyDbg in this case) in order to analyze the program while it crashes from the stack overflow exception. During this process it is easy to note that no control of the EIP register is obtained - and thus the bug is relegated to a generic DoS type bug.


As you can see the registers remain largely intact. EAX is a pointer into the stack (which is very low at the point of crashing, due to the stack consumption). So the impact of this bug is likely very small. But since we have the debugger open anyway, we go ahead and do a search for strings, so we can see what else might be of interest while we're in town.

To do this we first want to be analyzing the actual executable, in this case "Campaign11.exe", which is the primary executable for Campaign Enterprise 11; my installation is a previous version (11.0.538). This software, in which I found another five vulnerabilities, is used to send e-mails to large lists of individuals. It is an excellent marketing tool, and including the recent fixes for these bugs (version 11.0.551), I believe it will be that much better now. We click View in the menu and select "Executable Modules", which loads a list of the images presently mapped into memory. We then double click the entry listing Campaign11.exe, which loads that module into the CPU window.

Right clicking in the instruction pane of the CPU window, we choose "Search for -> All referenced text strings." This loads a list of things that look like strings in the Campaign11.exe application and provides a great deal of insight into the inner workings of the application. First I note at the top of this list several tell-tale signs that it was programmed in Visual Basic, including typical VB style nomenclature for control names, etc.

Knowing one of the key failures in many web based applications is SQL Injection (SQLi), I begin looking for SQL strings of interest. I right click in the list of strings and select "Search for text", seeking out instances of "select * from", which is ubiquitous. I repeat this search, seeking things that look interesting, and find several entries that could be fun:
004C6143   PUSH Campaign.00426428                    UNICODE "select * from tblUsers where "UID"=" 
004C624D   PUSH Campaign.004264AC                    UNICODE "select * from tblUsers where UID="
There are many of these, but they all seem valuable - they appear to be the start of a string concatenation which may lead to SQL injection, and this is the sort of code you would associate with the login screen. I find all instances of "select * from tblUsers" and set breakpoints on each, using Ctrl+L to go to the next and F2 to speed the process of "red-pointing."

Then I try to login (plus a little SQL Injection since I'm expecting the possibility) to see if I won anything....


Damn. I see my SQLi is thwarted, likely by a call to replace("'", "''"). Scrolling up some, you can see this call does in fact happen. I see down in the current operands area that the string we're specifically dealing with is "select * from tblUsers where username=". So back in the strings window I limit my breakpoints down to just the ones referencing the string we landed on first. There are only two, and I notice something interesting near the second instance: a string "User-Edit.asp". I try to load this URL: "http://localhost:82/User-Edit.asp"

Bingo! I see a screen with *USERNAMEINPUT* as the username - I presume this is some sort of placeholder. I combine this with the test for UID in the strings above and take a stab in the dark: I use UID as a query string parameter and set it to 1. "http://localhost:82/User-Edit.asp?UID=1"

No dice, but I did land on a breakpoint which may show me what's happening a bit. I step forward a little and wait for a string to appear in the top right pane of the CPU window - eventually it does, right around:
005FD39A   . 8B85 60FFFFFF  MOV EAX,DWORD PTR SS:[EBP-A0]
The string is in EDX, and I see it's carrying my "UID=1". I wonder... what about SQLi here? I try it: "http://localhost:82/User-Edit.asp?UID=1%20OR%201=1"

I see my SQLi seems to be unhindered, so I let the program run through (F9).
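My guess as to why the quote-doubling defense fails here: UID lands in a numeric context, so the payload needs no quotes at all. A Python sketch of the idea (the real code is Visual Basic and I haven't seen it, so these function names are hypothetical):

```python
def sanitize(value):
    # The defense observed at the login screen: double up single quotes.
    return value.replace("'", "''")

def build_uid_query(uid):
    # Numeric context: the value is concatenated without surrounding
    # quotes, so quote-doubling never comes into play.
    return "select * from tblUsers where UID=" + sanitize(uid)

# The injected boolean survives sanitization untouched.
assert build_uid_query("1 OR 1=1") == "select * from tblUsers where UID=1 OR 1=1"
```

Quote-doubling only helps when the attacker must break out of a quoted string literal; parameterized queries would close this hole in both contexts.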



I'm greeted with the user-edit page - and I have the admin user (the only user in my case). Reviewing the source of the page, I see it populates the username as well as the password boxes. In my example it appears random - but that's just the password I punched in. It's in plaintext!

I go ahead and enumerate all of the available .asp pages and continue similar testing on these. Then I contact the vendor as well as cve-assign@mitre.org, and this is what was assigned:
CVE-2012-3820: Multiple SQL Injection: activate.asp - SerialNumber field, User-Edit.asp - UID field.
CVE-2012-3821: Unauthorized access to the activate.asp page allows modification of the stored database field SerialNumber without authentication or authorization.
CVE-2012-3822: Unauthorized access to the User-Edit.asp page allows an attacker to enumerate users and their credentials by supplying their UID in the querystring.
CVE-2012-3823: The product stores passwords in clear text and these may be retrieved using the User-Edit.asp page.
CVE-2012-3824: Multiple pages accessible without authentication or authorization, which may lead to the unintended disclosure of information or functionality, but was not assessed: Register.asp, Group-Edit.asp, Subscriber-Edit.asp, SMTP-Edit.asp, Email-Edit.asp, Admin-GlobalConfig.asp, Admin-Users.asp, Campaign-Datasource.asp
And that sums up my first experiences bug hunting.

See you next time and until then, Hack Safe, Hack Legal, but most of all Hack Fun!

Monday, September 24, 2012

Pandora Jacking

So Pandora.com is a very cool and very awesome website. It introduces me to all sorts of new music, and expanding my repertoire using Pandora's heuristic capabilities to determine what I like is very rewarding. Sometimes it gets it wrong, and corrections are recommended. But sometimes I like to cut to the chase. Sometimes I just want to listen to my music. And I know I could just pause Pandora and start up my media player... but this is a tech blog, gotta make it fancy.

So, enter the desire to inject my own music into the Pandora experience. At first this seems like a daunting task. There is a lot of traffic that flies across the wire when loading the Pandora player. A lot of data to sift through to find the meaningful parts. I worry about file formats and structures and interfacing with Flash, etc. And then I calm my nerves (with whiskey) and begin logging some data. I set up Chrome to pass all my web traffic through a web proxy, OWASP ZAP.

First off... holy crap, ads/ad tracking. I damn near died when I saw the list of resources that load when visiting Pandora!


But it's not the time to get caught up in the overwhelming nature of ads. Sifting through all the requests, I identify the ones that look particularly pertinent. They look kinda like:
GET http://audio.*\.pandora\.com/access/?(.*)
This is a regular expression representation to simplify the otherwise very long requests. That is, '.' represents any character and '*' the previous character repeated zero or more times, so '.*' means any number of any characters. The '\' acts as an escape which removes the special meaning of the character following it, thus '\.' means a literal period.
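You can sanity-check the pattern with Python's re module (with the '?' escaped so it matches a literal question mark); the hostname and token below are made up for illustration:

```python
import re

# The simplified pattern from above, with the query separator escaped.
pattern = re.compile(r"http://audio.*\.pandora\.com/access/\?(.*)")

# Hypothetical content-server URL; the real hosts and tokens vary.
url = "http://audio-sv5-2.pandora.com/access/?version=4&token=abc123"

m = pattern.match(url)
assert m is not None
# Group 1 captures everything after the '?': the tokens, keys, etc.
assert m.group(1) == "version=4&token=abc123"
```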

The portion following the '?' seems to contain tokens and keys, etc. for Pandora, so I'm actually going to ignore it, because it occurs to me that if done right, the content server at Pandora will never even receive this request! How, you ask? Well, it's simple really... Domain Name System (DNS) spoofing.

It's somewhat a sure-fire bet that your machine does not already have the Internet Protocol (IP) address of the content server stored. This means before you make the GET request you're gonna have to connect to the server, and you'll need the IP address to do that. That is, you'll need to make a DNS request - and this is where we'll hijack control to inject our own content.

By controlling the DNS requests we can control where the Pandora client goes to get the music. Then we just serve up a file of our choosing and hope it works. For simplicity I use a file delivered from Pandora themselves to avoid issues with encryption, formatting, etc.

So I open up etter.dns, add a line for:
audio*.pandora.com A 192.168.0.10
 The "192.168.0.10" is the IP address of my new content server. I start up ettercap with the dns_spoof plugin:
ettercap -T -M ARP /192.168.0.1/ /192.168.0.2/ -P dns_spoof
 Now, with luck, I should be in the middle between the gateway (192.168.0.1) and my Pandora player (192.168.0.2), redirecting the content requests to a local server (192.168.0.10) running on port 80. I chose to use Apache, which is installed in BackTrack 5 by default. We just need to modify the file at:
/etc/apache2/sites-available/default
 We add an AliasMatch, which does regular expression matching and rewriting for the /access/ requests.
AliasMatch /access/(.*) /var/www/music.m4a
 This redirects all requests for /access/* to return the file "music.m4a".
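If you'd rather not touch Apache, a few lines of Python can stand in for the same behavior: serve music.m4a for any /access/ request. This is a sketch under the assumption that music.m4a sits in the working directory:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class AccessHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Mirror the AliasMatch rule: any /access/... request gets the
        # same local file back.
        if self.path.startswith("/access/"):
            try:
                with open("music.m4a", "rb") as f:
                    body = f.read()
            except OSError:
                self.send_error(404)
                return
            self.send_response(200)
            self.send_header("Content-Type", "audio/mp4")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

def serve(port=80):
    # Port 80 matches where the spoofed DNS sends the client; binding
    # it requires root.
    HTTPServer(("0.0.0.0", port), AccessHandler).serve_forever()
```

Calling serve() on the content-server box (192.168.0.10 above) should then answer the redirected requests just like the Apache setup.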

For testing, I played Pandora normally but through ZAP and intercepted one of the /access/ requests on my "Classic Rock" station: Def Leppard - Pour Some Sugar on Me. I copied the Request-URI to the clipboard and pasted it into Chrome's URL bar. This downloads the file locally, which gives us an actual Pandora encoded file to use. I 'save page as...' and save the file to /var/www/music.m4a.

Then I started up the web server and started a new Pandora station, "Rick Astley." Obviously you can't hear it, but it looks like this:

But, I hear Def Leppard - Pour Some Sugar on Me.

Replacing Rick Astley with Def Leppard... Hmm... I suppose it could be reversed too.

Pandora Jacking is successful!

Happy hacking, I'll bring something else cool soon I hope. Until then, Hack Legal, Hack Safe, but most of all Hack Fun!