Securing the Forms Authentication Cookie with Secure Flag
Filed under: Development, Security
One of the recommendations for securing cookies within a web application is to apply the Secure attribute. Typically, this attribute is just a directive telling the browser to only include the cookie with a request when that request is sent over HTTPS. This helps prevent the cookie from being sent over an insecure connection, such as HTTP. There is a special circumstance around the Microsoft .Net Forms Authentication cookie that I want to cover, as it can become a difficult challenge.
The recommended way to set the secure flag on the forms authentication cookie is to set the requireSSL attribute in the web.config as shown below:
<authentication mode="Forms">
<forms name="Test.web" defaultUrl="~/Default" loginUrl="~/Login" path="/" requireSSL="true" protection="All" timeout="20" slidingExpiration="true"/>
</authentication>
In the above code, you can see that requireSSL is set to true.
In your login code, you would have code similar to this once the user is properly validated:
FormsAuthentication.SetAuthCookie(userName, false);
In this case, the application will automatically set the secure attribute.
Simple, right? In most cases this is very straight forward. Sometimes, it isn’t.
Removing HTTPS on Internal Networks
There are many occasions where the application may remove HTTPS on the internal network. This is commonly seen, or used to be commonly seen, when the application is behind a load balancer. The end user would connect using HTTPS over the internet to the load balancer and then the load balancer would connect to the application server over HTTP. One common reason for this was easier inspection of internal network traffic for monitoring by the organization.
Why Does This Matter? Isn’t the Secure Attribute of a Cookie Browser Only?
There was an interesting decision made by Microsoft when they created .Net Forms Authentication. If the requireSSL attribute is set for the authentication cookie (specified above in the web.config example), the framework verifies that the connection to the application server is secure when the cookie is set using FormsAuthentication.SetAuthCookie(). This is the only cookie for which the framework performs this check.
This means that if you set requireSSL in the web.config for the forms authentication cookie and the request to the server is over HTTP, .Net will throw an exception.
This code can be seen in https://github.com/microsoft/referencesource/blob/master/System.Web/Security/FormsAuthenticationModule.cs in the ExtractTicketFromCookie method:
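The check looks roughly like the following. This is a paraphrased sketch for illustration, not the exact framework source; see the linked file for the real code.

if (FormsAuthentication.RequireSSL && !Request.IsSecureConnection)
{
    // The framework refuses to handle a cookie marked as requiring SSL
    // when the request did not arrive over HTTPS.
    throw new HttpException("Connection not secure while creating secure cookie");
}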
The obvious solution is to enable HTTPS on the traffic all the way to the application server. But sometimes as developers, we don’t have that control.
What Can We Do?
There are two options I want to discuss. The first one doesn’t actually provide a complete solution, even though it seems like it would.
FormsAuthentication.GetAuthCookie
At first glance, FormsAuthentication.GetAuthCookie might seem like a good option. Instead of using SetAuthCookie, which we know will break the application in this case, one might try to call GetAuthCookie and just manage the cookie themselves. Here is what that might look like, once the user is validated.
var cookie = FormsAuthentication.GetAuthCookie(userName, false);
cookie.Secure = true;
HttpContext.Current.Response.Cookies.Add(cookie);
Rather than call SetAuthCookie, here we are trying to just get a new AuthCookie and we can set the secure flag and then add it to the cookies collection manually. This is similar to what the SetAuthCookie method does, with the exception that it is not setting the secure flag based on the web.config setting. That is the key difference here.
Keep in mind that GetAuthCookie returns a new auth ticket each time it is called. It does not return the ticket or cookie that was created if you previously called SetAuthCookie.
So why doesn’t this work completely?
Upon initial testing, this method appears to work. If you run the application and log in, you will see that the cookie does have the secure flag set, just like with the web.config approach at the beginning of this article. The issue comes with sliding expiration. If you remember, in our web.config we had slidingExpiration set to true (true is the default; it was set in the config just as a visual aid).
<authentication mode="Forms">
<forms name="Test.web" defaultUrl="~/Default" loginUrl="~/Login" path="/" requireSSL="true" protection="All" timeout="20" slidingExpiration="true"/>
</authentication>
Sliding expiration means that once the ticket has passed half of its lifetime and is submitted with a request, it is automatically renewed. This allows an active user to avoid having to re-authenticate after the timeout. This renewal occurs in https://github.com/microsoft/referencesource/blob/master/System.Web/Security/FormsAuthenticationModule.cs in the OnAuthenticate method.
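The renewal logic looks roughly like the following. Again, this is a paraphrased sketch for illustration, not the exact framework source.

FormsAuthenticationTicket newTicket = FormsAuthentication.RenewTicketIfOld(oldTicket);
if (newTicket != oldTicket)
{
    // A fresh cookie is issued for the renewed ticket.
    HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(newTicket));
    cookie.HttpOnly = true;
    // The Secure flag is taken straight from the requireSSL web.config setting,
    // overwriting anything that was set manually when the user logged in.
    cookie.Secure = FormsAuthentication.RequireSSL;
    Response.Cookies.Add(cookie);
}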
We can see in the sketch above that the secure flag is again set from the FormsAuthentication.RequireSSL web.config setting. So even though we jumped through some hoops to set it manually during the initial authentication, every ticket renewal will reset it back to the config setting.
In our web.config the timeout was set to 20 minutes, so requesting a page after, say, 12 minutes (more than half of the timeout) will result in the application setting a new forms authentication cookie without the Secure attribute.
This option would most likely get scanners and initial testers to believe the problem was remediated. However, after a potentially short period of time (depending on your timeout setting), the cookie becomes insecure again. I don’t recommend trying to set the secure flag in this manner.
Option 2 – Global.asax
Another option is to use the Global.asax Application_EndRequest event to set the cookie to secure. With this configuration, the web.config would not set requireSSL to true and the authentication would just use the FormsAuthentication.SetAuthCookie() method. Then, in the Global.asax file we could have something similar to the following code.
protected void Application_EndRequest(object sender, EventArgs e)
{
    if (Response.Cookies.Count > 0)
    {
        for (int i = 0; i < Response.Cookies.Count; i++)
        {
            Response.Cookies[i].Secure = true;
        }
    }
}
The above code will loop through all the cookies setting the secure flag on them on every request. It is possible to go one step further and check the cookie name to see if it matches the forms authentication cookie before setting the secure flag, but hopefully your site runs on HTTPS so there should be no harm in all cookies having the secure flag set.
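If you want to limit the change to just the forms authentication cookie, a variation might look like this (an illustrative sketch):

protected void Application_EndRequest(object sender, EventArgs e)
{
    for (int i = 0; i < Response.Cookies.Count; i++)
    {
        // Only set the flag on the forms authentication cookie.
        if (string.Equals(Response.Cookies[i].Name, FormsAuthentication.FormsCookieName, StringComparison.OrdinalIgnoreCase))
        {
            Response.Cookies[i].Secure = true;
        }
    }
}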
There has been debate about how well this method works, based on when the EndRequest event fires and whether it fires for all requests. In my local testing, this method appears to work and the secure flag is set properly. However, that was local testing with minimal traffic. I cannot verify the effectiveness on a site with heavy load, load balancing, or other configurations. I recommend testing this thoroughly before relying on it.
Conclusion
For something that should be so simple, this situation can be a headache for developers that don’t have any control over the connection to their application. In the end, the ideal solution is to enable HTTPS all the way to the application server so the requireSSL flag can be set in the web.config. If that is not possible, there are some workarounds that will hopefully work. Make sure you do the appropriate testing to verify the workaround works 100% of the time, not just once or twice. Otherwise, you may just be checking a box because the cookie looks secure in one test, when it is not secure in all cases.
JavaScript in an HREF or SRC Attribute
Filed under: Development, Security, Testing
The anchor (<a>) HTML tag is commonly used to provide a clickable link for a user to navigate to another page. Did you know it is also possible to set the HREF attribute to execute JavaScript? A common technique is to use the onclick event of the anchor tag to execute a JavaScript method when the user clicks the link. However, to stop the browser from actually navigating anywhere, the HREF can be set to javascript:void(0);. This cancels the HREF functionality and allows the JavaScript from the onclick to execute as expected.
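A simple example of that pattern might look like the following (reconstructed for illustration; doSomething() is a placeholder for your own function):

<a href="javascript:void(0);" onclick="doSomething();">Click Me</a>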
In the above example, notice that the HREF is set with a value starting with “javascript:”. This identifier tells the browser to execute the code following that prefix. For those that are security savvy, you might be thinking about cross-site scripting when you hear about executing JavaScript within the browser. For those of you that are new to security, cross-site scripting refers to the ability for an attacker to execute unintended JavaScript in the context of your application (https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)).
I want to walk through a simple scenario of where this could be abused. In this scenario, the application will attempt to track the page the user came from to set up where the Cancel button will redirect to. Imagine you have a list page that allows you to view details of a specific item. When you click the item it takes you to that item page and passes a BackUrl in the query string. So the link may look like:
https://jardinesoftware.com/item.php?backUrl=/items.php
On the page, there is a hyperlink created that sets the HREF to the backUrl property, like below:
<a href="<?php echo $_GET["backUrl"];?>">Back</a>
When the page executes as expected you should get an output like this:
<a href="/items.php">Back</a>
There is a big problem though. The application is not performing any type of output encoding to protect against cross-site scripting. If we instead pass in backUrl="%20onclick="alert(10); we will get the following output:
<a href="" onclick="alert(10);">Back</a>
In the instance above, we have successfully inserted the onclick event by breaking out of the HREF attribute. Everything after the empty HREF is the malicious string we added. When this link is clicked it will display an alert box with the number 10.
To remedy this, we would typically use output encoding to block the escape from the HREF attribute. For example, if we escape the double quotes (" becomes &quot;) then we cannot break out of the HREF attribute. We can do this (in PHP as an example) using htmlentities() like this:
<a href="<?php echo htmlentities($_GET["backUrl"],ENT_QUOTES);?>">Back</a>
When the value is rendered, the quotes will be escaped like the following:
<a href=”" onclick=&"alert(10);“>Back</a>
Notice in this example that the HREF now contains the entire input as its value, rather than an onclick event actually being added. When the user clicks the link, the browser will try to navigate to https://jardinesoftware.com/" onclick="alert(10); rather than execute the JavaScript.
But Wait… JavaScript
It looks like we have solved the XSS problem, but there is a piece still missing. Remember at the beginning of the post how we mentioned the HREF supports the javascript: prefix? That will allow us to bypass the encoding we just performed. This is because, when using the javascript: prefix, we are not trying to break out of the HREF attribute. We don’t need to break out of the double quotes to create another attribute. This time we will set backUrl=javascript:alert(11); and we can see how it looks in the response:
<a href="javascript:alert(11);">Back</a>
When the user clicks on the link, the alert will trigger and display on the page. We have successfully bypassed the XSS protection initially put in place.
Mitigating the Issue
There are a few steps we can take to mitigate this issue. Each has its pros and cons, and many can be used in conjunction with each other. Pick the options that work best for your environment.
- URL Encoding – Since the HREF is meant to be a URL, you could perform URL encoding. URL encoding will render the javascript: prefix benign in the above instances because the colon (:) will get encoded. You should be using URL encoding for URLs anyway, right?
- Implement Content Security Policy (CSP) – CSP can help limit the ability for inline scripts to be executed. In this case it is an inline script, so something as simple as Content-Security-Policy: default-src 'self' could be sufficient. Of course, implementing CSP requires research and great care to get it right for your application.
- Validate the URL – It is a good idea to validate that the URL used is well formed and pointing to a relative path (a short sketch of this appears after this list). If the system is unable to parse the URL then it should not be used and a default back URL can be substituted.
- URL White Listing – Creating a white list of valid URLs for the back link can be effective at limiting what input is used by the end user. This can cut down on the values that are actually returned, blocking any malicious scripts.
- Remove javascript: – This really isn’t recommended as different encodings can make it difficult to effectively remove the string. The other techniques listed above are much more effective.
The above list is not exhaustive, but does give an idea of ways to help reduce the risk of JavaScript within the HREF attribute of a hyper link.
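As an example of the "Validate the URL" option above, a minimal .Net-style sketch might look like the following. GetSafeBackUrl is a hypothetical helper for illustration, not an existing API.

// Accept only well-formed, site-relative URLs for the back link; otherwise fall back to a default.
private static string GetSafeBackUrl(string backUrl, string defaultUrl)
{
    if (!string.IsNullOrEmpty(backUrl) &&
        Uri.IsWellFormedUriString(backUrl, UriKind.Relative) &&
        backUrl.StartsWith("/") &&
        !backUrl.StartsWith("//"))
    {
        return backUrl;
    }
    return defaultUrl;
}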
Iframe SRC
It is important to note that this situation also applies to the IFRAME SRC attribute. It is possible to set the SRC of an IFRAME using the javascript: notation. In doing so, the JavaScript executes when the page is loaded.
Wrap Up
When developing applications, make sure you take this use case into consideration if you are taking URLs from user supplied input and setting that in an anchor tag or IFrame SRC.
If you are responsible for testing applications, take note when you identify URLs in the parameters. Investigate where that data is used. If you see it is used in an anchor tag, look to see if it is possible to insert JavaScript in this manner.
For those performing static analysis or code review, look for areas where the HREF or SRC attributes are set with untrusted data and make sure proper encoding has been applied. This is less of a concern if the base path of the URL has been hard-coded and the untrusted input only makes up parameters of the URL. These should still be properly encoded.
Does the End of an Iteration Change Your View of Risk?
Filed under: Development, Security, Testing
You have been working hard for the past few weeks or months on the latest round of features for your flagship product. You are excited. The team is excited. Then a security test identifies a vulnerability. Balloons deflate and everyone starts to scramble.
Take a breath.
Not all vulnerabilities are created equal and the risk that each presents is vastly different. The organization should already have a process for triaging security findings. That process should be assessing the risk of the finding to determine its impact on the application, organization, and your customers. Some of these flaws will need immediate attention. Some may require holding up the release. Some may pose a lower risk and can wait.
Take the time to analyze the situation.
If an item is severe and poses great risk, by all means, stop what you are doing and fix it. But what happens when the risk is fairly low? When I say risk, I include the ability for the flaw to be exploited. The difficulty of exploitation can be a critical factor in the decision you make.
When does the risk of remediation override the risk of waiting until the next iteration?
There are some instances where the risk to remediate so late in the iteration may actually be higher than waiting until the next iteration to resolve the actual issue. But all security vulnerabilities need to be fixed, you say? This is not an attempt to get out of doing work or not resolve issues. However, I believe there are situations where the risk of the exploit is less than the risk of trying to fix it in a chaotic, last minute manner.
I can’t count the number of times I have seen issues arise that appeared to be simple fixes. The bug was not very serious and could only be exploited in a very limited way. For example, the bug required the user’s machine to be compromised to enable exploitation. The fix, however, ended up taking more than a week due to some complications. When the bug appeared two days before code freeze, there were many discussions about performing a fix (and potentially holding up the release) versus moving the remediation to the next iteration.
When we take the time to analyze the risk and exposure of the finding, it is possible to make an educated decision as to which risk is better for the organization and the customers. In this situation, the assumption is that the user’s system would need to be compromised for the exploit to happen. If that is true, the application is already vulnerable to password sniffing or other attacks that would make this specific exploit a waste of time.
Forcing a fix at this point in the game increases the chances of introducing another vulnerability, possibly more severe than the one that we are trying to fix. Was that risk worth it?
Timing can have an effect on our judgment when it comes to resolving security issues. It should not be used as a scapegoat or a reason not to fix things. When analyzing the risk of an item, make sure you are also considering how it may affect the environment as a whole. The risk may not lie directly with the flaw, but indirectly in how it is fixed. There is no hard and fast rule, which is exactly why we use a risk-based approach.
Engage your information security office or enterprise risk teams to help with the analysis. They may be able to provide a different point of view or insight you may have overlooked.
ASP.Net Insufficient Session Timeout
Filed under: Development, Security, Testing
A common security concern found in ASP.Net applications is Insufficient Session Timeout. In this article, the focus is not on the ASP.Net session cookie that is not effectively terminated, but rather on the forms authentication cookie that is still valid after logout.
How to Test
- User is currently logged into the application.
- User captures the .ASPXAUTH cookie (the name may be different in different applications).
- Cookie can be captured using a browser plugin or a proxy used for request interception.
- User saves the captured cookie for later use.
- User logs out of the application.
- User requests a page on the application, passing the previously captured authentication cookie.
- The page is processed and access is granted.
Typical Logout Options
- The application calls FormsAuthentication.SignOut()
- The application sets the Cookie.Expires property to a previous DateTime. (A sketch combining both options is shown below.)
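A typical logout handler combining both of the options above might look like this (an illustrative sketch, assuming an ASP.Net MVC controller action):

public ActionResult Logout()
{
    // Ask Forms Authentication to remove the cookie from the browser.
    FormsAuthentication.SignOut();

    // Also explicitly overwrite the cookie with an expired one.
    var cookie = new HttpCookie(FormsAuthentication.FormsCookieName, string.Empty)
    {
        Expires = DateTime.Now.AddDays(-1),
        HttpOnly = true
    };
    Response.Cookies.Add(cookie);

    return RedirectToAction("Login");
}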
Cookie Still Works!!
Following the user process above, the cookie still provides access to the application as if the logout never occurred. So what is the deal? The key is that unlike a true “session”, which is maintained on the server, the forms authentication cookie is self-contained. It does not have a server-side component to stay in sync with. Among other things, the authentication cookie contains your username or ID, possibly roles, and an expiration date. When the cookie is received by the server it is decrypted (please tell me you are using protection="All") and the data extracted. If the cookie’s internal expiration date has not passed, the cookie is accepted and processed as a valid cookie.
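To see how self-contained the ticket is, here is roughly what the server works with once it decrypts the cookie value (an illustrative sketch, not the framework's exact code; authCookieValue stands in for the raw cookie value):

FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(authCookieValue);
if (ticket != null && !ticket.Expired)
{
    string userName = ticket.Name;           // identity stored inside the ticket itself
    string userData = ticket.UserData;       // e.g. roles, if the application put them there
    DateTime expiration = ticket.Expiration; // the internal expiration date that gets checked
    // No server-side state is consulted; the ticket alone is enough to be accepted.
}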
So what did FormsAuthentication.SignOut() do?
If you look under the hood of the .Net framework (it has been a few years, but I doubt much has changed), you will see that FormsAuthentication.SignOut() really just removes the cookie from the browser. There is no server-side invalidation; it merely asks the browser to remove the cookie by clearing the value and back-dating the Expires property. While this does work to remove the cookie from the browser, it doesn’t have any effect on a copy of the original cookie you may have captured. The only sure way to make the cookie inactive (before its internal timeout occurs) would be to change your machine key in the web.config file, which is not a reasonable solution.
Possible Mitigations
You should be protecting your cookie by setting the httpOnly and Secure properties. HttpOnly tells the browser not to allow javascript to have access to the cookie value. This is an important step to protect the cookie from theft via cross-site scripting. The secure flag tells the browser to only send the authentication cookie over HTTPS, making it much more difficult for an attacker to intercept the cookie as it is sent to the server.
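One way to apply these settings application-wide is the httpCookies element in web.config, shown here for illustration:

<system.web>
  <httpCookies httpOnlyCookies="true" requireSSL="true" />
</system.web>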
Set a short timeout (for example, 15 minutes) on the cookie to decrease the window an attacker has to use a captured cookie.
You could attempt to build a tracking system to manage the authentication cookie on the server to disable it before its time has expired. Maybe something for another post.
Understand how the application is used to determine how risky this issue may be. If the application is not used on shared/public systems and the cookie is protected as mentioned above, the attack surface is significantly decreased.
Final Thoughts
If you are facing this type of finding and it is a forms authentication cookie issue, not the Asp.Net session cookie, take the time to understand the risk. Make sure you understand the settings you have and the priority and sensitivity of the application to properly understand “your” risk level. Don’t rely on third party risk ratings to determine how serious the flaw is. In many situations, this may be a low priority, however in the right app, this could be a high priority.
Static Analysis: Analyzing the Options
Filed under: Development, Security, Testing
When it comes to automated testing for applications there are two main types: Dynamic and Static.
- Dynamic scanning is where the scanner is analyzing the application in a running state. This method doesn’t have access to the source code or the binary itself, but is able to see how things function during runtime.
- Static analysis is where the scanner is looking at the source code or the binary output of the application. While this type of analysis doesn’t see the code as it is running, it has the ability to trace how data flows through the application down to the function level.
Dynamic scanning is an important component of any secure development workflow because it analyzes the system as it is running. Before the application is running, the focus shifts to the source code, which is where static analysis fits in. At this stage it is possible to identify many common vulnerabilities while integrating into your build processes.
If you are thinking about adding static analysis to your process there are a few things to think about. Keep in mind there is not just one factor that should be the decision maker. Budget, in-house experience, application type and other factors will combine to make the right decision.
Disclaimer: I don’t endorse any products I talk about here. I do have direct experience with the ones I mention and that is why they are mentioned. I prefer not to speak to those products I have never used.
Budget
I hate to list this first, but honestly it is a pretty big factor in your implementation of static analysis. The options that exist for static analysis range from FREE to VERY EXPENSIVE. It is good to have an idea of what type of budget you have at hand to better understand which option may be right.
Free Tools
There are a few free tools out there that may work for your situation. Most of these tools depend on the programming language you use, unlike many of the commercial tools that support many of the common languages. For .Net developers, CAT.Net is the first static analysis tool that comes to mind. The downside is that it has not been updated in a long time. While it may still help a little, it will not compare to many of the commercial tools that are available.
In the Ruby world, I have used Brakeman which worked fairly well. You may find you have to do a little fiddling to get it up and running properly, but if you are a Ruby developer then this may be a simple task.
Managed Services or In-House
Can you manage a scanner in-house or is this something better delegated to a third party that specializes in the technology?
This can be a difficult question because it may involve many facets of your development environment. Choosing to host the solution in-house, like HP’s Fortify SCA, may require a lot more internal knowledge than a managed solution. Do you have the resources available that know the product or that can learn it? Given the right resources, in-house tools can be very beneficial. One of the biggest roadblocks to in-house solutions is the cost. Most of them are very expensive. Here are a few in-house benefits:
- Ability to integrate directly into your Continuous Integration (CI) operations
- Ability to customize the technology for your environment/workflow
- Ability to create extensions to tune the results
Choosing to go with a managed solution works well for many companies. Whether it is because the development team is small, resources aren’t available, or budget is limited, using a 3rd party may be the right solution. There is always the question of whether you are ok with sending your code to a 3rd party, but many companies accept this to get the solution they need. Many of the managed services have the additional benefit of reducing false positives in the results. This can be one of the most time consuming pieces of a static analysis tool, right there with getting it set up and configured properly. Some scans may return upwards of tens of thousands of results. Weeding through all of those can be very time consuming and have a negative effect on the poor person stuck doing it. Having a company manage that portion can be very beneficial and cost effective.
Conclusion
Picking the right static analysis solution is important, but can be difficult. Take the time to determine what your end goal is when implementing static analysis. Are you looking for something that is good, but not customizable to your environment, or something that is highly extensible and integrated closely with your workflow? Unfortunately, sometimes our budget may limit what we can do, but we have to start someplace. Take the time to talk to other people that have used the solutions you are looking at. Has their experience been good? What did/do they like? What don’t they like? Remember that static analysis is not the complete solution, but rather a component of a solution. Dropping this into your workflow won’t make you secure, but it will help decrease the attack surface area if implemented properly.
A Pen Test is Coming!!
Filed under: Development, Security, Testing
You have been working hard to create the greatest app in the world. Ok, so maybe it is just a simple business application, but it is still important to you. You have put countless hours of hard work into creating this masterpiece. It looks awesome, and does everything that the business has asked for. Then you get the email from security: Your application will undergo a penetration test in two weeks. Your heart skips a beat and sinks a little as you recall everything you have heard about this experience. Most likely, your immediate reaction is to go on the defensive. Why would your application need a penetration test? Of course it is secure, we use HTTPS. No one would attack us, we are small. Take a breath… it is going to be alright.
All too often, when I go into a penetration test, the developers start on the defensive. They don’t really understand why these ‘other’ people have to come in and test their application. I understand the concerns. History has shown that many of these engagements are truly considered adversarial. The testers jump for joy when they find a security flaw. They tell you how bad the application is and how simple the fix is, leading to you feeling about the size of an ant. This is often due to a lack of good communication skills.
Penetration testing is adversarial. It is an offensive assessment to find security weaknesses in your systems. It is an attempt to simulate an attacker against your system. Of course there are many differences, such as scope, timing and rules, but the goal is the same: let’s see what we can do on your system. Unfortunately, I find that many testers don’t have the communication skills to relay the information back to the business and developers in a positive way. I can’t tell you how many times I have heard people describe their job as great because they get to come in, tell you how bad you suck, and then leave. If that is your penetration tester, find a new one. First, that attitude breaks down the communication with the client and doesn’t help promote a secure atmosphere. We don’t get anywhere by belittling the teams that have worked hard to create their application. Second, a penetration test should provide solid recommendations to the client on how they can work to resolve the issues identified. Just listing a bunch of flaws is fairly useless to a company.
These engagements should be worth everyone’s time. There should be positive communication between the developers and the testing team. Remember that many engagements are short lived so the more information you can provide the better the assessment you are going to get. The engagement should be helpful. With the right company, you will get a solid assessment and recommendations that you can do something with. If you don’t get that, time to start looking at another company for testing. Make sure you are ready for the test. If the engagement requires an environment to test in, have it all set up. That includes test data (if needed). The testers want to hit the ground running. If credentials are needed, make sure those are available too. The more help you can be, the more you will benefit from the experience.
As much as you don’t want to hear it, there is a very high chance the test will find vulnerabilities. While it would be great if applications didn’t have vulnerabilities, it is fairly rare to find one that doesn’t. Use this experience to learn and train on security issues. Take the feedback as constructive criticism, not as someone attacking you. Trust me, you want the pen testers to find these flaws before a real attacker does.
Remember that this is for your benefit. We as developers also need to stay positive. The last thing you want to do is challenge the pen testers by saying your app is not vulnerable. The teams that do that are usually the most vulnerable. Stay positive and it will be a great learning experience.