Network and Application Security Explained

July 26, 2011 2:00 pm

Online

This webinar explains the need for network and application security (firewalls, intrusion prevention systems, virus scanning, penetration testing, etc.) and provides tactical recommendations.

Tuesday 7.26.11 @ 2PM ET

This webinar also reviews the advantages and disadvantages of several options for assessing your present security stance, covers frequently asked security questions, provides recommendations for a solid and sensible approach to securing your environment, and explores typical network and application layer security flaws.

Featuring Mike Klein, president of Online Tech, and Adam Goslin, co-founder of High Bit Security.


Mike: Welcome to our webinar today. I’m Mike Klein, president of Online Tech, and we have with us Adam Goslin, co-founder of High Bit Security. Adam, welcome.

Adam: Thanks for having me Mike.


Mike: With all of the high-profile IT security breaches at companies like Sony, Google, even the U.S. Senate, it gives all of us in the IT world a little bit of the willies, thinking the bad guys have the upper hand. They seem to be adapting pretty quickly. As we put up new barriers and use more and more technology, they seem to find other ways into our systems. So let's start with the big picture here. How does one go about assessing the risk to their IT technology?

Adam: The biggest factor for companies today is first taking the step to assess their security. It's surprising how many organizations have never done a true assessment of their security standing. The cases you mentioned are certainly large-scale organizations. They have plenty of resources and should be capable of putting a good foot forward from a security perspective, and yet even those companies are open and vulnerable.

The bottom line for the security landscape is that it continuously changes. Every new patch, firmware release, new code your developers put up on your website, or change you make to an environment has the potential to open up new security holes and flaws. Those security holes and flaws are discovered on a daily basis, not only by the good guys but by the hackers as well.

Now with that, there are a couple of things companies can and should be doing to assess their security, and they come in various forms with different drawbacks. Assessing security from a physical and infrastructure perspective, and from a policy and procedures perspective, are things companies can do internally that will help mitigate the possibility of opening new security holes. The predominant tools on the marketplace are vulnerability scans, as well as penetration tests.

A vulnerability scan is basically a preconfigured application that looks at the network and application layers for specific vulnerabilities it can find. Because it is preconfigured, it will go down its list looking for this or for that, and if it finds something, it will alert you. If nothing shows up with that particular signature, it won't. It's very similar to a virus or malware scanner.


Mike: That’s something you would do on a regular basis?

Adam: Yes, absolutely. Vulnerability scanning is something one does on a regular basis. Companies out there in general should be running vulnerability scans at least quarterly, preferably more often, because things are constantly changing in the environment. Those pattern files are always warping and changing to keep up with the times.

So, just like you scan for viruses on your local machine, do the same thing with your network and application layers. Go in and run a vulnerability scanner against your environment so that you have some idea if you have gaping holes that have been opened up or something you have just done.
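Conceptually, the signature matching Adam describes can be sketched as below. This is a toy illustration, not any real scanner's logic; the signature strings, severities, and notes are invented for the example, and a real scanner ships thousands of regularly updated signatures.

```python
# Toy sketch of signature-based vulnerability scanning. The signatures
# below are made up for illustration; real scanners maintain huge,
# frequently updated signature databases.
SIGNATURES = [
    ("OpenSSH_4.3", "high", "outdated SSH daemon with known vulnerabilities"),
    ("Apache/2.2.3", "medium", "end-of-life web server release"),
]

def match_banner(banner):
    """Flag any known-vulnerable signature appearing in a service banner."""
    return [(sig, sev, note) for sig, sev, note in SIGNATURES if sig in banner]

print(match_banner("SSH-2.0-OpenSSH_4.3 Debian-8"))  # flags the high finding
print(match_banner("SSH-2.0-OpenSSH_9.6"))           # no signature matches: []
```

This is also why a clean scan result only means "no known pattern matched," exactly the limitation discussed later.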


Mike: Is there a standard set of tools?

Adam: Yes. There is a wide variety of methods you can use. For vulnerability scanners there are de facto ones you can get; in fact a lot of companies have them. You can typically get them through virus and malware scanning providers, the McAfees of the world, etc. There are other companies that specialize in vulnerability scanning as well. What you want to look for is an organization that is maintaining the rule base for its vulnerability scanner on a regular basis. The expertise of the person running the vulnerability scanner is another piece that plays into that too.

There are some shortcomings to a vulnerability scanner. Just because you run a vulnerability scanner doesn't mean you are secure. It just means that the device didn't find one of those patterns in what it looked at. It's a good broad-brush way to assess security at a high level. The next step, and where the real difference comes in, is when you do a penetration test.

A penetration test is a hired engagement with an IT security firm where they will run a whole battery of tests and scans, with a vulnerability scan being one of the tests leveraged. They will be looking for things on your network or environment that are not typically included in a vulnerability scan. The most important part of a penetration test is that you get the expertise of a security engineer who has been out in the space and has a lot of experience looking at systems, network layers, application layers, etc. They really have that background of experience and can poke, prod and go outside of the lines.

With every single engagement HighBit has done, for example, we are working with companies who have already used a vulnerability scanner, and yet with every single penetration test we perform we always find a security hole outside of the lines.


Mike: So if you are doing vulnerability testing every quarter, how often should you do the penetration test?

Adam: It really depends on what you are doing with your environment. Maybe you are an organization that does maintenance patches throughout the year and one large release a year. The recommendation is that if you are making large changes to either your network layer or your application layer, specifically standing up a brand new application or releasing a whole new section of functionality that has never been there before, that would be the time to run it. So, at least once a year. Again, the vulnerability scan gives you broad-brush coverage for the minor patches or functionality enhancements, but the penetration test is the catch-all. That's the insurance you get at the end of the year.


Mike: People talk about the security assessment; what is involved with that? A vulnerability scan or a penetration test? Or is there more involved when somebody comes in and does a security assessment?

Adam: From a security assessment perspective, using a penetration test as the example, let's walk through what the typical process looks like. Typically you start by looking at the scope. What is the scope of the test we want to include? Whether it is sections of someone's network or the entire network, we will look at what it looks like from the outside. How many IP addresses are involved? How many web applications? Do they require credentialed testing (i.e., do you have to use a user name and password to be able to get into the application)?

Or is the entire application just open to the Internet? You do a lot of discovery around that external layer. In an external penetration test we will run a vulnerability scanner and a port scanner. We will be looking at every TCP and UDP port, the communication channels that servers use. We will look at every communication path, from a network layer perspective, from every machine that is on that network. On the application layer we have automated tools that we will run through; in our case, we also exercise the site manually.
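The port-scanning step can be sketched as a simple TCP connect scan. This is a minimal illustration only (TCP only; real scanners also probe UDP and use much faster techniques), and it should be run solely against hosts you are authorized to test.

```python
import socket

def scan_tcp_ports(host, ports, timeout=0.5):
    """Attempt a TCP connect() to each port; open ports accept the connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check a handful of well-known service ports on your own machine
print(scan_tcp_ports("127.0.0.1", [22, 80, 443, 3389]))
```

A scan like this is the starting point; interpreting which open ports are legitimate and which are exposure is where the engineer's judgment comes in.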

There are a couple of different ways that security companies will retrieve all of the pages involved in a typical web layer penetration test. One is to go out and spider the application. Spidering the application means you start it off at the root directory; the spider looks for links to different pages and crawls all the way through the website, pulling all of the web pages and images it can find. That is one method. We actually prefer a manual method, because spidering will not find every piece of functionality.

For instance, if you are on a web page that has hover-over functionality or something running Flash, spiders are not very good at picking up that functionality or those pieces of the application, so they typically leave some areas unexplored. We will actually go through and exercise the site manually and make sure we capture every single page. We will then run the pages through a series of tests as well. The results of the network layer testing, as well as the application exercising, are all reviewed and validated.
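The core of the spidering approach, and the gap Adam notes, can be illustrated with a minimal link extractor: it only sees links present in the HTML markup, so functionality hidden behind Flash or script-driven hover menus never gets crawled. A standard-library sketch:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href/src targets from a page: the core step of a spider."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                # Resolve relative links against the page URL
                self.links.append(urljoin(self.base_url, value))

page = '<a href="/about.html">About</a> <img src="logo.png">'
parser = LinkExtractor("http://example.com/index.html")
parser.feed(page)
print(parser.links)
# ['http://example.com/about.html', 'http://example.com/logo.png']
```

A full spider would fetch each discovered URL and repeat; anything generated by client-side code never appears in the markup this parser sees, which is why manual exercising catches more.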

So for instance, when you run a vulnerability scanner it will spit out a whole slew of results. That is one of the challenges for people who are not accustomed to running one: anything the scanner sees that matches a pattern is going to get flagged, with de facto settings for whether it is a critical, medium or low issue. When you have a security company engaged, they should be going through every single result, making sure it is really valid, and looking at the severity of the issue in your context. A security issue that may truly be a low for Company A may, because of Company B's application circumstances, really be a critical issue. It's a contextual assessment as well.

Once you have gone through and reviewed all of the results of the automated tools, which give us the breadth, we will take a look at the nature of the application, the types of components involved, the types of servers involved, and look at it from a hacker's perspective. What's my primary target? If I'm going after this company, trying to breach their system, what is the first thing I'm going to look for? Let's say the primary function of this company is transferring files back and forth. The first thing I'm going to go after is their FTP server: how can I get in and breach that, etc. The security engineers will effectively spend time focused on the specific aspect or area of the application that would present to an outside attacker as the primary target.


Mike: So in doing this assessment, are you doing a black box approach? Or are you looking at the company’s architecture so you know where to look where the vulnerabilities might be?

Adam: Yes, in some cases the customer wants it to be a black box approach. They don't want to tell you anything about the environment; they just want to see what you can do with it, just like a hacker would. Those occasions are rarer. Think about it from this perspective: the more information you can arm us with, the faster we can get through the easy stuff and really get to the nuts and bolts, spending the time of the engagement scratching away at the areas where you are going to be most sensitive.

That beats making the security company jump through a bunch of hoops to figure out what you could just as easily have told them. Still, in some cases the customer will want the black box approach. In some cases they will use a penetration test almost as a test of their internal personnel, to see whether, when the hackers really go at it, their team will be able to identify that there is a problem and someone is trying to get in.

In some cases we will spend the first few days of the engagement like that, then tell the personnel team what is going on and go ahead and do the rest of the test. We can really gear it to whatever the customer needs and whatever their objective is for the engagement.


Mike: What are three of the most surprising security flaws that you have found that your clients were really shocked and surprised were there?

Adam: Sure, I will come around to that. I just want to finish up the last piece on penetration testing. We talked a lot about the external test, which mimics the behavior an external hacker would be able to impart on your systems, but the one piece a lot of companies don't realize they need, and don't do, is an internal penetration test.

Now, an internal penetration test basically makes the assumption that either someone from the outside has breached their way into your systems and is now on your internal network, or that the attacker is an internal resource. In a lot of cases, particularly with hacking and information exposure, it is internal personnel taking advantage of openings within the internal environment to gain access to data they should never have had. The test works under that approach: you are effectively sitting on the internal network.

In most of these cases, the barrier from the inside of the network to the outside is well guarded. There are external firewalls that are locked down so the network is not wide open, and things they would not want open are closed. That wall is fairly well guarded, but in a lot of cases, when companies get to the inside of the network, they say, "I want it to be open, I don't want it to be restrictive, I'm just going to make access easier for my internal personnel." That is an understandable thought to walk in with; the only problem is that it makes for a hacker's playground.

Once hackers get past your external defenses and onto your internal network, your entire kingdom is theirs. A surprising finding when we do penetration tests is on the network layer. When we walk into a network layer engagement, we will ask the company what block of IP addresses they have. In some cases the company has five; in other cases the company has one thousand.

Sometimes they will give us a list of the IP addresses in the block and we will scan through the block. In a lot of cases they will find either IP addresses exposed to the outside that they didn't realize were exposed, or ports open on IP addresses that they didn't realize were open. In a lot of cases the reason it happened in the first place is innocuous.

Typically it's internal IT personnel doing some type of test of new functionality who forget to close it back up again. Oftentimes we find they say, "Really? That's open?" That's the network layer. On the application layer, the biggest things we find are logical faults. A logical fault is what the vulnerability scanners can't test for: they go through their list of preconfigured notions and they either hit or they don't.

A logical fault lies in how the developers set up the communication from, as an example, a secure access portal to something that should only be accessible to logged-in users with the right permissions. One example: you log into the login portal, and as part of logging in, once I'm an authenticated user I get a token that is added to the URL. When the request arrives at the internal host, the URL is checked for that token.

As long as you come with the token, you must have come from the authenticated portal, right? Well, if it's not coded correctly, hackers will just pass in that token in the URL, go to your internal resource server, and gain access to it when they should not be able to. We see a lot of things around access issues. The most surprising area on the application side is common issues that have been out there for five, ten, fifteen years, things like SQL injection and cross-site request forgery. These are issues in the IT security arena that have been out there for a decade, yet every week we come across companies that have these types of issues inherent in their applications.
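The logical fault above can be illustrated with a small hypothetical sketch; the function names and token values are invented for the example. The flawed check trusts the mere presence of a token parameter, while a sturdier check validates it against session state the server actually issued:

```python
# Hypothetical illustration of the token logical fault described above.
VALID_SESSIONS = {"a1b2c3"}  # tokens issued at login, tracked server-side

def flawed_access_check(url_params):
    # BAD: merely *having* a token parameter is treated as proof of login
    return "token" in url_params

def proper_access_check(url_params):
    # BETTER: the token must match a session the server actually issued
    return url_params.get("token") in VALID_SESSIONS

forged = {"token": "whatever-i-typed"}
print(flawed_access_check(forged))   # True  -> attacker gets in
print(proper_access_check(forged))   # False -> forged token rejected
```

This is exactly the kind of flaw a pattern-matching scanner cannot flag, because nothing about the request looks malformed; only reasoning about the intended logic exposes it.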

When we get to internal penetration testing, one of the most surprising findings we have run across is the treasure trove of risk in those all-in-one scanner/fax/copier units that people connect to the network. They don't realize those units have internal hard drives that store copies of all of the information they come across. In some cases the management portion of the device is not secured, and even when it is, it has a password of "Password."


Mike: Well, the title of our session here is "New Tools for Protecting Sensitive Data," so let's talk about that a little bit. You know, I often hear, "We have a firewall, we have antivirus, isn't that enough?" Besides the firewall, the antivirus and the standard stuff, what types of tools should IT personnel be using to make sure their environment is secure?

Adam: I'm going to give you a long answer and then shorten it up. A lot of the elements for the typical IT manager really depend on the nature of the data and information that exist in that environment. How much are they looking to protect it? How sensitive is it? How damaging would it be if it got out? Etc. All of the choices, whether to do vulnerability testing, penetration testing, employ an intrusion detection system, employ file integrity monitoring, etc., center around the nature of a company's data, its risk tolerance, and the risk if the information gets out. That is where the trade-off comes into play.

Most organizations are going to have a firewall. One fantastic area to use is what I will call the modern firewall, the generation that has come out in the last five years or so. Almost all recent firewalls have the option of an intrusion detection system or an intrusion prevention system built onto the firewall itself. Take a look at yours and see if it has those capabilities. If it's a little old, maybe upgrade it, because those are fantastic features to have. They monitor the traffic and the hits coming from the outside and provide a layer of protection for your internal network against the automated scans and port scans that people on the outside are performing.

The typical hacker or hacker organization basically has boxes dedicated to this. Remember back in the day when you would get those marketing calls and had no idea how they got your phone number? It was usually someone with a random dialer punching in an area code and dialing random numbers. Hackers do the same kind of thing with the IP addresses of computers.

They will keep scanning until they find a device that interests them. If they find one, they will pass it off to a second group, and the second group will start digging further: well, that port is open, what other ports are open? Once they group targets based on the basic ports, they pass them off to other groups who specialize in hacking in through remote desktop or through some other exposed service. They just pass targets off to all of these various groups.

Stopping the port scan with an IDS or IPS is certainly going to mitigate a lot of the threat traffic. Employing centralized antivirus and malware scanning internally is a must-have for any business these days, especially when you have users who can remotely access the Internet. One option that companies typically don't have these days, but should, is some sort of central logging solution. That means every firewall, server and workstation you have sends its logs, right down to whether or not it got its virus update, to a central logging station.

The reason this is really important is that if someone breaches your environment, you want to be able to go in and figure out where the problem lies. Otherwise you have to run over and dig through the firewall's logs, jump off of that, then run to the server and go through its logs. By the time you piece together all of the components you need to assess what is going on, you have jumped through three or four machines, looking at all of the logs separately and going back and forth.

With central logging, all logs for all devices go to one location, and you have the capability to look through that one log. The one assumption central logging works off of is that you have employed Network Time Protocol, or NTP, across the environment, so you know your firewall has the same time stamp as your servers, which have the same time stamp as your workstations.
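The payoff of NTP-synchronized central logging is that events from different devices can be interleaved into one timeline. A small sketch, with invented log entries:

```python
from datetime import datetime
import heapq

# Per-device logs as (ISO timestamp, message) pairs. NTP sync across the
# devices is what makes these timestamps directly comparable.
firewall_log = [("2011-07-26T14:00:01", "firewall: blocked port scan from 203.0.113.9")]
server_log   = [("2011-07-26T14:00:03", "server: failed ssh login for root")]

def merge_logs(*device_logs):
    """Interleave already-sorted device logs into a single timeline."""
    key = lambda entry: datetime.fromisoformat(entry[0])
    return list(heapq.merge(*device_logs, key=key))

for ts, msg in merge_logs(firewall_log, server_log):
    print(ts, msg)
```

With clocks in sync, the blocked scan and the failed login line up as one incident narrative; with drifting clocks, the same two entries could appear in the wrong order and obscure cause and effect.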


Mike: So, are we talking about logging all of the network traffic? Or are there certain components we are trying to capture?

Adam: You can set the logging level based on the device, to various levels of logging. You certainly want to get that dialed in for your organization based on the device, but you certainly want any critical- or warning-level information. The usual recommendation, especially when starting out, is to back the logging level down for each device and then start ratcheting it up, watching the differences in the types of information you are getting and how helpful it is. You don't want to employ a central logging system and be logging blind because you are only getting the worst of the worst, and yet you don't want every notification ad nauseam.


Mike: So we were talking earlier about the example of a firewall, where there is a lot of traffic and security information you want to log heavily. But on a management switch, maybe you just want to log somebody who is trying to get in and change the management of the switch itself.

Adam: Right. Or if you are running a high-availability environment, when does switch one fail over to switch two, things along those lines. So yes, with a firewall you definitely want to know what is going on, especially on an outside port. Another area a lot of companies overlook is File Integrity Monitoring.


Mike: So what is that?

Adam: File Integrity Monitoring is a tool you set up on a device, say your web server. It basically takes a once-a-day snapshot of the areas you have configured it to watch or monitor. With a File Integrity Monitoring system, you will typically point it at the web server files, the web code, and the Windows System32 directory, monitoring all of those files.

Basically, it does a comparison between the files that exist on the system today and the files that existed in the image it created yesterday. If there are any differences between those, it sends out an alert. If you have central logging, you would want file integrity monitoring pushing its alerts to your central logging tools so those are in the same spot as well.

As an example, if core Windows files were modified between yesterday and today, there is a pretty quick assessment to make: did we run updates? Did we release updates? Typically on a production system you have change control sitting behind everything happening in the environment, so the production administrators know what changes are occurring. They can go in and check whether they authorized changes to that web code, for instance. If, all of a sudden, you have a change and nobody knows how or why it occurred, you know you have a problem and can go back to your central logging and look through the activity.
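The daily snapshot-and-compare cycle can be sketched in a few lines. Real FIM products also watch permissions, alert in near real time, and protect their own baseline from tampering, so treat this as an illustration of the comparison step only:

```python
import hashlib
import os

def snapshot(directory):
    """Map each file path under directory to a SHA-256 digest of its contents."""
    digests = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as fh:
                digests[path] = hashlib.sha256(fh.read()).hexdigest()
    return digests

def diff_snapshots(yesterday, today):
    """Report files added, removed, or modified since the previous snapshot."""
    added    = sorted(set(today) - set(yesterday))
    removed  = sorted(set(yesterday) - set(today))
    modified = sorted(p for p in set(today) & set(yesterday)
                      if today[p] != yesterday[p])
    return added, removed, modified
```

Run `snapshot()` once a day, feed consecutive snapshots to `diff_snapshots()`, and any non-empty result is the alert that gets pushed to central logging.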


Mike: Sometimes I hear about a tool called Web Application Firewall. Where does that apply?

Adam: A Web Application Firewall is basically a firewall that is specific to a web application. It inspects the web traffic coming through the website itself and determines whether or not it is expected traffic. When you first turn on a web application firewall solution, you typically turn it on in training mode so that you can capture the typical, standard, expected things that would normally occur within your environment. It is going to be different for everybody, because everybody has coded the pages and functionality of their websites differently. A web application firewall is very customized to the individual application.

Once it has been trained, you typically have some options for what to do when you turn the web application firewall on. You can set it into warning mode, so that anything falling outside the preconfigured rule set sends a warning to your network administrator. You also have the capability to block traffic that doesn't meet the rule set. I always recommend a cautious approach as you step into this.

You want to make sure you spend time in that training process so that you capture the majority of the traffic. Once you move past that point, you turn it on in alert mode, so that you can see the things falling outside the box of expected traffic for the website. At some point, each company can make the decision to say, okay, traffic matches this rule set well enough that we are going to turn on blocking.
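The trained-rule-set idea can be illustrated with a toy check; the parameter name and the single regex rule are invented for the example, and a real WAF's rule engine is far richer:

```python
import re

# Toy "training mode" output: the pattern each parameter normally takes.
# Here we pretend training showed user_id is always purely numeric.
TRAINED_RULES = {"user_id": re.compile(r"^\d+$")}

def waf_check(params, mode="alert"):
    """Flag request parameters whose values fall outside the trained rules."""
    violations = [k for k, v in params.items()
                  if k in TRAINED_RULES and not TRAINED_RULES[k].match(v)]
    if violations and mode == "block":
        return "blocked", violations
    return ("alert" if violations else "pass"), violations

print(waf_check({"user_id": "42"}))                 # ('pass', [])
print(waf_check({"user_id": "1 OR 1=1"}))           # ('alert', ['user_id'])
print(waf_check({"user_id": "1 OR 1=1"}, "block"))  # ('blocked', ['user_id'])
```

The alert-before-block progression maps directly to the modes above: run in alert mode until the violations you see are genuinely malicious, then switch the mode to block.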

A web application firewall is challenging in that it takes a fair amount of time to get through the assessments and get your rule base configured. The other thing to keep in mind is that every time you do a new code or functionality release, or change something in ways the web application firewall is not expecting, you have to be careful you are not blocking traffic for the new functionality you just released. It is something that definitely needs some care.


Mike: Are there certain places or applications where you are going to see the web application firewall used more frequently? Or does it apply to every website?

Adam: No. The web application firewall is something that is typically only leveraged for the most sensitive data. For instance, if you have a website handling credit card data, personally identifiable information or health information, that is where a web application firewall makes a lot of sense. A typical marketing web page would not employ one, because they are pretty expensive and take a fair amount of time to configure.


Mike: So we are hearing a lot about the cloud. Certainly from all the surveys and studies I have read, the private cloud is a lot more trusted; people feel it is a lot more secure than the public utility model we have seen with the Amazons of the world. How does security change when we start talking about cloud computing? Now that is a wide question.

Adam: Actually that is fairly easy to answer. When you are talking about a private cloud hosting environment where you have absolute control over what is going on in that environment, there is very little that changes from a security perspective. You still need to guard your network. You still need to make sure you don’t have holes in your application layer.

You still need to assess the internal network and look for its vulnerabilities. For the cloud specifically, the biggest requirement is looking at the management of the cloud solution, the hypervisor. You want to make sure the hypervisor is only accessible to trusted personnel. You also want to make sure the security around the management console for the cloud-based environment is as tight as on the most secure box within your environment. That is the standard recommendation.

Aside from paying close attention to the hypervisor itself, follow the security recommendations from the provider for how to securely configure your cloud, just as you would with physical infrastructure. Windows, for example, has a secure baseline recommendation for how to configure it.

VMware, for example, has a secure cloud implementation recommendation. You should be doing that on either platform, but the hypervisor is really the only difference in a cloud-based environment. The rest of the technology we have talked about, firewalls, IPSs, web application firewalls, file integrity monitoring, making sure NTP is set up correctly, all comes into play in a cloud-based environment.


Mike: Adam, I want to thank you for sitting down with us today and sharing your expertise.
