Tuesday, 30 July 2013

IBM Makes Pivotal Deal for Cloud Foundry Innovation

IBM and Pivotal on Wednesday announced a development partnership to collaborate on the Cloud Foundry platform and open source project. The companies pledged to work toward establishing an open governance model for the community.
Cloud Foundry is an interoperable platform-as-a-service framework that gives users freedom of choice across cloud infrastructure, application programming models, and cloud applications. IBM said an open Cloud Foundry platform makes it possible to build, deploy and manage cloud applications in a more agile, more scalable manner.
Doubtless, there's an increasing appetite for cloud-based mobile, social and analytics applications from line-of-business executives. Big Blue argues that CIOs need a more open and vibrant cloud development platform, such as Cloud Foundry, to meet that demand and avoid vendor lock-in.
Vast Transformation Potential
"Cloud Foundry's potential to transform business is vast, and steps like the one taken today help open the ecosystem up for greater client innovation," said Daniel Sabbah, general manager of Next Generation Platforms at IBM. "IBM will incorporate Cloud Foundry into its open cloud architecture , and put its full support behind Cloud Foundry as an open and collaborative platform for cloud application development, as it has done historically for key technologies such as Linux and OpenStack."
For its part, Pivotal will establish a community advisory board of Cloud Foundry users and vendors -- including IBM -- to guide the community as outlined on the Cloud Foundry site. Pivotal will also continue to steward the Cloud Foundry brand and protect the trademark from direct commercial use in product names.
"We believe that the Cloud Foundry platform has the potential to become an extraordinary asset that many players can leverage in an open way to enable a new generation of applications for the cloud," said Paul Maritz, CEO of Pivotal. "IBM's considerable investment in Cloud Foundry is already producing great results with application-centric cloud offerings such as making IBM WebSphere Liberty available on Cloud Foundry."
Pivotal for Pivotal
IBM and Pivotal are also collaborating on the technology that allows programming languages and frameworks to be deployed on the Cloud Foundry platform. The companies pointed to early fruit of this collaboration: a preview version of WebSphere Application Server Liberty Core, IBM's lightweight version of the WebSphere Application Server.
WebSphere Application Server Liberty Core helps clients respond to enterprise and market needs more quickly by enabling less complex, rapid development and deployment of web, mobile, social and analytic applications using fewer resources. With the capabilities of WebSphere Application Server Liberty Core running on Cloud Foundry, IBM said developers benefit from the robustness of the WebSphere platform and the innovation of the emerging cloud architecture.
"The IBM-Pivotal announcement is a significant event that should boost the impact and influence of Cloud Foundry," Charles King, principal analyst at Pund-IT , told us. "The agreement has much in common with previous IBM support of Linux and OpenStack which helped to validate those technologies among enterprise IT customers. The agreement is certainly a major win for Pivotal but significant long-term benefits should also accrue to businesses and other organizations that embrace the company's open cloud architecture."

Connecting clouds, building ecosystems - Telco journey to the Cloud

In the long term, as they journey towards the cloud, enterprises are more likely to buy all of their services from a single provider that integrates their different cloud and connectivity needs. We will therefore see a number of ecosystems emerge that bring together everything enterprises need. Are telecom operators ready for this?

Service providers anchored in the old world of VPN connectivity certainly won't be well positioned, while those that have created a more dynamic infrastructure will (see Telco journey to Cloud part 1).
Who’s going to keep the relationship with the customer?

Let’s look at a couple of examples.

1. OTT cloud providers owning the relationship with customers

IBM can today offer its cloud services directly to enterprises, bundling the cloud services with connectivity provided by different local operators. An OTT cloud provider like IBM can give access to its cloud over a variety of telecom operators, offering flexibility in connectivity, but the range of services in its catalogue might be limited. Will IBM be able to include enough services in its ecosystem to satisfy customer needs? It is unlikely, for instance, that IBM could resell Oracle's or Salesforce.com's services. Oracle and Salesforce.com have recently announced an agreement to link their two cloud offerings (see Oracle joins rivals to advance cloud computing).

2. Telecom operator owning the relationship with customers

Orange Business VPN Gallery is a telco cloud offering that integrates connectivity services (i.e. VPNs) with its own cloud services and third-party services through a partnership ecosystem model.
See video: Orange Business Services_Business VPN Gallery by UNITEAM_Paris

As we can see, in that model the end customer gets all cloud services delivered directly into their VPN, with security and reliability. The model looks great from the telecom operator's side: it maintains the relationship with its customers and has the opportunity to show its value. It's also good for the third-party cloud providers, as they can benefit from the telecom operator's proximity to the customer to sell their services. In this model, customer expectations would be high; they would expect full integration between the cloud service and the connectivity piece.

This second model is not going to be easy for service providers, as it requires them to establish an ecosystem and bring the big names into it.



So, I guess the two models will coexist. Enterprises with large IT departments will be able to deal simultaneously with multiple connectivity providers and cloud providers to get best of breed. Enterprises with modest IT departments will rely on either the telecom operator or the OTT cloud provider for all their services.

Network services any telecom operator should be offering - Telco journey to the Cloud

In the journey to the cloud, the needs of enterprises have evolved, and there is a mismatch between those needs and what telecom operators offer them.

Enterprises need cloud services, but in many cases all they get from the service provider are connectivity services (Internet, IP VPN, remote-access VPNs…). Enterprises require access to a variety of cloud services, while in the best case the telecom operator just offers its own service. Service agility is also key in the cloud world, but the SP offerings are very static and slow to provision. Enterprises need secure and reliable connectivity to the cloud, but all they get are VPNs and CPEs (customer premises equipment).


The gap between enterprise needs and service provider offerings represents the new opportunity. But where should telecom operators start? In my opinion, there are three steps they can take, depending on their level of involvement.

  1. Evolve the current connectivity service offering
  2. Orchestrate network and cloud services
  3. Build it’s own cloud offering
Let’s see briefly what the telecom operator should aim to achieve in each one of those steps.

Evolving the traditional connectivity service offering

The current connectivity service offering is composed of a variety of access technologies plus a variety of VPN topologies (L2/L3 point-to-point, point-to-multipoint, any-to-any…). That model served the traditional needs of enterprises pretty well, but it does not solve the needs created by the adoption of cloud (see my previous blog).

The purpose of connectivity should be to build paths between enterprise employees and cloud services in a secure and reliable way. The model should be flexible enough that new features can be incorporated as new needs appear.
Current VPNs are still very static, and one of the reasons sits in the CPE. In the overall cost model of VPNs, CPEs are expensive elements, carefully selected to serve the minimum needs and optimise costs. Adding new features takes a lot of time and in most cases requires a physical upgrade or replacement. So the CPE is becoming a piece that limits the agility, and ultimately the pace of innovation, provided by the telco. For some service providers, the CPE is seen as the service delivery point, and because the CPE is limited by its CPU and memory, it becomes a bottleneck to the innovation delivered.

Some operators are evolving towards a virtual CPE (or cloud CPE): a simple piece of hardware at the customer premises, with all the complex functionality sitting in the cloud (network edge, data centre…). The set of services is no longer limited by the capacity of the CPE. This model brings many advantages: lower cost, a more agile infrastructure, up-sell opportunities… (read the guest blog by Colt).

Orchestrating network and cloud services

Imagine a model where the network responds to the application (when possible). A simple example could be a backup service provided from the cloud: as the backup starts, the network tries to provision additional capacity to serve it better, and if that is not possible, the network can interact with the backup application to try to reschedule it… By embracing network programmability principles this is technically possible; however, it requires both sides (network and application) to use a common language.
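A minimal sketch of that interaction, assuming a purely hypothetical SDN controller that exposes a REST endpoint for bandwidth reservations (the URL, payload format and scheduling fallback are illustrative, not any specific vendor's API):

```python
# Hypothetical sketch: a backup application asks the network for extra
# capacity before starting, and reschedules itself if the request fails.
# The controller URL and payload format are assumptions for illustration.
import datetime
import requests  # pip install requests

CONTROLLER = "https://sdn-controller.example.net/api/bandwidth-reservations"

def request_capacity(src_site: str, dst_cloud: str, mbps: int, hours: int) -> bool:
    """Ask the (hypothetical) controller to reserve capacity for the backup window."""
    payload = {
        "source": src_site,
        "destination": dst_cloud,
        "bandwidth_mbps": mbps,
        "duration_hours": hours,
    }
    try:
        resp = requests.post(CONTROLLER, json=payload, timeout=10)
        return resp.status_code == 201  # created = capacity granted
    except requests.RequestException:
        return False

def run_backup_job():
    if request_capacity("branch-madrid", "cloud-backup-dc", mbps=500, hours=2):
        print("Extra capacity granted - starting backup now")
        # start_backup() would go here
    else:
        # The network cannot serve us better right now: the application adapts,
        # for example by rescheduling to an off-peak window.
        retry_at = datetime.datetime.now().replace(hour=2, minute=0) + datetime.timedelta(days=1)
        print(f"No capacity available - rescheduling backup to {retry_at:%Y-%m-%d %H:%M}")

if __name__ == "__main__":
    run_backup_job()
```

The point of the sketch is the common language: both the application and the network agree on a simple request/response contract, whatever the underlying controller implementation is.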

Software-defined networking (SDN) makes this possible: in a good implementation, the controller(s) will be able to orchestrate services end to end, across multiple network operators and cloud providers.

Building the Telco Cloud offering

Enterprises are currently served mainly by multiple OTT cloud providers. Telecom operators, however, with their close relationship with customers, get to know the gaps in those cloud services and can try to fill them themselves. As an example, an enterprise might be served by Salesforce.com and Amazon clouds, but might be interested in a storage cloud service provided by its local telecom operator.

To do so, service providers and carriers should build scalable and simplified data centers, so they can provide their own cloud services in a competitive way and host there both their own services and licensed offerings (e.g. Microsoft Office 365 for SPs).
The journey to the cloud is not easy, but service providers should take small steps towards it. Read my next blog to understand the telco cloud ecosystems.

Will enterprise cloud services kill the VPN?

The business VPN service is still a healthy portfolio for service providers, but the rapid adoption of cloud services by enterprises might be an inflection point. To understand that, let's visualise the traditional enterprise network topology, with servers distributed across the different branches and connectivity back to headquarters. Now let's visualise the next-generation enterprise that has fully embraced employee mobility and has moved its IT infrastructure to the (private or public) cloud. In the traditional model, the VPN was the backbone of the enterprise, while the next-generation enterprise, with all services in the cloud, has its backbone in the cloud or maybe on the Internet. Enterprise adoption of cloud and mobility is moving the enterprise's center of gravity from the VPN to the Internet. And of course, selling cost-effective pipes to the Internet is not as attractive a business as selling high-margin VPNs.

If you can’t beat the enemy, join it

Cloud can be seen as the enemy, but it can also bring lots of new opportunities to service providers, as it does to enterprises. If we look at any analyst report, cloud is rapidly being adopted by enterprises. The top two concerns that hold enterprises back from adopting cloud massively are availability and security. And telecom operators can solve both.

Over-the-top (OTT) cloud providers can offer a service SLA (service level agreement) within the infrastructure they control, typically their data centers. If something goes wrong in the access network, the peering points, etc., it is completely out of their control. Enterprises want higher availability and would prefer an end-to-end SLA, from the device (desktop, mobile…) to the cloud, no matter where the problem might be.

Security is also a big concern. Where does the data sit? How safe is the environment? What about regulatory compliance?… Once more, the (local) telecom operator that connects the enterprise users to the cloud can provide a better security environment than the OTT cloud provider.

Service providers can reposition themselves to become the backbone of enterprise communications by offering reliable and secure access to cloud services.

How we tested next-generation firewalls

We tested next generation firewalls by looking at seven separate areas that we felt would be important to network managers trying to deploy these products in enterprise networks.
We evaluated the basic firewall functionality of each product by examining the features for building security policy, VPNs, applying NAT and QoS, enabling dynamic routing, and supporting high availability. We tried to build VPNs with other IPSec-based firewalls and to synchronize each device with Cisco IOS BGP routers running both IPv4 and IPv6 protocols. We looked at global management features, where the vendor provided a global management system, and evaluated other enterprise areas of concern, such as VLAN and link aggregation support. We also looked at IPv6 support separately.
We investigated visibility features in the firewalls and any reporting or global management system provided by the vendor to see how well each product gave a view into the traffic flowing through the network. Since this is a next-generation test, we focused more on application identification than simple traffic statistics. We looked at differences between "on-box" and "external" reporting systems, and evaluated areas such as debugging and long-term reporting.
When we tested next-generation firewall control features, we tried to understand the model for applying application identification and control features. Since next-generation firewalls all use categories to help group applications, we tried to evaluate whether these categories made sense and were at an appropriate level of granularity. Within applications, we looked for features such as the ability to block different sub-applications and directions (upload/download or read/post).
We looked into the application layer controls (such as blocking applications) and variations offered by different vendors, such as applying QoS on a particular application. We also looked at other next-generation features, such as controlling traffic based on username or group, IP address reputation, or geographic region.
Because SSL decryption is an important part of application visibility, we configured SSL decryption using our own certification authority and looked at how well this decryption worked. We evaluated important PKI features, such as handling of self-signed certificates and revoked certificates. We also looked at the features that each firewall offers to configure SSL decryption, such as exempting some traffic. Where these features existed, we tested them as well.
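One way to sanity-check that a firewall's SSL decryption is actually in the path is to look at who issued the certificate the client receives: when the firewall re-signs traffic, the issuer should be the lab's own certification authority rather than a public CA. A rough sketch of that check, assuming the lab CA bundle is available as a local PEM file (file name and test host are illustrative):

```python
# Sketch: confirm that TLS interception is active by checking whether the
# certificate presented to the client was issued by our own lab CA.
# "lab-ca.pem" and the test host are assumptions for illustration.
import socket
import ssl

LAB_CA_BUNDLE = "lab-ca.pem"   # our own certification authority
TEST_HOST, TEST_PORT = "www.example.com", 443

def check_interception(host: str, port: int, ca_bundle: str) -> str:
    """Connect through the firewall and report whether the cert chains to our CA."""
    ctx = ssl.create_default_context(cafile=ca_bundle)
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        issuer = dict(field[0] for field in cert["issuer"])
        return f"decryption active, issuer: {issuer.get('commonName', '<unknown>')}"
    except ssl.SSLCertVerificationError:
        return "certificate not issued by the lab CA - decryption apparently not in the path"

if __name__ == "__main__":
    print(check_interception(TEST_HOST, TEST_PORT, LAB_CA_BUNDLE))
```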
To test next generation application identification and control, we identified 40 separate applications in nine separate areas. We included commonly mentioned applications for next generation firewalls (such as Facebook, peer-to-peer networks, streaming video, and public webmail servers), along with enterprise applications (such as Microsoft Exchange, Terminal Services, VoIP, and Sharepoint). We also used very simple evasion to see if the next generation firewall could be easily fooled, and we scored those tests separately. We did not use elaborate evasion techniques.
We configured a client to use the applications, and tried to get the next-generation firewall to identify and block the application. All applications were "real" and running either on our test lab network or across the Internet. We mixed both public applications (such as Yahoo Mail and Google Mail) with private applications (such as Lotus Notes or Microsoft Exchange) to get an idea of how well the next generation firewall operated when tools such as URL filtering would not apply. We also tried to mix both web-based applications with non-web applications, such as Skype and BitTorrent.
When testing applications, we counted a partial blockage as a failure. For example, some of our firewalls were able to block some BitTorrent and Skype traffic, but as long as we were able to succeed in downloading the selected Torrent or making calls with Skype after a short delay, we didn't give the firewall credit for actually blocking traffic.
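For the web-reachable cases, that pass/fail decision can be approximated with a very small harness: try the application a few times over a window and treat any successful attempt as a failed block, mirroring the "partial blockage is a failure" rule. A sketch under those assumptions (URLs are hypothetical; in the actual testing, real clients, native apps and non-web protocols were driven by hand):

```python
# Sketch: score "blocked vs. not blocked" for web-reachable test applications.
# A single successful attempt within the window counts as a failed block.
# The URLs below are illustrative placeholders, not the real test targets.
import time
import urllib.error
import urllib.request

TEST_APPS = {
    "public-webmail": "https://mail.example.com/",
    "social-site": "https://social.example.com/",
}

def attempt(url: str) -> bool:
    """Return True if the request got through the firewall."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def is_blocked(url: str, attempts: int = 5, delay_s: int = 60) -> bool:
    """Blocked only if every attempt over the whole window fails."""
    for _ in range(attempts):
        if attempt(url):
            return False   # one success = partial block = failure to block
        time.sleep(delay_s)
    return True

if __name__ == "__main__":
    for name, url in TEST_APPS.items():
        verdict = "blocked" if is_blocked(url) else "NOT blocked"
        print(f"{name:15s} {verdict}")
```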
We evaluated IPS features for each firewall using both subjective measures, such as manageability and policy controls, and objective tests. We used a Mu Dynamics appliance running Mu Studio Security to test each firewall with published vulnerabilities. We tested the firewalls in two different configurations, one optimized to protect end users, and a second one optimized to protect servers. For each configuration, we sent a different set of about 1,000 vulnerabilities designed to attack first users, and then servers. For each vulnerability set (server attacking and client attacking), we created two policies for each firewall. One policy included all of the IPS signatures and the other one just had the subset of signatures marked as having the highest priority.
We tested URL filtering and anti-malware to determine how well each firewall would catch current viruses. We did not attempt a virus coverage test, but instead picked a small set of 10 viruses that were a few weeks old — long enough for each firewall to have signatures to catch them, but not so old that the signatures would have aged out. Then we tested each firewall to see how well it would catch and block those viruses across a variety of protocols, encryption methods, and evasion techniques (such as using non-standard ports).

Cisco impresses with first crack at next-gen firewall

When we tested next-generation firewalls last May, at least one important security vendor wasn't there: Cisco, which wasn't ready to be tested. Now that the ASA CX next-generation firewall has had a year to mature, we put the product through its paces, using the same methodology as our last NGFW test.
We found that Cisco has an outstanding product, with good coverage and strong application identification and control features. Enterprise security managers who have upgraded to the “-X” versions of the ASA firewalls (announced at the RSA Conference in March 2012) can add next-generation features to the hardware in their data centers and branch offices and gain immediate benefits.  
Network managers who haven’t upgraded their hardware or who are considering a switch from a different vendor should make a competitive scan before deciding on the Cisco ASA. We found the ASA CX to be a solid “version 1” effort, but Cisco still has significant work to do in improving the management, integration, threat mitigation and application controls, leaving the ASA CX a work in progress.
Introducing the ASA CX
When Cisco decided to add next-generation features to its ASA firewall, it must have faced a daunting task: how to take a mature firewall architecture and add the next-generation features, above all application identification and control, that security managers were asking for. Cisco took a stab at this in 2009, when it added the Modular Policy Framework, which brought many application-layer controls to the ASA. Rather than touch the delicately constructed NAT and policy rules of existing ASA firewalls, the MPF layered on top of existing security policies.
The ASA 5515-X is a standard ASA firewall with an additional processing module, called “CX” (for “context”) that handles application identification and control. In the ASA 5512-X through 5555-X, the CX next-generation firewall runs as a software module. In the high-end ASA 5585-X, Cisco has two hardware accelerators available today (the SSP10 and SSP20) with two additional models (the SSP40 and SSP60) targeted for end-of-year release, designed to take ASA throughput to 10Gbps and beyond.  
Running CX does come with a performance penalty. For example, the ASA 5515-X we tested is rated for 1.2Gbps of raw firewall throughput, but only 350Mbps of next-generation throughput. With a list price of $5,600, the 5515-X delivers very competitive price/performance compared to other next-gen firewalls.
For Cisco engineers, adding the CX set of next-generation features meant either going back to the drawing board on the ASA or wedging the next-generation feature set in without rocking the boat too much. Cisco took a little of each option: the next-generation features are glued on the side of the ASA in a way that leaves the core firewall completely undisturbed. This is the approach Cisco has taken when adding other security features to the ASA, such as IPS and anti-malware scanning, and will continue to take as add-ons like web security make their way into the ASA.
But Cisco also promised us that it was serious about a unified management system that would bring ASA and next-gen features together in a single GUI before the end of 2013. Even if the security features are run by very separate policy engines, good policy management tools can give a unified experience to security managers building firewall policies.  
But in the current Version 9.1 we tested, network managers will be very aware that there are two distinct policy engines at work. The ASA’s next-generation features don’t even share an IP address with the base ASA firewall — next-gen policies are configured using Cisco Prime Security Manager (PRSM), a completely different management system from the ASA firewall’s Adaptive Security Device Manager (ASDM).
The basic ASA firewall is still handling access control, NAT and VPN. To enable next-generation features, an entry is made in Service Rules, part of the Modular Policy Framework, that defines which traffic is sent over to the CX part of the firewall. This means that any traffic has to be passed first by the normal access control rules, and then is subject to additional checks and controls based on application and user identification information.   
As each connection passes through the CX engine, three different policies come into play. First, the CX engine decodes SSL. Next, it ties user authentication information to the connection. And finally, the access control policies are applied, blocking or allowing the connection based on user identity and application-layer information (including application ID, application type, and URL category).
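The ordering matters: decryption has to happen before the identity and access policies can see inside the flow. A schematic sketch of that evaluation order follows; this is an illustration of the flow described above, not Cisco code, and all names, fields and policy rules are invented:

```python
# Schematic sketch of the per-connection evaluation order described above:
# 1) SSL decryption policy, 2) identity lookup, 3) access policy on
# user + application + URL category. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Connection:
    src_ip: str
    dst_host: str
    encrypted: bool
    app_id: str = "unknown"
    url_category: str = "unknown"
    user: str = "anonymous"

def ssl_decryption_policy(conn: Connection) -> Connection:
    """Stage 1: decide whether to decrypt; only then can the app be classified."""
    if conn.encrypted and conn.dst_host not in {"banking.example.com"}:  # exemption list
        conn.encrypted = False                # decrypted for inspection
    return conn

def identity_policy(conn: Connection, ip_to_user: dict) -> Connection:
    """Stage 2: tie authentication information (e.g. passive auth) to the flow."""
    conn.user = ip_to_user.get(conn.src_ip, "anonymous")
    return conn

def access_policy(conn: Connection) -> str:
    """Stage 3: allow or deny on user, application and URL category."""
    if conn.user == "anonymous" and conn.app_id == "webmail":
        return "deny"
    if conn.url_category == "gambling":
        return "deny"
    return "allow"

if __name__ == "__main__":
    c = Connection("10.1.1.20", "mail.example.com", encrypted=True, app_id="webmail")
    c = ssl_decryption_policy(c)
    c = identity_policy(c, {"10.1.1.20": "jdoe"})
    print(access_policy(c))   # -> "allow" once the user is identified
```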
Although most application identification and controls are in the new CX policy set, they’re not all there — everything added to the ASA before CX as part of the Modular Policy Framework is still down in the core ASA. This leads to some overlap and confusion, because you have to look in two places to do very similar application controls.
In some places, the CX and the ASA MPF completely overlap; in other areas, the division of labor is more intuitive. Cisco told us that their engineers are working on an 18-month road map to push application-layer features into the CX code, and move common services, such as identity-based access controls, into the ASA base, with progress expected at each release.
The current release of the ASA 5515-X hardware has a choice of running IPS or next-generation firewall (CX), but can’t run both. Cisco told us that IPS will be integrated with the CX code by the end of 2013, with a separate license to enable the IPS feature set. As for anti-malware, Cisco couldn’t give us a definite answer. Like many security companies, they are shying away from traditional anti-virus scanners as being ineffective against many new threats. With reputation services beautifully integrated into the ASA CX policies, along with botnet detection at the ASA level, Cisco thinks it has the luxury of sitting back and looking at alternative approaches to provide anti-malware protection rather than rushing into yet another anti-virus engine.  
How’s that firewall looking?
We started our testing by reviewing the ASA CX’s basic firewall capabilities. As the great-grandchild of Cisco’s PIX and VPN Concentrator 3000 series, the ASA remains a significant presence in many enterprises. When we tested the ASA as an end-user VPN concentrator with the AnyConnect Secure Mobility Solution v3.0 two years ago, we knew enterprise network managers would be happy -- Cisco delivered solid client support across multiple platforms, end-point posture assessment, integration with their WSA web security gateways and solid performance.  
Although we didn’t dive deep into this part of the ASA, things haven’t changed, and the ASA still looks great as a VPN concentrator for remote access.
For this review, we tested firewall features in 10 areas and found a mixed bag with strong plusses and a few minuses.
Unfortunately, our biggest complaint is in the most important part of any basic firewall: policy management. The core ASA firewall has to filter traffic before it gets to the next-generation application controls, making policy management important. We found this part of the ASA policy, called ACLs (access control lists), to be problematic.
The ASA is unusual among enterprise firewalls because it's not zone-based (although the Cisco IOS firewall is), which means that any deployment with more than two interfaces (or security zones) can get very complicated, very quickly. For example, to build our policy, which differentiated between three types of trusted users, servers and the Internet, we had to write more rules than one would use in a zone-based firewall, in some cases defining two rules in different directions on different interfaces to cover the same traffic. That means writing rules to cover both traffic going into an interface and traffic going out of the same interface — a strange way of thinking that takes a while to get used to and, more importantly, leads to larger rule sets.
The larger the rule set, the more likely it is to have an error, and the more difficult it is to understand. Since the default behavior for the core firewall is “drop everything” and the next-generation firewall is “permit everything,” you don’t want anything to percolate up to the next-generation part that you’re not comfortable with.  
The ASA’s lack of integration of VPN access controls with other ACLs is also a complicating factor that can lead to human error in networks where the same ASA is used both for VPN and standard firewalling. Our advice: don’t do that, at least not in any complicated network. These devices are inexpensive enough that you can get one for remote access and a another one for firewalling and avoid potential security problems at very little cost.
Areas of the base firewall that we take for granted, such as NAT, VLAN support, dynamic routing, joint Layer 2/Layer 3 support, and IPv6 capabilities are all done very well in the ASA.   
We were disappointed that Cisco still hasn’t pushed BGP routing into the ASA, especially since they’ve done a really incredible job with the routing protocols (OSPF, OSPFv3, EIGRP, RIP and several multicast routing protocols) which come already inside the ASA. It took a while for the outstanding routing feature set we all know and love from IOS to migrate into the ASA, but it was worth the wait, and network managers looking for the ASA to be a strong participant in their dynamic routing will be happy with its capabilities.
Since we couldn’t test BGP according to our standard NGFW test methodology, we used OSPFv3 instead to integrate the ASA into our existing IPv4 and IPv6 network, a very painless experience. The ASA is also missing full integration between the VPN and dynamic routing. Network managers hoping to use dynamic routing to build large VPN-based WANs will need to stick with IOS for their site-to-site tunnels. We did not test high availability, but Cisco told us that the ASA CX currently supports active/passive failover and will support active/active clustering next year.
Aside from the firewall ACLs, we found two other areas lacking: QoS enforcement, and central management. We’ll cover central management separately, but our testing of QoS enforcement found that the ASA has a very weak feature set. Most firewalls have some ability to help prioritize and control traffic during periods of congestion, but not the ASA. We found that the ASA has only simple policing (limiting bandwidth of a particular application, even when there is plenty available) and queuing without any attempt to manage traffic. QoS is one big area where the ASA could pick up a lot of great features from Cisco IOS.
We think that security managers who have grown up with other firewalls and are not used to the ASA’s quirks will find it a more difficult product to integrate into complicated networks. However, in simpler topologies and especially in environments with a lot of remote-access VPN, the ASA fits in with the rest of the basic firewall marketplace and remains a competitive solution.
Next generation application identification and control
If anything defines next-generation firewalls, it’s application identification and control, and the ASA CX next-generation features aim squarely at that target. To evaluate how well the ASA CX could identify and control applications, we used the same set of 41 test scenarios in nine categories that we tried in our next-gen firewall test last year.  
In the ASA CX, application control is straightforward and simple: “block” or “allow.” There are no other options, such as redirecting to a warning page or sending an alert (although this could be done by scanning the logs). With that in mind, we dove in and started testing the ability of the ASA CX to dive deep into applications.
Compared to our test of other NGFW appliances a year ago, the ASA CX did surprisingly well with a 60% identification and block rate, essentially tying with our top performer (SonicWALL) and narrowly edging our second-best performer, Check Point.
The ASA CX has nice granularity for many applications. For example, with LinkedIn, the ASA CX lets you allow most of LinkedIn, but block job searches or posts. In some cases, the ASA CX divides applications into multiple categories -- for example, there are five LinkedIn applications and 10 Facebook ones. In popular applications, the ASA CX let us focus on more specific application behaviors, such as posting vs. reading or uploading vs. downloading. The design is well thought out from the perspective of a security manager trying to map a security policy to a firewall rule set.
Failure to identify and block applications can come from two sources: bugs, or just not supporting the application in the first place. With the ASA CX, we found a little of each. For example, with Skype, the ASA CX just didn’t work if we engaged in any type of evasion.
In our case, we evaded the Skype filter using the oh-so-stealthy “wait for a few minutes” uber-hacker technique. While the ASA CX blocked Skype initially, if we simply waited a few minutes, calls would go through. In other cases, such as H.323 conferencing, Microsoft Exchange, and Sophos anti-virus updates, the ASA CX didn’t work at all even though these applications were part of its repertoire.  
Our testing from a year ago was all based on laptop clients, both Mac and Windows. With the ASA CX, though, we added a new twist by trying handheld devices for two commonly mentioned applications: Facebook and LinkedIn. The ASA CX didn’t do its job when we used the native applications to connect to these services.  
We did see that the ASA CX team learned from the mistakes of their peers. For example, in our testing last year we were able to work around all the other next-generation firewalls' attempts to block Google Mail by simply using the basic HTML interface. That trick didn't work with the ASA CX, which stubbornly blocked us no matter what we tried.
The ASA CX also was fully IPv6 aware, and if it blocked an application on IPv4, the same block worked with IPv6, whether we used a laptop or an iPad.  
We also tested the ASA CX’s ability to use user- and group-based identification as part of application controls. Anyone who has tried to deploy identity-based networking or NAC knows that one of the hardest parts is gathering the identity from the user or device.
The ASA CX has two mechanisms available to it: one is a captive portal approach, which works in certain guest user environments where people don't have much opportunity to complain. The other is called "passive authentication," and it depends on installing a Cisco agent on Windows domain controllers in a network. When a Windows user logs onto their domain-connected PC, this generates an incidental log entry in the Windows Event Logs which happens to contain the apparent IP address of the PC as well as the user name used to log in. We installed this agent on our lab's domain controllers very easily and were able to link the ASA CX authentication controls to user identity. Sort of.
It happens that our lab has some internal NAT going on to simplify routing when we build networks quickly for tests like this, and this confounded the ASA CX, as the user-to-IP mappings were not correct. The same issue would happen in any network with NAT between the PC and the domain controllers. Of course, there are also scalability issues — if one domain is all you need, that's great, but some enterprises have dozens or even hundreds of domains running around on their wide area network. And this only works for domain-attached Windows PCs — still a dominant force in enterprise networks, but not the only way to deliver computing nowadays. And let's not even mention that there's no real tracking of when someone disconnects from the LAN.
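The mechanism, and why NAT breaks it, can be reduced to a few lines: the mapping is keyed on the IP address recorded in the domain controller's logon event, which is not the address the firewall sees once a NAT sits in between. A toy illustration (event fields and addresses are invented):

```python
# Toy illustration of passive authentication and the NAT confound.
# The logon event records the PC's pre-NAT address; the firewall keys its
# lookup on the post-NAT address it actually sees, so the match fails.
logon_events = [
    {"user": "alice", "ip": "192.168.10.25"},   # address logged at the domain controller
    {"user": "bob",   "ip": "192.168.10.31"},
]

ip_to_user = {e["ip"]: e["user"] for e in logon_events}

def user_for_flow(src_ip_seen_by_firewall: str) -> str:
    return ip_to_user.get(src_ip_seen_by_firewall, "anonymous")

# Without NAT the lookup works:
print(user_for_flow("192.168.10.25"))   # -> alice

# With NAT between the PCs and the firewall, the firewall sees the
# translated address and the mapping silently breaks:
print(user_for_flow("172.16.0.1"))      # -> anonymous (alice's NATed address)
```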
None of this is Cisco’s fault -- any vendor who tries the hokey pokey of sniffing Windows logins or capturing the information from the logs is going to have the same problem. There are other tricks and techniques various NAC vendors have created to work around this fundamental problem, but the only 100% real solution is having a client tool on the connecting PC which collects authentication information and interacts actively with enforcement devices.
And this brings us to a serious problem in the ASA CX: none of Cisco's products for NAC, and none of its products for Secure Mobility, interact with the ASA CX. The problem is especially ludicrous when it comes to remote access VPN -- even if the remote access tunnel is connected to the ASA CX, it stubbornly won't pass user identity information up to the CX.
Failing to make this obvious link even in a v1.0 product like the ASA CX is poor product management.
SSL decryption
If one of the main advantages of a next generation firewall is application and protocol identification and control, then SSL decryption is a basic requirement. In the ASA CX, SSL Decryption is handled by a separate set of policy rules, essentially defining which traffic is decrypted and which is not. From a management point of view, configuration and control of the SSL Decryption part of the ASA CX is outstanding. We started by installing our own certification authority, which is trusted by all systems in our lab, into the ASA CX, a simple matter. Then, we enabled decryption for all traffic and let ‘er rip.
At first glance, the ASA CX worked great, catching SSL on non-standard ports, and extending application identification even into encrypted sessions. And for the most part, the ASA CX SSL decryption is just fine and didn’t create any performance problems in our limited lab testing. But it’s not perfect.  
The biggest issue we ran into was application identification of encrypted traffic. Although the ASA CX does a good job of decrypting and re-encrypting SSL traffic, the application identification engine doesn't work properly for many protocols. It's not a complete failure -- the most important protocol that most people care about, HTTP, is successfully unpacked and identified. But security managers who want to identify and control other, non-HTTP protocols that might be blocked by policy can't see into the encrypted session. For example, the ASA CX was great at identifying unencrypted IMAP on standard and non-standard ports. But if we opened up a connection to an IMAP server on the standard IMAP-over-SSL port, the ASA CX didn't identify the traffic as IMAP, but just as SSL. We had the same issue with SMTP on standard ports. Once the traffic was encrypted, the ASA CX didn't identify these protocols. For HTTP traffic, there wasn't a problem -- the ASA CX unpacked encrypted HTTP just fine and identified applications, such as Google Mail and Facebook, inside the encrypted session.
Early on, we also had smaller issues with some SSL negotiations that the ASA CX didn't like, and hence inappropriately blocked. This first showed up in our lab's iPhone and iPad, which started behaving poorly and couldn't communicate back to the iCloud mother ship. The problem was easy to see in the ASA CX logs, but there was no simple solution other than to exempt certain iCloud IP addresses from SSL decryption. Problem solved, but a maintenance and documentation headache had begun. And we didn't know how that would affect other controls. For example, we tried blocking iTunes installation of applications, which didn't work. Was that because the SSL wasn't being properly decrypted, or was it just a bug in the iTunes blocking?
A different issue that slowed us down is related to the list of trusted root certification authorities pre-loaded by Cisco. The ASA CX will engage in man-in-the-middle SSL decryption only if the server presents a certificate trusted by the ASA CX. Cisco doesn't release the list of CAs that it will trust, leaving you to guess and debug. Most of our tests were fine, but we did run into a few perfectly legitimate servers that the ASA CX wouldn't trust until we tracked down their CA and intermediate CA certificates. The lack of a list of CAs makes debugging hard, and there is no legitimate reason to keep this list secret -- every other product with similar trust settings, such as Windows and Mac OS X, is happy to let you see and edit the list of trusted CAs.
The ASA CX did not fare well in our test of invalid certificates, potentially hiding revoked or incorrect certificates from the end user. When a server hands a certificate to the ASA CX as part of the SSL handshake, the ASA does not do full checks on that certificate. Then, it replaces the certificate with one that it creates — and any revocation information is lost. In simpler terms, the main mechanisms in place in the world of digital certificates and X.509 are not correctly implemented by the ASA CX, leading to a small but very significant vulnerability, especially in the world of spear-phishing attacks.
Next generation visibility and management
We found that the ASA CX with Cisco Prime Security Manager (PRSM) provides outstanding visibility into application type and flow statistics, with strong drill-down capabilities and a well-designed interface. However, ASA management overall is in rapid motion, and we found it difficult to evaluate the rest of the management interface.
The ASA’s standard management interface, for those who don’t want to use the command line interface, is ASDM (Adaptive Security Device Manager), a Java-based GUI that is used to handle most aspects of ASA configuration and management. ASDM has evolved into a stable and powerful product. ASDM isn’t the most elegant interface for managing a firewall and is tightly tied to the command-line configurations it generates, but it’s solid and gets the job done fairly quickly.  
Prime is Cisco’s new management interface, designed to work across security, switching and routing product lines. When managing firewalls, Cisco calls it PRSM, a web-based GUI that can be run either on-box directly on the ASA or on a separate management server. Although on-box operation is supported, we think that any manager with more than a single ASA CX should elect for a separate management appliance, even though there is an additional modest cost ($3,000 list price for five devices).  
Without a separate management server, PRSM presents a risk to the firewall by running reporting, log storage, and management all on the same CPU that is handling packets. On-box PRSM makes for a fast demo of PRSM capabilities, but we prefer to ship logs to a separate server so that any analysis and debugging won't risk slowing down a production device.
The ASA CX is only managed by PRSM, which creates a disconnect between the next-generation firewall rules and the rest of the ASA management. However strange this interface is, it’s a bit unfair to grade the ASA down because of it -- Cisco told us that migration of firewall configuration to PRSM (from ASDM) is their No.1 priority for 2013 for the ASA line.  
So, yes, today, network managers need to go through contortions to manage a firewall, but we are looking at a product in transition. Even though PRSM’s visibility capabilities are excellent, network managers will find PRSM’s configuration capabilities to be clumsy and disappointing compared to ASDM. Firewall rules are spread sparsely across a web page without critical information, making policy analysis and editing difficult; policy objects are named with the least helpful terms possible, and redundant and inconsistent terminology pervades the interface. We hope that this poor configuration interface is cleaned up as part of the migration of ASDM functionality into PRSM.  
How We Tested
We used the methodology from our 2012 Next Generation firewall test so that readers could evaluate the Cisco ASA CX against competition from Barracuda, Check Point, Fortinet, Palo Alto Networks, and SonicWALL.  
Because the Cisco ASA CX does not support anti-malware and traditional IPS, we did not repeat those tests.  
We also changed our application identification testing slightly to include a wider range of clients, including smart phone and tablet devices. We highlighted the performance of those clients so that readers can more fairly compare coverage and accuracy of the ASA CX against competitive products.

F5 data center firewall aces performance test

Huge data center, check. Multiple 10G Ethernet pipes, check. Load balancer, check. Firewall? Really? Do network architects need to buy yet another box, and likely take a performance hit?
Not according to F5 Networks, which says its BIG-IP 10200v with Advanced Firewall Manager (AFM) can handle traffic at 80-Gbps rates while screening and protecting tens of millions of connections, and simultaneously load-balancing server traffic.
In this exclusive Clear Choice test, we put those claims to the test. The F5 firewall came up aces, maxing out network capacity while also offering sophisticated filtering and attack protection capabilities. In some cases, traffic rates were higher with the firewall in place than without, probably because the F5 device managed server loading more efficiently.
Although F5 is mainly known for its BIG-IP application delivery controllers (ADC), the company has steadily been adding to its security suite, especially for the data center. The BIG-IP 10200v, introduced early this year, is the second-largest member of the family, with 16 10G Ethernet interfaces and two 40G Ethernet interfaces in a 2RU form factor. The only larger unit is the chassis-based VIPRION 4800.
While most of the attention in next-generation firewalls has focused on client protection, F5 targets the BIG-IP 10200v mainly for data-center use, protecting servers. Adding stateful firewall capability at very high rates is one part of that strategy.
Another part is iRules, an existing feature that allows users to inspect, modify, and reroute traffic based on HTTP and HTTPS headers. iRules use a syntax similar to many scripting languages. For network managers without scripting skills, the F5 appliance includes some canned iRules for common tasks, such as redirecting HTTP requests to HTTPS, or preventing Windows Mobile users from being locked out when they've changed their passwords in Active Directory but not on their mobile devices.
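iRules themselves are written in F5's own event-driven syntax, so the sketch below only illustrates, in Python, the logic that the canned HTTP-to-HTTPS redirect performs: answer every plain-HTTP request with a 301 pointing at the same host and path over HTTPS. The port and server setup are illustrative.

```python
# Illustration only: the logic behind a "redirect HTTP to HTTPS" rule,
# expressed as a tiny Python HTTP server rather than actual iRule syntax.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "localhost")
        location = f"https://{host}{self.path}"   # same host and URI, but over TLS
        self.send_response(301)
        self.send_header("Location", location)
        self.end_headers()

if __name__ == "__main__":
    # Listen on port 8080 locally; a load balancer would do this on port 80.
    HTTPServer(("0.0.0.0", 8080), RedirectToHTTPS).serve_forever()
```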
Both firewall and iRules can be configured via command-line or Web interfaces. The Web interface will look familiar to anyone who’s used F5 load balancers. We don’t have a lot of experience with F5 gear, but found the Web UI generally easy to navigate.
Another server protection feature is built-in denial-of-service attack (DoS) protection. The device includes nearly 40 DoS filters, all enabled by default. These filters work at layers 2-4, and cover both IPv4 and IPv6. (The firewall also works with IPv6 traffic, but time constraints limited us to testing with IPv4 traffic.)
More DoS protection comes from the IP-Intelligence feature, which identifies and blocks IP addresses for various classes of threats. Using information from a worldwide sensor network, IP-Intelligence can block traffic from botnets, Windows exploits, phishing exploits, and other classes of threats. IP-Intelligence is not enabled by default, and we did not use it in performance testing.
These features are included as part of F5’s Advanced Firewall Manager (AFM) package. F5 separately sells an Application Security Manager (ASM) package, which includes application inspection and intrusion detection, but we did not test this. So, the BIG-IP 10200v is best suited to end users who want to merge firewall and load balancer in one appliance. However, if you’re looking for an all-in-one security device, you’ll need to buy the additional ASM package.

Thursday, 25 July 2013


Amazon Beat Out IBM And Won A $600 Million Cloud Computing Contract With The CIA

SAN FRANCISCO (Reuters) - The tech industry maxim that "no one ever got fired for buying IBM" is a testament to how Big Blue has been the gold standard in computing services for decades.
But IBM faces an unlikely challenger in Amazon.com Inc, the e-commerce retail giant that is becoming a force in the booming business of cloud computing, even winning backing from America's top spy agency.
After years of being dismissed as a supplier of online computer services to startups and small businesses, Amazon Web Services (AWS) beat out International Business Machines this year to snag a $600 million contract with the Central Intelligence Agency.
IBM has successfully appealed its loss in the contest, stalling it for now. But the episode highlights how Amazon is evolving from an online retailer into a competitive provider of information technology and services to big companies, and government bodies.
That has helped push Amazon shares to a new record ahead of the company's second-quarter results due on Thursday. Amazon doesn't break out AWS results, but Wall Street believes it is expanding faster than the retail business and is more profitable.
"AWS is one of the main spokes of the bull case on Amazon shares," argues Ron Josey, an analyst at JMP Securities. "Software and IT investors are aware of and are trying to size AWS, and what the impact could be on their sector."
IBM is entrenched in corporations across the globe; and with one of the industry's biggest research budgets, is likely to remain so for some time. But it and other players like Oracle are taking note of AWS as cloud computing takes off.
Public cloud computing, which AWS pioneered in 2006, lets companies rent computing power, storage and other services from data centers shared with other customers - typically cheaper and more flexible than maintaining their own.
Amazon has begun to build a portfolio of significant clients, including Samsung, Pfizer, the Public Broadcasting Service and NASA, the U.S. space agency.
That unexpected threat is rippling through the sector. After two quarters of falling sales, Oracle announced partnerships in June with former foes Microsoft and Salesforce.com, a response in part to AWS's expansion.
"AWS is having a really meaningful impact on IT and the big incumbent companies like IBM are reacting to that now," said Colby Synesael, an analyst at Cowen & Co, who covers RackspaceHosting, one of Amazon's main rivals in the cloud.

FROM BOOKSTORE TO TECH

Amazon began life as an online bookseller, but in past years has expanded into everything from tablet computers to video. Critics say it is spending heavily with little regard for the bottom line.
But its stock hit a record $309.39 on July 16 and is up more than 22 percent this year. In contrast, Oracle is down 4 percent in 2013. IBM, which reported a fifth straight quarterly sales fall on Wednesday, is up 1 percent. (For a chart: http://link.reuters.com/nah79t)
AWS slashed prices on one of its popular services, EC2, this month and Rackspace shares promptly slid, leaving them down more than 45 percent so far this year.
AWS generates at least $2 billion a year in revenue now from a total pie of more than $60 billion, according to analysts who expect that to quintuple to more than $10 billion in coming years, partly driven by higher government cloud spending.
The tussle with IBM over the CIA contract has helped burnish Amazon's credentials, increasing Wall Street's confidence in the ability of AWS to compete with the big boys of enterprise IT.
Five companies vied for the contract - AWS, IBM, Microsoft, AT&T and another unidentified firm, according to a report on the bidding by the U.S. Government Accountability Office.
When AWS won, IBM protested, triggering the report by the GAO. The agency recommended in June that the CIA re-do some parts of its contract negotiations, giving IBM another chance.
But the GAO also stated that AWS's offering was superior.
"In every technical criterion Amazon out-scored IBM, one of the most sophisticated and capable IT companies in the world," said Carlos Kirjner, an analyst at Bernstein Research.
The CIA had "grave" concerns, according to the GAO report, about IBM's ability to provide "auto-scaling," a feature that automatically adds or removes computing power in response to applications use.
"Auto-scaling is very complex and there are not many cloud providers that can do it well, but Amazonis great at it," said Kyle Hilgendorf, a cloud computing analyst at Gartner. "I don't think anyone thinks IBM has a better cloud service."
IBM spokesman Clint Roswell said there were "inaccuracies" in the government's assessment of its CIA proposal.
"IBM remains committed to providing enterprise-level secure and robust cloud solutions and looks forward to a renewed opportunity to show our capabilities to fulfill the requirements of this important agency," he added.
An Amazon spokeswoman declined to comment on the CIA contract. A CIA spokesman also declined to comment.
IBM bought SoftLayer Technologies, a rival to AWS, for $2 billion in June. That could help it when the CIA comes calling again, said Bill Moran at Ptak Associates.
"They do not need any other issue like 'auto-scaling' to bedevil them the next time around," he added.