Are you managing a growing communication infrastructure and in need of greater insight into all corners of the network?  NetFlow has become the go-to technology for security and network teams who thirst for details on who, what, when and where.  The reason:  NetFlow and IPFIX are the network traffic analysis technologies that meet nearly all cyber attack incident response requirements.  So why has the market selected flows over other technologies such as SNMP or packet capture?


  1. Wide Adoption: Like SNMP, flow data is widely adopted and has been implemented on nearly every router from the major vendors.  This is because it has several insightful advantages over traditional traffic analysis protocols.  Flows are a push technology: unlike SNMP, no polling is needed.  Although NetFlow was invented by Cisco, the company presented it to the IETF years ago as a potential standard.  After further improvements to the technology, such as support for variable-length strings for details like URLs, it emerged as the official standard for all flow technologies and is known today as IPFIX.
  2. Accessible: Accessibility is another driver of flow growth.  It’s readily available in almost all corners of every network and simply needs to be turned on. Nearly all firewalls support it, as do many server and virtualization platforms (e.g. VMware, Citrix). Cisco is leading the way in delivering switches at every price level with either NetFlow or IPFIX support.
  3. Inexpensive: In most cases, flow export is freely available and simply needs to be turned on. In contrast to expensive packet analyzers that have to be purchased, deployed and maintained, flow technologies are free, covered by existing maintenance programs and provide insight into more areas of the network.  Gartner has suggested that flow analysis should be used about 80% of the time, with probe-based packet capture reserved for the remaining 20%.
  4. Details: When most consumers think about flow data, the details available in NetFlow v5 come to mind. The technology, however, has come a long way from the 20 or so elements in NetFlow v5 to the tens of thousands available in NetFlow v9 and IPFIX.  Flow technologies are now being used to export details such as system messages, CPU utilization, round trip time, HTTP host, URLs, packet loss, retransmits, jitter, VoIP codec, caller ID, layer-7 application, TCP window size and much more.  This is a technology that is starting to rival the details previously available only through packet capture.  Today flow analysis can be used 90% of the time and packet analysis only 10% of the time.  This is not to say that flow analysis will replace packet analysis; it probably never will.
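Part of why NetFlow v5 spread so widely is that its layout is completely fixed, so any collector can parse it. As a minimal sketch, the 24-byte v5 packet header can be unpacked with Python's struct module (field layout per Cisco's published v5 format; the function name here is our own):

```python
import struct

# NetFlow v5 packet header: 24 bytes, network byte order.
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(datagram: bytes) -> dict:
    """Parse the fixed 24-byte NetFlow v5 header from a UDP payload."""
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_sequence, engine_type, engine_id,
     sampling) = V5_HEADER.unpack_from(datagram)
    return {
        "version": version,              # always 5 for NetFlow v5
        "count": count,                  # number of 48-byte flow records (1-30)
        "sys_uptime_ms": sys_uptime,     # ms since the exporter booted
        "unix_secs": unix_secs,          # export timestamp
        "flow_sequence": flow_sequence,  # running total of exported flows
    }
```

The fixed format is fast to parse but is also exactly what NetFlow v9 and IPFIX templates were created to escape.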


The NetFlow Problem - Well, Sort of .....

As the elements that flow technology can export proliferate, the technology carries an Achilles heel: volume. The more details administrators try to stuff into a single flow tuple, the more overhead is produced.  For example, requesting the URL could cause what was a single flow to be broken up into multiple flows.  To add salt to the wound, more details mean more bytes pushed into each flow.  A single NetFlow datagram that used to carry 30 flows might carry only 4 flows if excessive details are requested from each one.  Also, if fewer packets have matching details, the device ends up sending many more flows.  In other words, asking NetFlow v9 or IPFIX to export greater detail starts encroaching on packet capture turf and begins to defeat one of the underlying intentions of flow technologies (i.e. less is sometimes more).  For this reason, the trend in the industry, supported by multiple hardware vendors, is to let the user select exactly what they want to export. This is the brilliance behind Cisco Flexible NetFlow, which is a configuration interface for setting up either NetFlow v5/v9 or IPFIX.
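Flexible NetFlow makes that selection explicit in configuration: the operator defines a record from match and collect fields, ties it to an exporter, and applies the resulting monitor to an interface. A hedged sketch of what this looks like on a Cisco IOS device (the record, exporter and monitor names here are invented, and available fields and commands vary by platform and software version):

```
flow record RECORD-BASIC
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 collect counter bytes
 collect counter packets
!
flow exporter EXPORT-TO-COLLECTOR
 destination 192.0.2.10
 transport udp 2055
!
flow monitor MONITOR-BASIC
 record RECORD-BASIC
 exporter EXPORT-TO-COLLECTOR
!
interface GigabitEthernet0/1
 ip flow monitor MONITOR-BASIC input
```

Every field left out of the record is overhead the router never has to match on or export, which is precisely the "less is sometimes more" trade-off described above.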


Another area that needs consideration when expanding the flow tuple to include more details is processing overhead.  Asking devices (e.g. routers) to match on more criteria and provide more information about each flow places more load on the hardware.  When the tuple (i.e. the matching criteria) is fixed, the processing can be done in ASICs; however, if the vendor makes the flow fields (i.e. elements) definable by the end user (e.g. Flexible NetFlow), it can sometimes require more CPU.
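The datagram arithmetic mentioned earlier (a packet shrinking from 30 flows to 4) is easy to reproduce. A back-of-the-envelope sketch in Python, using the published NetFlow v5 sizes (24-byte packet header, 48-byte records) and the 16-byte IPFIX message header; the 350-byte size for an IPFIX record carrying a URL is an illustrative assumption, not a fixed figure:

```python
# How many flow records fit in one UDP datagram?
MTU_PAYLOAD = 1472  # 1500-byte Ethernet MTU minus 20-byte IP and 8-byte UDP headers

def records_per_datagram(header_bytes: int, record_bytes: int) -> int:
    """Whole flow records that fit after the export-packet header."""
    return (MTU_PAYLOAD - header_bytes) // record_bytes

v5 = records_per_datagram(24, 48)     # classic NetFlow v5
rich = records_per_datagram(16, 350)  # IPFIX record carrying a URL (assumed size)
print(v5, rich)  # → 30 4
```

The same flow volume therefore needs roughly seven times as many export packets once each record balloons, before counting the extra flows created when fewer packets share matching details.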


Regardless of what we decide to export in flow datagrams, the future will likely demand more details, and sampling is often the least favorable option.  Because of this, end users may have to make some tough decisions about what to send to the NetFlow and IPFIX collector.  The servers consuming the flows, in turn, will need to handle well over 100,000 flows per second and operate in a distributed architecture that allows collection rates to stretch into the millions of flows per second. Get started with an incident response system today and open your eyes to the malware and congestion issues that need to be eradicated from your network.