Examining aspects of encrypted traffic through Zeek logs

By Richard Bejtlich, Principal Security Strategist, Corelight

In my last post I introduced the idea that analysis of encrypted HTTP traffic requires different analytical models. If you wish to preserve the encryption (and not inspect it via a middlebox), you have to abandon direct inspection of HTTP payloads to identify normal, malicious, and suspicious activity.

In this post I will use Zeek logs to demonstrate alternative ways to analyze encrypted HTTP traffic. The goal is to reduce a sea of uncertainty to a subset of activity worth investigating. If we can resolve the issue with Zeek data, wonderful. If we cannot, at least we have decided where we need to apply additional investigation, perhaps by bringing in threat intelligence, host-based log data, or other resources.

Because we are talking about encryption woes, I start with Zeek's x509.log. X.509 is an Internet standard that defines the format of public key certificates. These certificates are an important element of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) encryption used with HTTPS traffic.

In the following example I want to profile the algorithms used to sign x509 certificates.
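
A minimal command-line sketch of such a query, assuming TSV-formatted Zeek logs and the zeek-cut utility (formerly bro-cut) on the path, might look like this:

  cat x509.log | zeek-cut certificate.sig_alg | sort | uniq -c | sort -rn

Sorted by descending count, rare signature algorithms sink to the bottom of the output.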

The last result is worrisome. I would prefer not to see any certificates signed with the SHA1 algorithm in use in my environment. As explained by Mozilla, SHA1 suffers from many problems that render it unsuitable in modern environments. Is this perhaps suspicious or malicious? I could imagine a scenario where an intruder doesn't worry about signature collisions, because his malware doesn't care about being ranked lower by Google's web page search algorithms.

Next I search for Zeek x509.log entries with the SHA1 algorithm.
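
As a sketch, the search can be as simple as a case-insensitive grep against the signature algorithm value:

  grep -i sha1WithRSA x509.log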

I collect several bits of important information here, in addition to a specific log containing a match. First, I get a file identifier, FTGvvp4TC5GHCel6ad, which I will leverage shortly. This file identifier uniquely identifies the x509 certificate that Zeek observed during an encrypted session. Second, I see this certificate was issued by CN=http.l.root-servers.org,OU=LROOT,O=ICANN,L=New Taipei,C=TW. I do not know if that is a problem in and of itself. I also note that the certificate appears to have been issued in late 2018, which is odd given the warnings against using SHA1 for x509 certificates.

Using the file ID, I begin looking for other Zeek log entries. This demonstrates the real power of Zeek logs: they can be linked by entries like the file ID. I will examine each in turn as they appear. (Note that I could have searched other log entries for the certificate identifier instead. I could have also turned to sources outside my logs for more information on this identifier.)
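
Assuming the logs for the period of interest sit in the current directory, the pivot is another one-liner:

  grep FTGvvp4TC5GHCel6ad *.log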

The first result is a files.log entry.

This was generated by Zeek in the process of tracking the encrypted session and writing the x509.log. This is a key log because it provides the connection ID, CJDF553HmA2WdUq1Af, which we can use to look for additional Zeek logs. The files.log also contains the source and destination IP addresses, but we will rely on linked logs for more information on the session.

Next we have the ssl.log.

This log entry offers details on the nature of the encryption used in the session of interest. We have the same IP addresses seen earlier, as well as ports. Again, I will turn to these later. Note momentarily the last two bolded entries, for the ja3 and ja3s fields. I will return to those shortly as well. The most important part of this log, for immediate use once we finish reviewing the results of this search, is the uid of CJDF553HmA2WdUq1Af. This is a connection identifier that we will search for shortly.

The last log is the x509.log again. I show it here to demonstrate that searching for the file ID results in the three types of logs just shown — files.log, ssl.log, and x509.log. In order of logical creation, they would be listed as ssl.log, x509.log, and files.log.

Returning to the results of the ssl.log, you will remember we found a connection ID. Let’s search for it and see what we find. Again, I will show one entry at a time and explain the pertinent aspects.
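
The search mirrors the file ID pivot; zgrep covers any logs already compressed into archives:

  grep CJDF553HmA2WdUq1Af *.log
  zgrep CJDF553HmA2WdUq1Af *.log.gz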

The conn.log is sort of the “top level” Zeek log.

Zeek creates conn.log entries for “connections,” whether they are connection-oriented (like TCP) or connectionless (like UDP). This entry shows us flow details about the connection, like the source IP (10.10.40.48), the destination IP (199.7.83.80), and the source and destination ports (36780 and 443), along with the IP protocol (TCP).

I have slightly reordered these three results in order to group related entries and skip those we have already seen. The conn-summary log is basically a repeat (in this instance) of the conn.log, and we have already seen the files.log and ssl.log. Let's continue our interpretation with the next unique result.

Above we see the notice.log. Zeek generated this entry for the connection of interest because it was a self-signed certificate. By itself, this does not tell us if the event is normal, suspicious, or malicious, but it is still unwanted.

If we wanted to think about these logs as a chain, I would order them thusly (ignoring the conn-summary.log as it is a “meta” log in most cases).

conn.log, ssl.log, x509.log, files.log, notice.log

Let's pivot on two items of interest from the ssl.log, the ja3 and ja3s entries. JA3 refers to a wonderful addition to the Zeek code base, donated by engineers from Salesforce.com. JA3 fingerprints connections based on aspects of the client and server sides of the TLS negotiation. A ja3 entry reflects the client and a ja3s entry reflects the server. For our ssl.log, we had these elements:

First we will look for the ja3 client fingerprint. What systems are offering the same sort of aspects of a TLS session to their servers? I omitted the server we already looked at in the following results, and showed only new information.
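
As a sketch, the pivot is another grep; the quoted string below is a placeholder for the actual ja3 hash from our ssl.log, and the second grep drops the 199.7.83.80 server we already examined:

  grep "<ja3 hash from the earlier ssl.log>" ssl.log | grep -v 199.7.83.80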

It looks like our host of interest, 10.10.40.48, is the only system on our network in play, but we have found two other servers with which 10.10.40.48 communicates — 193.0.6.139 and 193.0.6.158. We could pivot on those IP addresses if we so chose. Note the new ja3s values also.

Now let's look at the server side to see if any other servers offer similar TLS connection aspects to their clients. We grep for the ja3s value from the earlier ssl.log.

How interesting! It appears we have three Apple iTunes servers which use the same TLS connection aspects as those accepting connections from 10.10.40.48, and we have three unique clients connecting to each of them. This is likely normal, but interesting nevertheless. Remember that if we wanted to pivot off these results, we could pick one session and search for the connection ID. In the following example I look for the connection ID of the first of the last three results (which was bolded).

As you can see, Zeek provides a wealth of identifier-linked logs to make it possible to pull on various threads.

In this example, I was not able to determine why the SHA1 certificate signing algorithm was in use from within the Zeek logs themselves.

However, the Zeek logs provided information that I could use to do additional investigation. I have the source and destination IP addresses as well as information about the encryption certificates in play. At the very least, I have found a way to focus a microscope on a problem; I’m not stuck wondering where I should look for problems.

For example, I could simply choose to look at other Zeek logs for the odd host in question, 10.10.40.48. What other protocols does it use? To whom does it connect, and how? The Zeek dns.log could be especially interesting. Perhaps we will turn to those in the next blog entry.

This concept of using network-level data in the face of encryption to identify issues of interest is my main point, and I hope you enjoyed the review of Zeek logs along the way!

Network security monitoring is dead, and encryption killed it.

By Richard Bejtlich, Principal Security Strategist, Corelight

This post is part of a multi-part series on encryption and network security monitoring. This post covers a brief history of encryption on the web and investigates the security analysis challenges that have developed as a result.

I've been hearing this message since the late 2000s, and wrote a few blog posts about network security monitoring (NSM) and encryption in 2008. I've learned to recognize that encryption is a potentially vast topic, but often a person questioning the value of NSM versus "encryption" has basically one major use case in mind: Hypertext Transfer Protocol (HTTP) within Transport Layer Security (TLS), or Hypertext Transfer Protocol Secure (HTTPS).

Those worrying about NSM vs encryption usually started their security career when websites mainly advertised their services over HTTP, without encryption. Gmail, for example, has always offered HTTPS, but only in 2008 did it give users the ability to redirect access to its HTTPS service if they initially tried the HTTP version. In 2010, Gmail enabled HTTPS access as the default.

Today, Google strives to encrypt all of its web properties, and the "HTTPS encryption on the web" section of Google's Transparency Report makes for fascinating reading. Unfortunately, properly implementing HTTPS seems to be a challenge for most organizations, as shown by the prevalence of "mediocre" and outright "bad" ratings at the HTTPSWatch site. The 2017 paper Measuring HTTPS Adoption on the Web offers a global historical view that is also worth reading.

Prior to widespread adoption of HTTPS, security teams could directly inspect traffic to and from web servers. This is the critical concern of the “encryption killed NSM” argument. For example, consider this transcript of web traffic taken from a presentation David Bianco and I delivered to ShmooCon in 2006. (Incidentally, when we spoke at this conference, it was the first time we had ever met in public!) David investigated a suspected intrusion, and was able to systematically inspect transcripts of web traffic to confirm that a host had been attacked via malicious content but not compromised. (His original blog post is still online.)

[Slide: transcript of web traffic from the 2006 ShmooCon presentation]

Using the Zeek network security monitor (formerly “Bro”), we could have produced similar analysis using the conn.log, the http.log, and possibly the files.log.
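
Were we to repeat that investigation today, a rough sketch with TSV logs and zeek-cut might look like the following, where 10.1.1.100 is a hypothetical victim address:

  # reconstruct the suspect web requests, then pivot into conn.log by uid as needed
  cat http.log | zeek-cut uid id.orig_h host method uri status_code | grep 10.1.1.100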

Because David could see all of the activity affecting the victim system, and directly inspect and interpret that traffic, he could decide whether it was normal, suspicious, or malicious.

Encryption largely eliminates this specific method of investigation. When one cannot directly inspect and interpret the traffic, one is left with fewer options for validating the nature of the activity. Encryption, however, did not introduce this problem. One could argue that modern web technologies have rendered many web sites incomprehensible to the average security analyst.

Consider the "simple" Google home page. Rendered in a web browser, it looks straightforward enough.

[Screenshot: the Google home page as rendered in a browser]

Inspecting the source for the web page shows a different story: over 33 pages, or nearly 100,000 characters, of mostly Javascript code.

[Screenshot: source of the Google home page, mostly Javascript]
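
A quick, if crude, way to approximate the measurement from a shell (the count varies by session and locale):

  curl -s https://www.google.com/ | wc -c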

How could any security analyst visually inspect and properly interpret the content of this page? I submit that the very nature of modern websites killed the security methodology that allowed an analyst to manually read web traffic and understand what it meant. Yes, tools have been introduced over the years to assist analysts, but the web content of 2018 is vastly different from that of 2006.

Even if modern websites were unencrypted, they are generally beyond the capability of the average security analyst to understand in a reliable and repeatable manner. This means that even without encryption, security teams would need alternatives to direct inspection and interpretation to differentiate among normal, suspicious, and malicious activity involving web traffic. In the next article I will discuss some of those alternative models, placed within the context of HTTPS. I will likely expand beyond HTTPS in a third post. Please let me know, in the comments below or over on Twitter, if you want to see me discuss other aspects of this problem. You can find me at @taosecurity.

Monitoring. Why Bother?

By Richard Bejtlich, Principal Security Strategist, Corelight

In response to my previous article in this blog series, some readers asked “why monitor the network at all?” This question really struck me, as it relates to a core assumption of mine. In this post I will offer a few reasons why network owners have a responsibility to monitor, not just the option to monitor.

Please note that this is not a legal argument for monitoring. I am not a lawyer, and I can’t speak to the amazing diversity of regulations and policies across our global readership. I write from a practical standpoint. I consider how monitoring will help network owners fulfill their responsibilities as custodians of data, computational power, and organizational assets.

I learned a lot about network security monitoring when I started as a midnight shift analyst at the Air Force Computer Emergency Response Team (AFCERT). Monitoring the network was integral to our operations. However, that was not always the case. Prior to 1993, each Air Force base was responsible for its own security. There was no centralized "managed security service provider" (MSSP) offering global visibility. When the AFCERT deployed trial versions of Todd Heberlein's Network Security Monitor (NSM) software in the early 1990s, officials were shocked to find intruders in their enterprise.

From a practical standpoint, monitoring is a way to validate the assumptions one makes about the computing environment. In the case of the Air Force in the 1990s, officials assumed that intruders weren’t active in the enterprise. The Air Force had just pummeled the world’s fourth largest army in the first Gulf War. How could intruders be present? The AFCERT’s deployment of Todd’s NSM software provided irrefutable evidence to the contrary.

The first responsibility to monitor, then, is to provide evidence to support or deny one’s assumptions. Assumptions matter because they are the basis for decision making. If leaders make decisions based on faulty assumptions, then they will likely make poor choices. Those decisions can result in harm to the organization and its constituents. Significantly, that constituency can extend well beyond the organizational boundary, to include customers and other third parties who may unknowingly depend on the decisions made by the network owner.

Beyond understanding what is happening on the network, one has a duty to know what is not happening on the network. This sort of "negative knowledge" becomes critical when one is accused of nefarious activities one did not commit, or when one is accused of ignoring activity that did not occur.

Let’s address the first case. Consider instances where rogue actors flood false Border Gateway Protocol (BGP) routes into the Internet routing plane. If other service providers carry those routes, then the parties can perform BGP hijacking. From the perspective of downstream network users whose ISPs carry the rogue routes, the BGP hijacker is, for all intents and purposes, the owner of the hijacked Internet protocol (IP) addresses. This means that if a victim sees an attack from another party’s hijacked IP addresses, the victim may accuse the authorized owner of the IP addresses as being the perpetrator.

In this BGP hijack scenario, which occurs on a daily basis, monitoring egress traffic from the hijacked IP address space can show, by omission, that no attack took place. Remember, in reality the offending traffic is generated by the party conducting the BGP hijacking. Records of traffic from the legitimate network owner would not show any attack traffic. One could argue that the BGP hijack victim could have altered his or her logs to remove evidence of attack. However, forensic analysis could show that altering the evidence, while possible, would have introduced artifacts tipping a forger's hand.
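
As a sketch, assume our address space is 203.0.113.0/24 and the complaining party's server is 198.51.100.7 (both placeholder addresses); a query over archived connection logs either surfaces the traffic or demonstrates its absence:

  zcat conn.*.log.gz | zeek-cut id.orig_h id.resp_h | awk '$1 ~ /^203\.0\.113\./ && $2 == "198.51.100.7"'
  # no output means no recorded egress traffic from our space to the complainant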

Now imagine the second scenario: ignoring activity that did not occur. My first work after the AFCERT involved helping to create a managed security service provider in Texas. One Monday morning, one of our clients, a financial institution, called me to complain that we had not caught the penetration test they had scheduled for the previous weekend. They were quite upset with me, but I managed to review all of the activity to their IP address space over the weekend, thanks to our deployment of NSM software and processes. I found a single instance of an Nmap scan that occurred on Saturday afternoon, which our analysts had reported as a reconnaissance event with no need for follow-on reporting. NSM data showed no other unusual activity to the customer that weekend.

I asked my customer if their "penetration tester" used a cable modem registered to a certain provider, and I offered the IP address. The customer confirmed that I had located the correct IP address, and I explained to them that the totality of the activity my customer had paid the "penetration tester" to perform was an Nmap scan. I asked how much money that scan had cost, and I remember the answer being a five digit number. The customer then excused himself to make another call, which was to the firm that had tried to pass off an Nmap scan as a penetration test.

In these instances, NSM data is the best way to show not only what has happened, but what has not happened. This benefit derives from the fact that NSM is not alert-centric or alert-dependent. While one should incorporate detection methods into NSM operations, remember that NSM does not depend upon alerts alone.

I have advocated NSM for two decades because I have found that capturing network activity details, in a neutral way, is an incredibly powerful practice. To understand why, consider an alternative that depends upon alert creation. If one's operation assumes alerts will always provide information on network activity, what happens when activity does not trigger an alert? Similarly, how does one expect to address the "negative knowledge" question — by not generating an alert?

In brief, because network operators have a responsibility to make decisions based on proper assumptions, and because operators also have a responsibility to know what is, and what is not, happening on their networks, implementing NSM via Corelight and Zeek data is indispensable.

Network Security Monitoring: Your best next move

By Richard Bejtlich, Principal Security Strategist, Corelight

Welcome to the first in a regular series of blog posts on network security monitoring (NSM).

In 2002 Bamm Visscher and I defined NSM as “the collection, analysis, and escalation of indications and warnings to detect and respond to intrusions.” We were inspired by our work in the late 1990s and early 2000s at the Air Force Computer Emergency Response Team (AFCERT), and the operations built on the NSM software written by Todd Heberlein. Although NSM methodology applies to any sort of evidence or environment, these posts will largely describe NSM for network traffic in the enterprise.

As might be appropriate for the first post in a series on NSM, I will explain why I believe NSM is the first step one should take when implementing a security program. This may sound like a bold claim. Shouldn't one collect logs from all of one's devices first, or perhaps roll out a shiny new endpoint detection and response (EDR) agent? While those steps may indeed benefit your security posture, they are not the first steps you should take.

In 2001 Bruce Schneier neatly summarized a shared vision for security: “monitor first”. I concur with this strategy, because I advocate basing security decisions on evidence, not faith. In other words, before making changes to one’s security posture, it is more efficient and effective to determine what is happening, and address the resulting discoveries first. My 2005 post Soccer Goal Security expands on this concept.

If one accepts the need to gather evidence, and identify what is happening in one’s environment as a necessary precursor to making changes, then we must determine how best to gather that evidence. Elsewhere I have advocated for four rough categories of intelligence, which I repeat here. They are ordered by increasing difficulty of implementation, but also likely increasing granularity of information.

The first way to identify what is happening in your environment is to rely on third party notification. As Mandiant’s M-Trends reports have been documenting for years, as of 2018, 38% of the firm’s incident response workload began with the victim learning of an intrusion via a third party. This is a cheap way to get insights into your security posture, as law enforcement, or worse, reporter Brian Krebs, is acting as your free threat intelligence provider. However, you are already days or weeks behind the intruder, and you must soon hire a consultancy to instrument and protect your network. It is important to maintain good relations with law enforcement and the media, but you should not rely on them for network intelligence.

The second method, and the focus of this blog series, is network security monitoring. Begin by deploying an NSM sensor collecting, at a minimum, Zeek data at the gateway connecting your environment to the public Internet. This will see so-called "north-south" traffic (visibility for "east-west" traffic will be covered in a later post). By collecting NSM data, one has not interrupted daily IT operations or users, other than perhaps a brief outage to install a network tap. If administrators decide to (temporarily) use a switch SPAN port to see network traffic, users will suffer no interruption of service whatsoever. With a simple deployment, security teams gather a wealth of data about their environment and threat activity. I will address the specific benefits in future posts.
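
Getting to a first dataset can be as simple as the following sketch, assuming the tap or SPAN feed arrives on eth0 (the binary is named bro through version 2.6 and zeek afterwards):

  # watch the monitoring interface with the default local site policy;
  # conn.log, dns.log, ssl.log, and friends appear in the working directory
  bro -i eth0 local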

The third method is to collect logs from systems, servers, network infrastructure, and other devices throughout the network. This step requires deploying not only a log management platform to collect, store, and present the data, but also reconfiguring each device to send its logs to the log management platform. Unlike the NSM deployment, installing and configuring a log management system is a demanding project. While the benefits are ultimately worthwhile, the project is much more involved, hence its status as the third step one should take.

The fourth way to learn about threat activity in the enterprise is to instrument the endpoints with an EDR agent. This is an even bigger project than the log collection effort, as the EDR agent could interfere with business operations while trying to observe and possibly interdict malicious activity. As with log management, I am not arguing against EDR. EDR is a tool that yields wonderful benefits for visibility and control. EDR is especially attractive the more mobile and distributed one's workforce is, and the greater the amount of encrypted network traffic one encounters. However, the level of effort and return associated with NSM means I prefer network-centric visibility strategies prior to installing log management or EDR.

At this point you may ask “isn’t third party visibility the first step when trying to learn about threat activity? You listed NSM as second!” That is true, but I don’t consider third parties as a reliable method, or an especially proactive one. When called by the FBI, one should be able to reply “yes, thank you for calling, but I already detected the activity and we are handling it now.”

Some of you may also ask "how can NSM be first, when I already have a security program?" In that case, I suggest you make "NSM next!" In other words, augment your existing environment with NSM, and let the data help guide future security decisions.

Finally, you might ask if this is a workable solution. Has anyone ever done this? I've used or recommended the methodology in this blog series at dozens of organizations, from small start-ups of fewer than 100 people to the largest corporate entities, with half a million identities under management and a global presence.

In future posts I will expand upon all things NSM. I look forward to you joining me on this journey.

The last BroCon. It’ll be Zeek in 2019!

By Robin Sommer, CTO at Corelight and member of the Zeek Leadership Team


I'm back in San Francisco after the last ever BroCon! Why the last BroCon? Because the Bro Leadership Team has announced a new name for the project. After two years of discussion, no shortage of suggestions, and a final shortlist going through legal review, it was time to commit: It'll be Zeek! For an explanation of the rationale & background behind the choice, make sure to read Vern Paxson's blog post or watch him skillfully reveal the new name at the conference.

By holding BroCon in the Washington DC area this year, we were hoping to broaden participation—and that worked: 260 people attended, up over 35% from last year. We also had the support of eleven corporate sponsors—more than ever!—which we deeply appreciate. These companies offered attendees a chance to learn about a variety of products and services helping people use and implement Zeek, either in its open source form or as part of commercial offerings.

I think BroCon’s program was particularly strong this year. Marcus Ranum kicked it off with an entertaining and provocative keynote. The main technical program then offered a terrific set of presentations covering a variety of organizations and topics. Some of the conference highlights for me were:

  1. The sheer number of use cases. In the sessions, we saw things like:
     - using weirds to diagnose split routing problems
     - using the conn_long log to identify exfiltration / C2 / rogue IT activity
     - using JA3S to extend SSL fingerprinting to the server side
     - using SMB logs to find named pipes in the Belgacom attack.
  2. Watching Salesforce and Morgan Stanley stand up and explain how they use Bro to defend themselves was inspirational.
  3. The depth of technical expertise among attendees was really impressive. Folks keep pushing the boundary of how to scale Zeek clusters and come up with clever use cases of its various frameworks.
  4. Selling Bro posters to benefit Girls Who Code was fantastic.
  5. Vern’s “Zeek” name reveal moment and the positive reception of the name change by the broader community.

We received permission to record most of the talks and are currently editing the material to synchronize videos with slide sets. As soon as that’s finished, we’ll upload them to the Bro YouTube channel.

As we look to next year, the Zeek Leadership Team will begin planning the 2019 event soon. If you attended this year, please take a moment to fill out the attendee survey; you should have received a link to provide us with feedback about the program and logistics. In 2019 we'll also hold another European workshop. Registration details will come soon, but you can save the date already: We'll be at CERN, Switzerland, from April 9-11.

Lastly, it will take some time to really make the change from Bro to Zeek. The soon-to-be-released version 2.6 will still be "Bro"; from the following release on, it'll be "Zeek." Over the coming weeks and months you will start seeing changes, but rest assured we'll be careful: There's a lot to update, and we certainly don't want to break your deployments.

Thanks for attending the last ever BroCon!

Log enrichment with DNS host names

By Christian Kreibich, Senior Engineer, Corelight

One of the first tasks for any incident responder when looking at network logs is to figure out the host names that were associated with an IP address in prior network activity. With Corelight's 1.15 release we have automated this process, and I would like to explain how it works.

Zeek (formerly known as Bro) provides a logging framework that gives users great control over summarization and reporting of network activity. Equipped with dozens of logs by default, it provides convenient features to extend these logs with additional fields, filter log entries according to user-defined criteria, create new log types, and hook new activity into logging events. Several log types provide identifiers that allow convenient pivoting from one log type to another, such as conn.log’s UID that many other log types use to link app-layer activity to the underlying TCP/IP flows.
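
As a sketch of that pivoting with TSV logs and zeek-cut (the UID below is a placeholder):

  # list app-layer activity together with its connection identifier
  cat http.log | zeek-cut uid id.orig_h host uri
  # then recover the underlying TCP/IP flow for an entry of interest
  grep CXWv6p3arKYeMETxOg conn.log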

Other information is only implicitly linked across log types, so analysts need to reveal it in manual SIEM-based post-processing. One example of such implicitly available information is host naming, which lets analysts look past IP addresses like 216.218.185.162 to corresponding (and often revealing) DNS names like ujkwwvftddjk.ru, a recent example from Spamhaus’s DBL. While Zeek’s dns.log closely tracks address-name associations, other logs do not repeat this information. Manually establishing the cross-log linkage can prove tedious since offline resolution of those names generally does not provide accurate results. Instead, one needs to identify historic name lookups that temporally most closely preceded TCP/IP flows to/from resulting IP addresses. (Other approaches, such as leveraging HTTP Host headers, also exist but here we were looking for the most generic approach.)

Zeek’s stateful network-oriented scripting language makes it ideally suited to automate such linkage: we can enrich desired logs with DNS host names in response to network events unfolding in real time. In Corelight’s 1.15 release we provide this ability via the Namecache feature. When enabled, Zeek starts monitoring forward and reverse DNS name lookups and establishes address-name mappings that allow subsequent conn.log entries to include names and the source of the naming (here, DNS A or PTR queries). For analysts requiring immediate access to host names, conn.log now readily provides this information. The following (slightly pruned) log snippet using Zeek’s JSON format shows an example:

{
  "ts":1531868622.082572,
  "uid":"C4J4Th3PJpwUYZZ6gc",
  "id.orig_h":"192.168.1.113",
  "id.orig_p":38194,
  "id.resp_h":"192.150.187.12",
  "id.resp_p":80,
  ...,
  "id.orig_h.name.src":"DNS_A",
  "id.orig_h.name.vals":["christian.local"],
  "id.resp_h.name.src":"DNS_A",
  "id.resp_h.name.vals":["icir.org"]
}

Our data analysis shows that for the most relevant addresses — those outside of local networks — Namecache can establish names in more than 90% of log entries. In addition to the conn.log enrichment the feature adds a separate log, reporting operational statistics (powered by the SumStats framework) such as the cache hit rate in various contexts. Starting with the 1.16 release, you’ll see local vs non-local hit rates for your network as well.

None of the above required patching the core Zeek distribution. All functionality exists in the form of new event handlers and state managed via the scripting language. Nevertheless, implementing Namecache posed some interesting technical challenges. Most immediately, Bro's multiprocessing architecture and flow distribution mean that in a cluster setting (which we do use in our Sensors) the Zeek worker observing a DNS lookup most likely is not the one observing the TCP/IP connection to the resulting IP address. Moreover, since their respective processing is fully asynchronous we also cannot guarantee that processing the DNS query finishes prior to that of the subsequent TCP/IP connection. Finally, to approach global visibility of the address–name mappings, we need to communicate the mappings across the cluster via Bro events, raising questions about event communication patterns, sustainable event rates, and processing races.

One key observation immediately simplified the problem: Zeek writes conn.log entries only when it expires its state for a given flow, i.e., at the very end of the flow’s lifetime. This means we have at least several seconds to propagate naming information for this flow across the cluster before needing to access it.

This left the event flow to tackle. In a first iteration we decided to centralize mapping ownership in the manager process: workers communicate new mappings to the manager process, which propagates additions to other workers and tracks mapping size and age. When mapping state needs to get pruned, the manager sends explicit pruning events to the workers. This proved clearly inferior to a distributed approach where the workers manage mappings autonomously, including expirations, and only communicate new mappings to the manager. The manager in turn only relays additions across the workers, saving the memory needed for an extra copy of the mappings. This approach worked quite well but induced a few percent of packet loss on our most heavily loaded AP-1000 appliances. In a final tweak, we tuned the rate at which workers transmit mapping additions. With this change we no longer observed any operational overhead of the activated Namecache feature while preserving its effectiveness.

The Namecache feature is only one example of a wide range of log enrichments we envision. We’ll soon migrate the cluster communication to the new Broker framework, add improved multicast DNS support, and we’re considering other sources of naming as well as inverse mappings where names get enriched with corresponding IP addresses.

Network security monitoring vs supply chain backdoors

By Richard Bejtlich, Principal Security Strategist, Corelight

On October 4, 2018, Bloomberg published a story titled “The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies,” with a subtitle “The attack by Chinese spies reached almost 30 U.S. companies, including Amazon and Apple, by compromising America’s technology supply chain, according to extensive interviews with government and corporate sources.” From the article:

Since the implants were small, the amount of code they contained was small as well. But they were capable of doing two very important things: telling the device to communicate with one of several anonymous computers elsewhere on the internet that were loaded with more complex code; and preparing the device’s operating system to accept this new code. The illicit chips could do all this because they were connected to the baseboard management controller, a kind of superchip that administrators use to remotely log in to problematic servers, giving them access to the most sensitive code even on machines that have crashed or are turned off.

Companies mentioned in the story deny the details, so this post does not debate the merit of the Bloomberg reporters’ claims. Rather, I prefer to discuss how a computer incident response team (CIRT) and a chief information security officer (CISO) should handle such a possibility. What should be done when hardware-level attacks enabling remote access via the network are possible?

This is not a new question. I have addressed the architecture and practices needed to mitigate this attack model in previous writings. This scenario is a driving force behind my recommendation for network security monitoring (NSM) for any organization running a network, of any kind. This does not mean endpoint-centric security, or other security models, should be abandoned. Rather, my argument shows why NSM offers unique benefits when facing hardware supply chain attacks.

The problem is one of trust and detectability: one loses trust in the integrity of a computing platform when one suspects a compromised hardware environment. One way to validate whether a computing platform is trustworthy is to monitor outside of it, at places where the hardware cannot know it is being monitored, and cannot interfere with that monitoring. Software installed on the hardware is by definition untrustworthy because the hardware backdoor may have the capability to obscure or degrade the visibility and control provided by an endpoint agent.

Network security monitoring applied outside the hardware platform does not suffer this limitation, if certain safeguards are implemented. NSM suffers limitations unique to its deployment, of course, and they will be outlined shortly. By watching traffic to and from a suspected computing platform, CIRTs have a chance to identify suspicious and malicious activity, such as contact with remote command and control (C2) infrastructure. NSM data on this C2 activity can be collected and stored in many forms, such as any of the seven NSM data types: 1) full content; 2) extracted content; 3) session data; 4) transaction data; 5) statistical data; 6) metadata; and 7) alert data.

Session and transaction data would most likely have been the most useful for the case at hand. Once intelligence agencies identified the command and control infrastructure used by the alleged Chinese agents in this example, they could provide that information to the CIRT, who could then query historical NSM data for connectivity between enterprise assets and C2 servers. The results of those queries would help determine if and when an enterprise was victimized by compromised hardware.
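
Such a retrospective query is straightforward when session data has been retained. As a sketch, assuming date-named log directories and a placeholder C2 address of 198.51.100.77:

  # search archived connection logs for contact with the reported C2 server
  zcat 2018-*/conn.*.log.gz | zeek-cut ts id.orig_h id.resp_h id.resp_p | grep 198.51.100.77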

The limitations of this approach are worth noting. First, if the intruders never activated their backdoors, then there would be no evidence of communications with C2 servers. Hardware inspection would be the main way to deal with this problem. Second, the intruders may leverage popular Internet services for their C2. Historical examples include command and control via Twitter, domain fronting via Google or other Web sites, and other covert channels. Depending on the nature of the communication, it would be difficult, though not impossible, to deal with this situation, mainly through careful analysis. Third, traditional network-centric monitoring would be challenging if the intruders employed an out-of-band C2 channel, such as a cellular or radio network. This has been seen in the wild but does not appear to be the case in this incident. Technical countermeasures, whereby rooms are swept for unauthorized signals, would have to be employed. Fourth, it’s possible, albeit unlikely, that NSM sensors tasked with watching for suspicious and malicious activity are themselves hosted on compromised hardware, making their reporting also untrustworthy.

The remedy for the last instance is easier than that for the previous three. Proper architecture and deployment can radically improve the trust one can place in NSM sensors. First, the sensors should not be able to connect to arbitrary systems on the Internet. The most security conscious administrators apply patches and modifications using direct access to trusted local sources, and do not allow access for any reason other than data retrieval and system maintenance. In other words, no one browses Web sites or checks their email from NSM sensors! Second, this moratorium on arbitrary connections should be enforced by firewalls outside the NSM sensors, and any connection attempts that violate the firewall policy should generate a high-priority alert. It is again theoretically possible for an extremely advanced intruder to circumvent these controls, but this approach increases the likelihood of an adversary tripping a wire at some point, revealing his or her presence.
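
Whether enforced on a firewall in front of the sensor or layered onto the sensor itself as defense in depth, the policy might look like the following iptables sketch (the update server address is a placeholder):

  # permit loopback, replies on established sessions, and the local update/retrieval host only
  iptables -A OUTPUT -o lo -j ACCEPT
  iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
  iptables -A OUTPUT -d 10.0.0.10 -p tcp --dport 443 -j ACCEPT
  # log, then drop, anything else the sensor tries to initiate; alert on these log entries
  iptables -A OUTPUT -j LOG --log-prefix "SENSOR-EGRESS "
  iptables -A OUTPUT -j DROP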

The bottom line is that NSM must be a part of the detection and response strategy for any organization that runs a network. Collecting and analyzing the core NSM data types, in concert with host-based security, integration with third party intelligence, and infrastructure logging, provides the best chance for CIRTs to detect and respond to the sorts of adversaries who escalate their activities to the level of hardware hacking via the supply chain. Whether or not the Bloomberg story is true, the investment in NSM merits the peace of mind a CISO will enjoy when his or her CIRT is equipped with robust network visibility.

Corelight: a recipe I couldn’t refuse

By Joy Bonaguro, Head of People, Ops, and Data, Corelight

It’s hard to beat a mission like transforming government for the 21st Century. That’s what I’ve been doing for more or less my entire professional life. From building information systems in New Orleans both before and after Hurricane Katrina in 2005 to my latest role as Chief Data Officer of San Francisco, my professional life has been dedicated to public service.

So why the private sector? Why now? Why Corelight?

I first met Greg Bell during a meeting in 2011 when he was a division director at Lawrence Berkeley National Laboratory (Berkeley Lab). At that meeting, he turned an aimless discussion into a structured troubleshooting session. I gravitated towards him as a mentor.

Once he became CEO of Corelight, I started to watch closely because I knew that this company had three fundamental ingredients for success that made it worth joining:

Ingredient 1: An incredible technology with a mission that matters

Also in 2011, I first heard about open source Bro, the technology that Corelight is built on, when I had to describe how it worked as part of a job interview at Berkeley Lab. My immediate thoughts were a) awesome interview technique, b) this technology sounds magical, and c) why hasn't someone built a company on top of it?

I spent the next few years working closely with the cyber team at Berkeley Lab, and in that time I learned how real cybersecurity works. I discovered that it extends far beyond compliance, checklists, and appliance management into a living system of dynamic response, continuous evolution, and learned resilience.

Bro empowered all of this. Whenever I try to describe Bro, I draw the following diagram. Bro extends well beyond signature-based detection (SDS) to behavioral-based detection and then to a proactive response. Bro is adaptive and scalable.

[Diagram: signature-based detection (SDS) is a subset of intrusion detection systems (IDS). Bro encapsulates both of these and is truly an intrusion protection system (IPS).]

Cyber threats are a daily news item. Bro, deployed at scale and with the reliability and ease of Corelight's solution, is uniquely positioned to help our institutions solve the ever-mutating threat of cybersecurity so prevalent in our world today. It's a mission with a global scale.

Ingredient 2: A culture worth waking up to

Peter Drucker is often quoted as saying "culture eats strategy for lunch." Interviewing at Corelight was like a case study in how NOT to be a stereotypical "Silicon Valley" startup (You may have seen the popular HBO show…this isn't that).

Yes, the Corelight team is insanely smart, with world-class engineers, and one of the founders is Vern Paxson, the inventor of Bro. But that's not the whole story. The ethos of Corelight is meaningful collaboration and low ego. This philosophy is set at the top and reinforced throughout the team. Everyone jumps in and helps. Below are just two emblematic images from my first week on the job.

[Photo: a broken elevator had everyone chipping in to help with deliveries, including our VP of Finance, Chief Products Officer, and UI Engineer.]

[Photo: our VP of Engineering brought in some bike oil to tackle our squeaky bathroom door. No more squeaks!]

When Greg asked me to help ensure this culture stuck at scale, I could hardly resist. Culture and organizational health are key differentiators in our modern world, where talent is both discerning and mobile.

Ingredient 3: It’s about empowerment, not fear-mongering

Corelight's tagline is "illuminate your network." Merriam-Webster's dictionary defines "illuminate" as "to supply or brighten with light, to make luminous or shining." Fundamentally, Corelight is about offering a set of tools that empower cybersecurity professionals to do their jobs more effectively and efficiently.

So much of cybersecurity marketing and branding is dominated by fear-mongering: "Do this or you will be in TROUBLE. Bad things are lurking EVERYWHERE. You CAN'T FIX this alone – you need us and we will solve this for you." In contrast, Corelight is about acknowledging the challenge and empowering you to solve it.

Corelight does this by providing our customers with elegant, beautifully structured, comprehensive data for analysis and response (and much more soon). We don’t conceal the data to create a dependence on Corelight for insights. Instead, we expose it to the professionals who need it – reflecting our open source heritage in the very nature of our product.

The above ingredients added up to a recipe that I could not refuse. I am thrilled to be joining the Corelight team – a team with the talent and skills to continue to build a technology that will empower enterprises around the world. So if you want an amazing, challenging mission PLUS a healthy and empowering culture, join us! We’re always hiring! 😉

Twenty years of network security monitoring: from the AFCERT to Corelight

By Richard Bejtlich, Principal Security Strategist, Corelight

I am really fired up to join Corelight. I’ve had to keep my involvement with the team a secret since officially starting on July 20th. Why was I so excited about this company? Let me step backwards to help explain my present situation, and forecast the future.

Twenty years ago this month I joined the Air Force Computer Emergency Response Team (AFCERT) at then-Kelly Air Force Base, located in hot but lovely San Antonio, Texas. I was a brand new captain who thought he knew about computers and hacking based on experiences from my teenage years and more recent information operations and traditional intelligence work within the Air Intelligence Agency. I was desperate to join any part of the then-five-year-old Air Force Information Warfare Center (AFIWC) because I sensed it was the most exciting unit on "Security Hill."

I had misjudged my presumed level of “hacking” knowledge, but I was not mistaken about the exciting life of an AFCERT intrusion detector! I quickly learned the tenets of network security monitoring, enabled by the custom software watching and logging network traffic at every Air Force base. I soon heard there were three organizations that intruders knew to be wary of in the late 1990s: the Fort, i.e. the National Security Agency; the Air Force, thanks to our Automated Security Incident Measurement (ASIM) operation; and the University of California, Berkeley, because of a professor named Vern Paxson and his Bro network security monitoring software.

When I wrote my first book in 2003-2004, The Tao of Network Security Monitoring, I enlisted the help of Christopher Jay Manders to write about Bro 0.8. Bro had the reputation of being very powerful but difficult to stand up. In 2007 I decided to try installing Bro myself, thanks to the introduction of the "brolite" scripts shipped with Bro 1.2.1. That made Bro easier to use, but I didn't do much analysis with it until I attended the 2009 Bro hands-on workshop. There I met Vern, Robin Sommer, Seth Hall, Christian Kreibich, and other Bro users and developers. I was lost for most of the class, saved only by my knowledge of standard Unix command line tools like sed, awk, and grep! I was able to integrate Bro traffic analysis and logs into my TCP/IP Weapons School 2.0 class, and subsequent versions, which I taught mainly to Black Hat students. By the time I wrote my last book, The Practice of Network Security Monitoring, in 2013, I was heavily relying on Bro logs to demonstrate many sorts of network activity, thanks to the high-fidelity nature of Bro data.

In July of this year, Seth Hall emailed to ask if I might be interested in keynoting the upcoming Bro users conference in Washington, D.C., on October 10-12. I was in a bad mood due to being unhappy with the job I had at that time, and I told him I was useless as a keynote speaker. I followed up with another message shortly after, explained my depressed mindset, and asked how he liked working at Corelight. That led to interviews with the Corelight team and a job offer. The opportunity to work with people who really understood the need for network security monitoring, and were writing the world’s most powerful software to generate NSM data, was so appealing! Now that I’m on the team, I can share how I view Corelight’s contribution to the security challenges we face.

For me, Corelight solves the problems I encountered all those years ago when I first looked at Bro. The Corelight embodiment of Bro is ready to go when you deploy it. It’s developed and maintained by the people who write the code. Furthermore, Bro is front and center, not buried behind someone else’s logo. Why buy this amazing capability from another company when you can work with those who actually conceptualize, develop, and publish the code?

It's also not just Bro, but it's Bro at ridiculous speeds, ingesting and making sense of complex network traffic. We regularly encounter open source Bro users who spend weeks or months struggling to get their open source deployments to run at the speeds they need, typically in the tens or hundreds of Gbps. Corelight's offering is optimized at the hardware level to deliver the highest performance, and our team works with customers who want to push Bro to even greater levels.

Finally, working at Corelight gives me the chance to take NSM in many exciting new directions. For years we NSM practitioners have worried about challenges to network-centric approaches, such as encryption, cloud environments, and alert fatigue. At Corelight we are working on answers for all of these, beyond the usual approaches — SSL termination, cloud gateways, and SIEM/SOAR solutions. We will have more to say about this in the future, I’m happy to say!

What challenges do you hope Corelight can solve? Leave a comment or let me know via Twitter to @corelight_inc or @taosecurity.

There’s more to Bro than great network data

By Vincent Stoffer, Senior Director of Product Management, Corelight

Corelight recently released our 1.15 software update, which brings some fantastic new features, including our first group of curated Bro Packages, which we're calling the "Core Collection."  In this blog post, I'll tell you a bit more about how Corelight is making it easier to detect threats on your network, and providing even better data to respond to them.

Bro is much more than just a source of network data: it's also a Turing-complete, domain-specific programming language.  Expert users know that Bro scripts (now often shared as packages) are the way to tune your sensors to generate alerts, customize the data output, and take action.  At Corelight we support running custom Bro Packages on our platform, and some of the technical details of how and why we did that are described in this blog post.

Despite the fact that customers can run their own Bro scripts and packages on our platform, we find that many people still want easier access to content created by Corelight and the broader Bro community.  So in our 1.15 software release we chose 10 of the most popular and interesting Bro Packages (most contributed by members of the Bro community) and pre-loaded them directly onto the Corelight Sensor platform.  We believe making the results of Bro’s powerful scripting language available by default and easy to use means you’ll not only have the best data for forensics, incident response, and threat hunting, but you’ll also have detections and enrichments that make your workflow easier.

Like any Bro packages running on a Corelight sensor, these 10 run in a sandboxed environment to protect the underlying operating system. We've also ensured they don't significantly impact the performance of the sensor, and we've made them as easy to enable as flipping a switch.

We divided these packages into 3 separate functional groups: detection, data enrichment, and operations.  Our intent is to demonstrate how powerful Bro’s scripting capabilities can be across a variety of use cases.  I’ll briefly describe what packages we included in each group and how to use them:

Detection:

bitcoin – Detects Bitcoin, Litecoin, etc. mining traffic over TCP or HTTP and generates an alert.  Useful to track down users exploiting internal systems or network resources for profit.

ja3 – A popular package written by the security team at Salesforce, this hashes properties of the SSL/TLS client negotiation to help identify and catalog client software being used over SSL/TLS.  Once you've associated a ja3 hash with a piece of software (whether it's a particular version of a web browser or a piece of malware), it's easy to match and alert on those hashes using Bro's intelligence framework.  And because those client negotiation properties don't change, you can use the ja3 hashes to pinpoint client software regardless of the IP it's coming from or the external server it's reaching out to.

http stalling – Detects a web client sending data very slowly (a resource exhaustion attack) against a webserver and generates an alert.

long connections – Because Bro usually only logs connections when they have completed, this package detects long connections that are still running and logs them periodically.  Because malware can use persistent connections, this log is great for identifying ongoing C&C channels and generally keeping your eye on other long-running connections to establish them as valid or malicious.

scan detection – A great example of the ability of Bro to provide behavioral insight, this detects machines that are port scanning (both vertically and horizontally) and generates an alert.  It’s very useful for finding recon and feeding into an orchestration platform for blacklisting.

Data Enrichment:

Hostname enrichment – One of the most common first steps when investigating network traffic is to map an IP address to its corresponding hostname.  This Corelight-only feature tracks hostnames observed over DNS by the sensor and adds them as new columns to the conn.log. This speeds the incident response workflow by providing those hostnames where available, and does so without tipping your hand to potential adversaries with an active DNS lookup.

SMTP URL extraction – For SMTP, this extracts any URLs which are seen in the message body.  Useful for searching for phishing links and doing URL intelligence matching on live email.

http post bodies – Writes an additional field into the http.log that adds the POST body data (size limited).  Watch for credentials, C&C, and more in HTTP traffic.

Operations:

shunting – Corelight's flagship platform, the AP 3000, can now handle shunting large or long-running connections using its integrated NIC – which allows performance to be maintained in extreme conditions, while still preserving connection state information.  For our other platforms, this script also shows what connections would have been shunted, helping to profile traffic meeting certain conditions. A number of options are configurable in the new Web GUI to tune the threshold for shunting.

SSL expiring certs – Highlights any internal x509 certificates that are expired or will expire within 30 days.  A great way to double check on the integrity of your secure applications; a rough command-line equivalent appears below.
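
As a sketch of the same check done ad hoc against a TSV x509.log (GNU date assumed; 2592000 seconds is 30 days):

  # print certificates whose not_valid_after timestamp falls before the 30-day cutoff
  cat x509.log | zeek-cut certificate.not_valid_after certificate.subject | awk -v cutoff=$(date -d '+30 days' +%s) '$1+0 < cutoff+0'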

As you can see, there is a lot to explore!  We've made it easy to enable any of these packages, and this is only the first step — we plan to offer lots more content for the Core Collection and even other collections in the future.