yalla/pkt2flow

March 1, 2014

Ending my hiatus, I present to you pkt2flow: it takes a PCAP file and splits it into its individual TCP and UDP streams. It’s not my original work, but I adapted it to compile on OS X. Also, Isotopp contributed a patch for the SConstruct file to make it work with Homebrew. I’m currently planning a portfile for MacPorts and am also thinking of extending the functionality so that it also handles 802.1Q-tagged frames in the PCAP.
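
In case you wonder what “splitting into flows” means in practice, here is a minimal sketch of the idea in Python with Scapy. The real tool is plain C on top of libpcap; the input file name and the output naming scheme below are made up for illustration.

from collections import defaultdict
from scapy.all import rdpcap, wrpcap, IP, TCP, UDP

packets = rdpcap("capture.pcap")   # hypothetical input file
flows = defaultdict(list)

for pkt in packets:
    if IP not in pkt:
        continue
    if TCP in pkt:
        l4, proto = pkt[TCP], "tcp"
    elif UDP in pkt:
        l4, proto = pkt[UDP], "udp"
    else:
        continue
    # A flow is identified by its 5-tuple: src, sport, dst, dport, protocol.
    flows[(pkt[IP].src, l4.sport, pkt[IP].dst, l4.dport, proto)].append(pkt)

# Write one PCAP per flow.
for (src, sport, dst, dport, proto), pkts in flows.items():
    wrpcap("%s_%s.%d_%s.%d.pcap" % (proto, src, sport, dst, dport), pkts)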

If you’re interested in building it for OS X, grab it at https://github.com/yalla/pkt2flow until the original author has accepted my pull request.

Also, I spend my time mostly on Google+ nowadays, but I plan to revive this whole thing here with more in-depth articles that aren’t suitable for Google+.


And the v6 keeps on rolling!

June 9, 2011

Yesterday was World IPv6 Day. From the German perspective, there was a huge spike in IPv6 traffic on that day. Not as much as one might have expected, but still a significant increase:

IPv6 Traffic at the German Internet Exchange

Interestingly, although IPv6 Day is over, the traffic didn’t decrease. For fun, I checked youtube.com – and it’s still serving traffic over IPv6!
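
If you want to check a site yourself, asking the resolver for AAAA records is enough. A quick Python snippet, purely for illustration (not how I checked):

import socket

# Restricting getaddrinfo to AF_INET6 only returns something if the
# site publishes AAAA records, i.e. is reachable over IPv6.
for info in socket.getaddrinfo("youtube.com", 80, socket.AF_INET6, socket.SOCK_STREAM):
    print(info[4][0])   # the IPv6 address from the sockaddr tuple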

Now compare that to the monthly stats:

DE-CIX monthly IPv6 traffic

Nevertheless, IPv6 traffic is still very small compared to regular IPv4 traffic:

IPv4 Traffic at DE-CIX

So while IPv6 peaked at 1.8 Gbit/s, IPv4 at DE-CIX still runs at 3.2 Tbit/s, which is roughly 1,800 times as much traffic.

But what does that tell us?

Looking at the IPv6 traffic stats, I derive the following. First, I assume that IPv6 Day didn’t make users migrate to IPv6; I think – although I can’t prove it – that existing IPv6 users simply kept using the Internet like they always did.

But the statistics do tell us one thing: a lot of people were prepared and ready – the early adopters. Since youtube.com is still live with v6 – and the traffic didn’t change too much – I think that either youtube.com made up most of the v6 traffic, or most of the participating websites are still live with v6.

Whoever you are, running IPv6 since yesterday: Keep going! And let’s hope those traffic stats keep increasing over time.

If you’re interested in IPv6 and didn’t bother to try it yet, go and get yerself some v6 at the following spots:

Fly safe!

Solaris, ipfilter accounting and tagged interfaces

February 21, 2010

This is a copy of a posting I made to the LBW mailing list. Maybe you’ve got some ideas to share. Thanks!

Hi guys,
I got a problem which is really driving me nuts. I got this huge farm
of proxies running Solaris 10, shoving around a dozen gigabits/s at my
customer.
Ingress and Egress traffic run over two different VLANs, so I
configured two tagged interfaces on one physical. Also, there is a
redundant interface with the same config with IPMP over it. It looks
like...

                   +--- egress IPMP group ----+
                   |                          |
                   (                          |
      +--- ingress IPMP group ---+            |
      |            (             |            |
      |            |             |            |
+-----+-----+------+-----+ +-----+-----+------+-----+
| bge111000 | bge112000  | | bnx111000 | bnx112000  |
|  ingress  |   egress   | |  ingress  |   egress   |
+------------------------+ +------------------------+
|          bge0          | |          bnx0          |
+------------------------+ +------------------------+
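
(Side note for those not fluent in Solaris interface naming: the VLAN
ID is encoded into the logical interface name, PPA = VLAN ID * 1000 +
device instance. A quick sketch of that arithmetic, nothing more:)

# Solaris encodes the VLAN ID in the logical interface name:
# PPA = VLAN ID * 1000 + device instance.
def vlan_ifname(driver, vlan_id, instance=0):
    return "%s%d" % (driver, vlan_id * 1000 + instance)

print(vlan_ifname("bge", 111))   # bge111000 -> VLAN 111 on bge0
print(vlan_ifname("bge", 112))   # bge112000 -> VLAN 112 on bge0
print(vlan_ifname("bnx", 923))   # bnx923000 -> VLAN 923 on bnx0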

To observe traffic on all those proxy nodes I let Cacti (fancy
rrdtool graphs) poll the nodes' SNMP agents and leech off the traffic
statistics of the tagged interfaces (bge111000 and so on).

This worked fine until Solaris 10 05/07, when they introduced a bug in
the bge NIC driver. Polling bge111000 and bge112000 didn't result in
individual statistics; both interfaces showed the accumulated
traffic of the underlying physical bge0. netstat -i shows the same,
so it must be the bge driver. Well, we complained to Sun and got the
answer that the next release would fix the problem. Since we used IPMP
anyway, we were able to set the active interfaces to the bnx NICs,
whose drivers were not affected by the problem.

Now we upgraded to Sol10 5/09 and guess what: the bge drivers were
NOT fixed, and now the bnx drivers are screwed up too!

I thought about alternatives:
* The Cisco gear can't break traffic down per VLAN on trunk ports either.
* NetFlow was no option, because our access switches don't support it.
* So let's use ipfilter and IP accounting!

So I set up a bunch of accounting rules (the VLAN IDs are not the same
here, but you get the idea):

# Ingress
count in on bge923000 from any to any
count in on bnx923000 from any to any
count in on bge924000 from any to any
count in on bnx924000 from any to any
# Egress
count out on bge923000 from any to any
count out on bnx923000 from any to any
count out on bge924000 from any to any
count out on bnx924000 from any to any

So guess what happens... NUTHIN:
# ipfstat -ai
0 count in on bge923000 from any to any
0 count in on bnx923000 from any to any
0 count in on bge924000 from any to any
0 count in on bnx924000 from any to any
# ipfstat -ao
0 count out on bge923000 from any to any
0 count out on bnx923000 from any to any
0 count out on bge924000 from any to any
0 count out on bnx924000 from any to any

Compared to a real untagged interface:
# ipfstat -ai
1724 count in on bge1 from any to any
# ipfstat -ao
1864 count out on bge1 from any to any
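
For completeness, this is roughly how I'd scrape those counters for
Cacti once they actually count something; just a Python sketch that
assumes ipfstat output in the format shown above:

import re
import subprocess

def account_counters(direction):
    """Return {interface: hits} from `ipfstat -a<direction>` ('i' or 'o')."""
    out = subprocess.check_output(["ipfstat", "-a%s" % direction]).decode()
    counters = {}
    for line in out.splitlines():
        m = re.match(r"\s*(\d+)\s+count\s+(?:in|out)\s+on\s+(\S+)\s", line)
        if m:
            counters[m.group(2)] = int(m.group(1))
    return counters

print(account_counters("i"))   # e.g. {'bge923000': 0, 'bnx923000': 0, ...}
print(account_counters("o"))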

So. Sorry for the long posting.
1) Am I stupid or just cursed?
2) How can I get those dang per-VLAN-statistics on my nodes?
3) Does anyone know if ipfilter on Solaris supports tagged interfaces at all?
4) How is this going to end?

Oh Eris, I really need a stiff drink.

SUN introduces ’10 Gigabit Multi-Threaded’ NIC

February 21, 2007

Via Supercomputing Online:

I have no idea what SUN’s talking about here, really. Supercomputing Online wasn’t helpful either. SUN introduced a 10GE NIC said to be “multi-threaded”, whatever the hell that’s supposed to mean; it’s their “first network interface specifically designed to accelerate multi-threaded application performance by optimizing I/O throughput within environments that utilize parallel threads”.

Rest. Silence. Taa-daa! So, dissecting this sentence, they’re telling us that their NIC runs better with – yeah, what? POSIX threads? Java? I don’t get their point. NIC drivers run in kernel context and offer their services to every application – multi-threaded or not – with the same friggin’ performance. I don’t see why a multi-threaded application should get more bandwidth than any other application.

As usual, Supercomputing Online didn’t bother to give a link to SUN’s website, so I had to investigate what the frack this is all about. SUN’s website had that big ad right on the front page, “Unleashing 10GB Everywhere”. So let’s dive deeper into the topic, after ranting so much. (Too much. I had quite a rough day, please accept my apologies for being a sarcastic smartass.)

The whole thing is about their “complete chip multithreading (CMT) environment” – claiming that they offer full 10GE line speed from the application level down to the wire. A bold claim! Let’s read on. Later they say:

“Metaphorically, you can view the Sun Multithreaded 10 GigE Networking Technology as an ‘impedance matching device’ that extends the thread parallelism from the OS, through the processor, and all the way to the network wire,” says Shimon Muller, Sun distinguished engineer.

“Impedance matching device-what-the-hell”? This ain’t even sales-speak to me, that’s more like very bad geek-speak. But then they come to the nitty-gritty:

“The Solaris OS has been a multithreaded system for years, and the latest UltraSPARC processors offer powerful chip multithreading,” says Sunay Tripathi, Sun distinguished engineer. “But what has been missing is extending multithreading into the I/O environment and networking space.”

Ariel Hendel, Sun distinguished engineer, explains that “The hardware /software interface up until now was based on first queuing packets into the system and then distributing and processing them. But with this new 10Gb/s technology, we move to a distribute then queue model, and that makes all the difference when you want to scale, apply policy, and so on.”

OK, now it’s getting interesting. They’re putting the queuing into the silicon. Reminds me of Tagged Command Queuing, but for NICs. That might give a couple of advantages when it comes to traffic policies. However, I’m still waiting for benchmarks.

Is it interesting? Certainly, yes – especially the announcement that the new CPUs will have two 10GE ports. If their CMT can really take advantage of offloading some of the stack’s routines to the NIC’s silicon, this might reduce latency, from my naive point of view. But I can’t see Supercomputing Online’s initial claim about “optimizing I/O throughput” in any of this. Throughput ain’t latency.

But I’m still curious! If you get your hands on this beauty, drop me a line about your experiences.
