Showing posts with label networking. Show all posts

Friday, December 04, 2015

Barnfind integration with third party control panels

Barnfind is one of the products I'm responsible for at Root6 and I love them for their forward-looking attitude and the fact that they embrace open standards; if you see an SFP hole or a BNC connector you know that it will work with any other manufacturer's kit (unlike Evertz, who re-define the standard and put an X at the start of the product name!).


So - the BarnOne BTF1-07 is the chassis I have in my demo kit and at IBC this year Wiggo and the guys were showing a much wider set of third party control panels they have integrated to work with their cross-point router. So, to make our demo more compelling I bought a BlackMagic VideoHub Smart control panel and set about making it talk to the BarnOne.

Since I have to rock-up at customers' facilities to show this gear I figured it's daft to rely on their networks and so I've included a TP-Link travel router to dish out IP addresses to the panel and the BarnFind; it also allows you to use a wireless laptop to then run the demo. These little gadgets have single WAN and LAN facing ports but rather splendidly the BlackMagic panel has a little ethernet hub inside so you can loop the network connection to the Barnfind. 
This screen-grab is from BarnStudio - the config/control software for Barnfind products. It has a discovery protocol so it can find any chassis on the same LAN segment. However - I discovered a couple of gotchas;

  • When upgrading the chassis you can either attach a USB stick to the front and the embedded Linux machine will grab the image OR you can use BarnStudio. Initially I couldn't get either to work! 
  • Barnstudio just instructs the Linux machine to do an apt-get (or similar) and so it has to be able to see out to the internet; connect the WAN-side of the little router to the workshop network!
  • If you do the USB route then the installer checks the signature on the package; again, it needs to get out to the web to verify the cryptographic signature.
  • The Blackmagic panel does not have any host-discovery protocol built in, so it seemed only sensible to set the router to always dish out the same MAC/IP address combinations using the DHCP reserved-assignment page in the router, and then assign those numbers in the Blackmagic config software.
After that it really is very simple - you set the virtual buttons to be whatever sources and destinations you want. You can even reserve some to be macros (which are then just a list of route assignments) - in fact that is how you would do duplex signals; Ethernet etc. You have to assign both the in and out of each SFP (Tx and Rx). You don't have to have sequential sources or destinations and I can't honestly see how they could have made it any better!

So - with BNCs 17 & 18 on the front of the Barnfind defined as 3G HD/SDi video in and out and these button assignments on the BM panel I've got proper router control. These forty-button panels will be very useful for controlling all the synchronous broadcast signals (HD/SDi, HDMI, MADI etc.) and asynchronous data signals (Ethernet, fibre-channel etc) running through a Barnfind router (and then out and over CWDM fibre?).
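For the curious, this sort of router control is a very simple text-over-TCP affair. Below is a hedged sketch in the style of Blackmagic's published Videohub Ethernet protocol (TCP port 9990, zero-indexed "destination source" pairs terminated by a blank line) - the chassis IP in the comment is hypothetical and I'm assuming the device at the far end speaks that protocol:

```python
import socket

def build_route_msg(dest: int, src: int) -> bytes:
    """One Videohub-style routing block: a header line, a
    "<dest> <src>" pair (both zero-indexed) and a terminating blank line."""
    return f"VIDEO OUTPUT ROUTING:\n{dest} {src}\n\n".encode("ascii")

def send_route(host: str, dest: int, src: int, port: int = 9990) -> None:
    """Connect to the router and ask for output `dest` to carry input `src`."""
    with socket.create_connection((host, port), timeout=2) as s:
        s.sendall(build_route_msg(dest, src))

# e.g. send_route("10.100.100.50", 0, 16)  # hypothetical chassis IP
```

The nice thing about a line-oriented protocol like this is that you can drive it from anything that can open a TCP socket - even telnet for a quick test.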

The only criticism is that the software matrix takes a second or so to update after panel changes are made; but I can't honestly see when that would be an issue - this software matrix is a tool for the engineer, not the operator.

As an aside they have recently published a very complete integration guide with lots of examples and advice.

http://media.barnfind.no/marketing/BarnGuide%202.0.pdf

Saturday, June 28, 2014

Measuring fibre cabling and the problem of encircled flux loss


Last week I went on a very interesting training day courtesy of Nexans - the data cable & parts supplier. I went looking forward to learning all about the new standards surrounding category-8 cabling for 40 and 56 Gigabit ethernet (a massive 1600MHz of bandwidth down a twisted-pair cable!) and the new GG45 connector; but those things will have to wait for another blog post! The thing that really tickled my fancy is the new standard for measuring the response of multi-mode fibre.
Multi-mode fibre works in a fundamentally different fashion to single-mode (they are as different as twisted-pair and coaxial copper cable, but they look very similar). If you want a bit of a primer on fibre then Hugh & I did an episode of The Engineer's Bench a couple of years ago on the subject.



As we've gone from one gigabit to greater than 10Gbit/s over OM3 and OM4 cable, engineers have often noted the lack of consistency between different manufacturers' light-source testers. You might get as much as 0.5dB of difference between, say, an Owl and a JDSU calibrated light source and detector. We typically use a 20dB(m) laser at 850nm to test OM3 and we always just deliver the loss figures to the client, but it would be good to know if your absolute reading is of any use at all.

Well, the answer is that an LED or VCSEL (vertical-cavity surface-emitting laser) source will tend to "overfill" the fibre, and high-order modes of light travel (to a degree) down the cladding of the cable.
Launch conditions describe how optical power is launched into the fiber core when measuring fiber attenuation. Ideal launch conditions occur when the light is distributed evenly across the whole fiber core.


Transmission of Light in Multimode Fiber in Underfilled Conditions 


Transmission of Light in Multimode Fiber in Overfilled Conditions


An overfilled launch condition occurs when the launch spot size and angular distribution are larger than the fiber core (for example, when the source is a light-emitting diode [LED]). Incident light that falls outside the fiber core is lost as well as light that is at angles greater than the angle of acceptance for the fiber core. Light sources affect attenuation measurements such that one that underfills the fiber exhibits a lower attenuation value than the actual, whereas one that overfills the fiber exhibits a higher attenuation value than the actual. The new parameter covered in the IEC 61280-4-1 Ed2 standard from June 2009 is known as Encircled Flux (EF), which is related to distribution of power in the fiber core and also the launch spot size (radius) and angular distribution.
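It's worth keeping the arithmetic in mind when weighing up how much a 0.5dB disagreement between testers matters. A minimal sketch of the standard dB conversions (nothing EF-specific, just the definitions):

```python
import math

def loss_db(p_in_mw: float, p_out_mw: float) -> float:
    """Insertion loss in dB from launched and received optical power."""
    return 10 * math.log10(p_in_mw / p_out_mw)

def power_remaining(db: float) -> float:
    """Fraction of the launched power that survives `db` of loss."""
    return 10 ** (-db / 10)

# A 0.5dB disagreement between two calibrated testers is roughly an
# 11% difference in measured power - far from negligible on a link
# budget that may only allow a couple of dB end to end.
```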

All the manufacturers are producing EF-compliant testers, so soon you won't need to worry about inaccurate readings due to these high-order modes; but for now there are some workarounds.


Multimode launch cables allow the signal to achieve modal equilibrium, but using one does not make test equipment EF-compliant per the IEC 61280-4-1 standard.
Multimode launch cables are used to reveal the insertion loss and reflectance of the near-end connection to the link under OTDR test. They also reduce the impact of possible fiber anomalies near the light source on the test.

If the fiber is overfilled, high-order mode power loss can significantly affect measurement results. Fiber mandrels act as "low-pass mode filters" that eliminate power in the high-order modes: a mandrel strips out the loosely coupled modes generated by an overfilled light source while passing the tightly coupled modes on with little or no attenuation. This solution does not make test equipment EF-compliant.


Mode-conditioning patch cords reduce the impact of differential mode delay on transmission reliability in Gigabit Ethernet applications such as 1000Base-LX, and properly launch the VCSEL laser light along a multimode fiber. This solution does not make test equipment EF-compliant either.

Thursday, May 01, 2014

Nick McKeown talking Software Defined Networks at the IET this week

I went to the Appleton Lecture at the IET (my institute) this week; here it is as a webcast and well worth watching. The first half is all about the history of packet-switched networks but the meat of it is the second half where he talks about software-defined networks.

From Wikipedia;
Software-defined networking (SDN) is an approach to computer networking which evolved from work done at UC Berkeley and Stanford University around 2008. SDN allows network administrators to manage network services through abstraction of lower-level functionality. This is done by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane). The inventors and vendors of these systems claim that this simplifies networking. SDN requires some method for the control plane to communicate with the data plane. One such mechanism, OpenFlow, is often misunderstood to be equivalent to SDN, but other mechanisms could also fit into the concept. The Open Networking Foundation was founded to promote SDN and OpenFlow.

Appleton Lecture 2014 - Software Defined Networks and the Maturing of the Internet
Nick McKeown
From: IET Appleton Lecture 2014, 30 April 2014, London


Thursday, January 23, 2014

Friends don't let friends use stock firmware in their routers

Over the years the security flaws that come as standard with £50 plastic-box routers have been numerous. That 'free' router that came from your ISP probably suffers from one of these;
  1. UPnP enabled by default
  2. PING on the WAN side enabled
  3. Port 32764 left open
That last one is very serious as it allows a remote attacker to make a query of the router and dump out lots of diagnostic and configuration information. That may be of no consequence but it does allow a hacker to gain knowledge concerning your network and work on other attacks. The problem bedevils Linksys and Cisco models and SlashDot have a good write-up.

In a very real sense your router is the gateway between your network and the wild-west that is the public internet. If you can't even trust the little hardware device that sits in the cupboard under the stairs what can you do? Well, use an open source firmware in your router - Tomato is very user friendly and DD-WRT is very powerful. There are numerous others and since the source code is open it is regularly examined by the community that develops it and so many eyes spot any nasties (malicious or just bad programming) in the code.

I grabbed a couple of Buffalo models from eBay for when my eldest two went away to University and I wouldn't dream of letting my home network be based around a closed-source router.

Saturday, December 28, 2013

DD-WRT and open source router firmware - New Engineer's Bench podcast

Phil & Tim Taylor go over some of the features of the DD-WRT router firmware and how they can be used to secure a home network.


Find it on iTunes, vanilla RSS, YouTube or the show notes website.

Sunday, September 01, 2013

Using a DD-WRT router to NAT between two wireless segments

I've mentioned DD-WRT firmware in the past - it's an open-source replacement firmware for lots of cheap domestic internet routers. If the stock firmware on your router isn't doing it for you or you just want to see what all the fuss is about it is a superb way to make your £50 beige plastic router really sing; enterprise level network control for not much effort. It can terminate VPNs, do QOS and lots of the things you'd normally expect from a Cisco business class device.
Not all routers can take a different firmware image, but if yours is based on the Broadcom 54G chipset (an awful lot are) then you're away to the races; otherwise it's £15 on eBay!

Now then, my two eldest boys are away to university this month and it turns out that one of them is going to live in a student house that only has WiFi - I intended that they would both take DD-WRT routers with them to isolate their little dorm-room networks from IT ne'er-do-wells (NAT - Network Address Translation, the kind you get with a router, is an excellent defence against port-scanners). BUT, without a wired connection to place on the WAN side of the router how do you isolate and provide both wired and wireless connections behind the router's firewall? My first thought was to buy one of those "connect your Sky+ box Ethernet to your WiFi" adapters. It would turn the insecure WiFi into a wired connection that would sit on the WAN side of the router.
BUT, it's one more thing to go wrong and I was sure that DD-WRT could do it with a bit of tinkering. I looked at a few of the guides online and they were very convoluted with warnings about obscure settings causing trouble and so I decided to figure it out from scratch. It went surprisingly well and now I have a Linksys router that can attach to an existing WiFi access point and then NAT that connection through to another WiFi segment as well as the wired RJ45 links.
So, couple of things to point out.
  • My home WiFi's SSID is thorpedale4 and the IP range is 10.100.100.x (.8 is the router)
  • I wanted all the hosts on the other side of the Linksys to be on a 192.168.1.x network
First up - I set the Linksys to not be an Access Point but to be a client wireless device (taking baby steps; I just wanted to make sure I could attach it to the house WiFi).


This is done under wireless>basic settings>wireless mode: set it to client, then go to wireless security and make sure you've entered the necessary settings (WPA key etc.).
Reboot the router and check it is connecting to the external WiFi - see above. After this make sure you can get out to the internet from a wired connection on the Linksys. At this point the Linksys will be passing back all protocols to the main router and so you'll find the laptop is on the same IP range as the main network and there is no link-isolation (no firewall between the two networks) - we're not there yet!

Next, set the wireless>basic settings>wireless mode to repeater and add in a second virtual wireless interface (this will be your new wireless segment);


Then set up the security - again, the first is for the wireless you're attaching to;


BUT, the second is for the new network you're creating. As the router is now in repeater mode the new wireless segment is on a separate IP subnet (found in the setup>basic settings tab) and by default on the 192.168.1.x segment. The same applies to the wired connections on the Linksys - job done!

Attaching to the new network is as you'd expect;


and looking at the network details shows we're not on the house's 10.100.100.x network;

In fact, trying to reach the new network from the "outer" network fails;


As far as I can tell there is only one downside to this method - speed; the 54G wireless is now only running around 22Mbits/sec on both segments and that's no surprise as the Linksys is having to hold up two 802.11 links (different frequencies) using only one radio.
BUT, I have a router that can happily attach to a potentially insecure wireless network and produce a new wireless segment as well as wired Ethernet with the SPI (stateful packet inspection) firewall in the way. I paid around a tenner for the router!

Saturday, March 16, 2013

PING goes further than you think

Do you ever need a quick and free method of monitoring the uptime of a network server? There are lots of paid-for bits of software and online services, but if you have a spare Windows box this little DOS command (which you can save as a .BAT file for quick deployment) does a superb job;

cmd.exe /v:on /c "FOR /L %i in (1,0,2) do @ping -n 1 10.100.100.241  | find "Request timed out">NUL && (echo !date! !time! >> PingFail.txt) & ping -n 2 127.0.0.1>NUL"

Make sure that if you cut'n'paste it you edit out any inserted line-breaks.

Inside our FOR loop is where we really get to the meat. We've basically got five steps:

  1. First we see @ping -n 1 10.100.100.241 The @ symbol says to hide the echo of the command to the screen. The switch (-n 1) says to only ping the IP once. And of course 10.100.100.241 is the address we want to ping (at home it's my media machine)
  2. Next we pipe the results of our ping into the FIND command and search for "Request timed out" to see if the ping failed. The last part of that >NUL says to dump the output from this command into NUL, because we don't really need to see it.
  3. Now we get fancy. The && says to only run this command if the previous command succeeded. In other words, if our FIND command finds the text, which means our ping failed, then we run this command. We've enclosed the command in parentheses to contain it as a single command, and we need the "cmd.exe /v:on /c" at the beginning to allow delayed environment variable expansion so that the time & date change on each iteration - that's why %date% and %time% become !date! and !time!.
  4. That command redirects our output to a file called PingFail.txt. We use the >> operator to append each new entry rather than overwrite with just >.
  5. And finally we're on to the last step. The single & says to run the next command no matter what has already happened. This command simply pings localhost with (-n 2), which gives us a one-second delay: the first ping happens immediately and the second after one second. This slows down our original ping back in step 1, which would otherwise fire off like a machine gun as fast as the FOR loop can go. Lastly, we're redirecting the output with >NUL because we don't care to see it.
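If you'd rather not be tied to cmd.exe, the steps above translate readily. Here's a hedged sketch in Python - the host, log filename and one-second pacing mirror the batch version, and the platform check is needed because ping's count flag differs between Windows (-n) and Unix (-c):

```python
import datetime
import subprocess
import sys
import time

def ping_once(host: str) -> bool:
    """Send a single ICMP echo via the system ping; True on a reply."""
    count_flag = "-n" if sys.platform.startswith("win") else "-c"
    result = subprocess.run(["ping", count_flag, "1", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

def failure_line(host: str, now=None) -> str:
    """The timestamped entry we append to the log on each failed ping."""
    now = now or datetime.datetime.now()
    return f"{now:%Y-%m-%d %H:%M:%S} ping to {host} failed"

def monitor(host: str, cycles: int, logfile: str = "PingFail.txt") -> None:
    """Ping `host` once a second for `cycles` iterations, logging failures."""
    for _ in range(cycles):
        if not ping_once(host):
            with open(logfile, "a") as log:
                log.write(failure_line(host) + "\n")
        time.sleep(1)
```

Run something like monitor("10.100.100.241", 3600) for an hour of once-a-second monitoring; unlike the batch one-liner you get to choose when it stops.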

Friday, December 14, 2012

TCP & Networking, part 2; the protocols

I continue my conversation with Hugh going over some of the lower-level protocols that are used in IP networks. Find it on iTunes, vanilla RSS, YouTube or the show notes website.

Thursday, December 13, 2012

The Engineer's Bench podcast - TCP & Networking 101

Gone are the days when every cable carried a synchronous video stream. Contemporary engineering staff have to be aware of packetized networks and how they impact the modern facility. This part 1 (of a two-parter) covers the fundamentals of the protocols and practices that drive all internet-derived networks. Find it on iTunes, vanilla RSS, YouTube or the show notes website.

Saturday, December 01, 2012

Multicast addresses in IP

I thought I knew TCP & UDP/IP but I was reminded this week about the 224.0.0.0/4 multicast subnet. If you're ever in a position where you need to identify a device's IP address (even on a different subnet, but the same LAN segment) you can PING 224.0.0.1 and everything on the segment will respond to the PING (firewall settings permitting).
 
So, if I set my machine's IP address to 192.168.1.220 on a 10.100.100.x network and then PING the multicast address;
You can see that all the machines on the 10.100.100.0/24 network respond.
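Python's standard library makes it easy to check which addresses live in that reserved block - a quick sketch using the ipaddress module:

```python
import ipaddress

# 224.0.0.0/4 is the whole IPv4 multicast range; 224.0.0.1 is the
# "all hosts on this segment" group that everything should answer on.
MULTICAST = ipaddress.ip_network("224.0.0.0/4")

def is_multicast(addr: str) -> bool:
    """True if `addr` falls inside the 224.0.0.0/4 multicast block."""
    return ipaddress.ip_address(addr) in MULTICAST
```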

This comes in very useful with Amulet DXiP cards which you configure over a web interface. Our demo kit came back from a customer who had forgotten what they had hard-set the cards' IP addresses to and this technique was a life-saver.

Thanks for reminding me of this Graham and Don Poves!


Friday, June 15, 2012

Secure DNS - what's a home-user to do?!

I find DNS a very interesting subject (how often do you hear that?!). When I was doing my degree in the mid-80s there was no such thing and you routinely updated your hosts file every few weeks from a master file stored at Sheffield University's Computer Science department. However, my faculty was very IP-aware (even then) and so we were running an early BIND server by the time I graduated, so I was at least aware of DNS before it became a big deal on the Internet.

DNS is an inherently insecure protocol for the following reasons;
  • It runs over UDP/IP and so doesn't require the 3-way TCP handshake - it's easy to spoof IPs
  • It's unencrypted
  • It doesn't require any kind of authentication and so man-in-the-middle attacks are possible
  • Problems with the protocol itself (i.e. independent of implementation) allow things like DNS cache poisoning (read up about the Kaminsky vulnerability from a couple of years ago).
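To see just how little protects a DNS transaction, here's a sketch that builds a minimal A-record query by hand (per the RFC 1035 wire format). The only thing a resolver checks when matching a reply is the 16-bit transaction ID plus the UDP source port - which is exactly what the Kaminsky attack exploits:

```python
import struct

def build_dns_query(name: str, txid: int = 0x1234) -> bytes:
    """A minimal DNS query for an A record, as sent over UDP port 53.
    Header: ID, flags (RD bit set), 1 question, 0 answer/authority/additional."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME is a run of length-prefixed labels ending in a zero byte
    qname = b"".join(bytes([len(part)]) + part.encode("ascii")
                     for part in name.split(".")) + b"\x00"
    qtype_qclass = struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + qname + qtype_qclass
```

There is no signature, no encryption and no handshake in that packet: anyone who can guess (or race) the transaction ID can answer it.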
I've used OpenDNS for several years and it's an excellent service that offers so much more than my ISP's DNS servers. Those guys have recently launched DNSCrypt, a secure client for Mac or Windows that allows DNS look-ups that avoid all the problems above.

Friday, March 09, 2012

Speeding up HTTP

The big overhead with protocols that run on top of TCP/IP is the number of connections they open. A modern web page has many different kinds of assets (HTML, scripts, Java, GIFs, JPEGs, etc.) from many places (the primary domain, ad-servers, Google+ buttons, etc.) such that when you load the front page of facebook.com you may well have opened and closed a thousand TCP connections to a dozen servers. It's amazing that it works at all, especially when you consider that each connection not only had to do the three-way TCP handshake but also had to run TCP flow control, starting slowly and ramping up the packet rate until packets started to drop and then backing off. When Mr Berners-Lee wrote his original http server in the early nineties there is no way he could have anticipated how the web would grow.

It seems there are several approaches to optimising this - for the past ten years there have been various attempts to re-define TCP; MUX, SMUX, and SST protocols were valiant efforts that died on the vine because they essentially break how the infrastructure works. Whatever comes after http has to work over all the same IP v.4 routers, switches and proxies. In the last year I've become aware of two projects that work and don't require funky routers etc.
  1. Amazon Silk - this is embodied in the browser that comes on the colour Kindle. Essentially it is a mega-proxy that not only collects all the assets for the page but re-renders pictures etc. for the smaller screen and sends the whole lot in a stream. So one connection allows the whole pre-rendered page to arrive with the assets from Double-Click and Google (and any other third-party elements in the page) pre-collected for you by Amazon. It runs on their EC2 platform and does depend on them providing the service.
  2. SPDY is a Google-sponsored project that doesn't replace http but optimises it. By employing pipelining (i.e. keeping a TCP connection open for all the assets from one domain), compressing headers and marking those headers that don't change so they don't need to be re-sent it speeds up the browser by around three-fold. Further speed increases come if the web-server is able to collect the third-party assets and deliver them over the same pipeline.
As ever Steve Gibson has covered these very comprehensively in Security Now! - SN320 for Silk and SN343 for SPDY.

Thursday, November 10, 2011

Remote control options

If (like me) you find yourself as the default tech support provider for friends and family you've no doubt wondered about remote desktop software - VNC, RDP, Apple remote desktop, or any of the paid-for managed services (Go To Assist, LogMeIn etc).

I think there are several things to bear in mind;
  1. NAT routers in the way? If you're merely using remote desktop to go between machines on the same LAN then this isn't an issue but if you have to take control of your Mum's laptop and you're both behind routers then you either have to have made a hole in her's or be using a protocol that supports NAT translation.
  2. IP address - again, the person you're trying to reach may well be on a dynamically assigned IP address.
  3. Bitmap vs remote GUI rendering; VNC sends a bitmap (admittedly compressed) and so may be sluggish, whereas Windows RDP or Apple Remote Desktop send GUI primitives which render at the remote end.
  4. What combination of OSes are you using? Running Windows but supporting someone on a Mac? The remote desktop client built into OS-X since Tiger falls back to VNC if the remote machine isn't a Mac - nice touch.
So - in the case of my father-in-law's Windows XP desktop machine I use VNC every time. This is because I don't know if he's going to call me during the working day (when I'm using an OS-X laptop) or in the evening when I'm likely on a Windows 7 or XP desktop. Since his machine is fixed I had the liberty of setting up a DynDNS account on his router (so I hit a username.dyndns.org address rather than trying to discover his internet-facing IP address) and I opened a hole in his router's firewall (so traffic on TCP port 5900 gets mapped through to his PC). With all that in place I know I can grab control of his desktop using TightVNC (my favourite VNC client) under Windows or the built-in remote desktop of Snow Leopard;


On the other hand my Mum has a laptop which may or may not be at her house. Since she is running Windows 7 and I can always get to a Win7 machine she Instant Messages me with a Windows RDP support request and after a bit of typing in confirming codes it works well without having to worry about IP addresses or NAT traversal.

That leaves the paid-for server-based systems like Log Me In and Go To Assist which require no software installed (it's done via a quick Java download) and take care of NAT traversal etc.

So - you pays your money, you takes your choice. I prefer VNC because it's open and works across OSes, though it does require a bit of work to send it across the public internet. After that, Windows RDP is fine if you have contemporary Windows boxes. I suspect at some point I'll sign up to Go To Assist and pay, as it is very convenient and works entirely well across networks and OSes.


VNC connected to my home Windows 7 media machine, running inside a Windows 7 virtual machine on my Macbook Pro under OS-X

Friday, November 04, 2011

TCP/IP Congestion Avoidance

TCP/IP is a jolly clever protocol that now forms the bulk of the traffic that runs across the Internet. Given that routers and gateways along a packet's route are entirely at liberty to drop packets without informing either the sender or the recipient (it's up to the client/server to figure out packet sequence and whether any packets were lost), there is a very clever way that the IP stack in your computer does congestion avoidance.
So when TCP establishes a connection it goes through slow-start. In essence the stack has to "sneak up" on the transmission speed the link can sustain: it ramps up until packets start being lost, then backs off. The initial condition is that the stack can send two packets without getting a confirmation. For every confirmed packet the stack can grow the congestion window by one segment, so that two packets can become three and so on. After that there are two commonly used strategies, "TCP Tahoe" and "TCP Reno". On packet loss Tahoe reduces the congestion window to one MTU ("Maximum Transmission Unit" - typically a 1500-byte packet in Windows) and then goes back to slow-start. Reno halves the congestion window and so backs off more gently; if the link recovers quickly enough that makes it a better strategy than Tahoe.
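The two strategies are easy to compare with a toy state machine. This sketch models one round trip at a time in whole segments - a deliberate simplification that leaves out fast retransmit and fast recovery:

```python
def next_cwnd(cwnd: int, ssthresh: int, loss: bool, variant: str = "reno"):
    """Return (new_cwnd, new_ssthresh) after one round trip."""
    if loss:
        new_ssthresh = max(cwnd // 2, 1)
        if variant == "tahoe":
            return 1, new_ssthresh          # Tahoe: back to one segment
        return new_ssthresh, new_ssthresh   # Reno: halve and carry on
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh), ssthresh  # slow-start: exponential
    return cwnd + 1, ssthresh                     # congestion avoidance: linear
```

Iterate it with an occasional loss and you can watch the classic sawtooth: Reno's teeth start at half the previous window while Tahoe's start back at one.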

Anyway - as ever the Wikipedia article is very comprehensive;
http://en.wikipedia.org/wiki/TCP_congestion_avoidance_algorithm

Friday, September 09, 2011

Security and the Diginotar debacle


You might have been following the trouble that the Dutch SSL-certificate issuing firm Diginotar have been suffering recently. It transpires that Iranian hackers have got into their system and have spent several months issuing themselves wildcard certs for well known domains, most notably *.google.com - it essentially means these ne'er-do-wells can sign certificates that look like they have come from Google and your browser would be none the wiser. In fact it's not that severe unless you've been the victim of another attack;
  • Man-in-the-middle attack - you might be in a coffee shop where someone has managed to poison the ARP table in the router and inserted themselves into your wireless comms. If they served up the fraudulent cert they could make any domain (especially their own server) look like one you were securely connected to.
  • DNS-poisoning attack - as highlighted by Dan Kaminsky a couple of years ago it is possible for elderly versions of BIND and more contemporary versions of IIS to incorrectly serve up DNS look-ups. Once this is in place, a fraudulent cert on the same server would have you believing you had a secure connection.
  • Corporate decrypting proxies; many corporations install their own certificate on all client machines and essentially do a man-in-the-middle SSL intercept. Your traffic to Amazon.com is encrypted, but it goes via the proxy where it is momentarily decrypted for your boss to look at! If a corporate proxy was compromised dodgy SSL certificates could have you believing you had an encrypted connection to Amazon.
All of this raises issues with SSL - when I first started using an SSL browser (Netscape Navigator v.2, IIRC, in '95!) there were around seven or eight trusted issuing CAs. Now there are hundreds (including the Hong Kong Post Office!) and it comes as no surprise that some of them get compromised sometimes. What I don't understand is why browsers don't keep a record of the CA associated with each domain and inform the user when they see a change (particularly if the old cert still had time to run). There is a plugin I use for Firefox called "Certificate Patrol" that does just that and it's easy to use and unobtrusive.
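That trust-on-first-use idea fits in a few lines. A sketch of the principle (using a SHA-256 hash of the DER-encoded cert as the fingerprint is my assumption, not necessarily what the plugin actually stores):

```python
import hashlib

class CertWatch:
    """Remember the first certificate seen per domain; flag any change."""
    def __init__(self):
        self.seen = {}

    def check(self, domain: str, cert_der: bytes) -> bool:
        """True if this cert matches the one on record (or is the first seen)."""
        fingerprint = hashlib.sha256(cert_der).hexdigest()
        previous = self.seen.setdefault(domain, fingerprint)
        return previous == fingerprint
```

A Diginotar-style fraudulent *.google.com cert would have a different fingerprint from the genuine one the browser saw last week, and check() would return False.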
Now then - the whole Diginotar story started three months ago and they didn't spill the beans until last week; security is never served by secrecy. Also, it took Apple far too long to patch Safari. I think if you're concerned about network security then avoid Safari on OS-X.

Friday, August 26, 2011

UPNP has always been a bad idea!

UPNP is a protocol that allows an application to open up ports on a router so that incoming packets from the Internet get to the correct IP address on the LAN. It's typically used to allow the XBox360 to set up open ports through your router for multi-player gaming. If both XBoxes are behind NAT routers there is no way that unsolicited traffic from one can make it to the other (hey, I never wanted your bullets to hit me!). Skype suffers likewise if both callers are behind NAT routers (i.e. in most cases; who has an internet-facing IP address on their machine nowadays?) - details here. More recent versions of Skype will make use of UPNP if it's on the router.
You won't be surprised to learn that it's a Microsoft technology and I've always encouraged people to disable it on their routers. Any piece of malware inside your network can open ports and invite any other nasties in. In the case of XBox there are about four ports you need to open up for the Live! service to work. Anyhow - it turns out that Linksys routers have a bug that allows UPNP activation on the WAN side - that's right, with the correctly formatted packets you can open ports through a Linksys router from the Internet. Using something like UPNP Port Mapper will allow you to scan Internet IP addresses and open ports on those routers.
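Part of the problem is that the discovery half of UPNP is plain text over UDP with no authentication whatsoever. This sketch builds the standard SSDP M-SEARCH datagram that any process on your LAN can fire at the router to find it (the follow-up SOAP call that actually opens a port is omitted):

```python
def build_msearch(search_target: str = "upnp:rootdevice", mx: int = 2) -> bytes:
    """SSDP discovery datagram, normally sent to 239.255.255.250:1900.
    Any device whose type matches `search_target` replies with its location."""
    return ("M-SEARCH * HTTP/1.1\r\n"
            "HOST: 239.255.255.250:1900\r\n"
            'MAN: "ssdp:discover"\r\n'
            f"MX: {mx}\r\n"
            f"ST: {search_target}\r\n"
            "\r\n").encode("ascii")
```

No credentials anywhere in that exchange - which is exactly why a piece of malware on your LAN (or, with the Linksys bug, on the WAN) can start mapping ports.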

The title link is to the article on The H.

Thursday, February 24, 2011

Ever need to slow down ethernet?

I've had a few occasions when I've had to force gigabit down to 100BaseT or even 100 down to 10BaseT. My preferred method is to force the NIC down to the appropriate speed but if you aren't using Windows (OS-X, Linux or an embedded device) then a hardware solution is needed.


  • Distance - 100BaseT only goes 100m over cat5e but 10BaseT goes 300m; If you find yourself in that situation then an old 10BaseT hub at the far end does the job.
  • Equipment reports 100BaseT but is only reliable at 10BaseT; my Squeezebox network MP3 player is running a hacked OS and works a lot more reliably at 10BaseT. I achieved this by swapping the green/white and orange cores in the network cable. This degrades the common-mode rejection performance of the cable and means the ethernet switch ramps the circuit down to 10BaseT.
  • Gigabit too fast? Just make off a cable with the blue and brown pairs excluded. Gigabit needs all four pairs and if the switch only sees the Green and Orange pairs it will assume 100BaseT.

Thursday, November 25, 2010

Video and high-speed networks - article in Broadcast Engineering Magazine


What an up-market magazine Broadcast Engineering is! Well, when they publish my stuff.
You can snag a PDF of the print version from my DropBox; BroadcastEngineering_Article_Nov2010.pdf

Friday, May 22, 2009

Leaf Networks - better than Hamachi


I've written about Hamachi as a lightweight VPN solution in the past and for a year or more I used it every day to access files at home and provide a way in to the machines of friends and relatives I do tech support for. Not having to worry about opening ports on routers etc. is fantastic, but recently Hamachi got unreliable and now I can't make it work for more than a few weeks without upgrading (due to their new licence), which is a pain as suddenly you can't VNC to the machine you need to upgrade!
Anyhow - I've been playing with Leaf for a few days and it seems a much more solid solution. It also has the advantage of having routing built in, so you can use it to expose other machines on the target network over the VPN tunnel. As well as allowing LAN play across the internet for XBox games that don't support Windows Live, I imagine any embedded device you don't want to expose to the world over an open port would be usable from wherever you've got your laptop (insecure wireless in a coffee shop, for example).