Monday, December 18, 2006

Making waves in December

We placed orders for the next four waves we need to light up. Two are for the HOPI backbone (Chicago to DC to New York). One is for MAGPI via Philadelphia (where we're still working on accepting the suite) and another is for Nysernet via Cleveland. Level3 provisioned them almost immediately and is working on a few fiber patches in DC. In all cases, we have some final patches to run on the ends, like with any circuit turnup, but the Infinera backbone portion is more or less complete.

Caren worked up a checklist of items for the Philadelphia remote suite acceptance. Looks good and very thorough. Hope to have that checked off shortly. The MAGPI circuit is still a work in progress. Looks like there's a final patch between MAGPI's dark fiber provider and Level3 that didn't get on the radar screen until Friday. Hopefully we can make that happen in the near term.

The colocation at 710NLSD sounds like it's getting close. I believe the power was supposed to be installed today, though I don't have confirmation yet. If that happens, John Graham and I may head up to Chicago on Wednesday and Thursday to turn that up.

It's a good time of year to do some documentation catch-up. I've spent most of the day translating some of our install notes into more formal documentation.

Friday, December 15, 2006

Philly colo delivered

Word came down today that the Philadelphia colocation suite is ready for acceptance. In the interest of time, we're going to have Level3 run through a checklist of items and provide photographs of the suite to accept it remotely. Our hope is that for these smaller Ciena-only sites, we won't need to visit them all in advance of install. This is our first go around with that, so we'll see how well it works out. It will probably be a day or two (or three) while we go back and forth on that.

The multifunction panel that I2's customer circuits will terminate into is on-site and we'll get Level3 remote hands to install it shortly after the acceptance. It's needed ASAP to terminate two fiber cross-connects from MAGPI that should drop in the next several days. Once that's up, we can bring MAGPI up on the new network.

Wednesday, December 13, 2006

Ciena cabling, schedules and meetings

Even though I was hoping everyone would slip into a three-day coma after the I2MM last week, it turns out the world keeps churning and we still have a lot of things to keep plugging away at. No rest for the weary!

Half of our prayers were answered this week with the arrival of a new engineer at IU. John Graham comes to us by way of UKLight in Great Britain. Fortunately, he's very well versed in SONET, and specifically in Ciena equipment. We're very fortunate that John decided to join up and make the move to Bloomington.

The Ciena buildout is just getting into action, with our first few boxes shipped and installed over the past few weeks. John's arrival is perfect timing and he'll be running the Ciena show for the next several months. That frees up some cycles amongst the rest of us to keep our attention on the IP network transition and the Layer 1 issues. John has already been instrumental in pointing out that the CoreDirectors are quite complicated beasts to cable up and that we should pre-wire the front panels to a separate fiber tray. It's that kind of real-world experience that will make the Ciena network a success.

Lots of meetings coming up. Level3 is bringing a whole slew of folks to Indianapolis on Wednesday to review our final operating procedures. We'll get to meet the dedicated support engineer who will be answering the phone when we call during business hours for Level3 technical assistance. There's been a lot of informality so far in handling these first few wave turnups, so we'll finalize the procedures that will enable us to move even faster.

Thursday is a meeting to discuss implementation of the Layer 2 services. There's some good documentation floating around that we'll look over and discuss before presenting it to the community for input. I'm really interested to see what sorts of things people have planned for the Ciena network, as I'm sure is the rest of the team.

Next Monday is a meeting at Ciena to talk about control plane management and implementation. John Graham will be attending in person from IU, and Matt Davy and I will be joining via phone bridge.

The schedules are getting more focused as more information rolls in. Right now, it looks like Philadelphia and Pittsburgh installs will be the first or second week of January, though there are still a few niggling details that I expect to be sorted out.

Right now, we're hoping that MAGPI's 10G wave to New York from Philadelphia will be ready sometime next week. The suite sounds on track for delivery tomorrow and we have to get it accepted and run some fibers first. Hope to have that all taken care of by Friday.

It looks like the 710 North Lakeshore facility in Chicago is having a few issues with the overhead ladder rack not being ready in time, so the last I've heard, the availability of power has moved to next week. That's our top priority now since it's needed to move the Indiana Gigapop off the Indianapolis router. Once we free up Indy, we can ship it to Washington and the next phase of transition starts up.

That's all for now. Lots going on. Should have more to report shortly.

Monday, December 11, 2006

More Behind the Scenes from the I2MM

Laurie Kirchmeier graciously put together some more "behind the scenes" notes on the I2MM demonstration last week. He was manning the iHDTV video encoders and decoders behind the big black curtain. He's also supplied some photos of the equipment setup behind the stage:

Getting the live uncompressed HD video from the test camera set up in the colo space the day before the plenary address was a spine-tingling moment. We had been fighting with flaky iperf measurement tests for a couple of hours and had gone to the length of deploying a cakebox measurement box on the Juniper router located in the main telephone closet in the bowels of the convention center. When we actually ran a live video test, it turned out to be flawless with no packet loss and a wonderful view of the moving second hand of an analog alarm clock and a small computer monitor.

The morning of the event meant an early start for those of us in Chicago because of the time difference. The NYC film crew started setting up at 7:30am EST and by 8am we had color bars being transmitted over the iHDTV connection and audio testing began.

The first live video from the SteadyCam was impressive and the rehearsals looked and sounded great.

With an hour to go, Mike La Haye and I got set behind the stage and gave countdown times to the NYC side via IM. We were none too pleased when the camera kept being switched off, as the software needs to stabilize with a running video stream, and after the 20-minutes-to-go message was sent out things got a little tense. Whilst trying to reset one of the ethernet interfaces on the NYC side via Remote Desktop, I managed to reset the wrong interface, promptly lost my Remote Desktop connection to the machine, and had to request that the sender machine be rebooted. After reconnecting and re-establishing the live iHDTV stream, we watched nervously as the camera crew and Tim performed another rehearsal. As they were finishing, I realized that Doug was making the last point of his presentation and that, instead of 10 minutes to go, we were down to 1 or 2 minutes before he introduced Tim and we needed to go live. I furiously sent impassioned messages over IM to get them back to the starting position for the broadcast and tried to convey the urgency, while Mike was on his cell phone talking with Aliza, who was relaying the latest information to the crew and to Tim.

To raise our blood pressure further, the cameraman disconnected the camera, and then we listened as Doug started his introduction to the broadcast. Mike was now speaking very calmly and firmly into his cell phone, telling them to get ready and get the camera rolling again. I was watching the receiver console and the video monitor, and finally we got live video and audio. We both looked at each other and then realized that the HD projector needed to be flipped on to the screen. This required the projectionist to turn off the standard slides projector and flip up the lens cap (a piece of cardboard hung over the front of the lens) on the HD projector. As the switch was made and the HD video appeared on screen to the audience, I realized that we were not getting full 1080i video and that the sender and receiver nodes were not properly synched. This resulted in lines appearing across the screen, and the audio was slightly garbled. The solution was to reset the receiver software by stopping and re-starting the app. I reset the software and the video and audio glitches remained. Help, the reset normally just works! I reset the software a second time and still the video and audio were not perfect. By now Tim was 30 seconds into his presentation, and he and Doug made some comments back and forth. During one of Doug's comments, I tried one more reset and magically the video and audio stream problems disappeared. The audience was treated to the remaining 9 minutes of Tim's presentation in uncompressed 1080i video and 48KHz audio.
Mike and I were out of our chairs watching the projected video from the rear, and I remember nervously counting down the minutes in my brain until the presentation was over.

It was a great sense of relief to have the event over, and a tremendous sense of accomplishment to have pulled off an awesome demonstration over the brand new network.

Over the next couple of days I was happily surprised by the comments from the attendees and by how much of an impression the iHDTV presentation over the new Internet2 Network had made with everyone. It certainly was a great team effort.

Wednesday, December 06, 2006

Infinera 100G ethernet demo box

Went to the POP this morning to troubleshoot a problem on the 10th lambda that cropped up yesterday evening. It turns out it was a bad jumper on the run from the Level3 bulk panel in our New York suite to our Infinera metro equipment. Once we got that cleaned up, the circuit normalized itself. Our thanks go out to the Level3 team that ran with this (Eli, Nate, Ryan and probably some others behind the scenes).

Here's a shot of the box in our rack with 10 "greened-up" interfaces to New York. The technician in the background is Steve Armstrong, from Infinera, verifying connectivity on his laptop.

Behind the scenes of the iHDTV broadcast from New York

Tim Lance, President and Chairman of Nysernet, wrote up an excellent narrative of his experiences on the New York end of the live HD stream during yesterday's plenary. Many thanks to Tim for putting this together:

After the 32 AofA tour during Doug's opening plenary, I flew to Chicago and that evening heard comments about how stunning the video was, with the sound almost as good. But an operational collocation facility is about as friendly a broadcast environment as a running shower, so the clarity represented real magic by everyone involved with the network connection and the iHDTV software, with its built-in escapes (as were used at the start when the interlacing got out of synch). Internet2 folks had worked on the sound and video transmission the week leading up to the live tour, with conversations between them and the TV crew, while I worked on what should be said, with help from many.

The TV crew had an equally daunting task: masking the roar of air conditioning and equipment, and figuring out how to light the path of the tour without creating terrible shadowing or hot spots. We were in the colo space early on December 5 as cage walls came down, and we did dry runs for sound and video, plus two dress rehearsals, and made eight or nine different placements and settings of lights before the crew was satisfied. The moving picture required manipulating a very large and heavy SteadyCam, anchored to a flak-vest-like contraption on the cameraman, who backed down aisles almost too narrow for all the gear. The wide angle lens made the aisles seem wider than they were.

This was a really gutsy move by Internet2. They started planning this live demo just a week or two before the Member Meeting, and before the first segment of the network was up. Then they decided to do the first test of the power of the new network live and in public, with a tour of the most inhospitable (from a TV perspective) environment around. Perhaps it's not surprising that it all came off so well, given all the talented people working on it, but it surely demonstrated that we've entered a new era of network capabilities.

Tuesday, December 05, 2006

From one to nine in 31 minutes...

I started talking with Level3 at 14:08 about changing the framing on the CHIC-NEWY backbone OC-192 to 10GigE and bringing up the other 9 lambdas on that path. This IM exchange with Rob Vietzke says it all:

[14:39] NOC - Chris Robb: 9 lambdas up. took 31 minutes
[14:39] rpvietzke: cool
[14:40] NOC - Chris Robb: It includes the amount of time to get a tech to check it
[14:40] NOC - Chris Robb: If that weren't part of it, it would have taken about 5 minutes


I'm smiling. :-) <---Redundant emoticon used purely for illustrative purposes

The network is up, now let's tear it down!

Now that the network has been up and running for a few days, we're going to tear it down. :-)

As part of the Internet2 Member Meeting Plenary session, I2 CEO Doug Van Houweling and Nysernet CEO Tim Lance participated in an uncompressed HDTV video presentation from the New York POP. The 1.5Gbps HD stream traversed the new network. Steven Wallace took some photographs of the presentation and quickly stitched together a panoramic shot of the event.


Now that the network has run through its paces, we're going to tear down the OC-192 between New York and Chicago so it can be used for a demonstration of an experimental 100Gbps Infinera chassis between Chicago and New York. The OC-192 will be reframed as a 10GigE and the other 9 10GigEs on the path will be lit between cities.

I'm heading to the POP very shortly to run the jumpers for the Infinera box.

BTW, Welcome Slashdot Readers (TM). :-)

Monday, December 04, 2006

Would you like to see where everyone goes?

I'm happy to share with you a map we've been using internally to keep track of the entry and destination points of all the Internet2 connectors. This shows lambda routing for connections to both the IP and grooming networks.

I lovingly call it "the subway map" since it took a PhD in diagramology to keep the lines straight.

Sunday, December 03, 2006

The ugly face of circuit tracking, Part 2

I posted before the first install about some of the challenges in nomenclature, metadata storage, and mapping that come with having so many fiber patches and logical segments to keep track of in an optical network. I also talked about the impending system to keep track of it all. I'm happy to say that the system appears to be very well fleshed out. I saw an optical path illustrated on one of the other optical networks we manage, ILight, and it's simply stunning.

So I sat down tonight and spent an hour diagramming the path of the first backbone circuit between Chicago and New York for the database guys to get into the database. I suspect the different numbering schemes of the various fiber distribution panels (FDPs) are going to trip them up, but this is a good dry run.

So, what's good enough for our database programmers is good enough for you. Here's the diagram I sent (with some key bits of information blurred out to protect both their nakedness and true values), along with the explanation that described the data path. It's a bit more detail on just how challenging this sort of thing is to track at a large scale. Everyone I've talked with, aside from major carriers, does this sort of thing on spreadsheets spread across several engineers' laptops. Getting this information into a database represents a quantum leap in the way we handle our POP metadata. I'm quite proud to work with colleagues who can think this hard about a problem and come up with such an elegant solution. I would suggest that you click on the diagram and open it in another window while reading along:

Starting in Chicago....

The T640 connects to a Level3 single-RU panel in the position indicated. I've indicated the connector types on each end throughout. It terminates 24 simplex SMF fibers. They come prelabeled from the manufacturer in the following manner: the chassis is split in half, with 12 strands on the left and the next 12 on the right. The top row on the left starts with port 1 and increases to the right until it gets to the middle (so, 1-6). Then the numbers pick up under that (7-12). The right side is similar, with port 13 starting on the top left of that side and increasing to 18. Under that is 19 through 24. Level3's standard is to use ports on top of each other rather than the ports next to each other. That means, instead of seeing port assignments 1 and 2, you'll see 1 and 7.
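
To make that numbering concrete, here's a quick sketch in Python. This is just my own illustration for this post, not any tooling we actually run:

    def port_position(port: int) -> tuple[str, int, int]:
        """Map a bulk panel port (1-24) to (half, row, column), all 1-based."""
        if not 1 <= port <= 24:
            raise ValueError("port must be between 1 and 24")
        half = "left" if port <= 12 else "right"
        offset = (port - 1) % 12          # position within the half, 0-11
        row, col = offset // 6 + 1, offset % 6 + 1
        return half, row, col

    def vertical_partner(port: int) -> int:
        """Level3 pairs ports stacked vertically, so a top-row port N
        pairs with N+6 directly beneath it (e.g. 1 pairs with 7)."""
        _, row, _ = port_position(port)
        return port + 6 if row == 1 else port - 6

    assert port_position(13) == ("right", 1, 1)
    assert vertical_partner(1) == 7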

That bulk panel runs to some Level3 panel somewhere and into their transport cloud. Eventually it ends up on another bulk panel in New York at the ports indicated.

That New York bulk panel is at 111 8th Avenue, but we have our router at 32 Avenue of the Americas (32AoA). To get this signal between buildings, Internet2 purchased Infinera metro DWDM boxes that we manage and Level3 metro fiber between the buildings. The client signal from Level3's bulk panel (framed at OC-192) drops out of the bulk panel into a client port on the Infinera (the DLM in slot X of the chassis, the TAM card in slot X of the DLM, and the TOM in port X of the TAM). From there, it is muxed onto a DWDM network between 111 8th Avenue and 32AoA. So, on the diagram, the span from BMM to BMM on the two New York Infineras is a DWDM-capable path that can carry up to 40 circuits.

We don't really have a separate naming convention for those circuits, so I went with the same name as the Level3 circuit ID. It occurs to me that we could probably make up a better circuit ID for that portion of the circuit. Perhaps something like NEWY1118TH-NEWY32AOA-0001.
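
Just to spell out the idea (the site codes and zero-padded sequence number are my own guesses at a convention, nothing official):

    def metro_circuit_id(a_end: str, z_end: str, seq: int) -> str:
        """Hypothetical ID for the metro DWDM portion of a circuit."""
        return f"{a_end}-{z_end}-{seq:04d}"

    print(metro_circuit_id("NEWY1118TH", "NEWY32AOA", 1))
    # NEWY1118TH-NEWY32AOA-0001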

The Level3 metro fiber lands in the 111 8th Avenue suite on a Level3-provided multifunction panel (MFP). It has 4 "slots", each accepting a different type of fiber, copper, or coaxial termination module. It's essentially just a piece of sheet metal holding a bunch of couplers together. Our BMM connects to ports 1 and 2 on the first slot. The particular module in that slot has 6 SC fiber couplers in two rows, numbered from left to right (so, 1-3 on the first row and 4-6 on the second row).

From that MFP, Level3 has two dark fiber strands between 111 8th Ave and 32AoA. I didn't indicate it in the diagram, but the fiber in port 1 of the 111 8th Ave MFP is connected to strand 11 (the top strand) on the run between buildings. The circuit ID listed for the dark fiber is Level3's.

The Level3 dark fiber terminates onto a patch panel in the fiber meet-me room (FMMR) in 32AoA. From there, Internet2 is responsible for a 15m jumper over to Internet2's panel in the FMMR. I don't have the rack assignment for that in my notes, but I'll try and get it from Nysernet. The I2 panel in the FMMR terminates a 48-strand bulk cable that runs over to a Leviton FDP in our suite. The Leviton FDP spec sheet is attached. It's a 2RU box populated with four 12-strand modules. Best to just look at the photos here to get the numbering scheme:

http://www.pbase.com/i2net/image/70365530/original

The upper left is port 1, numbers increase to the right. 1-6 on the top row, 7-12 on the second row, 13-18 on the third row and 19-24 on the fourth row.
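
Both this panel and the MFP module described earlier number their couplers row-major (left to right, then down a row), so the conversion is the same everywhere. Another little sketch of my own for illustration:

    def rowmajor_port(row: int, col: int, cols_per_row: int) -> int:
        """Convert a 1-based (row, column) position to a port number."""
        return (row - 1) * cols_per_row + col

    assert rowmajor_port(4, 6, 6) == 24   # Leviton FDP: 4 rows of 6
    assert rowmajor_port(2, 1, 3) == 4    # MFP module: 2 rows of 3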

From the front of that panel, the composite metro DWDM signal goes into the Infinera BMM. From there, the Infinera client optics drop it to the T640 as a framed OC-192.

As you can tell, there's quite a bit of variation in the patch panels along the path and their numbering schemes. The database will need to accommodate that.

Here's the photo of our original whiteboard discussion of how these circuits are tracked:

http://photos1.blogger.com/blogger/4378/146/1600/IMG_0024.jpg

Recall that we were going to have one overall circuit ID for this entire end to end path. That hasn't really been specified yet in this discussion. I've been calling the circuit by the Level3 circuit ID name in all labels because we don't yet have the database ready to link disparate circuit IDs together into something someone would be able to figure out without the diagram I've drawn up.
I know the GLIF has been doing some work on this sort of thing, and I've heard a few folks talk about going through the motions of modeling portions of their network in Network Description Language. I honestly don't know what the underlying mechanism there is, but perhaps that's a good topic for another post from someone who can speak more definitively...
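
In the meantime, here's a back-of-the-napkin sketch of how the linkage could look. To be clear, this is my own illustration of the idea, not our actual database schema, and all the field names are made up:

    from dataclasses import dataclass, field

    @dataclass
    class Segment:
        """One piece of the path, tracked under whatever circuit ID
        its owner uses for it (Level3's, ours, etc.)."""
        owner: str
        circuit_id: str
        a_end: str   # e.g. a panel/port description at one end
        z_end: str   # and at the other

    @dataclass
    class EndToEndCircuit:
        """The overall circuit ID, stitched from ordered segments."""
        circuit_id: str
        segments: list[Segment] = field(default_factory=list)

        def trace(self) -> str:
            """Walk the path end to end for a human-readable summary."""
            hops = [self.segments[0].a_end] + [s.z_end for s in self.segments]
            return " -> ".join(hops)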

See you all in Chicago.

Saturday, December 02, 2006

DNS is up

It's somewhat troubling when minor events get us engineers excited. The simple act of having a name for an IP address is the modern equivalent of the Oklahoma Land Run. This is our name and this is how it's going to look for the next 7 years, dear readers:

clarus:~ luke$ traceroute www.mit.edu
traceroute to www.mit.edu (18.7.22.83), 64 hops max, 40 byte packets
1 129.79.9.254 (129.79.9.254) 0.855 ms 0.391 ms 0.265 ms
2 ibr.noc.iu.edu (149.166.2.52) 1.266 ms 1.338 ms 1.285 ms
3 149.165.254.229 (149.165.254.229) 1.321 ms 1.386 ms 1.255 ms
4 abilene-ul.indiana.gigapop.net (192.12.206.249) 5.579 ms 1.544 ms 1.490 ms
5 chinng-iplsng.abilene.ucaid.edu (198.32.8.76) 15.656 ms 5.183 ms 16.110 ms
6 ge-0-0-0-10.rtr.chic.net.internet2.edu (64.57.28.1) 23.546 ms 12.933 ms 5.502 ms
7 so-0-0-0-0.rtr.newy.net.internet2.edu (64.57.28.8) 33.855 ms 27.289 ms 27.285 ms
8 ge-0-1-0-10.nycmng.abilene.ucaid.edu (64.57.28.7) 27.282 ms 27.222 ms 27.265 ms
9 nox230gw1-po-9-1-nox-nox.nox.org (192.5.89.9) 32.445 ms 32.466 ms 32.438 ms
10 nox230gw1-peer-nox-mit-192-5-89-90.nox.org (192.5.89.90) 33.292 ms 33.278 ms 33.318 ms
11 w92-rtr-1-backbone.mit.edu (18.168.0.25) 32.759 ms 36.684 ms 32.633 ms
12 www.mit.edu (18.7.22.83) 33.024 ms 33.196 ms 33.066 ms

Please try to contain your excitement and save it for the RRD graphs. :-)

Friday, December 01, 2006

Take a deep breath...the next 7 months

If you have an inquisitive mind, you're probably wondering how we'll transition between the two networks while still maintaining the day-to-day production connectivity the Abilene connectors have grown to love so much. I'm happy to share a set of 9 diagrams that show the network through its various phases of evolution. I should add that this is the current plan and is subject to change as new dependencies crop up.

Phase 1 is complete! The router in Chicago is our old lab router from North Carolina and the router in New York is a loaner from Juniper:


Phase 2 will happen in January, once we transition networks off our Indianapolis router. That will free up a third "floating" router, since Indianapolis doesn't get a router in the new network. (We're all a little sad in Indy to see our T640 go away, but these sorts of things happen.) We'll have moved everyone off the old New York router, so we'll carry it up the 10 flights of stairs (or use the freight elevator, whichever is easier) to replace the temporary Juniper loaner router:

Phase 3 will see the completion of the next major round of Level3 routes in early March. By then, we'll have connectors transitioned off the old Chicago T640 and will have moved it to Atlanta:


Phase 4 will see the movement of the old Atlanta and Washington DC routers to Kansas City and Houston, respectively:


Phase 5 will see the turnup of the new Salt Lake City router after we move the old Kansas City router connectors off their node. This will pave the way to transition Denver networks:



Phase 6 will see the movement of the Houston node to Seattle:

Phase 7 brings a close to the turnup of the new backbone with the old Denver node moving to Los Angeles:



The final phase brings a transition of networks off the Los Angeles and Sunnyvale nodes and their de-installation:

Traffic moved to L3 Chicago-New York path

I just adjusted the backbone metrics to prefer the Level3 backbone circuit between Chicago and New York. The NOC's weathermap is now up to date with the new link, thanks to our programmers, and DNS should be in the system within minutes. With those two last pieces in place, I dropped the IGP metric on the new link to prefer it over the legacy Abilene circuit between Chicago and New York. Here's the first snapshot of the weathermap right after the move:


I'm also attaching a diagram of the network with the current set of backbone metrics. These sorts of diagrams will all end up on a static webpage and be updated as we move so you don't have to follow the blog. I hope to get that worked out over the weekend and live on Monday.
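
For anyone who hasn't spent time tuning an IGP, the mechanics are just the shortest-path rule. A toy illustration (the metric values below are invented for the example, not our real ones):

    # Two parallel Chicago-to-New York links; the IGP picks the path
    # with the lowest total metric, so lowering the new link's metric
    # below the legacy one shifts the traffic over.
    metrics = {"legacy Abilene CHIC-NEWY": 1000,   # values made up
               "new Level3 CHIC-NEWY": 900}
    preferred = min(metrics, key=metrics.get)
    print(f"traffic now forwards via: {preferred}")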

Nysernet completes transition to New York

Got a note from Bill Owens that their transition to the new network went smoothly this morning. They're passing all traffic in New York via the new router and have turned down their old peering.

Shipping hiccups, I2MM showfloor, and Nysernet IPv6

Rob Vietzke was in New York today finalizing some prep work for the Internet2 Member Meeting. We had hoped to have him run some fibers between the Level3 bulk panel toward Chicago and the Infinera, but the fibers were shipped to the wrong zip code. Apparently New York is so dense that it has exactly 5,768 zip codes. :-) That meant the fibers didn't make it in time and we'll need to do remote hands tomorrow.

Ciena had some similar problems with their 10GigE cards that they sent for the installers over a week ago. Apparently they were off by one number on the zip code and they ended up 10 blocks away from where they were supposed to be. Oddly enough, they were delivered and signed for at this mythical address. Fortunately, the carrier was able to locate them and get them into the installer technicians' hands by the end of the day.

Had a bit of back and forth with Christian Todorov and Linda Winkler about the member meeting network. I'm attaching a diagram for fun, though the VLAN assignment has already changed and I now have the AS number of the showfloor. The legacy Abilene router will be connected to the network via its existing MREN peering. The new Chicago router will have a layer2 cutthrough directly to the showfloor, so by default the new router will be the preferred path to and from the showfloor (based on AS-path length). I'm not sure which way we want to go with that, but I think folks want to keep the legacy router as the primary link. So, I'll pad the AS toward the I2MM and let MEDs take over. We redistributed our IGP metrics into BGP MEDs, and since both routers will be advertising the same set of BGP routes, the showfloor will have two paths to Abilene connectors. Since the IGP metric on the new CHIC-NEWY circuit is still higher than on the legacy Qwest Chicago to New York link, the only networks with a lower MED (and thus preferred via the new router) will be those homed directly to one of the two new T640s.
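
To spell out that tie-breaking logic, here's a simplified sketch of the relevant slice of BGP best-path selection (AS-path length first, then MED). The numbers are invented purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class Route:
        via: str
        as_path_len: int
        med: int   # carries our redistributed IGP metric

    def best(routes: list[Route]) -> Route:
        # Shorter AS path wins outright; only on a tie does the
        # lower MED get a say.
        return min(routes, key=lambda r: (r.as_path_len, r.med))

    # Unpadded, the new router's direct cutthrough wins on path length:
    print(best([Route("new CHIC T640", 1, 50),
                Route("legacy via MREN", 2, 20)]).via)   # new CHIC T640

    # Padding the new router's advertisements evens the lengths,
    # letting the legacy router's lower MED take over:
    print(best([Route("new CHIC T640", 2, 50),
                Route("legacy via MREN", 2, 20)]).via)   # legacy via MREN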

Speaking of networks that are homed to the new network, Nysernet is planning to complete their transition to the new network at 0630 EST tomorrow morning. I just traded IMs with Bill Owens this evening. IPv6 has already transitioned and he's seeing the Kame turtle. IPv6 multicast works as well. We sourced some >1Gbps multicast streams from New York this afternoon and all went smoothly.

BTW, I'm told that the new weather map will be ready tomorrow. Reverse and forward DNS will happen tomorrow or Saturday. Our systems engineering guys are bringing up a new tool that auto-generates all our DNS off our network topology database.