We’ve been extensive users of Juniper router/firewall products for years: the NetScreen/SSG line of systems, all the way from NetScreen 10s through SSG350s. As that line of products nears the end of its life-cycle, we are looking to learn and implement a new platform. The JunOS platform was the nearest cousin, as it shared the same hardware; you could convert an SSG320 to a J2320 with a simple software/sticker change. However, the earlier versions of JunOS didn’t sit well with me in how they “did” things. So we waited.
Around the JunOS v9/10/11 timeframe, Juniper rebuilt JunOS to use the same flow-based security architecture that I had grown to appreciate in ScreenOS. I purchased an SRX100 to do some testing on, and it looked like a better and better system. The latest v15 line has some pretty great features. So I’ve been watching the pricing on the SRX240, as that is roughly comparable to the SSG320s that we mostly deploy.
For one of our customers, who runs a ton of very low-bandwidth VPNs, adding another 1U hardware router that can handle at most 500 or 1000 concurrent VPNs seemed like a potentially losing battle in terms of rack space. As we already make extensive use of virtualization, the fact that Juniper provides a virtualized SRX was very intriguing. We could set up a couple of decently powered 1U Intel servers, each running 5-6 vSRX instances, with each vSRX supporting up to 800-1000 VPNs. That’s a decent win.
So, I wanted to try out the vSRX, but Juniper only provides container files pre-built for VMware or KVM. We run Xen exclusively; not XenServer, but bare Debian Xen. Hand-configuring bridges and xen.cfg files gives us the level of detail/control that makes for a very robust architecture.
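For reference, a Xen bridge on Debian is just a few lines in /etc/network/interfaces; this is a generic sketch with a placeholder NIC name, not our production config, and it assumes the bridge-utils package is installed:

# sketch: one bridge per security zone; eth1 is a placeholder NIC
auto xenbr1
iface xenbr1 inet manual
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0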
To that end, I figured there must be a way to run the vSRX, at least in full HVM mode, on Xen to test out its capabilities. So I downloaded the KVM container, which is a qcow2-format file, and went about installing and setting it up. Here are some of the particulars of how I did that.
For domU storage, we use LVM exclusively, so to convert the qcow2 image into a logical volume you have to use qemu-img. “QCOW” stands for QEMU Copy On Write; it is essentially a sparse image format. Regular LVM logical volumes are not sparse, so I had to interrogate the file to see how big of an LV I needed to create:
# qemu-img info junos-vsrx-vmdisk-15.1X49-D20.2.qcow2
image: junos-vsrx-vmdisk-15.1X49-D20.2.qcow2
file format: qcow2
virtual size: 16G (17179869184 bytes)
disk size: 2.7G
cluster_size: 65536
Format specific information:
compat: 0.10
So, a 16G LV it is:
lvcreate --size=17179869184B --name=srx1-disk vm2fast1
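If you don’t want to copy the byte count by hand, the size can be pulled straight out of the image metadata; a small sketch, assuming jq is installed:

# read virtual-size from qemu-img's JSON output and feed it to lvcreate
SIZE=$(qemu-img info --output=json junos-vsrx-vmdisk-15.1X49-D20.2.qcow2 | jq -r '.["virtual-size"]')
lvcreate --size="${SIZE}B" --name=srx1-disk vm2fast1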
Then write that qcow image into the lv:
qemu-img convert -O host_device junos-vsrx-vmdisk-15.1X49-D20.2.qcow2 /dev/vm2fast1/srx1-disk
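As an optional sanity check, qemu-img can compare the source image against what actually landed on the LV; it should report that the images are identical:

qemu-img compare -F raw junos-vsrx-vmdisk-15.1X49-D20.2.qcow2 /dev/vm2fast1/srx1-disk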
Now for the vm config file. I tried many different variations of PVHVM, PV NICs, virtio NICs, etc., but this is the only config that ever produced a usable system with one management interface (the first one is set as the default upon initial setup), one Untrust interface, and one DMZ interface:
name = "srx1"
builder = "hvm"
device_model_version = "qemu-xen"
vcpus = '2'
memory = '4096'
pool = 'Pool-CPU1'
cpu_weight = 384
xen_platform_pci=1
hap=1
nestedhvm=1

disk = [
'phy:/dev/vm2fast1/srx1-disk,xvda,w',
]

# Networking
#
vif = [
'bridge=xenbr0,vifname=srx1-t,mac=00:16:3e:FF:FF:00,model=e1000',
'bridge=xenbr1,vifname=srx1-dmz,mac=00:16:3e:FF:FF:01,model=e1000',
'bridge=xenbr2,vifname=srx1-ut,mac=00:16:3e:FF:FF:02,model=e1000',
]

vfb = [ "type=vnc,vncdisplay=3,vncpasswd=VNCsecret,keymap=en-us" ]

# Behaviour
#
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
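With that in place, the guest starts like any other domU. Assuming the config above is saved as /etc/xen/srx1.cfg (the filename is mine), and given that the vfb line puts the console on VNC display 3, i.e. TCP port 5903 on the dom0:

xl create /etc/xen/srx1.cfg
xl list srx1
vncviewer dom0-host:3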
I even made a screen recording via the VNC session of the full startup and then shutdown (via the JunOS CLI: request system power-off).
So, the interesting thing here is: JunOS is based on FreeBSD. In order to deliver it as a widely usable virtual machine on the major platforms, Juniper wrapped its FreeBSD/JunOS in a Linux layer (“Juniper Linux”) and runs JunOS as a virtual machine inside that. That is why they require you to enable nested HVM. Crazy.
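If you want to confirm your dom0 hardware is up to this before fighting with it, a couple of quick checks (a sketch):

# virt_caps should include hvm; nestedhvm additionally needs VT-x/AMD-V
xl info | grep virt_caps
grep -E -c 'vmx|svm' /proc/cpuinfo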
I have had some trouble across restarts getting the two ge-0/0 interfaces to stay visible to the underlying JunOS. I think I will definitely need to stand up a KVM-based host to do some more testing, so that the virtio-based interfaces can be fully paravirtualized.
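For that test, something along these lines would be the starting point on a KVM host; an untested sketch using libvirt’s virt-install, with placeholder bridge and path names:

# nested virt on the KVM side needs e.g. kvm_intel loaded with nested=1
virt-install --name srx1 --memory 4096 --vcpus 2 \
  --cpu host-passthrough \
  --import --disk path=/var/lib/libvirt/images/srx1.qcow2,format=qcow2,bus=virtio \
  --network bridge=br0,model=virtio \
  --network bridge=br1,model=virtio \
  --network bridge=br2,model=virtio \
  --graphics vnc --noautoconsole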
Juniper sells the licensing for the vSRX on a bandwidth and feature-set model. The base license gives you 10Mbps of bandwidth, which would definitely cover the 1000 tunnels our client would want to deploy on each vSRX. A perpetual base vSRX license runs about $1500, which is not bad. An SRX240 currently goes for about $2000-$2400; add in a support contract and the full 1U of rack space each one takes up, and the vSRX looks like a good deal.