Thursday, December 5, 2013

Encrypting Overlay Transport Virtualization traffic

This is a continuation from where I left off with multicast OTV [OTV over a multicast-core transport].

Now, using the same multicast OTV topology, I will demonstrate how we can encrypt the OTV traffic flow [this applies to the unicast traffic only].

The topology is shown below again for easier reference.




However, for a better understanding of the configurations mentioned below, I would request you to go through the earlier post referred to above.

Currently, with the way OTV is designed, the overlay interface can only be mapped to a main physical interface or one of its sub-interfaces [as of 5th December 2013].

So, with this in mind, I have used a 'crypto map' on the OTV join-interface to encrypt the OTV traffic.
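For reference, the OTV side of ED1 looks roughly like the sketch below; the multicast control/data groups and the extended VLAN are placeholders, and the actual values are in the earlier multicast OTV post. The point to note is that the overlay interface only references the join sub-interface, and it is that sub-interface which carries the crypto map:

interface Overlay150
 no ip address
 ! control-group / data-group shown with placeholder values
 otv control-group 239.1.1.1
 otv data-group 232.1.1.0/28
 ! this is the sub-interface that will carry the crypto map
 otv join-interface GigabitEthernet0/0/1.14
 ! extended VLAN / bridge-domain shown with a placeholder value
 service instance 11 ethernet
  encapsulation dot1q 11
  bridge-domain 11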

All the configurations that follow are related to crypto maps; the ACL on each ED matches the GRE-encapsulated OTV traffic [IP protocol 47] between the two join-interface addresses. Please refer to other blogs/sites to clarify any doubts related to the crypto configurations.

ED1:

crypto isakmp policy 14
 hash md5
 authentication pre-share
 group 5

crypto isakmp key cisco123 address 10.2.14.2

crypto ipsec transform-set ed1_ts esp-aes esp-sha-hmac

ip access-list extended ed1_acl
 permit gre host 10.1.14.2 host 10.2.14.2

crypto map ed1_map 14 ipsec-isakmp
 set peer 10.2.14.2
 set transform-set ed1_ts
 match address ed1_acl

interface GigabitEthernet0/0/1.14
 crypto map ed1_map

ED2:

crypto isakmp policy 14
 hash md5
 authentication pre-share
 group 5

crypto isakmp key cisco123 address 10.1.14.2

crypto ipsec transform-set ed2_ts esp-aes esp-sha-hmac

ip access-list extended ed2_acl
 permit gre host 10.2.14.2 host 10.1.14.2

crypto map ed2_map 14 ipsec-isakmp
 set peer 10.1.14.2
 set transform-set ed2_ts
 match address ed2_acl

interface GigabitEthernet0/0/0.14
 crypto map ed2_map
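Once unicast OTV traffic starts flowing between the join interfaces, the crypto state can be checked on either ED with the standard IPSec commands [commands only; the detailed session output is shown further below]:

ED1#show crypto isakmp sa        [phase-1 SA state]
ED1#show crypto ipsec sa         [per-SA encrypt/decrypt counters]
ED1#show crypto session detail   [consolidated view]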


The core router does not need any change whatsoever. Neither does the overlay interface need to know anything about IPSec.

Before sending traffic, clear the counters on the EDs [#clear counters].

Now we will send ping traffic from VM1 to VM2:

[root@vm-aries-cel ~]# ping 172.16.11.10 -c 10
PING 172.16.11.10 (172.16.11.10) 56(84) bytes of data.
64 bytes from 172.16.11.10: icmp_seq=1 ttl=64 time=1.19 ms
64 bytes from 172.16.11.10: icmp_seq=2 ttl=64 time=1.12 ms
64 bytes from 172.16.11.10: icmp_seq=3 ttl=64 time=1.01 ms
64 bytes from 172.16.11.10: icmp_seq=4 ttl=64 time=0.999 ms
64 bytes from 172.16.11.10: icmp_seq=5 ttl=64 time=0.818 ms
64 bytes from 172.16.11.10: icmp_seq=6 ttl=64 time=0.879 ms
64 bytes from 172.16.11.10: icmp_seq=7 ttl=64 time=0.879 ms
64 bytes from 172.16.11.10: icmp_seq=8 ttl=64 time=0.997 ms
64 bytes from 172.16.11.10: icmp_seq=9 ttl=64 time=0.897 ms

--- 172.16.11.10 ping statistics ---
10 packets transmitted, 9 received, 10% packet loss, time 9004ms
rtt min/avg/max/mdev = 0.818/0.978/1.197/0.118 ms, pipe 2
[root@vm-aries-cel ~]#


As observed, the loss of '1' packet is expected with IPSec, since traffic that arrives while the IKE/IPSec SAs are still being negotiated is dropped [kindly refer to IPSec blogs for greater detail].

Also, on the EDs you can verify that 10 packets have been encrypted and decrypted [I have verified this on ED2; however, one can do it on either ED]:

ED2#show crypto session detail
Crypto session current status

Code: C - IKE Configuration mode, D - Dead Peer Detection    
K - Keepalives, N - NAT-traversal, T - cTCP encapsulation    
X - IKE Extended Authentication, F - IKE Fragmentation
R - IKE Auto Reconnect

Interface: GigabitEthernet0/0/0.14
Uptime: 00:00:31
Session status: UP-ACTIVE    
Peer: 10.1.14.2 port 500 fvrf: (none) ivrf: (none)
      Phase1_id: 10.1.14.2
      Desc: (none)
  Session ID: 0
  IKEv1 SA: local 10.2.14.2/500 remote 10.1.14.2/500 Active
          Capabilities:(none) connid:1002 lifetime:23:59:28
  IPSEC FLOW: permit 47 host 10.2.14.2 host 10.1.14.2
        Active SAs: 2, origin: crypto map
        Inbound:  #pkts dec'ed 10 drop 0 life (KB/Sec) 4607998/3568
        Outbound: #pkts enc'ed 10 drop 0 life (KB/Sec) 4607998/3568

ED2#

ED2#show interfaces overlay 150 accounting
Overlay150
                Protocol    Pkts In   Chars In   Pkts Out  Chars Out
                   Other          2        128          3        192
                      IP         10       1020         10       1020
                    CLNS         19      16194          5       5771
ED2#

As observed from the IP counters, OTV is sending and receiving the ping traffic over the overlay [the CLNS counters correspond to the OTV IS-IS control plane].
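If the counters do not increment as expected, the OTV state itself is worth a look as well; the commands below are indicative for IOS XE and the exact set may vary slightly by release:

ED2#show otv              [overlay status and join interface]
ED2#show otv adjacency    [control-plane adjacency with the remote ED]
ED2#show otv route        [MAC routes learnt across the overlay]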

This ends the basic configuration guide which gives us an understanding of how to encrypt OTV traffic.

Just to add to this, I have updated the VRF-related blog post with details on bringing up VRF-aware crypto maps. The link for the same can be found below:
http://stayinginit.blogspot.in/2013/12/overlay-transport-virtualization-with.html

NOTE: All the above tests were done using the recently released IOS XE 3.11 release.

Hope you found this post informative.

5 comments:

  1. After encryption as you mentioned above, my OTV ARP entries disappeared and I cannot ping across OTV. Tried changing the MTU and MSS but that didn't help. Any suggestions will be highly appreciated.

    Replies
    1. You are in the right direction, so let me suggest that you try changing the 'MTU' under the Overlay interface you have configured (to a value lower than the default 1400 bytes).
      The above has to be done on both your edge devices.
      Also, you can try lowering the lsp-mtu under 'otv isis overlay' to a value lower than the default 1392 bytes.

      Both the above solutions should help in overcoming your problem.
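      For example, something along these lines (the values are only examples, and the exact 'otv isis' syntax may vary a little with the release):

      interface Overlay150
       mtu 1372
      !
      otv isis Overlay150
       lsp-mtu 1300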

  2. Got the same result. I don't believe this post is correct at all.

    Replies
    1. I wish you were right :-). With that said, since you too seem to be facing the same problem, please try out the above solution and let me know if it works.

  3. How would the same set of configurations extend to a dual-homed setup with IPSec crypto encryption?
