Category: General


In this post, I will show how to enable two-factor authentication for SSH connections coming from unknown networks. The end goal is to provide a stronger level of security for a subset of incoming connections, without complicating day-to-day operations from inside your own network. The solution relies on the PAM infrastructure.

Prerequisites

Time-based codes require synchronized clocks, so it is strongly recommended to delegate timekeeping to ntpd if that is not already in place.
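To see concretely why clock synchronization matters, here is a short Python sketch of the TOTP algorithm (RFC 6238) that these codes are based on; it is an illustration, not the module's actual code. The code changes every 30 seconds, so a skewed clock quickly produces codes the server will reject.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = unix_time // step                 # both sides must agree here,
    msg = struct.pack(">Q", counter)            # hence the need for ntpd
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59, 8 digits
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

Two machines whose clocks differ by more than the accepted window will simply never agree on the current code.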

The package that contains the PAM module for Google Authenticator is libpam-google-authenticator on the Debian family of distributions (available since Debian Jessie), and google-authenticator on CentOS 7. If no package exists for your distribution, you can download the sources and compile them.

Preparing the user accounts

For authentication to work, you must define a token by running the google-authenticator command for every account that will need to log in via SSH. Authentication will systematically fail otherwise.

A wizard will walk you through the setup of your personal token. Some versions (as in the present case) will display a QR code you can scan with Google's application.

Defining the access control list

We will now configure an access list to define the networks that are exempted from two-factor verification. The syntax of the corresponding entry in the PAM file is covered in the next section. The logic of the ACL is as follows: a user who is « accepted » is exempted from the second verification step.

Create the file /etc/security/access-gauth.conf (if you use the ACL for another module, replace gauth with the name of that module).

An entry in the file is structured as follows: <permission> : <user> : <origin>

The permission has two possible values: « + » (allow) and « - » (deny).

The user field may contain one or more users and/or groups, separated by spaces. Group names are prefixed with « @ ». Exceptions can be expressed with the EXCEPT keyword. The main difference in the exception list concerns groups, which are no longer designated by « @ » but by a pair of parentheses.

The origin field may contain the name of the user's originating terminal, the LOCAL keyword which designates any console physically attached to the machine, an IPv4/IPv6 address, an IPv4/IPv6 subnet in CIDR notation, or an IPv4 subnet with its mask in dotted-decimal notation. Here again, the EXCEPT keyword lets you exclude subsets.

A complex example line is given below:

+ : emustermann @staff EXCEPT jdoe (rookies) : 3fff:15d4:dead:beef::/64 192.168.15.0/24 EXCEPT 192.168.15.254 3fff:15d4:dead:beef::1337

At a minimum, your ACL file should look like this:

# List your trusted networks here
+ : ALL : 192.168.0.0/24
+ : ALL : 2001:xxxx:xxxx:xxxx::/64
+ : ALL : fe80::/64

# Always bypass for local authentication
+ : ALL : 127.0.0.0/8
+ : ALL : ::1
+ : ALL : LOCAL

# Deny everyone else
- : ALL : ALL
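To make the first-match logic explicit, here is a small Python sketch (an illustration of pam_access semantics, not its actual implementation) that evaluates a simplified ACL restricted to ALL users, CIDR origins and the LOCAL keyword:

```python
import ipaddress

def check_access(rules, user, origin):
    """First matching rule wins, mirroring pam_access semantics
    (simplified: only ALL users, CIDR origins and LOCAL are handled)."""
    ip = ipaddress.ip_address(origin) if origin != "LOCAL" else None
    for perm, users, origins in rules:
        if users != "ALL" and user not in users.split():
            continue
        for o in origins.split():
            if o == "ALL":
                return perm == "+"
            if o == "LOCAL":
                if ip is None:
                    return perm == "+"
            elif ip is not None and ip in ipaddress.ip_network(o):
                return perm == "+"
    return False

# The minimal ACL file above, expressed as (permission, users, origins)
rules = [("+", "ALL", "192.168.0.0/24"),
         ("+", "ALL", "127.0.0.0/8 ::1 LOCAL"),
         ("-", "ALL", "ALL")]
assert check_access(rules, "alice", "192.168.0.10") is True   # trusted LAN
assert check_access(rules, "root", "LOCAL") is True           # local console
assert check_access(rules, "alice", "203.0.113.5") is False   # unknown network
```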

PAM configuration

Now edit /etc/pam.d/sshd and insert the lines given below. Paste them right after the inclusion of the common authentication mechanism (« @include common-auth » on Debian), or after the last line whose first keyword is « auth ».

auth	[success=1 default=ignore]	pam_access.so	accessfile=/etc/security/access-gauth.conf
auth	required			pam_google_authenticator.so

The first line stops the authentication process before the Google Authenticator module runs if the user is accepted by the ACL (success=1), but does not reject the user if the list denies them (default=ignore).

The Google Authenticator module is marked « required » because a failure of this test must lead to a connection refusal, while passing the test is not, by itself, enough to accept the connection outright (which a « sufficient » keyword would allow).
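The combined behaviour of these two control flags can be summarized with this little Python sketch (a model of the logic, not PAM code):

```python
def auth_stack(acl_allows: bool, otp_valid: bool) -> bool:
    """Model of the two-line PAM stack (not real PAM code).

    pam_access with [success=1 default=ignore]: on success, jump over the
    next module; on failure, neither allow nor deny by itself.
    pam_google_authenticator as 'required': must succeed when reached.
    """
    if acl_allows:        # success=1: the OTP module is skipped entirely
        return True
    return otp_valid      # default=ignore fell through: the OTP decides

assert auth_stack(True, False) is True    # trusted network: no OTP asked
assert auth_stack(False, True) is True    # unknown network, correct OTP
assert auth_stack(False, False) is False  # unknown network, wrong OTP
```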

OpenSSH configuration

By default, OpenSSH is configured to authenticate users by password using the protocol's built-in method. To change this behaviour, set the following lines in your sshd_config file:

PasswordAuthentication no
ChallengeResponseAuthentication yes
UsePAM yes
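As an optional, stricter variant (an assumption on my part, not part of the setup above): OpenSSH 6.2 and later can require both a valid public key and the PAM challenge for every login, via the AuthenticationMethods directive:

```
# Require BOTH a public key and the challenge-response (OTP) step
AuthenticationMethods publickey,keyboard-interactive
```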

To avoid locking yourself out, it is strongly recommended to keep a valid private key at hand to get back onto the machine if anything goes wrong, or to have whitelisted your own machine's IP address ;).

Once you are ready, restart the SSH server on the remote machine.


(An English version of this post is available.)

On June 6th, 2015, I generated a new PGP key, whose fingerprint is D4B08488 03E7D7AF F9DB90E2 4EB88CD9 57312C28.

To this day I still use my historical key, the one whose fingerprint is E05CDFED D9B6DC33 6B23B510 27FFC627 78A363DF, but I plan to revoke it in about a year.

I have re-signed with my new key all the keys I had signed before.
To make my new key easier to authenticate, my two keys are cross-signed. You will benefit from the web of trust through my current key until you get around to signing the new one.

Here are the reasons why I decided to take the plunge:

My current key is old

It is 4 and a half years old, and I do not wish to use keys that are more than 6 years old, which is already a long lifetime for a cryptographic key.

The risk of using old keys is that, as time goes by, the chances increase that an adversary manages to recover the private part of your key.

I had wanted a stronger key for a while

My current key is a 2048-bit RSA key, and I had been meaning to generate a stronger one for some time.

I did not switch to elliptic-curve cryptography, because gnupg 2.1 is not yet widespread enough to be properly supported, OpenPGP cards do not support ECC keys, and I do not want to set up a PKCS#11 card configuration with gnupg.

I renewed my smart card

For optimal security, private keys should never be stored outside a smart card. Below is a photo of the hardware I have used until now (an ISO 7816 smart card and an ExpressCard/54 reader).

Nowadays, on the one hand, ExpressCard/54 slots are becoming increasingly rare on laptops, because of the weight and volume constraints of recent hardware lines. Keeping an ISO 7816 smart card becomes cumbersome, as I would have to carry an external card reader with my smart card at all times.

On the other hand, I use my smart card frequently both for everyday GPG operations and for SSH authentication, and I wanted to replace my hardware before falling victim to a wear-out failure of its internal memory.

Taking both points above into account, going back to a classic-format smart card was not an option, so I turned to a « USB key » form factor.

The « USB dongle » form factor is better

I chose the Gemalto USB shell token V2 reader (see picture below) for its light weight, compact dimensions and sturdiness, and because it is natively supported by a wide range of OSes, with extended APDU support (which makes configuration simpler).

Inside the dongle, I installed an OpenPGP card version 2.1, which my supplier sells pre-cut in mini-SIM format (if you want to equip yourself, feel free to check out his online shop).

Since Feitian ended its reseller agreement with Gooze, this is the best solution I have found. Incidentally, I noticed that Gooze has closed down.

The assembled result is quite clean, as the photo above shows. I added a small « GnuPG » logo between the chip and the casing so as not to confuse this dongle with another one I already use.

For paranoid people who would like to authenticate the source of this article, you can download this archive, which contains a text version of this article signed with both my PGP keys. Feel free to ask questions in the comments.

(A French version of this article is available.)

On July 6th, 2015, I generated a new PGP key, whose fingerprint is D4B08488 03E7D7AF F9DB90E2 4EB88CD9 57312C28.

As of now, I'm still using my historical key as my primary key (E05CDFED D9B6DC33 6B23B510 27FFC627 78A363DF), but I'm planning to revoke it in approximately a year.

All keys I have signed in the past have already been resigned with my new key. My two keys are cross-signed to confirm mutual ownership, and to keep a trust path, as long as my current key is not revoked.
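For reference, the cross-signing itself can be done with stock gpg. A sketch with placeholder identifiers (OLDKEY and NEWKEY are hypothetical; substitute the real key IDs or fingerprints):

```
# Certify the new key with the old one, then the old key with the new one.
gpg -u OLDKEY --sign-key NEWKEY
gpg -u NEWKEY --sign-key OLDKEY

# Publish the updated (cross-signed) public keys to a keyserver
gpg --send-keys OLDKEY NEWKEY
```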

Here are the main reasons why I decided to engage in such a mess:

My current main key is old

It's nearly 4 and a half years old, and I don't wish to use keys that are more than 6 years old (which is already a long life for crypto keys).

The longer a crypto key has been in use, the higher the risk that an attacker has managed to find your private key.

I’ve wanted to get a stronger key for a while

My current key is a 2048-bit RSA key, and I've been wanting to generate a stronger one for a while. As for switching to elliptic-curve keys, I've decided not to migrate for now: gnupg 2.1 adoption is still marginal, OpenPGP v2.1 smart cards do not support ECC keys, and I didn't want to bother setting up a PKCS#11 card with gnupg to manage my keys.

I’ve renewed my smart card

For serious cryptographic operations, one shouldn’t store private keys outside a smartcard. Here is a photograph of the hardware I’ve been using to store my PGP key so far.

Nowadays, on the one hand, ExpressCard/54 slots are becoming scarcer and scarcer as laptop dimensions and weight shrink, so keeping the ISO 7816 form factor is more and more problematic: I'd have to carry a smart card reader with me at all times on some hardware. The picture above this paragraph features an ExpressCard/54 smart card reader.

On the other hand, I've been using my smart card extensively both for PGP and SSH authentication, and I wanted to replace my card before falling victim to a wear-out failure.

Considering those two reasons, re-using the same smart card isn’t an option.

The USB dongle is a better form factor

I've selected the Gemalto USB shell token V2 reader (pictured below), which is light, compact, pretty solid, and natively supported by a wide range of recent OSes, with extended APDU support. The smart card inside, an OpenPGP card v2.1, is retailed in pre-cut mini-SIM form factor by my dealer (check out his shop if you wish to purchase smart cards from Europe).

This is the cleanest alternative I've found since Feitian decided to end their trading agreement with Gooze. I've just checked Gooze's website, and it seems they're now closed.

The assembled result is pretty neat. I've put a small « GnuPG » logo between the card and the case in order not to mistake it for another dongle I already use.

For paranoid people who would like to authenticate the source of this article, you can find a text-only version of it, signed with my two keys, in this archive. Feel free to comment if you have any questions about the hardware I use.

Since my last article, there has been a lot of movement: new vulnerabilities have appeared, but a good setup might have protected you (especially if you decided to drop SSLv3 support).

Let's begin with excellent news from the Apache project: version 2.2.30 will support strong DH exponents, as detailed in this commit.

TL;DR: updated cipher suites are at the end of the document.

SSLv3: the weak link

Since my last post, the POODLE attack was disclosed, proving SSLv3 to be weak. As a reminder, back in 2013 I implicitly recommended disabling SSLv3, which was already more than obsolete at that time. The caveat is that obsolete OSes like MS Windows XP do not natively enable TLS 1.0.

If you're still using Windows XP and can't migrate away from it, please follow this guide from Certiport to enable TLS 1.0. Use this opportunity to disable SSLv2 and SSLv3: as Windows XP is no longer supported, its internal libraries won't benefit from the TLS_FALLBACK_SCSV flag to detect a downgrade attack.

RC4: partially broken, pruned by IETF

According to a 2013 publication on the state of RC4 in TLS, this cipher has been partially broken under some conditions. It should therefore be considered weak, and it MUST (I strongly emphasize this) be removed from production as soon as possible.

At the beginning of this year, the IETF published RFC 7465 to prohibit the use of RC4 cipher suites. At the time of writing, this document is a proposed standard.

ChaCha20-Poly1305: a promising newcomer

The obsolescence of RC4 has a significant impact on high-traffic servers and low-resource devices (phones, embedded systems…). Indeed, AES performance is lower than RC4's on hardware that doesn't implement dedicated instruction sets. In light of these issues, Google has implemented a new stream cipher that runs efficiently without needing hardware support.

Besides Google's work on this cipher, adoption is still marginal, because the cipher specification process on the IETF side was on hiatus for a while. It has now been published, however, and an implementation for IPsec/IKE is due soon. No work has been done for TLS yet, but you can track the progress on the IETF datatracker here.

So far, this cipher is implemented in Android and in the Windows builds of Google Chrome, but there is no upstream implementation in the most popular TLS libraries, because developers are waiting for the cipher suite ID bytes to be standardized.

We can nevertheless expect ChaCha20-Poly1305 to gain traction once its specification is standardized for TLS.

3DES: Necessary evil for legacy support

Now that RC4 has been proven insecure, there is only one « safe » (112 bits of effective key size) cipher suite left on Windows XP: TLS_RSA_WITH_3DES_EDE_CBC_SHA. Another suite uses 3DES, but it requires DSA keys, whose size is effectively limited to 1024 bits on legacy OSes.

I insist: this cipher suite is to be included only for legacy support. Put it at the end of your cipher suite list, and remove it as soon as possible.

TLS 1.3: what to expect so far

The IETF is currently working on TLS 1.3, which will essentially focus on removing the crufty legacy features nobody should use today (compression, non-AEAD ciphers, custom DHE groups…) and on improving the protocol's performance and strength.
 

Recent attacks messing with both the client and server sides

Support for « export » cryptography on the client side has been leveraged to downgrade the security of connections. FREAK targets old export cipher suites (e.g. DES with 56-bit keys), and Logjam targets ephemeral Diffie-Hellman key exchange by forcing weak key lengths (512-bit moduli). The best way to mitigate these attacks is to upgrade your browser to the latest version.

The updated TLS cipher suites

A significant change has been made to the suggested cipher suites: DHE suites are now placed after all ECDHE suites, because of the risks induced by the Logjam attack and because ECDHE key exchange is significantly faster than DHE. AEAD ciphers (with built-in authentication) and TLS 1.2 ciphers come first.

This profile is built with performance in mind: it features 128-bit ciphers first.

SSLProtocol all -SSLv3 -SSLv2
SSLCompression off
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:CAMELLIA128-SHA:AES128-SHA:CAMELLIA256-SHA:AES256-SHA

The second profile is built for maximum security.

SSLProtocol all -SSLv3 -SSLv2
SSLCompression off
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA

The third profile is built for legacy support, based on the performance profile.

SSLProtocol all -SSLv3 -SSLv2
SSLCompression off
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:CAMELLIA128-SHA:AES128-SHA:CAMELLIA256-SHA:AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:DES-CBC3-SHA

Bonus: Enabling HSTS

HTTP Strict Transport Security (HSTS) is a security mechanism that protects HTTPS websites against downgrade-to-HTTP attacks. Compatible browsers remember websites that present the appropriate header, and redirect any HTTP request to HTTPS.

Enabling HSTS is a good practice to apply when your TLS setup is stable and mature.

To do so, enable mod_headers and add the following line to your TLS-enabled virtual host:

Header always set Strict-Transport-Security "max-age=15768000"

If all your subdomains are properly set up, you can add includeSubDomains at the end of the header value, as follows.

Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains"
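For the curious, here is a small Python sketch (an illustration, not a browser implementation; the authoritative parsing rules are in RFC 6797) of how a client interprets the header above:

```python
def parse_hsts(value: str) -> dict:
    """Parse a Strict-Transport-Security header value (simplified sketch)."""
    policy = {"max_age": None, "include_subdomains": False}
    for directive in value.split(";"):
        directive = directive.strip()
        if directive.lower().startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive.lower() == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

# The header suggested above: ~6 months, limited to this exact host
policy = parse_hsts("max-age=15768000")
assert policy["max_age"] == 15768000 and policy["include_subdomains"] is False

# With subdomains included
assert parse_hsts("max-age=15768000; includeSubDomains")["include_subdomains"] is True
```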

Recently, I needed to interconnect two private networks using, on one side, a SOHO Cisco 871 router (because it's silent, and people don't want a desktop appliance to be as loud as an aircraft), and on the other side, an existing linux box hosting some services we want to reach.

The main purpose of this setup is not to get optimal performance or security, but to test interoperability between the two IPSec stacks involved.

NAT configuration on the cisco side is given as a bonus at the end of this article, as dealing with simultaneous NAT and VPN can be tricky.

You will find below the schema of our example setup.

In our example, let's assume our PSK is qFcOx72WVERsNobVsimx

Before we go over the configuration, let's recall a few points about IPSec.

IPSec is a fairly complete protocol that covers a vast number of use cases: site-to-site VPNs, road-warrior remote access, host-to-host security, with a focus on either integrity alone or integrity and confidentiality.

IPSec is thus commonly considered a complex technology: its features are described and standardized by over 30 IETF RFCs, and its modularity reaches such a point that different implementations may not interoperate out of the box, as we will see below. Some bonus features are not even standard (e.g. Opportunistic Encryption).

When two endpoints establish a security association (SA), the endpoint that attempts to establish the SA is called the initiator.

To summarize, the protocol works in two phases:

  • Phase 1: security association and key management, where the two IPSec endpoints mutually authenticate and exchange the keys that will be used in phase 2.
  • Phase 2: security policy setup, where the two IPSec endpoints decide whether to encrypt or only authenticate the secured payload, and whether to secure host-to-host or network-to-network communications.

Here is the list of the different components that are involved in my sample setup:

  • Debian wheezy with a stock 3.2.54-2 kernel and the racoon and ipsec-tools packages from the official repository (version 1:0.8.0-14 for both packages).
  • Cisco 871 with a Cisco IOS C870 Software (C870-ADVIPSERVICESK9-M), Version 12.4(15)T7, RELEASE (fc3)

Racoon configuration

To begin with, the configuration of racoon was not especially tricky, until I hit a strange issue: when the tunnel was initiated by the linux box, the phase 1 handshake worked properly, but phase 2 failed to come up with a NO-PROPOSAL-CHOSEN error, even though the SA parameters matched. If you have more feedback on this, you're welcome to contribute in the comments. Edit: I have found the problem: I forgot to include the second sainfo section in racoon.conf, and I also made a mistake in the cisco configuration. Refer to the appropriate section for further details.

To avoid getting stuck in this situation, I made the linux box passive, and brought the tunnel up automatically using a trick on the cisco side.

Racoon-initiated dead peer detection also made my phase 2 die after a timeout, as the cisco device did not send the appropriate replies. I addressed this by configuring racoon as a passive DPD responder.

/etc/racoon/racoon.conf

path pre_shared_key "/etc/racoon/psk.txt";
path certificate "/etc/racoon/certs";
log notify;

listen
{
	isakmp 198.51.100.37 [500];
}

remote 192.0.2.13 {
	exchange_mode aggressive,main;
	generate_policy off;
	my_identifier address 198.51.100.37;
	peers_identifier address 192.0.2.13;
	lifetime time 3600 sec;	
	passive on;
	
	proposal {
		encryption_algorithm 3des;
		authentication_method pre_shared_key;
		hash_algorithm sha1;
		dh_group 2;
		lifetime time 3600 sec;
	}
}


sainfo address 10.0.0.0/24[any] any address 10.224.9.0/24[any] any {
        pfs_group modp1024;
        encryption_algorithm 3des;
        authentication_algorithm hmac_sha1;
        compression_algorithm deflate;
        lifetime time 3600 sec;
}

sainfo address 10.224.9.0/24[any] any address 10.0.0.0/24[any] any {
        pfs_group modp1024;
        encryption_algorithm 3des;
        authentication_algorithm hmac_sha1;
        compression_algorithm deflate;
        lifetime time 3600 sec;
}

/etc/racoon/psk.txt

192.0.2.13 	qFcOx72WVERsNobVsimx

/etc/ipsec-tools.conf

#!/usr/sbin/setkey -f

flush;
spdflush;

spdadd 10.0.0.0/24 10.224.9.0/24 any -P out ipsec
    esp/tunnel/198.51.100.37-192.0.2.13/require;

spdadd 10.224.9.0/24 10.0.0.0/24 any -P in ipsec
    esp/tunnel/192.0.2.13-198.51.100.37/require;
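Once the tunnel is up, the result of each phase can be inspected on the linux side with the ipsec-tools userland (run as root on the box):

```
# Dump the kernel's active security associations (the phase 2 outcome)
setkey -D

# Dump the security policies loaded from /etc/ipsec-tools.conf
setkey -DP
```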

Cisco 871 configuration

To get a nailed-up IPSec tunnel at boot time, I decided to set up a permanent ping probe using the ip sla feature of my IOS.

Edit: There was a mistake in the « crypto isakmp profile » section: when you use the match identity host directive, the identifier that follows is an fqdn, not an IP address. If you want to match IP addresses, use the match identity address directive. This is extremely important, as the phase 2 negotiation might fail because of it.

!
version 12.4
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname nettest
!
boot-start-marker
boot-end-marker
!
!
no aaa new-model
!
!
dot11 syslog
ip cef
!
!
no ip dhcp use vrf connected
ip dhcp excluded-address 10.224.9.1
!
ip dhcp pool POOL_VLAN1
   network 10.224.9.0 255.255.255.0
   default-router 10.224.9.1 
!
!
ip domain name example.local
!
multilink bundle-name authenticated
!
!
username admin privilege 15 secret 0 youradminpassword
! 
!
crypto isakmp policy 1
 encr 3des
 authentication pre-share
 group 2
 lifetime 3600
crypto isakmp key qFcOx72WVERsNobVsimx address 198.51.100.37 no-xauth
crypto isakmp keepalive 10 3 periodic
crypto isakmp profile 1
   keyring default
   self-identity address
   match identity address 198.51.100.37
   keepalive 25 retry 3
!
!
crypto ipsec transform-set MyTransformSet esp-3des esp-sha-hmac 
!         
crypto map MyMap local-address FastEthernet4
crypto map MyMap isakmp-profile 1
crypto map MyMap 10 ipsec-isakmp 
 set peer 198.51.100.37
 set transform-set MyTransformSet 
 set pfs group2
 set isakmp-profile 1
 match address 150
!
archive
 log config
  hidekeys
!
!
ip tftp source-interface Vlan1
!
!
!
interface FastEthernet0
!
interface FastEthernet1
!
interface FastEthernet2
!
interface FastEthernet3
!
interface FastEthernet4
 description WAN interface
 ip address 192.0.2.13 255.255.255.0
 duplex auto
 speed auto
 crypto map MyMap
!
interface Vlan1
 description Internal interface
 ip address 10.224.9.1 255.255.255.0
 no autostate
!
ip forward-protocol nd
ip classless
ip route 0.0.0.0 0.0.0.0 192.0.2.13
!
!
no ip http server
no ip http secure-server
!
ip sla 10
 icmp-echo 10.0.0.1 source-interface Vlan1
 timeout 1000
 frequency 1
ip sla schedule 10 life forever start-time now
access-list 150 permit ip 10.224.9.0 0.0.0.255 10.0.0.0 0.0.0.255
!
!
!
!         
control-plane
!
banner motd ^C
*************************************************************
$(hostname) - VPN tests cisco router
Contact: Geoffroy GRAMAIZE
*************************************************************
^C
!
line con 0
 logging synchronous
 login local
 no modem enable
line aux 0
line vty 0 4
 logging synchronous
 login local
!
scheduler max-task-time 5000
end

As promised, if you want your internal network on the cisco side to communicate with the internet, add the following commands to the above configuration:

interface FastEthernet4
 ip nat outside
!
interface Vlan1
 ip nat inside
!
ip access-list extended NAT_list
 deny   ip 10.0.0.0 0.0.0.255 10.224.9.0 0.0.0.255
 deny   ip 10.224.9.0 0.0.0.255 10.0.0.0 0.0.0.255
 permit ip 10.224.9.0 0.0.0.255 any
!
ip nat inside source list NAT_list interface FastEthernet4 overload
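To check the tunnel state from the IOS CLI, the usual show commands apply:

```
! Phase 1 SAs: a state of QM_IDLE means the ISAKMP SA is established
show crypto isakmp sa
! Phase 2 SAs, with encap/decap packet counters
show crypto ipsec sa
! The ping probe that keeps the tunnel nailed up
show ip sla statistics 10
```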

I've recently purchased an EdgeRouter PoE from Ubiquiti, which is a great deal considering its price and performance. The only caveat was the lack of native support for load balancing and failover. This has been fixed with the release of the 1.4.0 firmware, which embeds load balancing functionality with native connection tracking.

For this example, I'll take a generic dual-WAN scenario, in a failover configuration with some policy routes, as we assume some ISP-specific services are not reachable from the internet (e.g. administration interfaces, SMTP and DNS servers…).

Let's also assume your ISP CPEs are configured in bridge mode. To show the full potential of the router, the IP address we get from ISP 1 is dynamic and the one from ISP 2 is static, but both are acquired from the ISPs' DHCP servers (yes, my ISPs are serious people, and they don't use PPPo[E|A] ^.^).

I also use an internal autonomous DNS server to avoid unreachability delays during failover, and to have a trusted DNSSEC anchor. You will find below the schema for this scenario. The fqdns and IP addresses have been changed to protect the innocent.

Our sample topology.

To begin with, set up the router's 3 interfaces, the DHCP server on the inside, and the NAT masquerade rules.

interfaces {
    ethernet eth0 {
        address dhcp
        description ISP_1
        duplex auto
        poe {
            output off
        }
        speed auto
    }
    ethernet eth1 {
        address dhcp
        description ISP_2
        duplex auto
        poe {
            output off
        }
        speed auto
    }
    switch switch0 {
        address 192.168.0.254/24
        switch-port {
            interface eth2
            interface eth3
            interface eth4
        }
    }
}
service {
    dhcp-server {
        disabled false
        hostfile-update disable
        shared-network-name Home {
            authoritative disable
            subnet 192.168.0.0/24 {
                default-router 192.168.0.254
                dns-server 192.168.0.252
                lease 86400
                start 192.168.0.1 {
                    stop 192.168.0.50
                }
            }
        }
    }
}
nat {
    rule 5000 {
            description ISP_1_NAT
            log disable
            outbound-interface eth0
            protocol all
            type masquerade
    }
    rule 5001 {
            description ISP_2_NAT
            log disable
            outbound-interface eth1
            type masquerade
    }
}

Next, we'll set up the load balancer to use ISP 1 as our primary access and ISP 2 as the failover access. I changed some of the check parameters to show how flexible the tool is. As we are in a failover setup, I won't use the weight command, which you would use in load-balancing scenarios to adjust the percentage of traffic sent to each interface.

load-balance {
    group lb-output {
        interface eth0 {
            route-test {
                count {
                    failure 3
                    success 4
                }
                interval 5
                type {
                    ping {
                        target 203.0.113.42
                    }
                }
            }
        }
        interface eth1 {
            failover-only
        }
    }
}
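Once committed, the state of the group can be checked from operational mode:

```
show load-balance status
show load-balance watchdog
```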

As mentioned at the beginning of this article, the load balancer takes care of tracking and marking connections, so that an established session doesn't go out and come back through different IP addresses. This is especially useful if you decide to use SNAT rules. As shown above, I chose to check ISP 1 connectivity against a specific IP address, but by default the equipment runs the check against « ping.ubnt.com ».

Next, we'll configure the fwr-lbalance firewall modify group to set the policy routes. This modifier will send traffic through the « lb-output » load balancer, except for:

  • RFC1918 networks which we will route through the main routing table.
  • 192.0.2.129 which is only reachable via ISP 1 (we’ll set up the target VRF table 10 for this case).
  • 198.51.100.192/28 which is only reachable via ISP 2. (we’ll assume our gateway is 198.51.100.62, and we’ll set up another target VRF table 20 for this case).

Edit: ubnt-stig advised me in the comments to use a firewall group to define the RFC1918 networks, so you will find an updated version below.

And here is the associated configuration:

firewall {
    group {
        network-group RFC1918 {
            network 10.0.0.0/8
            network 172.16.0.0/12
            network 192.168.0.0/16
        }
    }
    modify fwr-lbalance {
        rule 1 {
            action modify
            destination {
                group {
                    network-group RFC1918
                }
            }
            modify {
                table main
            }
        }
        rule 100 {
            action modify
            destination {
                address 192.0.2.129
            }
            modify {
                table 10
            }
        }
        rule 200 {
            action modify
            destination {
                address 198.51.100.192/28
            }
            modify {
                table 20
            }
        }
        rule 500 {
            action modify
            modify {
                lb-group lb-output
            }
        }
    }
} 
protocols {
    static {
        table 10 {
            interface-route 0.0.0.0/0 {
                next-hop-interface eth0 {
                }
            }
        }
        table 20 {
            route 0.0.0.0/0 {
                next-hop 198.51.100.62 {
                }
            }
        }    
    }
}

Now, you need to tell the router to apply the firewall modifier instance to your internal interfaces:

interfaces {
    switch switch0 {
        address 192.168.0.254/24
        firewall {
            in {
                modify fwr-lbalance
            }
        }
        switch-port {
            interface eth2
            interface eth3
            interface eth4
        }
    }
}

And finally, you’re done! Your dual-WAN setup is operational. Now you can configure SNAT rules for your publicly available services. If you want to use a different load balancing policy, create another load-balancer group with the appropriate settings, and add a new rule to the firewall modifier group. Before exiting configuration mode, don’t forget to commit the configuration, and to save it if it fits your requirements.
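As a starting point for those SNAT rules, here is a minimal sketch (the rule number, outbound interface and outside address are illustrative; pick the ones matching your uplinks, or use type masquerade if the ISP address is dynamic):

```
service {
    nat {
        rule 5000 {
            description "SNAT via ISP 1"
            outbound-interface eth0
            type source
            outside-address {
                address 192.0.2.130
            }
        }
    }
}
```

Because the load balancer marks each session, return traffic keeps using the uplink the session was NATed through.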

Edit: In the following screenshot, you can see the output of the load-balancer status commands.

Load Balancer status

Hi! In this article, I’ll talk about some problems you might encounter while working on a Cisco device:

When I plug a device into my Cisco CPE, I have trouble getting a DHCP lease.

This problem is mainly caused by the following points:

  • Your CPE ports are configured with the default spanning-tree behaviour: right after being plugged, a switch port spends 30 to 50 seconds in a non-forwarding state while STP acquires and computes the topology. While this behaviour is perfectly safe, some early network exchanges – like DHCP – will time out in the meantime.
  • If your CPE embeds a layer 2 switch and uses « vlan » interfaces, the vlan management interface takes some time to toggle from the down/down to the up/up state when the first equipment is plugged in. The DHCP server is bound to this interface, so it won’t be able to process requests until the associated vlan interface is up/up.

To deal with the first issue, you can either use the following commands to put the layer 2 port in STP PortFast mode:

interface fastethernet X
 spanning-tree portfast

or disable the spanning tree for your VLAN by typing the following in configuration mode:

no spanning-tree vlan <VID>

To mitigate the second issue, you must pin the vlan interface to the up/up state. You can do so using the following commands:

interface vlan X
 no autostate

I screwed up while flashing my Cisco device! ROMMON tells me that it cannot find a bootable image.


Using tftpdnld, you can load a bootable image from a TFTP server directly from the ROMMON prompt. To do so, hook your Cisco device up to a network hosting a TFTP server, and type the following commands:

IP_ADDRESS=X.X.X.X
IP_SUBNET_MASK=X.X.X.X
DEFAULT_GATEWAY=X.X.X.X
TFTP_SERVER=<tftp_server_IPv4_address>
TFTP_FILE=<path_to_your_IOS_image_on_tftp_server>
tftpdnld -r

Once your image has booted, copy the image from TFTP to flash once more, then check its integrity by computing and comparing its MD5 hash:

copy tftp://<tftp_server_addr>/<path_to_IOS_image> flash:<image_filename>
verify /md5 flash:<image_filename>

Then, configure the bootloader to load your image at boot time:

configure terminal
 boot system flash:<image_filename>
 exit
copy running-config startup-config

Now, you can reboot safely and enjoy your fresh IOS image.

I don’t remember my login/enable password, how can I recover it?

To begin with, you need a serial console client which supports the break signal: this is something you cannot emulate with your keyboard alone. If you don’t know what the break signal is, please refer to the following document: Cisco Standard Break Key Sequence Combinations

As the recovery procedure is model-dependent, visit the Password Recovery Procedures web page, where you will find detailed instructions for your device.

Standard TLS sessions have a big issue: they are vulnerable to the « wiretap then crack » attack scheme: any intercepted communication can be stored and deciphered later, once you obtain, steal or factorize the server’s private key. Network traffic analysers often provide an option to perform exactly this operation for debugging/protocol validation purposes. « Perfect » forward secrecy (PFS) has been designed to fight this issue: when two peers want to establish a TLS tunnel with PFS, after performing the server (or mutual) authentication, they agree on an ephemeral session key.

TLS perfect forward secrecy can be supported in all recent browsers with Apache 2.3+. Version 2.4 has recently been migrated to Debian Jessie. The configuration you will find below was built with the Qualys SSL Server Test. This test suite is quite useful for reviewing the configuration of your TLS server: it checks the validity of your certificates and the strength of the cipher suites your server offers, and gives you information on how common browsers will behave on your website.

Though the grade I get is not perfect, the configuration posted below seems to be, at the time I’m writing this post, both the most interoperable and the most robust configuration you can create with Apache.

Replace (or add if applicable) the following configuration directives in your SSL module configuration file (most likely to be found in /etc/apache2/mods-enabled/ssl.conf).

SSLProtocol +TLSv1.2 +TLSv1.1 +TLSv1
SSLCompression off
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:RC4-SHA:AES256-GCM-SHA384:AES256-SHA256:CAMELLIA256-SHA:ECDHE-RSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:CAMELLIA128-SHA

then restart your apache server.

Please refer to the configuration at the bottom of this post


UPDATE: Qualys has updated its browser test suite, integrating Opera 12.15 and Firefox 21 on Fedora 19, which are both known for not supporting TLS_ECDHE_RSA_WITH_RC4_128_SHA (0xC011).

The Fedora community is well known for building FIPS 140-2 compliant cryptographic modules, and FIPS 140-2 happens not to allow TLS_ECDHE_RSA_WITH_RC4_128_SHA.

This federal standard only whitelists certain cipher suites. In fact, FIPS 140-2 doesn’t allow the use of « next generation » algorithms (such as ECDHE for ephemeral key exchange, or Galois/Counter Mode for block ciphers), despite the significant improvements they bring in terms of bandwidth and computing overhead.

Considering today’s cryptographic ecosystem, this standard should be considered obsolete, unless you plan to sell a cryptographic product to the US government.

That fact aside, today’s main security issue is caused by Apple Safari, which still has not mitigated the BEAST vulnerability with the (1/n-1) split-record trick.

So here is the rather embarrassing tradeoff you have to deal with: 1. Let your Apple clients be vulnerable to BEAST while supporting PFS for every other browser, by preferring CBC-mode block ciphers and refusing the use of RC4 (this enhances security for every browser except Safari); 2. Protect everyone against BEAST by using a crippled stream cipher and support PFS with a best-effort policy.

Bonus reminder: even if 3DES-EDE cipher suites use a 168-bit key, the real key strength is 112 bits because of their vulnerability to the meet-in-the-middle attack. You may advertise these suites for legacy support purposes, but if you do so, put them at the bottom of your server cipher list; do not add export or ADH suites, as they are respectively weak and vulnerable.


UPDATE 2: After some browser behaviour analysis, I have managed to get full PFS support for modern browsers, as shown in the pictures below. To do so, you need to properly enable some Diffie-Hellman cipher suites. Doing so doesn’t trip the BEAST vulnerability flag and enables PFS for the FIPS-compliant browsers that were not supported before. The « Key exchange » subgrade in the Qualys SSL Server Test will decrease because the DH parameters are not long, even though they give a sufficient level of security for now.

This grade deterioration is an Apache-specific issue: Apache developers currently assume that the (EC)DH parameter choice is a build-time decision. In my opinion, this is not an acceptable solution, as it makes long-term support more difficult. You should consider bumping (voting for) this bug tracker feature request to get a more flexible way to control DH/ECDH parameters in the future.

Replace (or add if applicable) the following configuration directives in your SSL module configuration file (most likely to be found in /etc/apache2/mods-enabled/ssl.conf).

SSLProtocol +TLSv1.2 +TLSv1.1 +TLSv1
SSLCompression off
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:RC4-SHA:AES256-GCM-SHA384:AES256-SHA256:CAMELLIA256-SHA:ECDHE-RSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:CAMELLIA128-SHA

then restart your apache server.

Here is the grade I’ve got with the last configuration:

Below, you can see which cipher suites are selected by your clients.

Expert tip: the (1/n-1) split-record trick has been implemented in Firefox for a while now. Thus, you can safely disable RC4 in Firefox’s advanced configuration menu. To do so, enter ‘about:config’ in your address bar, then search for ‘rc4’ and toggle all the values found to ‘false’. If you experience connection issues, toggle those parameters back to ‘true’.


An updated version of these cipher suites has been posted in the post 2015, an overview of TLS

To those who edit their website live in production (a very bad habit) and/or forget to delete their backup files before going live, this tweet should make you think…

A crude, dead-simple hack.

If you quickly edit your pages in place and/or make a backup copy first, the backup files you create remain available by default, and will not be interpreted as scripts. It then becomes easy to access your source code, and therefore possibly sensitive data, such as your database credentials.

If you are running Apache, you can add the following to your main configuration file to plug this vulnerability:

<Files ~ "(\.(bak|old)|\~)$">
    Order allow,deny
    Deny from all
    Satisfy all
</Files>
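You can sanity-check what this pattern blocks without touching Apache: grep’s extended regex syntax is close enough for this expression (the file names below are made up for illustration):

```shell
# Simulate the <Files> regex from above against a few typical backup names
for f in index.php.bak config.php.old index.php~ index.php; do
	if printf '%s\n' "$f" | grep -Eq '(\.(bak|old)|~)$'; then
		echo "$f -> blocked"
	else
		echo "$f -> served"
	fi
done
# index.php.bak -> blocked
# config.php.old -> blocked
# index.php~ -> blocked
# index.php -> served
```

Editors also leave other suffixes behind (.swp, .orig, .save…); extend the alternation if your tooling produces them.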

To finish, restart the Apache (^_–)  ~ ☆

The Walsh-Hadamard matrix has some interesting mathematical properties, such as the pairwise orthogonality of its rows, which is exploited in code-division multiple access (CDMA) systems. However, conventional generation methods usually rely on multiplying two matrices.
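For reference, the conventional construction alluded to here is Sylvester's recursion, which builds the matrix as a Kronecker product (this is the matrix multiplication the method below avoids):

```latex
H_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\qquad
H_{2^k} = H_2 \otimes H_{2^{k-1}}
        = \begin{pmatrix} H_{2^{k-1}} & H_{2^{k-1}} \\ H_{2^{k-1}} & -H_{2^{k-1}} \end{pmatrix}.
```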

I therefore looked into a generation method that relies almost exclusively on basic operations (bit shifts for powers of 2, additions and conditional branches). Its drawback, however, is that it consumes twice the memory needed to store the matrix.

The code below can be optimized, notably in terms of storage, but it was written with universality in mind. It is functional and has been verified against the H2, H4 and H1024 matrices. Beware: the following source code is C99, so don’t forget to configure your compiler accordingly.

File hadamard.h

#ifndef __HADAMARD_H
#define __HADAMARD_H

#include <stddef.h>	/* size_t */

typedef struct matrix {
	size_t width;
	size_t height;
	unsigned char *data;
} matrix;

typedef struct smatrix {
	size_t width;
	size_t height;
	char *data;
} smatrix;

void print_smatrix( smatrix *mtx);

void __hadamard_increment( size_t iteration, matrix *mtx);

// Generates the Hadamard matrix of dimension 2^k
smatrix* gen_hadamard_matrix( size_t k);

#endif // __HADAMARD_H

File hadamard.c

#include <stdio.h>
#include <stdlib.h>
#include "hadamard.h"

void print_smatrix( smatrix *mtx) {
	for( size_t i = 0; i<mtx->height; ++i)
	{
		printf( "|");

		for( size_t j=0; j<mtx->width; ++j)
		{
			printf( ((int)mtx->data[mtx->width*i+j] < 0 ? " %d" : " +%d"), (int)mtx->data[mtx->width*i+j]);
		}
		printf( " |\n");
	}
}

void __hadamard_increment( size_t iteration, matrix *mtx)
{
	size_t powOfTwo = (size_t)1 << iteration;

	for( size_t y=0; y<mtx->height; ++y)
	{
		// If the selected bit is not set in the row index, skip this row
		if( (powOfTwo & y) != powOfTwo )
			continue;

		for( size_t x=0; x<mtx->width; ++x)
		{
			if( (powOfTwo & x) != powOfTwo)
				continue;
			++(mtx->data[mtx->width*y+x]);
		}
	}
}

// Generates the Hadamard matrix of dimension 2^k
smatrix* gen_hadamard_matrix( size_t k)
{
	matrix mtx;
	smatrix* smat = malloc( sizeof(smatrix));

	// To generate a hadamard matrix, without using the multiplication,
	// we use an additive transition matrix.

	mtx.width = mtx.height = (size_t)1 << k;
	mtx.data = malloc( sizeof(unsigned char)*mtx.width*mtx.height);

	// Zero the additive transition matrix
	size_t mtxSZ2D = mtx.width*mtx.height;
	for( size_t i=0; i<mtxSZ2D; ++i)
	{
		mtx.data[i] = 0;
	}

	// We iteratively apply the incrementation on cells where the k_th bit
	// is set on both y and x coordinates. Apply k times for a 2^k by 2^k
	// Hadamard matrix

	for( size_t i=0; i<k; ++i)
		__hadamard_increment( i, &mtx);

	smat->width = mtx.width;
	smat->height = mtx.height;
	smat->data = malloc( sizeof(char)*smat->width*smat->height);

	// Then we generate the hadamard matrix by converting odd cells to '-1'
	// from the transition matrix and even cells to '+1'.

	size_t smatSZ2D = smat->width*smat->height;
	for( size_t i=0; i<smatSZ2D; ++i)
	{
		smat->data[i] = (char)( ((mtx.data[i]) & 1) == 0 ? 1 : -1 );
	}

	free(mtx.data);
	return smat;
}

File main.c

#include <stdio.h>
#include <stdlib.h>
#include "hadamard.h"

int main(void)
{
	size_t k;

	printf( "Computing H(2^k). Value of k? ");
	if( scanf( "%zu", &k) != 1)
		return 1;

	smatrix* smat = NULL;

	smat = gen_hadamard_matrix(k);
	print_smatrix( smat);

	free(smat->data);
	free(smat);
	return 0;
}

The explanations will be published in a forthcoming post.