Posted on | April 17, 2015 | No Comments
It is a bit ironic:
In the wake of the Snowden revelations, only a very few security and encryption technologies were found to still be trustworthy and uncrackable
(from all we know today).
One of these technologies is OpenPGP.
Side note: The terms “OpenPGP”, “PGP”, and “GnuPG / GPG” are often used interchangeably. This is a common mistake, since they are distinctly different.
OpenPGP is technically a proposed standard, although it is widely used. OpenPGP is not a program, and shouldn’t be referred to as such.
PGP and GnuPG are computer programs that implement the OpenPGP standard.
PGP is an acronym for Pretty Good Privacy, a computer program which provides cryptographic privacy and authentication. For more information, see the Wikipedia article.
GnuPG is an acronym for Gnu Privacy Guard, another computer program which provides cryptographic privacy and authentication. For further information on GnuPG, see the Wikipedia article.
Now, the very moment PGP etc. is identified as one of the only feasible approaches to “privacy for the common user”,
it comes under heavy fire from various sides.
(Some sources for further reading at the bottom of this article)
The criticism mostly targets the following areas:
1/ quality/beauty of core code / protocol
2/ usability issues
3/ key management
4/ social / cultural issues
(where 2/ & 3/ might be seen as the same)
2/ usability issues
mainly point at the lack of easy-to-use implementations,
causing users to make grave mistakes like sending out their private key instead of their public key
(though I would argue that there has been great progress).
An Einsteinism is due, though:
“Make things as simple as possible, but not simpler.”
In the context of online privacy, Clay Shirky might be wrong when he says:
“Communications tools don’t get socially interesting until they get
technologically boring. The invention of a tool doesn’t create change; it
has to have been around enough that most of society is using it. It’s when
a technology becomes normal, then ubiquitous, and finally so pervasive as
to become invisible, that profound changes happen.”
No level of usability will get you around the fact that
privacy is a conscious act:
it can (and should) be made easier, but not invisible.
If you don’t get the key idea, you can’t have a locked house.
If you don’t care enough about encryption to keep your key in a safe place,
it will never work for you, no matter how well you have designed your rounded corners.
If you expect some service out there to make things easier for you, well,
then obviously, privacy is not for you.
Tools that allegedly aim at improving the user-friendliness of PGP
often do so at the expense of further corroding the very principle of end-to-end security,
by exposing private keys to browsers (a known insecure environment) or, worse, to your webmail provider.
Tools that bring high quality security to mobile messaging (like TextSecure) do so on operating systems
that are so insecure in themselves that even the suggestion of privacy is misleading.
(It should be noted though that placing trust somewhere might be a viable strategy –
just not an ultimately private one).
3/ key management
problems are best illustrated by what all PGP users experience frequently:
“Oh, could you send me that unencrypted? I’ve changed computers and don’t have my key anymore.”
“Ooooh, that’s not REALLY my key anymore …”
“Ooh … which of my 13 keys DID you use?”
4/ social / cultural issues –
it is true to say that after decades of existence, PGP still has not been able to truly break into the mainstream,
and its user group tends to be nerdy.
PGP is not cool – but it has to be added that, for many users, email is no longer the preferred choice of communication. Email is not cool.
Even at IT and technical universities, the use of mail clients is far from natural,
and (inherently insecure) webmail, FB messaging, WhatsApp, Kik etc. are often the norm.
Even SMS – once thought to be THE universal messaging standard –
has been overtaken by WhatsApp –
taking mobile messages into the Facebook platform,
and adding to a surveillance system impressive in its all-inclusiveness.
An illustration of why OpenPGP isn’t mainstream yet: PGP best practices
The BIG SEVEN problems with security
Moxie Marlinspike’s (of Whispersystems/TextSecure) attack on GPG
A rebuttal: GPG Criticism Reaches New Low As Use Cases Expand
Posted on | March 19, 2015 | No Comments
(On the occasion of the IPv6 sessions at http://wireless.ictp.it/school_2015/ – if you are interested in a workshop on this, please contact us!)
How to set up a Raspberry PI, IPv6 enabled filesharing server with ownCloud
1/ Install Ubuntu or Debian or similar
on a Raspberry Pi – a Pi 2 is best.
Help with installation is at: http://www.raspberrypi.org
2/ USB stick: plug in and mount
On the command line of your Pi, watch the kernel log:
#tail -f /var/log/kern.log
and then plug in a USB stick and see what device name it shows up as.
Typically it is /dev/sda1.
If in doubt, take the USB stick out,
and plug it in again, until you see the device name.
You can also do a
#grep sd /var/log/kern.log
#grep usb /var/log/kern.log
to help you find it.
In what follows, we assume it is /dev/sda1.
Create a mount point and mount it:
#mkdir /media/usbstick
#mount -t vfat -o rw /dev/sda1 /media/usbstick/
3/ Install ownCloud
#cd /home/<your username>
#sh -c "echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_14.04/ /' >> /etc/apt/sources.list.d/owncloud.list"
#wget http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_14.04/Release.key
#apt-key add - < Release.key
#apt-get update
#apt-get install owncloud
Put the data directory on the stick and symlink it (assuming you are in your ownCloud directory, e.g. /var/www/owncloud):
#cp -R ./data /media/usbstick/
#rm -rf ./data
#ln -s /media/usbstick/data ./data
Instead of the symlink, you could also edit config/config.php
in your ownCloud directory, and point it at the data directory on the USB stick:
'datadirectory' => '/path/to/your/data',
At this point, you should have an ownCloud server, reachable at http://<your Pi's IP address>/owncloud
4/ Make it IPv6
Edit your Apache site configuration (e.g. /etc/apache2/sites-enabled/000-default)
so that the VirtualHost listens on your IPv6 address, which you can find by doing
#ip -6 addr show
Then restart Apache:
#service apache2 restart
You should now have an IPv6 ownCloud server.
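As a sketch, the VirtualHost could look like the following (the file path is the Debian/Ubuntu default, and 2001:db8::1 is a documentation placeholder – substitute your Pi's global IPv6 address):

```
# /etc/apache2/sites-enabled/000-default (sketch)
<VirtualHost [2001:db8::1]:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www
</VirtualHost>
```

Note that IPv6 addresses in Apache configuration must be enclosed in square brackets, to distinguish the address from the port number.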
5/ Put this on your home’s DSL (or what your connection might be)
If you do this at home, you might now want to put this on your router's DMZ.
Posted on | May 28, 2014 | No Comments
This might not be strictly wireless, but it's a good use of DIY technology anyway: old mobile phones make excellent microscopes, for use in telemedicine (e.g. blood checks), water quality monitoring, etc.
The cost is almost zero – apart from the phone itself, everything can be built from scrap parts.
This model here loosely follows the instructions on http://www.iflscience.com/ & http://www.instructables.com/id/10-Smartphone-to-digital-microscope-conversion/ (thanks!) and was built by Lars Yndal at the PITLab at ITU Copenhagen – http://pit.itu.dk
Posted on | June 9, 2013 | No Comments
“There was a recent news item regarding a teenager’s project to use a super capacitor as a quick-charging energy storage device. The primary claim is that this could be used to fully charge a phone in just 30 seconds.”
Luckily, Wired put a physicist on the case, and he discusses this in much length.
Maybe a bit too complicated for most people.
Here's a simplified version (not precise, just testing whether this is realistic at all):
To charge a mobile phone battery (typically 1500 mAh at 3.7 V, i.e. 5–6 Wh) in 30 seconds,
you'd need about 600–700 W for those 30 seconds. Pretty hot.
The energy needed would be about 20,000 J.
The energy stored in a capacitor is
W = 0.5 * C * V**2
so that would give (for a 5 V capacitor)
C = 1600 F
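Written out, with W for the stored energy (the 5 V charging voltage and the ~5.5 Wh battery energy are the assumptions):

```latex
\begin{align*}
W &\approx 5.5\,\text{Wh} \times 3600\,\tfrac{\text{J}}{\text{Wh}} \approx 2.0\times10^{4}\,\text{J} \\
P &= \frac{W}{t} = \frac{2.0\times10^{4}\,\text{J}}{30\,\text{s}} \approx 660\,\text{W} \\
C &= \frac{2W}{V^{2}} = \frac{2 \times 2.0\times10^{4}\,\text{J}}{(5\,\text{V})^{2}} = 1600\,\text{F}
\end{align*}
```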
Do such capacitors exist?
Yes, they are called supercaps, and no, Khare did not develop them.
They have been used in lots of electronics for decades.
They are inside some of the stuff you use.
You can get them on ebay:
A supercap of 1600 F would be a bit too big for your phone, but
since her work
in fact looks at improving (rather than “inventing”) supercaps,
it's not entirely impossible that you'd see something like this, at some point in time.
Just not next week.
Her report claims
“an energy density of 20.1 Wh/kg, comparable to batteries,
while maintaining a high power density of 20540 W/kg.”
The energy density limit would make her current supercap about 250 grams –
a bit heavy still, but a step in the right direction; the rest is gradual improvement.
So: not possible today – which is not all bad, just reported by most media in a misleading way.
Posted on | June 7, 2013 | 1 Comment
Last week i attended the TV White Spaces Africa Forum 2013
It was an event organised and largely sponsored by Google in partnership with a number of other organisations,
including Microsoft http://www.microsoft.com/,
Internet Society Senegal http://www.isoc.sn/,
Afrinic http://www.afrinic.net/, APC http://www.apc.org/,
and the Senegalese Ministry of Communication, Telecommunications and the Digital Economy
There are plenty of tweets at https://twitter.com/search?q=%23TVWSAfrica,
and Steve Song, now also a fellow NSRCrian, has an excellent summary here: http://manypossibilities.net/2013/06/tv-white-spaces-in-africa/
– to which i’d like to add a few thoughts:
( Steve’s lines in italics, comments in bold.)
Both [Microsoft and Google] stand to benefit from having more Africans affordably online and in this particular case, their corporate goals are fairly well-aligned with a development goal.
In some areas, goals are well-aligned, in other areas – like local ownership, network neutrality, control and openness of data they are in conflict.
There is the very fundamental question of why African networks and services should be run, and user profiles and profits be harvested, by non-African companies.
What stuck in my mind from this session was an interesting analogy
shared by Kai Wulff who pointed out that in Africa, taxi drivers
generally don’t fill up their tank completely but just enough to get
where they need to go. Similarly, pay-as-you-go airtime allows for
dynamic management of phone charges. This is common knowledge but it
was a nice insight that this conceptual framework might be applied to […]
It could be argued that the underlying logic and reasons for this,
e.g. the non-availability of capital on personal and organizational level,
should be challenged rather than solidified and capitalized on –
but that is a general comment, outside the scope of this conferences’ theme.
Arno Hart, TENET’s project manager for the [Cape Town] trial, presented the work of
the trial so far. So far 10 schools have been connected representing
more than 6,400 students. So far the technology has met its most
important goal, which is demonstrating non-interference with television
broadcasts. Cape Town has more active terrestrial television broadcast
channels than most places in South Africa, so it was ideal for a trial of
TVWS. The throughput and latency of the connectivity is not quite the
10 Mbps over 8 km that has been advertised, but it was still respectable.
It is evident that TVWS technology is still evolving and performance
improvements can be expected.
The focus of this trial was in fact to look at interference, in an urban privileged environment, rather than at performance in a rural, low density environment, in which we see the greatest potential of this technology.
The performance as such is far from what can be achieved in the 2.4 and 5 GHz bands, for such a link.
It led some people to note that, performance-wise, all of these trials could have been done better in spectrum that is already open (2.4 and 5 GHz).
(… The Kenya Mawingu project: …)
A video is at https://vimeo.com/60073409 –
I'd be interested in people's comments on its narrative.
There was some discussion among the participants as to whether an
authentication database was the best approach to TVWS for Africa. Some
people argued that this was importing unnecessary limitations from
highly developed (urban) areas rather than remote low-density environments
(in which TVWS has its greatest innovative potential),
essentially tying Africa to a western agenda and blocking innovation rather than enabling it.
It is an interesting question. On the one hand, I would like to
see TVWS achieve the same kind of success that WiFi has through an
unlicensed environment. I fear that constraints such as an
authentication database might limit the uptake of the technology.
Certainly, the idea of having to connect
to a remote database in order to just start equipment
is somewhat of a horror scenario for network deployers out in remote fields and community networkers alike.
But, from a regulator's perspective, there is no necessity to limit equipment and innovative use in this way,
and one might hope the rules will not be interpreted in this way.
From an African regulator's perspective, the US – or generally external – database is not needed in order to get started – databases may be built along the way, and be controlled at national or regional level.
As these databases could and will become increasingly dynamic,
they carry enormous economic value and control potential.
They add one more layer to (already impressive) corporate mapping, surveillance and profiling technology.
The question of whether they will be run and owned by private global companies
and/or national/regional organizations deserves attention.
In order for TVWS [to] achieve the same kind of success that WiFi has through an
unlicensed environment, it will take the same essentials:
unlicensed spectrum, open standards, community interest, sub-$100 gear.
Unlicensed spectrum (WiFi) – not the mobile operators' spectrum – today carries the larger part of all mobile data traffic. There is something to learn from this, also for #TVWSAfrica.
Posted on | May 4, 2013 | 3 Comments
putting the green window plug into perspective:
Posted on | August 14, 2012 | No Comments
Steve Houben, Sebastian Buettrich
Arduino Touché @pITlab –
all credits & thanks: DZL, Mads Hobye
we followed their instructable 100% and it worked straight away.
Posted on | August 8, 2012 | 1 Comment
Second comment to the security debate, e.g. here
So MSCHAPv2 is completely broken. No problem.
For EAP/802.1x wireless security, that should not matter, as we only use it inside a tunnel (TTLS, PEAP) (SSL protected).
Popular EAP/802.1x-methods: PEAP+MSCHAPv2 or TTLS+PAP or TTLS+MSCHAPv2
In most networks, on most clients, certificate validation is largely absent
and difficult to enforce across all clients (BYOD!).
Moreover, many user guidelines explicitly ask clients to NOT validate the certificate.
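For clients that do support it, validation can be enforced explicitly. A minimal wpa_supplicant sketch (the SSID, identity, CA file path and server name below are placeholders, not from any particular deployment):

```
# wpa_supplicant.conf (sketch; SSID, identity, CA file and server name are placeholders)
network={
    ssid="example-enterprise"
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="user@example.org"
    password="secret"
    # The two lines below are what most deployments (and user guidelines) omit:
    ca_cert="/etc/ssl/certs/example-radius-ca.pem"   # validate the RADIUS server certificate
    subject_match="radius.example.org"               # and check the expected server name
    phase2="auth=MSCHAPV2"
}
```

Without the `ca_cert` and `subject_match` lines, the client will happily build its TLS tunnel to any RADIUS server that presents any certificate – which is exactly the attack described below.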
A very simple, realistic attack scenario:
Place a rogue AP with the right SSID and connected to a fake RADIUS server in the target building/area,
and harvest logons at leisure.
No client has any chance to even notice the attack.
So, the tunnel is broken.
That MSCHAPv2 itself is broken hardly even matters:
the attacker lures the client into talking to their rogue RADIUS server,
and can of course read all user credentials, regardless of encryption.
This is NOT a little irrelevant side note to the discussion of MSCHAPv2 – quite the opposite:
the MSCHAPv2 discussion, while more intellectually interesting, is unfortunately an academic side note to the fact that our de-facto wireless security practices render EAP/802.1x broken.
Unless the certificate validation problem is addressed,
we should consider current wireless security with EAP/802.1x completely broken / obsolete.
Agreed – it would not have to be, but it is.
Posted on | August 1, 2012 | No Comments
Quoting this DEFCON 20 article
“MS-CHAPv2 is used quite heavily in WPA2 Enterprise environments.
In their 1999 analysis of the protocol,
Bruce Schneier and Mudge conclude “Microsoft has improved PPTP to correct the major security weaknesses described in [SM98].
However, the fundamental weakness of the authentication and encryption protocol is that it is only as secure as the password chosen by the user.”
“This, along with other writings, has led both service providers and users to conclude that they can use MS-CHAPv2 in the form of PPTP VPNs and mutually authenticating WPA2 Enterprise servers safely, if they choose good passphrases.”
Is there anything new in the attack reported here, then?
The attack focuses not on a dictionary or guessing attack on the password but, instead, on
recovering the MD4 hash of the user's password.
A detailed look into the problem shows that what looks like a 2**128 crack job is really just a 2**56 one – due to redundancies, shared bases and zero padding.
In other words, a single DES crack.
The actual crack work is performed by a dedicated piece of hardware, “an FPGA box that implemented DES as a real pipeline, with one DES operation for each clock cycle.
With 40 cores at 450 MHz, that's 18 billion keys/second. With 48 FPGAs, the Pico Computing DES cracking box gives us a worst case of ~23 hours for cracking a DES key, and an average case of about half a day.”
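Those throughput figures are easy to sanity-check; a quick sketch, using only the numbers from the quote above:

```python
# Sanity check of the Pico Computing DES-cracking figures quoted above.
cores_per_fpga = 40
clock_hz = 450e6                        # 450 MHz, one DES operation per clock cycle
keys_per_fpga = cores_per_fpga * clock_hz
print(keys_per_fpga)                    # 1.8e10, i.e. "18 billion keys/second"

fpgas = 48
total_rate = fpgas * keys_per_fpga      # keys/second for the whole box
keyspace = 2 ** 56                      # single-DES keyspace
worst_case_hours = keyspace / total_rate / 3600
print(round(worst_case_hours))          # ~23 hours worst case, so about half a day on average
```

The arithmetic matches the quoted "~23 hours" worst case exactly.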
This cracking engine is made accessible via “the cloud” (no comment on the cloud meme here) – an API and helper tool, free for download.
The article concludes:
“Enterprises who are depending on the mutual authentication properties of MS-CHAPv2 for connection to their WPA2 Radius servers should immediately start migrating to something else. ”
On Schneier's blog, a comment has been requested:
“Can we get a comment / response to the work presented at Defcon on MS-CHAPv2 only being as secure as a single round of DES?”
But, for something that sounds like it's going to bring down most of this planet's enterprise wireless, it doesn't seem to make an awful lot of waves. Why not?
As far as wireless security (and only that!) is concerned,
does this discovery mean we need to stop using MSCHAPv2 or maybe even
EAP-TLS/PEAP/802.1x altogether, and “use something else” – as the article
somewhat vaguely says?
Simplified: it doesn't matter much to Enterprise WiFi if MSCHAPv2 is broken, as we are
only using it inside protected tunnels.
Andrew von Nagy explains this much better than I could:
“What is the Impact to Wi-Fi Network Security?
Specifically, does this make much of an impact for Wi-Fi networks where
802.1X authentication is employed where MS-CHAPv2 is used (namely
EAP-PEAPv0 and EAP-TTLS)?
Answer – No, it really does NOT. The impact is essentially zero.”
Much more of a problem in real-life wireless is the fact that, on the networks I have seen, almost nobody enforces strict certificate validation.
Also, keep in mind that certificates are bound to hosts/domains/organizations, but in no way to SSIDs (whether ESSID or BSSID) or APs.
Thus, a realistic attack scenario is quite simple:
I deploy a rogue AP and RADIUS server that supplies some (!) certificate (which will
never be checked for validity!), own the TLS tunnel and hence all communication inside it, and harvest usernames and passwords as people connect.
Now THAT is a problem.
The fact that I speak openly inside the tunnel is not really a problem,
as long as we know who owns the tunnel.
So, we can more or less ignore the MSCHAPv2 hack and focus on certificates instead.
PS: Thanks to my NSRC colleagues for the heads-up, and to my colleague Felix here at ITU for the discussion!
Posted on | June 8, 2012 | No Comments
1. Physical connections
DVI screen, network, usb keyboard, power.
2. Prepare SD card
I'm using Debian Squeeze as supplied here: http://downloads.raspberrypi.org/download.php?file=/images/debian/6/debian6-19-04-2012/debian6-19-04-2012.zip
It's the “image we recommend you use. It’s a reference root filesystem from Gray and Dom, containing LXDE, Midori, development tools and example source code for multimedia functions.”
Download it, insert the SD card – which in my case mounts as /dev/sdb1 – then unmount it and write the image to the whole device (not the partition):
# umount /dev/sdb1
# dd bs=1M if=~/Downloads/debian6-19-04-2012/debian6-19-04-2012.img of=/dev/sdb
1859+1 records in
1859+1 records out
1950000000 bytes (2,0 GB) copied, 947,865 s, 2,1 MB/s
3. First boot
Insert it into the Pi.
Not so good: the card sticks out from the board quite a bit.
Well, nobody said to expect a cased designer product.
Power supply is known to be critical – I've chosen a 5 V / 1200 mA one; that should be margin enough.
Plug it in; it boots without a problem.
Log in – some people have noticed that the Pi comes with English keyboard settings as standard, so check whether you are QWERTY or QWERTZ.
Run
# dpkg-reconfigure locales
as root to adjust your locale settings.
4. Summary so far
No surprises, install and boot is easy, following the guides linked from the raspberry site –
and some guides out there.
Now we are starting to run various tests, trying different images, seeing how the CPU copes, whether the board gets hot, etc.
To be continued.