Blog

Decrypting Lost LUKS Partitions

Linux supports block-level filesystem encryption via LUKS and the cryptsetup utility. When installing Linux, disk encryption is a recommended option, as it significantly improves your data security. Encrypted external drives are unreadable on Mac and Windows computers, which will then ask you whether you want to format them. If you’ve formatted a drive by accident, do not panic: just make sure you don’t write any new data to the drive and follow the steps below to get your data back.

 
1. Search the hard drive for the (missing) LUKS partition.
Substitute sdc with your hard drive (use, for example, gnome-disks to identify the drive's device path):

hexdump -C /dev/sdc | grep LUKS

This will output something like:

hexdump -C /dev/sdc | grep LUKS
2e3b5040  65 73 73 20 64 65 6e 69  65 64 00 4c 55 4b 53 ba  |ess denied.LUKS.|
2f500000  4c 55 4b 53 ba be 00 01  61 65 73 00 00 00 00 00  |LUKS....aes.....|

→ If the drive contains multiple encrypted partitions, you will get more matches; if it holds just one, you can cancel the command (Ctrl+C) after the first hit. The partition you are looking for starts where the LUKS magic bytes (4c 55 4b 53 ba be) sit at the very beginning of a line, here at offset 2f500000.

 
2. Loop-mount the found partition.
Prefix the offset reported by grep in the previous step (for example: 2f500000) with "0x" and attach a read-only loop device:

losetup -o 0x2f500000 -r -f /dev/sdc
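The -f flag attaches the first free loop device, usually /dev/loop0. If you want to be sure which device was used, util-linux's losetup can tell you:

losetup -a                                   # list active loop devices, offsets and backing files
losetup -o 0x2f500000 -r -f --show /dev/sdc  # or: --show prints the device it just attached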

 
3. Decrypt the found partition.
The following command maps the decrypted volume to /dev/mapper/decrypted_partition. You will be asked for your passphrase.

cryptsetup luksOpen /dev/loop0 decrypted_partition
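
If luksOpen complains, you can sanity-check first that the loop device really starts with a LUKS header:

cryptsetup isLuks /dev/loop0 && echo "valid LUKS header"
cryptsetup luksDump /dev/loop0   # prints cipher, key slots and UUID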

 
4. Access the decrypted partition.
For regular filesystems such as ext4 or btrfs, the partition should now show up in your favorite file browser or in gnome-disks.

If the partition contains an LVM, run:

vgchange -ay

 

Then check your file browser or gnome-disks again for your volumes. De-panic and back up your data to another disk.
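
If nothing shows up automatically, a manual read-only mount and copy to a safe disk might look like this (the mount point and backup target are placeholders):

mkdir -p /mnt/recovered
mount -o ro /dev/mapper/decrypted_partition /mnt/recovered
rsync -a /mnt/recovered/ /path/to/backup-disk/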

Optimizing Images for Google PageSpeed with PHP and NGINX

One of the most critical factors in getting a high Google PageSpeed score is optimized images: PageSpeed penalizes every image that is served uncompressed. This blog post shows you a framework-independent way to optimize your images with PHP and NGINX.

SPIP, Drupal, WordPress and the like usually store and serve uncompressed files from a single folder: in SPIP it is "local", in WordPress it is "wp-content/uploads". Instead of directly optimizing the source images in these folders, I’ve written a PHP script that copies the files to a separate folder mimicking the same folder structure and filenames, and then optimizes the copies. The tools used for the actual optimization are jpegoptim and optipng, so make sure both are installed on the server.

 

Optimizing the Images
Deploy the following script in the root directory of your PHP framework.

<?php
// Optimizes images for delivery over the web.
function copyfile($in, $out, $outfolder) {
        // Check whether the file already exists at the destination.
        if (file_exists($out)) {
                $return = "already_processed";
        }
        else {
                exec('mkdir -p ' . escapeshellarg($outfolder));
                exec('cp ' . escapeshellarg($in) . ' ' . escapeshellarg($out));
                exec('chmod 777 ' . escapeshellarg($out));
                echo "copied file: $in to $out\n";
                $return = "not_processed";
        }
        // Calling the script with ?all forces reprocessing of every file.
        if (isset($_GET['all'])) {
                $return = "not_processed";
        }
        return $return;
}

// Collect all files below the source folder.
$rii = new RecursiveIteratorIterator(new RecursiveDirectoryIterator('/var/www/html/local'));
$files = array();
foreach ($rii as $file) {
        if ($file->isDir()) {
                continue;
        }
        $files[] = $file->getPathname();
}

foreach ($files as $value) {
        $file_input = $value;
        // Regex to swap the source folder for the destination folder.
        $re = '/(\/var\/www\/html\/local)/';
        $subst = '/var/www/html/local_optimized';
        $file_output = preg_replace($re, $subst, $value);
        // Get the folder name of the output file.
        $file_output_pathinfo = pathinfo($file_output);
        $file_output_folder = $file_output_pathinfo['dirname'];
        echo $file_input . "\n";
        echo $file_output . "\n";
        echo $file_output_folder . "\n";
        if (exif_imagetype($value) == IMAGETYPE_PNG) {
                echo "The picture is a PNG...\n";
                $processing_status = copyfile($file_input, $file_output, $file_output_folder);
                if ($processing_status == "already_processed") {
                        echo "already processed... nothing to do.\n\n";
                } else {
                        echo "processing now...\n\n";
                        $output = exec('optipng -o5 ' . escapeshellarg($file_output));
                        echo $output;
                }
        }
        if (exif_imagetype($value) == IMAGETYPE_JPEG) {
                echo "The picture is a JPG...\n";
                $processing_status = copyfile($file_input, $file_output, $file_output_folder);
                if ($processing_status == "already_processed") {
                        echo "already processed...\n\n";
                } else {
                        echo "processing now...\n\n";
                        $output = exec("jpegoptim --verbose --max=80 --strip-all --preserve --totals " . escapeshellarg($file_output));
                        echo $output;
                }
        }
}
?>

 

You can run the script directly from the command line (php filename.php) or access it over the web at https://yourwebsite/filename.php. Running it from the CLI and automating it with a cron job is preferable, though: there is no PHP max-execution-time limit on the CLI, and optimizing images is a resource-heavy process.
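
A nightly cron entry might look like this (the script and log paths are placeholders for your own):

# /etc/crontab: optimize newly uploaded images every night at 03:00
0 3 * * * root php /var/www/html/optimize-images.php >> /var/log/image-optimize.log 2>&1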

 

Configuring NGINX
Next we need to configure NGINX to serve images from the "local_optimized" folder instead of the "local" folder. Because the script above runs periodically as a cron job, we want to fall back to the "local" folder whenever an image cannot (yet) be found in the "local_optimized" folder.

In your NGINX website conf file, make sure you add a "location" block for your main website ("/"), then serve the "local_optimized" folder first and the original "local" folder as a fallback:

#your main location block
location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://10.0.0.10:443;
}

#your optimized image block
location /local/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://10.0.0.10:443/local_optimized/;
        proxy_intercept_errors on;
        recursive_error_pages on;
        error_page 404 = @static_image_https;
}

#your optimized image block fallback
location @static_image_https {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://10.0.0.10:443;
}

Don’t forget to run nginx -t and service nginx reload.

 

Validate your Setup
As mentioned in the beginning, Google PageSpeed is a great tool to validate that your images are served compressed. To verify that the script itself is working, you can simply compare the file sizes of the images in the "local" folder with those in the "local_optimized" folder.
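
A quick shell check, assuming the folder paths used by the script above:

du -sh /var/www/html/local /var/www/html/local_optimized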

Qemu Headless Install

I use qemu to virtualize appliances on my servers, and I access them over SSH. Although SSH supports X11 forwarding, so it is possible to fire up a graphical installer, that is not very convenient: if the connection drops, the install is cancelled.

A convenient workaround is qemu’s curses interface: instead of a graphical display, a text terminal is attached to the virtual machine. That way you can install and use your virtual machines directly in the SSH terminal.

To install a new system from scratch, simply define the -hda, -cdrom and -boot flags, and add the -curses option:
qemu-system-x86_64  -hda /dev/sdc1 -m 1500 --enable-kvm -curses -cdrom images/debian-stable.iso -boot d

The Debian installer supports headless mode, but we have to add two boot parameters to make it work with curses.

Fire up qemu with the command above and wait a few seconds; the screen will turn black after syslinux loads. Hit ESC, and at the boot prompt enter:

install fb=none vga=normal

Then follow the installer. Voilà, you’re installing your VM in qemu directly over SSH.

Tip: to avoid losing the install if the network connection drops, run the commands inside a screen session.
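
A minimal sketch, assuming GNU screen is installed (the session name vm-install is arbitrary):

screen -S vm-install    # start a named session
qemu-system-x86_64 -hda /dev/sdc1 -m 1500 --enable-kvm -curses -cdrom images/debian-stable.iso -boot d
# detach with Ctrl-a d, reattach later with: screen -r vm-install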

LXC iptables Error When the Kernel Module Is Not Loaded

Recently I was deploying a service that made use of Linux’s iptables firewall, but this time inside an LXC container. LXC containers provide extremely lightweight virtualization and a simple way to separate environments.

When loading iptables rules in the container, I encountered the following error:

ERROR: initcaps
[Errno 2] modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/3.16.0-4-amd64/modules.dep.bin'
ip6tables v1.4.21: can't initialize ip6tables table `filter': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.

This happened because the ip6table_filter module was not loaded on the host. Usually the iptables command loads the module by itself when needed, but an LXC container shares the host’s kernel and is not allowed to load modules into it.

The solution is to simply load the kernel module on the host:

# on debian jessie, as root:
modprobe ip6table_filter

After that the container will be able to make use of the new kernel module.
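
To make sure the module is also loaded after a reboot, you can list it in /etc/modules on the host; Debian loads the modules named there at boot:

echo ip6table_filter >> /etc/modules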



OpenLabel: Product Information from the Crowd

I’ve been waiting for an open platform where people can share the "what to know" about any product. When I buy something I want to know what impact it has on my health, the workers, and the planet.

I want to know this information from people I trust, not from the corporations themselves, which don’t want me to worry about "what kind of aerosol is in that deodorant and its climate impact" but rather about "how this is going to increase my sex appeal."

Finally, platforms that help us gather and use consumer-sourced product information are emerging. With OpenLabel, https://theopenlabel.com/, and www.wiki-products.org (German), we are entering a buying culture where, if I have the choice between two products and I happen to care about animal rights, I can make the better choice. Better might not be good, but if we’re sticking with consumerism, let’s send the right signals up that chain.

When advertisement and mass media lose their power, we will probably stuff this void with real information. I’m looking forward to that.

Unattended Upgrades on Debian

Unattended upgrades on Debian allow the system to sync any repository and install upgraded packages automatically. This makes most sense for packages from the security repository.

The following commands enable unattended security upgrades on Debian 7 and 8. Note that it is not enough to just install the "unattended-upgrades" package.

apt-get install unattended-upgrades apt-listchanges
dpkg-reconfigure -plow unattended-upgrades

By default it runs daily and sends a mail to root. Follow the Debian wiki for further tweaking: https://wiki.debian.org/UnattendedUpgrades
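
For reference, dpkg-reconfigure typically writes a file like /etc/apt/apt.conf.d/20auto-upgrades containing:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";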

Enable Browser Caching with NGINX

To have browsers cache your content, set expiry and cache-control headers in your server block:

server {
        listen 80;

        location / {
                expires 30d;
                add_header Pragma public;
                add_header Cache-Control "public";
        }
}
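
To verify that the headers are actually sent, inspect a response with curl (the URL is a placeholder for one of your own assets):

curl -I http://example.com/image.jpg | grep -i -E 'expires|cache-control'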


Set default server on NGINX

When NGINX can’t match a virtual host to the requested domain name, it serves the first server block configured on the system. This results in all unconfigured domain names serving the same website, which negatively affects SEO.

To serve a site for all unconfigured domain names, use the default_server parameter to serve a placeholder page. Don’t forget to set it on both HTTP (port 80) and HTTPS (port 443).

Create a file in /etc/nginx/sites-enabled/ and paste this config to serve a static html page stored at /var/www. Don’t forget to adjust the reference to the SSL key and crt.

Don’t forget to restart the nginx server afterwards (Debian):
service nginx restart

Sample fallback config file

server {
        listen   80 default_server;

        root /var/www;
        index index.html index.htm;

        # Make site accessible from http://localhost/
        server_name localhost;

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ /index.html;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
        }
}



# HTTPS server
#
server {
        listen 443 default_server;
        server_name localhost;

        root /var/www;
        index index.html index.htm;

        ssl on;
        ssl_certificate /etc/nginx/fallback.crt;
        ssl_certificate_key /etc/nginx/fallback.key;

        # enables all versions of TLS, but not SSLv2 or SSLv3, which are weak and now deprecated
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

        # disables all weak ciphers
        ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";

        ssl_prefer_server_ciphers on;

        location / {
                try_files $uri $uri/ =404;
        }
}

Piwik behind a Proxy in LXC or Qemu

Piwik can look up your site visitors’ IPs to geolocate them. But if you run Piwik in an LXC container and forward traffic to the container with Apache or NGINX, Piwik thinks every visitor comes from the IP of the last network hop, e.g. 10.0.0.2 or your server’s public-facing IP.

I assume that the container runs Debian and you’re running PiWik with Apache.

1. Make sure you forward the original remote IP as HTTP headers to your Piwik install.
If you use NGINX as a proxy, you’d do it as follows:

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

2. Configure Piwik to use the forwarded HTTP headers instead of the normal ones:

nano /var/www/html/config/config.ini.php

and add these three lines below the [General] settings:

; Uncomment line below if you use a standard proxy
proxy_client_headers[] = HTTP_X_FORWARDED_FOR
proxy_host_headers[] = HTTP_X_FORWARDED_HOST

You may also need to enable the Apache RPAF module.

apt-get install libapache2-mod-rpaf
nano /etc/apache2/mods-enabled/rpaf.conf

Change the RPAFproxy_ips setting to include your bridge network’s IP or your public IP, depending on your setup, e.g. 10.0.0.2.
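
A minimal rpaf.conf might then look roughly like this (10.0.0.2 stands in for your bridge or proxy IP):

<IfModule rpaf_module>
        RPAFenable On
        RPAFsethostname On
        RPAFproxy_ips 127.0.0.1 10.0.0.2
        RPAFheader X-Forwarded-For
</IfModule>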

service apache2 restart

Check your Piwik and see your visitors’ IPs. Piwik can automatically anonymize IP addresses to protect privacy.

Picasa on Archlinux with Videos

1. Enable the [multilib] repository.

2. Install winetricks, wine, and lib32-lcms from multilib.
# pacman -S wine winetricks lib32-lcms

3. Then download and install Picasa:

wget dl.google.com/picasa/picasa39-setup.exe
export WINEARCH=win32
winetricks ie7
wine picasa39-setup.exe

Now you should have Picasa working.
Next, install video codecs so Picasa can pick up .MOV files, etc.
Download the following codec pack and install it with Wine. Select "Advanced" during the install, and avoid the "Windows"-specific options wherever you can:
http://codecguide.com/download_k-lite_codec_pack_basic.htm

Thoughts on OKfest 2012

I stumbled over this blog post, which made me think about a pretty hilarious moment at OKFest 2012, my first time in Finland.

An illustration of the potential difference between centralised and decentralised development at the infrastructure level was offered by Urs Riggenbach of Solar Fire, who described the development of open source hardware for small-scale hydro-electric power generation. Urs argued that, rather than massive cost large-scale Dams projects, with their visible ecological impacts, potential to displace communities, and scope for corruption in their contracting arrangements, communities could make use of Intellectual Property free designs to construct their own small-scale solutions.

Tim Davies, www.timdavies.org.uk

It was in a panel representing the “open development” efforts of the World Bank, Os and NGOs. As Tim Davies points out in his blog, open development and open knowledge as framed by the Open Knowledge Foundation focus strongly on data as in statistics, excluding other knowledge that is open, such as open tools for change: open hardware, openly accessible knowledge, construction plans, etc.

For me, open development means to work on solutions for positive change ("development") and at the same time "open" up the tools for this change so that others can do the same.

Senior Project Report and Presentation

Here is a presentation I gave as part of graduation week at College of the Atlantic, 2012. You can also read my senior project report as a PDF, here.

Project Report

High-res Version
PDF - 94 MB
Low-res Version
PDF - 2 MB

Global Collaboration, Local Production: Open Source Commons

The open source movement is facilitating the creation of knowledge in a decentralized way that serves all, not only the person that creates it. If the intellectual commons becomes the best source of information in other realms such as agriculture, politics, economics and medicine, it will be a revolutionary tipping point, writes Urs Riggenbach. Via STWR

14th January 2011 - Published first on HumJournal

Wikipedia proves that global collaboration on a commons (a resource commonly owned) is possible. The amount and quality of open source software shows that this collaboration can be very productive. An example is Firefox, an application that is free to all users. Everyone with the skills to program is welcome to contribute to its development. Other examples are Linux, a fully featured operating system, and Apache, software that powers 60% of the world’s websites. These technologies are a commons whose use is not restricted; use is encouraged, and the more people use and improve them, the more useful they become. This is what is so special about the intellectual commons.

We can take the idea of global collaboration on the commons further and apply it to technologies other than software. An exciting example of commons technology is building plans for life: plans for machinery, houses and devices for everything that is essential to sustain our lives. This makes the information from which industrial goods are produced a commons. This is revolutionary because it empowers people to use free and open knowledge to produce for themselves, rather than being dependent upon the predominant capitalist economy that forces them into industrial consumption. Patent laws currently enable capitalists to exclude people from access to their information, giving them monopolistic power to produce the goods that people want. The open source or commons movement is now creating peer-to-peer networks that facilitate the creation of that knowledge in a decentralized way. This serves all, not only the person who creates it.

The greatest example of this right now is what the people at Factor E Farm are doing: they are creating the Global Village Construction Set. It is a set of modular technologies that can be used primarily for farming and construction. The set is self-replicable, meaning that it includes the technologies one needs to replicate any tool that is part of the set. Factor E Farm has already developed a hydraulic farm tractor powered by a modular power generation unit that can also power their compressed-earth-brick (CEB) press as well as other tools. Farms around the world can adapt the plans and start producing machinery that is 8-20 times cheaper than the commercial "competition." As the number of people using these technologies increases, some of them will share the modifications they have made back with the user community. Hence these technologies will be ever-evolving.

There are other technologies similar to the Global Village Construction Set that enable a shift towards post-industrial, local, decentralized production.

One of them is the 3D printer: a printer that can print three-dimensional objects from materials such as plastic. It connects to a computer that supplies the 3D model and then prints the object layer by layer. While commercial variants are coming to the market, there are already three open source 3D printers. That means that anything made from plastic (or silicon, or perhaps soon metal) can be produced by a printer that people can build themselves, and the printer can itself print large parts of what it takes to build the next one. People can build their own version at the cost of raw materials and the time spent learning to build and use it. This is empowering. This is how production should be.

These are the 3D printers available today. Find out more about Makerbot, RepRap and Fab@Home.

Since they are based on the ability to move a tool in all three directions of x, y and z, the tool for printing plastic can be replaced by a tool to weld, cut, spray, stir, etc, each time increasing the possibilities of things to produce.

Building plans for life are empowering people to become less dependent on the industrial economy, as they can start to provide for themselves. They are in fact facilitating the movement of localizing economies, as communities with local economies can tap into this global commons and create their own market autonomy. Local/complementary currencies, farm-to-school programs, community-supported-agriculture, cooperatives and cooperative networks are also helping to localize economies; and making the information on how to create these a commons can facilitate the movement further.

Constructing hydraulic tractors or electronic technology needs skills, and so does using them. This is a problem, as today’s educational systems are built to supply graduates to the industrial workforce where only basic skills are needed. On the other hand, especially in higher education, students are taught a lot of knowledge that is not directly relevant to the creation of our day to day life. There is an interesting discussion about how much one needs to know in order to build, use and develop these machines at Global Guerrillas.

I believe that this trend in open source technology actually helps in bringing about sustainability, since it empowers local production through global innovation. It can make communities more self-reliant and their markets more autonomous, since production can be localized. It makes technology accessible and empowering. In terms of economic development, the best aid is when people become able to help themselves. Open source commons lead to a complete transfer of knowledge and create no dependence on corporate input.

Nurturing the commons brings together a variety of people. Those who see economic inequalities caused by inequalities of access to knowledge come together with those who take part in a global, collaborative process to create software that helps local economies start up. Open source hackers find that they have something in common with farmers who produce from their own seeds. A commons, be it a seed or software, has to be maintained; otherwise it dies. It also has to be protected; otherwise it can get taken over by private firms and made intellectual property by patenting it. In the open source software world, software is protected by a license that makes it impossible to patent or restrict the knowledge. But many farmers are still dependent on a commons that is not protected. Large multinational corporations are buying up their seeds, patenting them in such a way that the farmers don’t have the right to produce from their own seeds anymore.

The realms for application of open source principles are all-inclusive because production always depends on knowledge. Software, agriculture, architecture, technology, politics, economics, medicine and science are all realms for the application of open source principles.

Medicine, Science and Biotechnology

Governments should have an interest in their people’s health. Some governments are corrupted by the corporations they serve, but nevertheless governments that are publicly funding research should be interested in the open source approach for two reasons. 1) In order to bring the maximum benefit to a people’s health, the research result should be available to the whole industry freely. Hence research results should be released as a commons, using for example the Creative Commons license. 2) One government has only a limited amount of funding, and by working together with other governments they can advance more in their research and develop more and better medicine to serve their people. Governments might be interested in creating common research pools that increase the health of their citizens and those of other countries.

Many research teams are starting to cross-pollinate and collaborate, and there are 50 open source medicine projects, from software to actual projects that create accessible research data. As in open-source building plans for life, open source projects can not only focus on the building plan for a product, but also on the building plan for the machine (capital) that is used to produce that good, or open source capital. If open source medicine takes off, I imagine that there will be medical capital developed, such as a machine that can synthesize medicines on a small scale. It could connect to the Internet, where it could tap into a global database of medical building plans and research.

Biotechnology and the sciences in general would benefit from more collaborative approaches too. Cambia, an international non-profit focused on democratizing innovation, has launched and supports Open Innovation projects.

Politics

"Open source governance is a political philosophy which advocates the application of the philosophies of the open source and open content movements to democratic principles in order to enable any interested citizen to add to the creation of policy, as with a Wiki document. Legislation is democratically opened to the general citizenry in this way, allowing policy development to benefit from the collected wisdom of the people as a whole" (from Wikipedia, 6. Dec, 2011).

In terms of an architectural commons, the OpenArchitecture Network shows the way. The agency builds houses for communities in need through a collaborative process with the locals, and then open-sources the plans on its website.

How do architects get their ideas? They probably look at houses. Imagine if an architect can take the plan of any house and adapt it. No need to reinvent the same four walls over and over again. Someone may now take the plan for a school in Nairobi, make that building flood-resistant and use it in Haiti. The world then is blessed with two plans for different locations.

Economic development and local currency

An example for economic development is the organization called STRO. To improve living conditions, the organization focuses on strengthening local economies through local currencies and local production. Just like the OpenArchitecture Network, they are not able to work with every community; but they potentially help them by releasing their building tools as a commons. In the case of STRO, it is a software used to manage local currencies: Cyclos. The system allows people to have their own accounts and trade in local/complementary currency.

It is not a coincidence that institutions focused on localizing and autonomizing economies nurture the commons, as the examples of STRO and Cyclos show. In fact, localizing economies are both dependent on and supportive of a global commons. As we localize economies, they become less competitive with each other and more focused on providing what is needed locally. It also becomes easier to tell what is needed locally, as a smaller economy is easier to understand. They become more autonomous, as people start to control their own means of production, and more sustainable, as people are able to understand the effects of their production, while they have the power to directly affect how things are produced. With knowledge comes power, with power comes responsibility, and all of this leads to wiser decisions. But these communities all face the same problem: how to produce as much as possible locally. As firms produce locally, the need for global patents decreases, enabling a rise in collaboration. Hence the wheel does not need to be reinvented in every community; they build upon each other’s efforts. In the examples provided, I see a global network of collaboration on sustainable living and local economies emerging.

These networks are fueled by farmer-scientists, private people, development institutions and local businesses. They are not fueled by corporations or capitalist firms, even though the open source paradigm does not oppose profit maximization. It opposes the other basic principle of capitalism: exclusion. In capitalism, and hence in industry, goods are produced from knowledge that is proprietary; and keeping this knowledge restricted (a non-commons) gives firms their competitive edge. It is on this principle that the commons opposes capitalism.

As time passes, these intellectual commons may reach critical mass and become the best source of information for production, just like Wikipedia has become the best source for generally everything else. That will be a revolutionary tipping point. Imagine that the commons for cancer medicine has been developed through a collaborative process between three countries, and it has become the leading knowledge in the market. That means that even capitalist firms will tap into it; but as they do so, they will be required to share their modifications of the knowledge back to the public. That creates a truly "perfect market" in which profit maximization through exclusion is impossible.

Once part of our cultural evolution, the knowledge to produce for ourselves has been stripped away from us, leaving us dependent on the industrial economy. As this knowledge finds its way back into protected open source commons and the public domain, we are experiencing a revolutionary change, an empowerment like never before. Personally, I move forward by nurturing the commons; and I know many brothers and sisters who are working for the same causes of local welfare, solidarity, autonomy and sustainability.