
How does the Jenkins Credentials Plug-in store passwords?

The Jenkins Credentials Plug-in manages credentials stored in Jenkins. These credentials can be used by many jobs and by plug-ins for executing SSH commands, authenticating to other systems, or running other commands that need some sort of authentication or authorisation.

I recently used its API for the first time in the BioUno figshare Plug-in to store OAuth 1.0 credentials (consumer key, consumer secret, token key, token secret). There is another blog post with more details about how we used the plug-in; this post is specifically about how the passwords are stored by Jenkins.

Secret and ciphers

Jenkins stores its configuration on disk as XML using the XStream library. Plug-in developers using the Credentials Plug-in API must use the Secret class to encrypt sensitive information.
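
For a plug-in developer, using it boils down to wrapping sensitive values in Secret instead of String. Here is a minimal sketch, assuming a hypothetical class holding one OAuth field (Secret.fromString, getPlainText and getEncryptedValue are the actual hudson.util.Secret API; the class and field names are made up for illustration):

import hudson.util.Secret;

public class FigshareCredentialsExample {
    // Stored as a Secret, so XStream serialises the encrypted form instead of plain text
    private final Secret consumerSecret;

    public FigshareCredentialsExample(String consumerSecret) {
        // fromString() encrypts the given value (or keeps it if it is already a valid cipher text)
        this.consumerSecret = Secret.fromString(consumerSecret);
    }

    public String getConsumerSecretPlainText() {
        // Decrypted on demand; be careful not to log this value
        return consumerSecret.getPlainText();
    }

    public String getConsumerSecretEncrypted() {
        // The Base64-encoded encrypted form, as written to the XML files on disk
        return consumerSecret.getEncryptedValue();
    }
}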

The Secret.fromString method is responsible for creating a Secret from a given String. As the Secret Javadoc says, “this is not meant as a protection against code running in the same VM, nor against an attacker who has local file system access on Jenkins master”. But it at least makes things a little more complicated :-)

public static Secret fromString(String data) {
    data = Util.fixNull(data);
    Secret s = decrypt(data);
    if(s==null) s=new Secret(data);
    return s;
}

The first line simply replaces a null string with an empty “”, or keeps the current value if it is not null.

After that, the decrypt method is called.

public static Secret decrypt(String data) {
    if(data==null)      return null;
    try {
        byte[] in = Base64.decode(data.toCharArray());
        Secret s = tryDecrypt(KEY.decrypt(), in);
        if (s!=null)    return s;

        // try our historical key for backward compatibility
        Cipher cipher = getCipher("AES");
        cipher.init(Cipher.DECRYPT_MODE, getLegacyKey());
        return tryDecrypt(cipher, in);
    } catch (GeneralSecurityException e) {
        return null;
    } catch (UnsupportedEncodingException e) {
        throw new Error(e); // impossible
    } catch (IOException e) {
        return null;
    }
}

The KEY.decrypt() call will return a javax.crypto.Cipher. The cipher is created by CryptoConfidentialKey in the Jenkins API, which defines the algorithm used: AES.
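
Stripped of the Jenkins-specific key management, the round trip is plain javax.crypto usage. Here is a minimal sketch, assuming a hard-coded 128-bit key for illustration only (Jenkins derives its real key through the ConfidentialStore described below):

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

public class AesRoundTripSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical 16-byte (128-bit) AES key; never hard-code keys in real code
        SecretKeySpec key = new SecretKeySpec("0123456789abcdef".getBytes("UTF-8"), "AES");

        // Encrypt and Base64-encode, roughly what ends up inside the XML on disk
        Cipher encrypt = Cipher.getInstance("AES");
        encrypt.init(Cipher.ENCRYPT_MODE, key);
        String stored = Base64.getEncoder()
                .encodeToString(encrypt.doFinal("my-password".getBytes("UTF-8")));

        // Decrypt, mirroring Secret.decrypt(): Base64-decode, then AES in DECRYPT_MODE
        Cipher decrypt = Cipher.getInstance("AES");
        decrypt.init(Cipher.DECRYPT_MODE, key);
        byte[] plain = decrypt.doFinal(Base64.getDecoder().decode(stored));
        System.out.println(new String(plain, "UTF-8")); // prints my-password
    }
}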

Jenkins also has a ConfidentialStore, which is required to create the cipher. This class must be initialized before anyone tries to create or read a cipher. This extra step increases security a little, though access to the JVM is still a problem.

It is a bit late, so that is all for today. In summary: the Credentials Plug-in gives you a central place to manage credentials, but it is up to plug-in developers to use it. Sensitive values can be encrypted with AES on disk, so it is important that your file permissions, ACLs and system auditing processes are in place, well maintained and monitored.

Happy hacking!

Modeling observation data in SOS (Sensor Observation Service)

This week the NZ Herald published an article about a device, created by an Irish farmer entrepreneur, that sends a message to the farmer when a cow is about to give birth. The device monitors “heightened tail movement”.

In this post I will try to apply what I am learning from the SOS Tutorial (SOS, the Sensor Observation Service, is an Open Geospatial Consortium standard). Feel free to drop me a message via @kinow if you find any mistakes or have any suggestions.

Modeling the tail movement observation data in SOS

SOS is a standard designed to provide access to observation data. There are several server implementations, such as Kisters KiWIS, istSOS and 52North SOS.

The standard mentions and utilises several other standards, such as SensorML, WFS, XML and WMS. The SOS Tutorial on how to model your observation data for SOS starts by defining the procedure, observed property, feature of interest, phenomenon and result times, and the result value.

Let’s try to model the data from the tail movement sensor, field by field:

Procedure: the process that generated the observation, such as a sensor (from the O&M specification). In our example, a sensor identifier such as moocall_001.

Observed Property: the property which is observed (look at the NASA SWEET ontology for existing values). In our example, heightened tail movement.

Feature of Interest: the feature that carries the property being observed. In our example, the pregnant cow.

Phenomenon and Result Times: the phenomenon time is when the data has been taken, and the result time is when the result has been created; if both are the same, the resultTime can simply point to the phenomenonTime. In our example, 20150623142000.

Result Value: the result of the observation; it can be an OM_Measurement if numeric, an OM_TruthObservation, and so on (see O&M). In our example, 3 (supposing we have a scale from 1 to 5).

In the next post I will try to show how to load this model and some dummy data into a fresh installation of the 52North SOS server.

Groovy Hooks in Jenkins for increasing logging level

Yesterday, while debugging a problem we had in the BioUno update center, I realized that even after increasing the logging level in the web interface, the messages I needed were not being displayed in the logs.

That happened because some of the logging took place during Jenkins initialization, before I had a chance to adjust the log level.

The solution was to use a Groovy Hook Script. If you are familiar with Linux init scripts, the idea is quite similar.

A Groovy script in the $JENKINS_ROOT_DIR/init.groovy.d/ directory is executed during Jenkins initialization. This way you can increase the global logger level with a script like the one below.

import java.util.logging.ConsoleHandler
import java.util.logging.Level
import java.util.logging.Logger

// "" is the name of the root logger
def logger = Logger.getLogger("")
logger.setLevel(Level.FINEST)

// A ConsoleHandler publishes only INFO and above by default,
// so its level has to be lowered as well
def handler = new ConsoleHandler()
handler.setLevel(Level.FINEST)
logger.addHandler(handler)

Happy logging!

Contributing to Apache Jena

As I mentioned in my previous post, I am using Apache Jena for a customer project. I had never used a triple store, nor a SPARQL endpoint server, before. But since I am involved with the Apache Software Foundation, and the company itself uses several Apache components, it was only natural for Jena to be our first choice.

It has served us very well so far. At the moment we have fewer than 100 queries per day, but the project is still under development: we expect 1,000 queries per day by the first quarter of 2015 and 1,000,000 near the end of 2015. We also have only a few entries in TDB, but expect that number to grow to a few million before 2016.

Basic workflow of a SPARQL query in Fuseki

Before using any library or tool in a customer project, especially when it is an open source one, there are many things I like to look at before deploying it: the features, documentation, community, open issues (especially blockers and criticals), the time to release fixes and new features and, obviously, the license.

At the moment I’m using Apache Jena to work with ontologies, SPARQL and data matching and enrichment for a customer.

Jena is fantastic, and similar tools include Virtuoso, StarDog, GraphDB, 4Store and others. From looking at the code and its community and documentation, Jena seems like a great choice.

I’m still investigating if and how we are going to need inference and reasoners, looking at the issues, and learning my way through the code base. The following is my initial mapping of what happens when you submit a SPARQL query to Fuseki.

[Figure: Fuseki SPARQL query workflow]

My understanding is that Fuseki is just a web layer, handling a bunch of validations, logging and error handling, and relying on the ARQ module, which is what actually handles the queries. I also think a new Fuseki server is baking in the project git repo, so stay tuned for an updated version of this graph soon.
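
To make that concrete, here is a minimal sketch of submitting a SELECT query to a Fuseki endpoint from Java, letting ARQ handle the HTTP and result parsing. The endpoint URL and dataset name "ds" are assumptions based on the default Fuseki examples; QueryExecutionFactory.sparqlService is the standard Jena 2.x API:

import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QuerySolution;
import com.hp.hpl.jena.query.ResultSet;

public class FusekiQueryExample {
    public static void main(String[] args) {
        // Hypothetical local Fuseki endpoint with a dataset named "ds"
        String service = "http://localhost:3030/ds/query";
        String query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";

        // ARQ parses the query and sends it to the remote endpoint over HTTP
        QueryExecution qe = QueryExecutionFactory.sparqlService(service, query);
        try {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row);
            }
        } finally {
            qe.close();
        }
    }
}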

Happy hacking!
