Blog

Posts about technology and arts.

Add a header to a file with Shell script (sed)

Today I was re-generating the documentation for a REST API written in PHP with Laravel. Generating the documentation means calling a Laravel command first, which creates a Markdown page. Since this project uses Jekyll for the project site, the final step was adding a header to the file so that Jekyll recognizes the content as a blog post.

Laravel allows you to add custom commands to your project, so I wrote a wrapper command that calls the documentation generator and then, as an extra step, adds the header to the Markdown file.

Here’s the shell script part that adds a header to a file in place (i.e. it will alter your file and save the changes).

sed -i '1i ---\nlayout: page\ntitle: API Installation\n---\n' ./docs/documentation/api/api.md

Here, the first argument to sed, -i, stands for in place: sed edits the file directly instead of printing to stdout. That strange 1i is an address plus a command, telling sed to insert the given text before line 1, once. Then we have our header (Jekyll expects the front matter to be delimited by exactly three dashes), and finally the path to the file.
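Here is the same idea exercised against a scratch file - a minimal sketch assuming GNU sed, since both -i and the inline 1i text form are GNU extensions:

```shell
# Create a scratch file so we don't touch any real documentation
tmpfile=$(mktemp)
printf 'original first line\n' > "$tmpfile"

# Insert a Jekyll front-matter header before line 1, in place.
# GNU sed expands \n inside the insert text into real newlines.
sed -i '1i ---\nlayout: page\ntitle: API Installation\n---\n' "$tmpfile"

head -n 1 "$tmpfile"
rm -f "$tmpfile"
```

After the edit the file starts with the front-matter delimiter, followed by the layout and title keys, with the original content pushed below.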

Happy hacking!

Content negotiation with Spring Boot and React

A few days ago I had a bug in a system built with Spring Boot and React. The frontend application was using a REST client in React, built similarly to the examples found in the documentation and in blog posts.

import rest from 'rest';
import mime from 'rest/interceptor/mime';
const Rest = () => rest.wrap(mime);

However, for one of the Spring Boot application endpoints, the React component was not working. The response looked OK in the Network tab of the browser developer tools, but the component was failing and complaining when parsing the response.

It turns out the frontend was sending the request with the header Accept: text/plain, application/json, and Spring Boot was simply applying its default content negotiation and returning what the frontend asked for first: a text/plain version of what looked like JSON.
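To see why the order in the Accept header matters, here is a deliberately simplified sketch of server-driven negotiation in plain shell - an illustration only, not Spring's actual algorithm (which also handles wildcards and quality values): the server picks the first acceptable type it is able to produce.

```shell
# What the client sent, and what the server can produce (illustrative values)
accept="text/plain, application/json"
produces="text/plain application/json"

# Walk the Accept header in order; take the first type the server supports
chosen=""
for want in $(printf '%s' "$accept" | tr ',' ' '); do
    for have in $produces; do
        if [ "$want" = "$have" ]; then
            chosen="$want"
            break 2
        fi
    done
done

echo "$chosen"   # text/plain wins because it comes first in the Accept header
```

With that Accept header, text/plain is selected even though the payload is really JSON - exactly the mismatch the React mime interceptor then choked on.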

The quick fix was to request the content as JSON in React.

import rest from 'rest';
import mime from 'rest/interceptor/mime';
const Rest = () => rest.wrap(mime, { mime: 'application/json' });

Now we will revisit the backend so it returns JSON content regardless of what the client asks for :-)

Happy hacking!

Checking the operating system type in shell script

Last week I learned about ShellCheck, a static-analysis tool for shell scripts. It reports issues like missing double quotes, use of deprecated syntax, etc.

I decided to check some projects I contribute to, and the first issue I found was in Apache Jena:

kinow@localhost:~/Development/java/jena/jena/apache-jena/bin$ shellcheck arq

In arq line 8:
    case "$OSTYPE" in
          ^-- SC2039: In POSIX sh, OSTYPE is not supported.

So, in summary, the OSTYPE variable is not guaranteed to be available in a POSIX shell. The case in question, where OSTYPE is used, checks for the Darwin OS type (i.e. Mac OS). Knowing how things get weird across different operating systems, I decided to check how OSTYPE actually behaves. Here’s what I found.

  • In Ubuntu, with /bin/bash, OSTYPE works fine (linux-gnu)
  • In Ubuntu, with /bin/sh, OSTYPE is not set
  • In Mac OS, with /bin/sh, OSTYPE is set (darwin15)

I checked the shells to make sure they were not symbolic links - some distributions ship a different default shell and replace /bin/bash and/or /bin/sh with a link to another shell. It looks like Mac OS’s POSIX shell behaves differently from Ubuntu’s.

Instead of trying to find a way to use OSTYPE, I decided to spend some time looking at how other projects do the same thing. And the best example I could find was git.

Instead of relying on OSTYPE, git uses uname.
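A sketch of what a uname-based check could look like - the branch labels below are illustrative, not copied from git or Jena, and real scripts may need more cases:

```shell
# uname -s prints the kernel name (Linux, Darwin, ...) and is
# available in POSIX sh, unlike the bash-specific OSTYPE variable.
os="unknown"
case "$(uname -s)" in
    Darwin)          os="macos" ;;
    Linux)           os="linux" ;;
    CYGWIN*|MINGW*)  os="windows" ;;
esac
echo "Detected OS: $os"
```

Because uname is an external command rather than a shell variable, the same script behaves consistently under /bin/sh, bash, and the Mac OS shells.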

I will spend some time over the next few days working on a proposal to replace OSTYPE in the Apache Jena scripts, and may then submit more changes for the other issues ShellCheck found.

Happy hacking!

Using the AWS API with Python

Amazon Web Services provides a series of cloud services. When you access the web interface, most - if not all - of the actions you do there are actually translated into API calls.

They also provide SDKs in several programming languages. With these SDKs you are able to call the same API used by the web interface. Python has boto (or boto3), which lets you automate several tasks in AWS.

But before you start using the API, you will need to set up your access key.

It is likely that over time you will have different roles, each with different permissions. You have to configure your local environment so that you can use either the command-line utility (installed via pip install awscli) or boto.

The awscli tool makes setting up boto3 easier. After you install it, run aws configure. It will create the ~/.aws/config and ~/.aws/credentials files, which boto3 reads as well. You can tweak these files to support multiple roles.

I followed the tutorials but ran into all sorts of issues. After debugging some locally installed dependencies, especially the awscli files, I found that the following settings work for my environment.

# File: config
[profile default]
region = ap-southeast-2

[profile profile1]
region = ap-southeast-2
source_profile = default
role_arn = arn:aws:iam::123:role/Developer

[profile profile2]
region = ap-southeast-2
source_profile = default
role_arn = arn:aws:iam::456:role/Sysops
mfa_serial = arn:aws:iam::789:mfa/user@domain.blabla

and

# File: credentials
[default]
aws_access_key_id = YOUR_KEY_ID
aws_secret_access_key = YOUR_SECRET

And once it is done you can, for example, confirm it is working with some S3 commands in Python.

#!/usr/bin/env python3

import os
import boto3

session = boto3.Session(profile_name='profile2')
s3 = session.resource('s3')

name = 'mysuperduperbucket'

# Check whether the bucket already exists in this account
found = any(bucket.name == name for bucket in s3.buckets.all())

if not found:
    print("Creating bucket...")
    # Outside us-east-1 the bucket region must be given explicitly
    s3.create_bucket(
        Bucket=name,
        CreateBucketConfiguration={'LocationConstraint': 'ap-southeast-2'},
    )

# Upload a file that sits next to this script
file_location = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'samplefile.txt')
s3.meta.client.upload_file(Filename=file_location, Bucket=name, Key='book.txt')

The AWS config in this example uses MFA (multi-factor authentication) too, so the first time you run this code you may be asked for a token, which is then cached for a short time.

That’s it for today.

Happy hacking!

Best place to find science fiction books in New Zealand

Since I moved to New Zealand it has been hard to find good science fiction sections in the local book stores. I really miss the Livraria Cultura at Avenida Paulista, or the Saraiva megastores. Whitcoulls and Paper Plus do a good job, but they carry very few series, mostly recent releases, and you do not get that cool atmosphere while you browse the books.

I have found that small secondhand bookstores are the best places to find good books - not only science fiction - in New Zealand. Devonport has one near the waterfront. Auckland has Jason Books too, right in the CBD. Wellington, Dunedin and Queenstown have good ones as well (the Queenstown one is on the second level of a small mall - you have to check it out!).

But last week I remembered my wife telling me about this website, https://www.hardtofind.co.nz/. I found several books from my wish list there, and also other Stephen King books recommended by the bookstore clerk from Queenstown.

Books from Hard To Find bookstore

If you are into science fiction - Asimov, Neal Stephenson, Peter F. Hamilton, Piers Anthony, Arthur C. Clarke, Stephen King, etc. - I am sure you will enjoy going to the Hard To Find store in Onehunga. It is near Dress Smart, so during the week you can hop on the free Dress Smart bus from the city centre and enjoy checking out the store too.

It is a two-storey house with lots of rooms covering science fiction, New Zealand, maritime sports, arts, poetry, health, etc.

Happy reading!