Posts about technology and arts.
Amazon Web Services provides a suite of cloud services. When you access the web interface, most - if not all - of the actions you perform there are actually translated into API calls.
They also provide SDKs in several programming languages. With these SDKs you can call the same API used by the web interface. Python has boto (and its successor, boto3), which lets you automate several tasks in AWS.
But before you start using the API, you will need to set up your access key.
It is likely that over time you will have different roles, each with different permissions. You have to configure your local environment so that you can use either the command line utility (installed via pip install awscli) or boto.
After installing awscli, run aws configure. It creates the ~/.aws/config and ~/.aws/credentials files, which boto3 reads as well. You can tweak these files to support multiple roles.
I followed the tutorials but ran into all sorts of issues. After debugging some locally installed dependencies, in particular the awscli files, I found that the following settings work for my environment.
# File: config
[profile default]
region = ap-southeast-2

[profile profile1]
region = ap-southeast-2
source_profile = default
role_arn = arn:aws:iam::123:role/Developer

[profile profile2]
region = ap-southeast-2
source_profile = default
role_arn = arn:aws:iam::456:role/Sysops
mfa_serial = arn:aws:iam::789:mfa/firstname.lastname@example.org
# File: credentials
[default]
aws_access_key_id = YOUR_KEY_ID
aws_secret_access_key = YOUR_SECRET
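Both files use INI syntax, so you can sanity-check your setup by parsing them with Python's configparser before involving AWS at all. A minimal sketch (the path and the "profile &lt;name&gt;" section naming follow the example files above):

```python
#!/usr/bin/env python3
# List the profiles defined in an AWS config file and show which
# role (if any) each one assumes.
import configparser
import os


def list_profiles(config_path=os.path.expanduser('~/.aws/config')):
    parser = configparser.ConfigParser()
    parser.read(config_path)  # silently yields no sections if the file is missing
    profiles = {}
    for section in parser.sections():
        # Sections are named "profile <name>" in the example above.
        name = section.replace('profile ', '', 1)
        profiles[name] = parser[section].get('role_arn', '<no role>')
    return profiles


if __name__ == '__main__':
    for name, role in list_profiles().items():
        print(f'{name}: {role}')
```

This only inspects the file; it does not validate the keys or the role ARNs against AWS.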
Once that is done you can, for example, confirm it is working with some S3 commands in Python.
#!/usr/bin/env python3
import os

import boto3

session = boto3.Session(profile_name='profile2')
s3 = session.resource('s3')

found = False
name = 'mysuperduperbucket'
for bucket in s3.buckets.all():
    if bucket.name == name:
        found = True

if not found:
    print("Creating bucket...")
    s3.create_bucket(Bucket=name)

file_location = os.path.dirname(os.path.realpath(__file__)) + os.path.sep + 'samplefile.txt'
s3.meta.client.upload_file(Filename=file_location, Bucket=name, Key='book.txt')
The AWS configuration in this example also uses MFA, multi-factor authentication. So the first time you run this code you may be asked to enter a token, which will be cached for a short time.
That’s it for today.
Since I moved to New Zealand it has been hard to find good science fiction sections in the local bookstores. I really miss the Livraria Cultura at Avenida Paulista, and the Saraiva Megastores. Whitcoulls and Paper Plus do a good job, but they carry very few series, mostly recent releases, and you do not get that cool atmosphere while you browse the books.
I found that small secondhand bookstores are the best places in New Zealand to find good books, and not only science fiction. Devonport has one near the waterfront. Auckland has Jason Books too, right in the CBD. Wellington, Dunedin and Queenstown have them as well (the Queenstown one is on the second level of a small mall; you have to check it out!).
But last week I remembered my wife telling me about this web site, https://www.hardtofind.co.nz/. I found several books from my wish list there, as well as other Stephen King books recommended by the bookstore clerk from Queenstown.
If you are into science fiction - Asimov, Neal Stephenson, Peter F. Hamilton, Piers Anthony, Arthur C. Clarke, Stephen King, and so on - I am sure you will enjoy visiting the Hard To Find store in Onehunga, near Dress Smart. So if you stop by Dress Smart during the week, you can hop on their free bus from the city centre and enjoy checking out the store afterwards too.
It is a two-storey house with lots of rooms, covering science fiction, New Zealand, maritime sports, arts, poetry, health, and more.
JENKINS-17887 reports performance problems in the Jenkins TAP Plug-in, and lists a series of suggestions on how to improve its performance. In this initial post, we will get a general idea of how the plug-in performs for large projects.
BioPerl has over 21K tests. That should be enough to give an initial idea of the plug-in's CPU, memory and disk usage.
git clone https://github.com/bioperl/bioperl-live.git
cd bioperl-live
sudo cpanm -vv --installdeps --notest .
sudo cpanm Set::Scalar Graph::Directed XML::LibXML XML::SAX \
    SVG XML::Parser::PerlSAX Convert::Binary::C XML::SAX::Writer \
    XML::DOM::XPath Spreadsheet::ParseExcel XML::SAX::Writer \
    XML::DOM HTML::TableExtract XML::Simple Test::Pod DBI
prove -r t/ -a tests.tar.gz

All tests successful.
Files=325, Tests=21095, 94 wallclock secs ( 2.47 usr  0.55 sys + 88.29 cusr  3.85 csys = 95.16 CPU)
Result: PASS
When the test results are parsed, the plug-in also copies TAP files over to the master, in a folder called tap-master-files.
The BioPerl TAP files are not really big, just 1.7M. That gets doubled, since there is both the workspace copy and the tap-master-files copy, so 3.4M.
But several objects get created in memory and persisted into the build.xml job file. For BioPerl the build.xml weighs in at 11M, so still less than 15M of disk in total. The catch is that build.xml contains objects that Jenkins reads back into memory via XStream.
The build page with the graph, and the other two test result pages, take more than 10 seconds to render on my computer. The CPU load is OK, though, so a closer look at memory use would probably be more interesting.
The image shows one of the screens in the YourKit profiler, where you can see that org.tap4j.plugin.model.TapTestResultResult has over 6 million instances.
A single build.xml for the BioPerl project contains over 80K entries for the TestResult object.
grep "org.tap4j.model.TestResult" builds/1/build.xml -o | wc -l
84522
This happens because each TAP file may contain multiple test results, one per test line. Each of these gets turned into a Java object and loaded by the plug-in. So when the test result pages load, Jenkins has to wait until all these objects have been parsed, deserialized and read into memory.
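To get a feel for that one-object-per-line behaviour, here is a rough Python sketch (not the plug-in's actual Java code) that counts how many result objects a TAP stream would produce, treating each ok/not ok line as one object:

```python
import re

# A TAP result line starts with "ok" or "not ok", optionally followed
# by a test number and a description, e.g. "ok 1 - created bucket".
RESULT_LINE = re.compile(r'^(not )?ok\b')


def count_test_results(tap_text):
    """Count the lines that would each become one TestResult object."""
    return sum(1 for line in tap_text.splitlines()
               if RESULT_LINE.match(line.strip()))


sample = """\
1..3
ok 1 - parses config
not ok 2 - uploads file
ok 3 # skip no network
"""
print(count_test_results(sample))  # 3 result objects for 3 test lines
```

With 21K+ tests, that per-line cost is exactly what multiplies into the 80K+ serialized entries seen in build.xml.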
The next post will continue on code improvements, and another benchmark.
The last time I used Blender was around 2007, I think, at university. But the bad weather in Auckland gives me plenty of time to have fun checking out Blender again :-)
I followed these tutorials:
Here is the work in progress, created using only bézier curves.
Here’s the result after the mesh was created, and some material applied.
Then using a plane as background, replacing the lamp by a sun, and tweaking a few parameters.
And finally, playing with animation. I am not sure whether Blender had a timeline and animation controls the last time I used it, but the controls are not really complex.
I had to combine both meshes into a single object in order to add a bone and rotate it. That is why the logo went back to a single material. The camera angle could probably do with some tweaking as well.
But it was my very first time animating in Blender. Some day, if I get access to one of those 3D printers, I will check what the requirements are for printing this logo.
Blender models can be downloaded here.
Now back to programming :-)
For a while I had been wanting to try adding a flat colour to the Frege logo. This weekend I had a bit of spare time, and here is the result.
Submitted to the project as pull request #299.
And the updated logo.
The colour used was Flamingo (#EF4836), found in Flat UI Color Picker.
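If you want to reuse the exact colour elsewhere, note that Blender's material colour fields take RGB channels as 0-1 floats rather than a hex string. A small conversion helper (the hex value is the Flamingo colour above; the function name is my own):

```python
def hex_to_rgb(hex_colour, scale=255):
    """Convert a hex colour like '#EF4836' to an (r, g, b) tuple.

    With scale=255 you get the familiar integer-valued channels;
    with scale=1.0 you get the 0-1 floats Blender expects.
    """
    value = hex_colour.lstrip('#')
    channels = (int(value[i:i + 2], 16) for i in (0, 2, 4))
    return tuple(c * scale / 255 for c in channels)


print(hex_to_rgb('#EF4836'))       # (239.0, 72.0, 54.0)
print(hex_to_rgb('#EF4836', 1.0))  # roughly (0.937, 0.282, 0.212)
```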
Perhaps not exactly a revamp, since all I did was change a colour. There were adjustments to the bézier curves as well, to align them better, but I updated both logos, not just the new one :-)