Category: JSON
-
Stephen Dolan’s “jq” is a lightweight and flexible command-line JSON processor
- https://stedolan.github.io/jq/
- https://stedolan.github.io/jq/tutorial/
- https://stedolan.github.io/jq/manual/
- https://github.com/stedolan/jq
- https://github.com/stedolan/jq/wiki
- https://github.com/stedolan/jq/wiki/Cookbook
- https://stackoverflow.com/questions/tagged/jq
- http://shop.oreilly.com/product/0636920032823.do – O’Reilly “Data Science at the Command Line” – has some examples making use of jq
- https://library.oreilly.com/book/0636920032823/data-science-at-the-command-line/84.xhtml?ref=toc#_jq – behind a paywall
- available as source (portable C) and as executables for various Intel-based platforms, e.g. some Linux distributions, Mac OS X, and Windows incl. Cygwin
- https://cygwin.com/packages/x86_64/jq/
- https://www.safaribooksonline.com/library/view/json-at-work/9781491982389/ch06.html#json_search : …, jq, jqPlay, jq-tutorial, …
“jq” (presumably) stands for “JSON query processor”.
jq helps with writing shell scripts that process JSON responses from e.g. RESTful application APIs like Jenkins, Atlassian JIRA, Atlassian Confluence, … – code making use of powerful means like XPath (…) or jq is supposedly far more readable than Python / Perl / … scripts that slurp the JSON and process it without jq resp. XPath – but there are also Python resp. Perl bindings for jq.
Hope and fear
A couple of weeks ago I had written a shell script querying a Jenkins server’s REST API “à la XML”. Now it looks a little straighter to query the API “à la JSON” and employ jq. But the critical question is whether the (industrial) customer I wrote the Jenkins utility for will like the dependency on a utility like jq. My Jenkins utility might have to serve for quite a couple of years – but who can predict the future and availability of jq?
Update 2016-02-23: Meanwhile I built a shell script with a couple of simple jq queries accessing Jenkins CI. Looks rather impressive to me.
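To illustrate how little code such a query needs, here is a minimal sketch (the Jenkins host name is made up; the `jobs[].name` shape is what Jenkins’ standard /api/json endpoint returns):

```shell
# list the names of all jobs known to a (hypothetical) Jenkins server;
# /api/json returns something like {"jobs":[{"name":"...","color":"..."},...]}
curl -s "http://jenkins.example.com/api/json" \
  | jq -r '.jobs[].name'
```

`jq -r` emits raw strings instead of JSON-quoted ones, which is what you want in shell pipelines.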
If you do XPath queries from shell scripts, you always have the option to rewrite the shell script as Python or Perl script. But what about JSON queries (…) in Python or Perl?
- https://pypi.python.org/pypi/jq
- https://github.com/spiritloose/JQ — “Perl binding for jq”
By default, jq pretty-prints JSON output.
$ jq . < … > …
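For example (a made-up one-line JSON document; the “.” filter is jq’s identity filter, which just passes the input through, pretty-printed):

```shell
# "." passes the input through unchanged; jq pretty-prints by default
echo '{"name":"QA-31","fields":{"summary":"a sample issue"}}' | jq .
```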
-
JSON and Perl – json-tidy
- http://search.cpan.org/perldoc?JSON
- https://metacpan.org/module/json_pp – a nice utility that also does JSON pretty-printing
-
the JIRA REST API, how to authenticate, …
- https://en.wikipedia.org/wiki/Jira_(software)
- https://developer.atlassian.com/jiradev/jira-apis/jira-rest-apis/jira-rest-api-tutorials/jira-rest-api-example-basic-authentication
- https://developer.atlassian.com/jiradev/jira-apis/jira-rest-apis/jira-rest-api-tutorials/jira-rest-api-example-cookie-based-authentication
- https://metacpan.org/module/json_pp – a nice utility that also does JSON pretty-printing; it comes with http://search.cpan.org/perldoc?JSON
There are certainly legions of reasons to use a REST API, and in particular the JIRA REST API; I wanted to create a linear “diary” of JIRA actions.
This is our sample JIRA issue URL:
http://kelpie9:8081/browse/QA-31
This is its corresponding REST URL:
http://kelpie9:8081/rest/api/2/issue/QA-31
Find yourself a working sample JIRA issue URL, use the corresponding REST URL in your browser, and save the returned JSON to a file!
You usually want to read “pretty” / tidied JSON, so before you start reading JSON, find yourself a JSON-tidy utility:
Usually we want to retrieve JSON from JIRA through REST URLs via the curl utility.
CAVEAT: See my note on the cookie jar below!
This is the “simple example” that the page referred to above (“Basic Authentication“) shows you:
$ curl -D- -u fred:fred -X GET \
       -H "Content-Type: application/json" \
       "http://kelpie9:8081/rest/api/2/search?jql=assignee=fred"
If your JIRA site requires you to use “Basic Authentication”, you have to encode username:password base64-wise, and this is how to do it:
$ echo -n fred:fred | base64
So if you want to use “Basic Authentication” with these credentials, this is how … (using our sample REST URL):
$ curl -D- -X GET \
       -H "Authorization: Basic $(echo -n fred:fred | base64)" \
       -H "Content-Type: application/json" \
       "http://kelpie9:8081/rest/api/2/issue/QA-31"
During my experiments I got locked out of the company’s Active Directory / SSO quite a few times — and I had to call the help desk in order to get my account reset. This is what JIRA tells you, once it decides you have to go through a CAPTCHA_CHALLENGE procedure, because you are behaving a little too suspiciously:
HTTP/1.1 403 Forbidden
Server: Apache-Coyote/1.1
X-AREQUESTID: ...
X-Seraph-LoginReason: AUTHENTICATED_FAILED
Set-Cookie: JSESSIONID=...; Path=/; Secure; HttpOnly
WWW-Authenticate: OAuth realm="https%3A%2F%2Fjira.___.com"
X-Content-Type-Options: nosniff
X-Authentication-Denied-Reason: CAPTCHA_CHALLENGE; login-url=https://jira.___.com/login.jsp
Content-Type: text/html;charset=UTF-8
Content-Length: 6494
Date: Wed, 02 Dec 2015 11:59:15 GMT
But once you are beyond this, making use of the JIRA REST API works like a charm.
Update: Although I certainly had not failed (“basic”) authentication, JIRA got my Active Directory / SSO account locked again and again. My new strategy:
- 1st logon through “basic authentication” and store the cookie jar
- further authentications (during a script run) through the cookie stored before — yes, I will supply you with examples here in the near future
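Until then, a minimal sketch of the idea (host, credentials and the cookie-jar path are made up; /rest/auth/1/session is the endpoint the “cookie-based authentication” tutorial linked above describes):

```shell
# 1st logon: authenticate once, let curl store the session cookie in a jar (-c)
curl -s -c /tmp/jira-cookies.txt \
     -H "Content-Type: application/json" \
     -X POST \
     -d '{"username":"fred","password":"fred"}' \
     "http://kelpie9:8081/rest/auth/1/session"

# further requests: replay the stored cookie (-b) instead of re-authenticating
curl -s -b /tmp/jira-cookies.txt \
     -H "Content-Type: application/json" \
     "http://kelpie9:8081/rest/api/2/issue/QA-31"
```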
Wishlist:
- instead of shell+curl use perl+libcurl
- use the “epic link” to get the “epic link nice name” in order to describe the issue as “issue# + epic-link-nice-name + summary”
- extend the tool to also deal with Atlassian Confluence
-
added JSON and YAML in the article on “data logger” / data formats on the English Wikipedia
I wonder whether these modifications will find mercy in the eyes of the “article surveyors”.
- JSON: I attended a presentation given by a member of Pivotal.io, telling of a data logger device that they use in a project and that emits JSON
- YAML: a huge international web warehouse exchanges its data with its logistics partners in YAML – that is genuine EDI homeland. I personally and directly know of business going on like that.
Yes, I should have quoted both of them when I added the two to the article. But then – what serious proof do I have?
-
Hibiscus Payment Server – successfully upgraded to 2.6.5
- http://Jochen.Hayek.name/wp/blog-en/2013/11/15/hibiscus-payment-server-hbci-banking/ – added the migration steps there
- everything was fine, the .json files (that I extract) are just the same – I like it very much that way
-
Hibiscus Payment Server – HBCI banking
- http://www.willuhn.de/products/hibiscus-server/
- http://www.willuhn.de/products/hibiscus-server/install.php
- http://www.willuhn.de/products/hibiscus-server/support.php
- http://www.willuhn.de/wiki/ – I don’t see Hibiscus Server mentioned
- http://www.onlinebanking-forum.de/phpBB2/viewforum.php?f=33
- http://www.willuhn.de/bugzilla
With the hibiscus-server running on (let’s say) your current machine at port 8080 (AKA https://localhost:8080), you have a few rather useful services available:
- https://localhost:8080/hibiscus/ – the “Hibiscus Management Console” – this is where you enter your HBCI accounts and where you can view account statements etc.
- https://localhost:8080/webadmin/ – the “Jameica Management Console” for the Hibiscus Server
- https://localhost:8080/webadmin/rest.html – the Management Console shows a link “REST Services” listing available REST services, amongst others the list of accounts and list of transactions … (see below)
- https://localhost:8080/webadmin/rest/hibiscus/konto/list – a JSON list of the accounts, of course with details
- https://localhost:8080/webadmin/rest/hibiscus/konto/2/umsaetze/days/999 – a JSON list of transactions on account “2” (you can find account “2” described above) during the last 999 days
- All that is implemented in Java.
I would love to see this running on my Synology NAS at home with plenty of Internet bandwidth available, so that I can get my bank account transactions updated over the Internet a few times each day. (Update: For whatever reason hibiscus/jameica do not run as expected on my NAS. I ran out of time investigating this.)
These days Synology does not supply Java on their devices – I assume they do not want to get officially bothered with Java difficulties on their devices. But still here I found a description of how to install Oracle Java SE on a Synology NAS:
With the “https://localhost:8080/webadmin/rest/hibiscus/konto/…” REST services listed above I get hold of account data rather, rather easily like this:
$ curl --sslv3 --insecure \
       --user jameica:PASSWORD \
       https://localhost:8080/webadmin/rest/hibiscus/konto/list
Now I prefer developing software using Perl, and I will make use of the JSON lists in Perl, and that should be rather easy. I think I am going to abandon my web-scraping scripts in Perl, once all this is in place. Web-scraping banking web-sites is a rather tedious business, whereas HBCI/FinTS is a confirmed banking standard in this country (Germany), and I consider the Hibiscus Server a rather easy way to deal with the HBCI Moloch AKA FinTS.
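Until the Perl version exists, the same can of course be done with jq; here is a sketch (the field names “id” and “bezeichnung” are my assumptions about the konto/list JSON, not verified against a live Hibiscus Server):

```shell
# print "id: bezeichnung" for each account in the konto/list response;
# the field names are assumed, adjust them to what your server actually returns
curl -s --insecure --user jameica:PASSWORD \
     https://localhost:8080/webadmin/rest/hibiscus/konto/list \
  | jq -r '.[] | "\(.id): \(.bezeichnung)"'
```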
…:
- http://www.willuhn.de/wiki/doku.php?id=support:list:banken:misc:pintan – bank details needed to set up an HBCI account
update 2014-04-23:
- after my initial start with hibiscus-server-2.4.0 today I decided to upgrade to hibiscus-server-2.6.5
- downloaded the .zip
- I shut down the running hibiscus-server
- I renamed the current hibiscus-server to hibiscus-server-2.4.0 – I would have to fall back to this state in case …
- unpacked hibiscus-server-2.6.5.zip, resulting in a new hibiscus-server/
- …/create_snapshot.sh …/hibiscus-server/cfg/de.willuhn.jameica.hbci.rmi.HBCIDBService.properties; removed the original, so that we would operate on an H2 db
- on the NAS I moved all the .zip and .gz of …/.jameica/ to an “uncle” directory
- removed the old .zip and .gz from $HOME/.jameica/
- and I took a time-stamped snapshot of $HOME/.jameica/
- on the NAS I moved that time-snapshot to an “uncle” directory as well
- locally I removed that time-snapshot
- started …/hibiscus-server/jameicaserver.sh again
- same results, *.json looks fine, just/almost as before – that’s rather fine – whatever the banks had changed in the meantime – no changes on my side
(To be continued…)
-
logging: Write Logs for Machines, use JSON
Write Logs for Machines, use JSON | Paul’s Journal
Looks like a nice new challenge to use JSON also for debug / trace output. I always found unstructured debug / trace output awful, and of course mine has been “key-value” for a long time.
I learned that “at home”, in the Karlsruhe Ada Compiler construction team, back in the 80s.
- Graylog Extended Log Format
- ruby, github: https://github.com/Graylog2/gelf-rb
- perl, METACPAN: Log::Log4perl::Layout::GELF
- perl, github: log4perl_gelf
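A minimal sketch of what “logs for machines” could look like in a shell script, using jq to build one JSON object per log event (the field names are my own choice, not GELF):

```shell
# emit one self-describing JSON log line per event,
# instead of an unstructured (or merely key-value) text line
log_json() {
  jq -cn --arg level "$1" --arg msg "$2" \
     '{timestamp: (now | todate), level: $level, message: $msg}'
}

log_json INFO  "job started"
log_json DEBUG "fetched some records"
```

Each line is a complete JSON document, so a consumer can filter the log with jq itself, e.g. `jq 'select(.level == "DEBUG")'`.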