3. Management
3.1. Docker
QueryBuilder comprises a set of Docker images. A minimal YAML file for use
with Docker Swarm consists of at least the querybuilder and das services:
version: "3.1"
services:
  querybuilder:
    image: harbor.mgrid.net/mgrid/querybuilder-prod:1.1.1-1
    ports:
      - "443:443"
    volumes:
      - config:/config
    environment:
      - CONFIG_GIT_BRANCH=master
      - APP_CONFIG_DIR=querybuilder
  das:
    image: harbor.mgrid.net/mgrid/das-prod:1.2.3-4
    ports:
      - "444:443"
    volumes:
      - config:/config
    environment:
      - CONFIG_GIT_BRANCH=master
      - APP_CONFIG_DIR=das
This YAML file refers to Docker images provided by MGRID. It also refers to
configuration files that you need to edit to suit your environment. These
settings.cfg files should be present in the config directory, in the
subdirectories querybuilder and das. Your working directory should then
look like this:
stack.yml
config/querybuilder/settings.cfg
config/das/settings.cfg
Before you start the stack, make sure there is a Postgres instance available
to which both QueryBuilder and DAS can connect, and provide the credentials in
their respective configuration files. It is suggested to create one database
and user for querybuilder and one database and user for das within this
single Postgres instance.
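For example, the two databases and their owning roles could be created on that
Postgres instance as follows (the role names follow the suggestion above; the
passwords are placeholders you must replace):

```sql
-- Run as a Postgres superuser on the shared instance.
CREATE ROLE querybuilder LOGIN PASSWORD 'change_me';
CREATE DATABASE querybuilder OWNER querybuilder;

CREATE ROLE das LOGIN PASSWORD 'change_me';
CREATE DATABASE das OWNER das;
```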
Starting up the stack:
$ docker stack deploy -c stack.yml qb
Verifying that the stack successfully started:
$ docker stack ps qb
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
opdl47m6tcvs qb_querybuilder.1 harbor.mgrid.net/mgrid/querybuilder-prod:1.1.1-1 machine Running Running 12 seconds ago
xd9yrhadqnbs qb_das.1 harbor.mgrid.net/mgrid/das-prod:1.2.3-4 machine Running Running 15 seconds ago
Executing commands within the QueryBuilder container:
$ docker exec -it qb_querybuilder.1.opdl47m6tcvse4l19av0jyjdi bash
root@7b814a719db7:/opt/mgrid/querybuilder# circusctl status
nginx: active
querybuilder: active
3.2. Invoke
Low-level administration tasks can be executed using the invoke command
inside the QueryBuilder Docker container. For invoke to be able to find
the configured tasks, the current working directory must be
/opt/mgrid/querybuilder or a directory below it. The available tasks are:
app.db.branches                Show current branch points
app.db.current                 Display the current revision for each database
app.db.downgrade               Revert to a previous version
app.db.edit                    Edit a revision file
app.db.heads                   Show current available heads in the script directory
app.db.history                 List changeset scripts in chronological order
app.db.init-model              Fill a database with the model
app.db.merge                   Merge two revisions together (creates a new migration file)
app.db.migrate                 Alias for 'revision --autogenerate'
app.db.revision                Create a new revision file
app.db.show                    Show the revision denoted by the given symbol
app.db.stamp                   'stamp' the revision table with the given revision; don't run any migrations
app.db.upgrade                 Upgrade to a later version
app.inspect.joinpaths          Generate a world.dot on stderr visualizing the join paths of the world.yaml
app.inspect.run (app.inspect)  Generate a world.yaml on stderr by inspecting the configured datawarehouse
app.paths.paths (app.paths)    Development: Show all wired paths
app.projects.create            Create a new project
app.projects.delete            Delete a project based on id
app.projects.show              Show all configured projects
app.roles.delete               Delete a role based on id
app.roles.grant                Create a new project role
app.roles.show                 Show all configured roles
app.run.debug                  Run a single instance of the application for development
app.run.run (app.run)          Run the application
app.users.create               Create a new user
app.users.delete               Delete a user based on id
app.users.show                 Show all configured users
cron.add                       Add a new Cron schedule
cron.run.force                 Force consideration of a Cron schedule
cron.run.run (cron.run)        Schedule new jobs based on Cron definitions
demo.azure.load                Load MIMIC on Azure demo -- DANGER, will destroy your current application model database
demo.fnt.load                  Load FNT demo -- DANGER, will destroy your current application model database
demo.mimic.load                Load MIMIC demo -- DANGER, will destroy your current application model database
demo.mimic.provision-dds       Provision the local dds based on the mimic demo
jobs.run.debug                 Debug jobs
jobs.run.run (jobs.run)        Execute jobs
3.2.1. Model Database Upgrade
QueryBuilder will check its model database on startup:
If no database exists, it will be created.
If a database exists that corresponds to its current model, QueryBuilder will start normally.
If an old database exists that corresponds with a previous version of QueryBuilder, QueryBuilder will halt.
After a backup of the old database, the database structure can be upgraded by issuing:
$ docker exec -it qb_querybuilder.1.opdl47m6tcvse4l19av0jyjdi bash
root@7b814a719db7:/opt/mgrid/querybuilder# inv app.db.upgrade
...
INFO [a.runtime.migration] Running upgrade 71a1d44789c7 -> 0055de37bb82, Add session key to Principal
3.3. API
The provisioning API is available through HTTP calls. Authentication is
handled by JSON Web Tokens (JWT). Both QueryBuilder and the calling
application need to know a shared secret, JWT_SECRET_KEY in the
configuration file.
The following JWT attributes need to be present in the payload:
{
  "iss": "mgrid",
  "iat": 1532007679,
  "jti": "9deb0180-8a0f-4c12-b6d4-1841d357f04b",
  "identity": "name_of_calling_application",
  "type": "access",
  "fresh": "True"
}
Where iat is the number of seconds since the epoch and jti is a nonce,
e.g. a UUID.
This payload then needs to be signed with the shared secret and added as the
Authorization HTTP header. In a Python program this can be done as follows:
token = jwt.encode(payload, jwt_secret_key, algorithm="HS512")
http_header = {"Authorization": "Bearer %s" % token}
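The call above uses the PyJWT library. Where PyJWT is not available, a
structurally equivalent HS512 token can be assembled with only the Python
standard library. This is a sketch; the secret and identity values below are
placeholders:

```python
import base64
import hashlib
import hmac
import json
import time
import uuid

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as required by the JWT format."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(identity: str, secret: str) -> str:
    """Build an HS512-signed JWT with the payload attributes listed above."""
    header = {"alg": "HS512", "typ": "JWT"}
    payload = {
        "iss": "mgrid",
        "iat": int(time.time()),       # seconds since the epoch
        "jti": str(uuid.uuid4()),      # nonce, e.g. a UUID
        "identity": identity,
        "type": "access",
        "fresh": "True",
    }
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    ).encode()
    signature = hmac.new(secret.encode(), signing_input, hashlib.sha512).digest()
    return signing_input.decode() + "." + b64url(signature)

# Placeholder identity and secret -- use your own JWT_SECRET_KEY.
token = make_token("name_of_calling_application", "shared-secret")
http_header = {"Authorization": "Bearer %s" % token}
```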
3.3.1. /api/v1/provision/projects GET
List summaries of all known provisioned projects.
Input: None
Output
[
  {
    "description": "Description of project 1",
    "id": 1,
    "name": "project1"
  },
  {
    "description": "Description of project 2",
    "id": 2,
    "name": "project2"
  }
]
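As an illustration, requests against the provisioning endpoints can be built
with the standard library; the helper below attaches the bearer token and, for
calls with a body, a JSON payload. The base URL is hypothetical, and the
request still has to be sent with urllib.request.urlopen:

```python
import json
import urllib.request

def provision_request(base_url, path, token, method="GET", payload=None):
    """Build an authenticated urllib request for the provisioning API."""
    data = None
    headers = {"Authorization": "Bearer %s" % token}
    if payload is not None:
        data = json.dumps(payload).encode()
        headers["Content-Type"] = "application/json"
    return urllib.request.Request(base_url + path, data=data,
                                  headers=headers, method=method)

# Example: list all projects (pass the result to urllib.request.urlopen).
req = provision_request("https://qb.example.com",
                        "/api/v1/provision/projects", "<token>")
```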
3.3.2. /api/v1/provision/projects POST
Create a new project. Note that name must be unique among all projects, and
that authgroup requires a user to have LDAP group membership of a particular
group. Projects may be provisioned with either SFTP export or Azure Blob
export. In the case of SFTP, either sftp_export.password or sftp_export.key
must be provided.
Input
{
  "name": "new_sftp_project",
  "description": "Description of new SFTP project",
  "authgroup": "some_auth_group",
  "sftp_export": {
    "host": "sftp_host",
    "port": 22,
    "user": "sftp_user",
    "password": "sftp_password",
    "key": "sftp_key",
    "path": "sftp_path"
  },
  "deid_project_id": 123,
  "owner": "owner@example.com"
}
Output
{
  "status": "success",
  "project": 11
}
Input
{
  "name": "new_azure_blob_project",
  "description": "Description of new Azure Blob project",
  "authgroup": "some_auth_group",
  "azure_blob_export": {
    "account_key": "account_key",
    "account_name": "account_name",
    "container_name": "container_name",
    "path": "azure_blob_path"
  },
  "deid_project_id": 456,
  "owner": "owner@example.com"
}
Output
{
  "status": "success",
  "project": 12
}
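A calling application might assemble and sanity-check the SFTP input before
POSTing it. The sketch below uses the field names from the SFTP example above
and enforces the rule that either sftp_export.password or sftp_export.key must
be provided; the helper name itself is hypothetical:

```python
def sftp_project_payload(name, description, authgroup, owner,
                         deid_project_id, host, user, path,
                         port=22, password=None, key=None):
    """Build the POST body for a project with SFTP export.

    Per the API's rule, at least one of `password` or `key` must be
    supplied for the sftp_export block.
    """
    if password is None and key is None:
        raise ValueError("either sftp_export.password or sftp_export.key "
                         "must be provided")
    sftp = {"host": host, "port": port, "user": user, "path": path}
    if password is not None:
        sftp["password"] = password
    if key is not None:
        sftp["key"] = key
    return {
        "name": name,
        "description": description,
        "authgroup": authgroup,
        "sftp_export": sftp,
        "deid_project_id": deid_project_id,
        "owner": owner,
    }
```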
3.3.3. /api/v1/provision/project/{id} GET
List the summary for a provisioned project with the given id.
Input: None
Output
{
  "id": 11,
  "name": "new_project",
  "description": "Description of new project"
}
3.3.4. /api/v1/provision/project/{id} PUT
Update the existing project with the given id. The project with the given
identifier will be updated with the attributes set in this call. This call is
idempotent and can be used to retry an action that failed in a previous call.
Note that the owner attribute cannot be changed after the first call; this is
intentional.
Input
{
  "name": "updated_project",
  "description": "Description of updated project",
  "authgroup": "updated_auth_group"
}
Output
{
  "status": "success",
  "project": 11
}
3.3.5. /api/v1/provision/project/{id} DELETE
Delete the existing project with the given id.
Input: None
Output
{
  "status": "success",
  "project": 11
}