Creating a Query Job
To create a query through the Query Jobs interface, you must submit a POST request:
Description | Create a query job
---|---
Method | POST /api/v1/repositories/{repo}/queryjobs
Request Data | QueryInputJob
Response Data | QueryResults
Authentication Required | yes

Path Arguments | Description | Data type | Required?
---|---|---|---
repo | Name of repository to be searched | string | required

Return Codes | Description
---|---
200 | Request complete
400 | Request is malformed: either critical fields are missing or the JSON is badly structured
500 | Request failed
The JSON request body supports the following attributes:
Table: QueryInput JSON Object Fields
Field | Type | Required? | Default | Description
---|---|---|---|---
allowEventSkipping | boolean | | false | If set to true, events in the result may be skipped if not retrieved.
arguments | object | | | Dictionary of arguments specified in queries with ?param or ?{param=defaultValue} syntax. Provided arguments must be a simple dictionary of string values. If an argument is given explicitly, as in ?query(param=value), then that value overrides values provided here. See the example following this table.
around | object | | | Defines the pagination of events in the result set for the given query; used for cursor-based pagination of filter query results. This cannot be used with aggregate results, as all rows are always returned. For more information see api-search-request-around.
around.eventId | string | Yes | | The ID of the event to use as the reference point.
around.numberOfEventsAfter | integer | Yes | | Number of events to show after the eventId.
around.numberOfEventsBefore | integer | Yes | | Number of events to show before the eventId.
around.timestamp | integer | Yes | | The timestamp to use as the reference for pagination.
autobucketCount | integer | | 90 | Determines the number of buckets the system should create during live query searches when no other explicit bucketing aggregate (such as bucket() or timeChart()) is specified. Higher autobucket counts mean finer granularity, but at the cost of increased memory usage during search.
end | relative-time | | | The end date and time. This parameter tells LogScale not to return results from after this date and time. See how to specify a time.
ingestEnd | relative-time | | | Specifies the end time based on when the data was ingested.
ingestStart | relative-time | | | Specifies the start time based on when the data was ingested.
isAlertQuery | boolean | | | Indicates whether the query comes from an alert or not.
isInteractive | boolean | | false | Whether the search is being run interactively, i.e., from within the LogScale UI or another UI.
isLive | boolean | | false | Sets whether this query is live. Live queries are continuously updated.
languageVersion | string | | | The version of the query language to use.
queryString | string | Yes | | The actual query. See Query Language Syntax for details.
showQueryEventDistribution | boolean | | false | If true, LogScale will return an additional result set containing a histogram of the number of events over the time interval. This field is deprecated; use results instead. If both this field and results are specified, results takes precedence.
start | relative-time | | | The start date and time. This parameter tells LogScale not to return results from before this date and time. See how to specify a time.
timeZone | string | | | The timezone to be used when returning dates.
timeZoneOffsetMinutes | integer | | 0 | Sets the time zone offset used for bucket() and timeChart() time slices, which is significant if the corresponding span is a multiple of days. Defaults to 0 (UTC); positive numbers are east of UTC, so for the UTC+01:00 timezone the value 60 should be passed.
useIngestTime | boolean | | false | When set to true, uses the ingest time rather than the event timestamp as the basis for the time span.
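As a concrete illustration of the arguments and around fields described above, the following request body is a minimal sketch; the query string, parameter name, event ID, and timestamp values are hypothetical:

{
  "queryString" : "service = ?service",
  "start" : "1d",
  "isLive" : false,
  "arguments" : {
    "service" : "frontend"
  },
  "around" : {
    "eventId" : "abc123",
    "numberOfEventsBefore" : 10,
    "numberOfEventsAfter" : 10,
    "timestamp" : 1700000000000
  }
}

Note that around is only valid here because the query is a plain filter query; with an aggregate query all rows are returned and around cannot be used.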
The request body for the query uses the same JSON fields as a simple query. See Simple Search Request for more information on the supported arguments. For example, a raw request body:
{
"queryString" : "",
"start" : "1d",
"isLive" : false,
"showQueryEventDistribution" : true
}
Using curl from a shell (Linux or macOS):

curl -v -X POST https://$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/queryjobs \
-H "Accept: application/json" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d @- << EOF
{
"isLive" : false,
"start" : "1d",
"queryString" : "",
"showQueryEventDistribution" : true
}
EOF
From the Windows command prompt (cmd), where ^ continues a line, %VAR% is the environment variable syntax, and inner quotes in the JSON body are escaped with backslashes:

curl -v -X POST https://%YOUR_LOGSCALE_URL%/api/v1/repositories/%REPOSITORY_NAME%/queryjobs ^
    -H "Accept: application/json" ^
    -H "Authorization: Bearer %TOKEN%" ^
    -H "Content-Type: application/json" ^
    -d "{ \"showQueryEventDistribution\" : true, \"start\" : \"1d\", \"isLive\" : false, \"queryString\" : \"\" }"
From Windows PowerShell, using curl.exe, where the backtick continues a line:

curl.exe -X POST `
    -H "Accept: application/json" `
    -H "Authorization: Bearer $TOKEN" `
    -H "Content-Type: application/json" `
    -d '{
    \"queryString\" : \"\",
    \"start\" : \"1d\",
    \"isLive\" : false,
    \"showQueryEventDistribution\" : true
}' `
    "https://$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/queryjobs"
In Perl, using LWP:

#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Request;
use LWP;

# Replace the placeholders in the token and URI with real values
my $TOKEN = "TOKEN";
my $uri = 'https://$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/queryjobs';

# Single-quoted string: the embedded double quotes need no escaping
my $json = '{
    "isLive" : false,
    "start" : "1d",
    "queryString" : "",
    "showQueryEventDistribution" : true
}';

my $req = HTTP::Request->new("POST", $uri);
$req->header("Accept" => "application/json");
$req->header("Authorization" => "Bearer $TOKEN");
$req->header("Content-Type" => "application/json");
$req->content($json);

my $lwp = LWP::UserAgent->new;
my $result = $lwp->request($req);
print $result->content, "\n";
In Python, using the requests library:

#!/usr/local/bin/python3
import requests

# Replace the placeholders in the URL and Authorization header with real values
url = 'https://$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/queryjobs'
mydata = r'''{
"queryString" : "",
"isLive" : false,
"start" : "1d",
"showQueryEventDistribution" : true
}
'''
resp = requests.post(url,
data = mydata,
headers = {
"Accept" : "application/json",
"Authorization" : "Bearer $TOKEN",
"Content-Type" : "application/json"
}
)
print(resp.text)
In Node.js, using the built-in https module:

const https = require('https');

const data = JSON.stringify({
  queryString: '',
  start: '1d',
  isLive: false,
  showQueryEventDistribution: true,
});

const options = {
  // Host name only; the scheme is implied and the path is given separately
  hostname: process.env.YOUR_LOGSCALE_URL,
  path: `/api/v1/repositories/${process.env.REPOSITORY_NAME}/queryjobs`,
  port: 443,
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': data.length,
    Authorization: 'Bearer ' + process.env.TOKEN,
    'User-Agent': 'Node',
  },
};

const req = https.request(options, (res) => {
  let body = '';
  console.log(`statusCode: ${res.statusCode}`);
  res.on('data', (d) => {
    body += d;
  });
  res.on('end', () => {
    console.log(JSON.parse(body));
  });
});

req.on('error', (error) => {
  console.error(error);
});

req.write(data);
req.end();
When creating a query job, an ID is returned that identifies the query job. The id field indicates the {id} for the query, which you can then poll using the HTTP GET method (see Polling a Query Job).
{
"hashedQueryOnView" : "4ab13fa1",
"id" : "P15-uoxCpi2DFJDFTAkHXLyYL8bN"
}
The returned JSON uses the following schema:
Table: Metadata JSON Object Fields
Field | Type | Description
---|---|---
hashedQueryOnView | string | A string hash of the optimized query (the "query plan"). For advanced users.
id | string | The ID of the started query job. This can be used to poll results.
queryOnView | string | A string representation of the optimized query (the "query plan"). For advanced users.
staticMetaData | | If provided, indicates that the backend is running the query in a special mode, and may include information about the reason for selecting that mode.
staticMetaData.executionMode | | The execution mode of the query.
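With the returned id, you can then poll the job. The following is a minimal sketch, assuming the job ID is appended to the queryjobs path as described in Polling a Query Job; the ID shown is the one from the example response above:

curl -v -X GET "https://$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/queryjobs/P15-uoxCpi2DFJDFTAkHXLyYL8bN" \
    -H "Accept: application/json" \
    -H "Authorization: Bearer $TOKEN"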
An OpenAPI specification for this interface is available: Download it here. This can be used with a number of languages to build clients for this data.
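For example, the third-party openapi-generator CLI (not part of LogScale; the input filename and output directory below are placeholders) can generate a client from the downloaded specification:

# Generate a Python client from the downloaded OpenAPI specification
openapi-generator-cli generate -i logscale-openapi.yaml -g python -o ./logscale-client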