
Script Prefixes

Prefixes may be applied to applicable regular statements but never to control statements. The when and unless prefixes can however be used as suffixes to a few specific control statement clauses.

All script keywords can be typed with any combination of lower case and upper case characters. Field and variable names used in the statements are however case sensitive.

Add

The add prefix can be added to any LOAD, SELECT or map...using statement in the script. It is only relevant during partial reloads.

Add [only] (loadstatement | selectstatement | mapstatement)

During a partial reload, the table for which a table name is generated by the add LOAD/add SELECT statement (provided such a table exists) will be appended with the result of the add LOAD/add SELECT statement. No check for duplicates is performed. Therefore, a statement using the add prefix will normally include either a distinct qualifier or a where clause guarding against duplicates. The map...using statement causes mapping to take place also during partial script execution.

Argument Description
only An optional qualifier denoting that the statement should be disregarded during normal (non-partial) reloads.
Example Result
Tab1:
LOAD Name, Number FROM Persons.csv;
Add LOAD Name, Number FROM NewPersons.csv;
During normal reload, data is loaded from Persons.csv and stored in the table Tab1. Data from NewPersons.csv is then concatenated to the same table.
During partial reload, data is loaded from NewPersons.csv and appended to the table Tab1. No check for duplicates is made.
Tab1:
SQL SELECT Name, Number FROM Persons.csv;
Add LOAD Name, Number FROM NewPersons.csv where not exists(Name);
The where not exists(Name) clause guards against duplicates by checking whether Name already exists in the previously loaded table data.
During normal reload, data is loaded from Persons.csv and stored in the table Tab1. Data from NewPersons.csv is then concatenated to the same table.
During partial reload, data is loaded from NewPersons.csv and appended to the table Tab1. A check for duplicates is made by testing whether Name exists in the previously loaded table data.
Tab1:
LOAD Name, Number FROM Persons.csv;
Add Only LOAD Name, Number FROM NewPersons.csv where not exists(Name);
During normal reload, data is loaded from Persons.csv and stored in the table Tab1. The statement loading NewPersons.csv is disregarded.
During partial reload, data is loaded from NewPersons.csv and appended to the table Tab1. A check for duplicates is made by testing whether Name exists in the previously loaded table data.

Buffer

QVD files can be created and maintained automatically via the buffer prefix. This prefix can be used on most LOAD and SELECT statements in script. It indicates that QVD files are used to cache/buffer the result of the statement.

Buffer [(option [, option])] (loadstatement | selectstatement)

option::= incremental | stale [after] amount [(days | hours)]

If no option is used, the QVD buffer created by the first execution of the script will be used indefinitely.

The buffer file is stored in the Buffers sub-folder.

The name of the QVD file is a calculated name, a 160-bit hexadecimal hash of the entire following LOAD or SELECT statement and other discriminating info. This means that the QVD buffer will be rendered invalid by any change in the following LOAD or SELECT statement.

QVD buffers will normally be removed when no longer referenced anywhere throughout a complete script execution in the app that created it or when the app that created it no longer exists.

Argument Description
incremental The incremental option enables the ability to read only part of an underlying file. The previous size of the file is stored in the XML header of the QVD file. This is particularly useful with log files. All records loaded on a previous occasion are read from the QVD file, whereas the following new records are read from the original source and, finally, an updated QVD file is created. Note that the incremental option can only be used with LOAD statements and text files, and that incremental load cannot be used where old data is changed or deleted.
stale [after] amount [(days | hours)] amount is a number specifying the time period. Decimals may be used. The unit is assumed to be days if omitted.
The stale after option is typically used with DB sources where there is no simple timestamp on the original data. Instead, you specify how old the QVD snapshot can be and still be used. A stale after clause simply states a time period from the creation time of the QVD buffer, after which it will no longer be considered valid. Before that time the QVD buffer will be used as the data source; after that the original data source will be used. The QVD buffer file will then automatically be updated and a new period starts.

Numerous limitations exist, the most notable being that there must be either a file LOAD or a SELECT statement at the core of any complex statement.

Example 1: Buffer SELECT * from MyTable;

Example 2: Buffer (stale after 7 days) SELECT * from MyTable;

Example 3: Buffer (incremental) LOAD * from MyLog.log;

Concatenate

If two tables that are to be concatenated have different sets of fields, concatenation can still be forced with the Concatenate prefix. This statement forces concatenation with an existing named table or the latest previously created logical table.

Concatenate [(tablename)] (loadstatement | selectstatement)

A concatenation is in principle the same as the SQL UNION statement, but with two differences:

  • The Concatenate prefix can be used no matter if the tables have identical field names or not.
  • Identical records are not removed with the Concatenate prefix.
Argument Description
tablename The name of the existing table.

Example

Concatenate LOAD * From file2.csv;
Concatenate SELECT * From table3;
tab1:
LOAD * From file1.csv;
tab2:
LOAD * From file2.csv;
.. .. ..
Concatenate (tab1) LOAD * From file3.csv;

Crosstable

The crosstable prefix is used to turn a cross table into a straight table, that is, a wide table with many columns is turned into a tall table, with the column headings placed into a single attribute column.

crosstable (attribute field name, data field name [ , n ]) (loadstatement | selectstatement)

Argument Description
attribute field name The field that contains the attribute values.
data field name The field that contains the data values.
n The number of qualifier fields preceding the table to be transformed to generic form. Default is 1.

A crosstable is a common type of table featuring a matrix of values between two or more orthogonal lists of header data, of which one is used as column headers. A typical example could be to have one column per month. The result of the crosstable prefix is that the column headers (for example, month names) will be stored in one field, the attribute field, and the column data (for example, sales numbers) will be stored in a second field: the data field.

Examples

Crosstable (Month, Sales) LOAD * from ex1.csv;
Crosstable (Month,Sales,2) LOAD * from ex2.csv;
Crosstable (A,B) SELECT * from table3;
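
As a brief illustration (a sketch with hypothetical inline data; the field names Year, Jan, Feb, Mar, Month and Sales are assumptions, not taken from the examples above), the single qualifier field Year is kept, the month column headers end up in the attribute field Month, and the cell values end up in the data field Sales:

// Hypothetical cross table: one qualifier field (Year) followed by one column per month.
tmpSales:
Crosstable (Month, Sales)
LOAD * Inline [
Year, Jan, Feb, Mar
2023, 45, 65, 78
2024, 12, 23, 22
];
// The result is a tall table with the fields Year, Month and Sales,
// one record per Year and Month combination.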

First

The First prefix to a LOAD or SELECT (SQL) statement is used for loading a set maximum number of records from a data source table.

First n ( loadstatement | selectstatement )

Argument Description
n An arbitrary expression that evaluates to an integer indicating the maximum number of records to be read. n can be enclosed in parentheses, like (n), but this is not required.

Examples

First 10 LOAD * from abc.csv;
First (1) SQL SELECT * from Orders;

Generic

The unpacking and loading of a generic database can be done with a generic prefix.

Generic( loadstatement | selectstatement )

Tables loaded through a generic statement are not auto-concatenated.

Examples

Generic LOAD * from abc.csv;
Generic SQL SELECT * from table1;
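
To show what the generic transformation does, here is a sketch with hypothetical inline data (the field names Key, Attribute and Value are assumptions). The input is an attribute-value table; the generic prefix splits it into one internal table per distinct attribute, each containing the key field and a field named after the attribute:

// Hypothetical attribute-value input table.
Generic LOAD * Inline [
Key, Attribute, Value
Ball, Color, Red
Ball, Size, 25
Box, Color, Blue
Box, Size, 56
];
// This creates one table holding Key and Color, and another holding Key and Size.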

Hierarchy

The hierarchy prefix is used to transform a parent-child hierarchy table to a table that is useful in a data model. It can be put in front of a LOAD or a SELECT statement and will use the result of the loading statement as input for a table transformation.

The prefix creates an expanded nodes table, which normally has the same number of records as the input table, but in addition each level in the hierarchy is stored in a separate field. The path field can be used in a tree structure.

Hierarchy (NodeID, ParentID, NodeName, [ParentName], [PathSource], [PathName], [PathDelimiter], [Depth]) (loadstatement | selectstatement)

The input table must be an adjacent nodes table. Adjacent nodes tables are tables where each record corresponds to a node and has a field that contains a reference to the parent node. In such a table the node is stored on one record only but the node can still have any number of children. The table may of course contain additional fields describing attributes for the nodes.

The levels in an expanded nodes table can easily be used, for example, in a pivot table, and the path field can be used in a tree structure.

Usually the input table has exactly one record per node and in such a case the output table will contain the same number of records. However, sometimes there are nodes with multiple parents, i.e. one node is represented by several records in the input table. If so, the output table may have more records than the input table.

All nodes with a parent id not found in the node id column (including nodes with missing parent id) will be considered as roots. Also, only nodes with a connection to a root node - direct or indirect - will be loaded, thus avoiding circular references.

Additional fields containing the name of the parent node, the path of the node and the depth of the node can be created.

Argument Description
NodeID The name of the field that contains the node id. This field must exist in the input table.
ParentID The name of the field that contains the node id of the parent node. This field must exist in the input table.
NodeName The name of the field that contains the name of the node. This field must exist in the input table.
ParentName A string used to name the new ParentName field, which contains the name of the parent node. Optional parameter. If omitted, this field will not be created.
PathSource The name of the field that contains the name of the node used to build the node path. Optional parameter. If omitted, NodeName will be used.
PathName A string used to name the new field, which contains the path from the root to the node. Optional parameter. If omitted, this field will not be created.
PathDelimiter A string used as delimiter in the new path field (PathName). Optional parameter. If omitted, '/' will be used.
Depth A string used to name the new Depth field, which contains the depth of the node in the hierarchy. Optional parameter. If omitted, this field will not be created.

Example

Hierarchy(NodeID, ParentID, NodeName, ParentName, NodeName, PathName,'\\', Depth) LOAD * inline
[
NodeID, ParentID, NodeName
1, 4, London
2, 3, Munich
3, 5, Germany
4, 5, UK
5, , Europe
];
NodeID ParentID NodeName NodeName1 NodeName2 NodeName3 ParentName PathName Depth
1 4 London Europe UK London UK Europe\UK\London 3
2 3 Munich Europe Germany Munich Germany Europe\Germany\Munich 3
3 5 Germany Europe Germany - Europe Europe\Germany 2
4 5 UK Europe UK - Europe Europe\UK 2
5 - Europe Europe - - - Europe 1

HierarchyBelongsTo

This prefix is used to transform a parent-child hierarchy table to a table that is useful in a data model. It can be put in front of a LOAD or a SELECT statement and will use the result of the loading statement as input for a table transformation.

The prefix creates a table containing all ancestor-child relations of the hierarchy. The ancestor fields can then be used to select entire trees in the hierarchy. The output table in most cases contains several records per node.

HierarchyBelongsTo (NodeID, ParentID, NodeName, AncestorID, AncestorName, [DepthDiff]) (loadstatement | selectstatement)

The input table must be an adjacent nodes table. Adjacent nodes tables are tables where each record corresponds to a node and has a field that contains a reference to the parent node. In such a table the node is stored on one record only but the node can still have any number of children. The table may of course contain additional fields describing attributes for the nodes.

An additional field containing the depth difference of the nodes can be created.

Argument Description
NodeID The name of the field that contains the node id. This field must exist in the input table.
ParentID The name of the field that contains the node id of the parent node. This field must exist in the input table.
NodeName The name of the field that contains the name of the node. This field must exist in the input table.
AncestorID A string used to name the new ancestor id field, which contains the id of the ancestor node.
AncestorName A string used to name the new ancestor field, which contains the name of the ancestor node.
DepthDiff A string used to name the new DepthDiff field, which contains the depth of the node in the hierarchy relative to the ancestor node. Optional parameter. If omitted, this field will not be created.

Example

HierarchyBelongsTo (NodeID, AncestorID, NodeName, AncestorID, AncestorName, DepthDiff) LOAD * inline [
NodeID, AncestorID, NodeName
1, 4, London
2, 3, Munich
3, 5, Germany
4, 5, UK
5, , Europe
];
NodeID AncestorID NodeName AncestorName DepthDiff
1 1 London London 0
1 4 London UK 1
1 5 London Europe 2
2 2 Munich Munich 0
2 3 Munich Germany 1
2 5 Munich Europe 2
3 3 Germany Germany 0
3 5 Germany Europe 1
4 4 UK UK 0
4 5 UK Europe 1
5 5 Europe Europe 0

Inner

The join and keep prefixes can be preceded by the prefix inner. If used before join, it specifies that an inner join should be used. The resulting table will thus only contain combinations of field values from the raw data tables where the linking field values are represented in both tables. If used before keep, it specifies that both raw data tables should be reduced to their common intersection before being stored.

Inner (Join | Keep) [(tablename)] (loadstatement | selectstatement)

Argument Description
tablename The named table to be compared to the loaded table.
loadstatement or selectstatement The LOAD or SELECT statement for the loaded table.

Example 1

Table 1
A B
1 aa
2 cc
3 ee
Table 2
A C
1 xx
4 yy
QVTable:
SQL SELECT * From table1;
inner join SQL SELECT * From table2;
QVTable
A B C
1 aa xx

Example 2

QVTab1:
SQL SELECT * From Table1;
QVTab2:
inner keep SQL SELECT * From Table2;
QVTab1
A B
1 aa
QVTab2
A C
1 xx

The two tables in the keep example are, of course, associated via A.

IntervalMatch

The IntervalMatch prefix is used to create a table matching discrete numeric values to one or more numeric intervals, and optionally matching the values of one or several additional keys.

IntervalMatch (matchfield) (loadstatement | selectstatement)

IntervalMatch (matchfield, keyfield1 [, keyfield2, ... keyfield5]) (loadstatement | selectstatement)

The IntervalMatch prefix must be placed before a LOAD or a SELECT statement that loads the intervals. The field containing the discrete data points (Time in the example below) and additional keys must already have been loaded before the statement with the IntervalMatch prefix. The prefix does not by itself read this field from the database table. The prefix transforms the loaded table of intervals and keys to a table that contains an additional column: the discrete numeric data points. It also expands the number of records so that the new table has one record per possible combination of discrete data point, interval and value of the key field(s).

The intervals may be overlapping and the discrete values will be linked to all matching intervals.

When the IntervalMatch prefix is extended with key fields, it is used to create a table matching discrete numeric values to one or more numeric intervals, while matching the values of one or several additional keys.

In order to avoid undefined interval limits being disregarded, it may be necessary to allow NULL values to map to other fields that constitute the lower or upper limits to the interval. This can be handled by the NullAsValue statement or by an explicit test that replaces NULL values with a numeric value well before or after any of the discrete numeric data points.
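
As a sketch of the explicit test mentioned above (the table, field and file names and the sentinel values are illustrative assumptions only), NULL interval limits can be replaced with values well outside the range of the discrete data points before the intervals are loaded:

// Hypothetical interval load where open-ended limits are replaced with
// sentinel values before IntervalMatch is applied.
OrderLog:
LOAD Order,
If(IsNull(Start), -1E9, Start) as Start,
If(IsNull(End), 1E9, End) as End
FROM Orders.csv;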

Argument Description
matchfield The field containing the discrete numeric values to be linked to intervals.
keyfield Fields that contain the additional attributes that are to be matched in the transformation.
loadstatement or selectstatement Must result in a table, where the first field contains the lower limit of each interval, the second field contains the upper limit of each interval, and in the case of key matching, the third and any subsequent fields contain the keyfield(s) present in the IntervalMatch statement. The intervals are always closed, i.e. the end points are included in the interval. Non-numeric limits cause the interval to be disregarded (undefined).

Example 1:

In the two tables below, the first one lists a number of discrete events and the second one defines the start and end times for the production of different orders. By means of the IntervalMatch prefix it is possible to logically connect the two tables in order to find out e.g. which orders were affected by disturbances and which orders were processed by which shifts.

EventLog:
LOAD * Inline [
Time, Event, Comment
00:00, 0, Start of shift 1
01:18, 1, Line stop
02:23, 2, Line restart 50%
04:15, 3, Line speed 100%
08:00, 4, Start of shift 2
11:43, 5, End of production ];

OrderLog:
LOAD * INLINE [
Start, End, Order
01:00, 03:35, A
02:30, 07:58, B
03:04, 10:27, C
07:23, 11:43, D
];

//Link the field Time to the time intervals defined by the fields Start and End.
Inner Join IntervalMatch ( Time )
LOAD Start, End
Resident OrderLog;

The table OrderLog now contains an additional column: Time. The number of records is also expanded.

Time Start End Order
00:00 - - -
01:18 01:00 03:35 A
02:23 01:00 03:35 A
04:15 02:30 07:58 B
04:15 03:04 10:27 C
08:00 03:04 10:27 C
08:00 07:23 11:43 D
11:43 07:23 11:43 D

Example 2: (using keyfield)

The same example as above, adding ProductionLine as a key field.

EventLog:
LOAD * Inline
[
Time, Event, Comment, ProductionLine
00:00, 0, Start of shift 1, P1
01:00, 0, Start of shift 1, P2
01:18, 1, Line stop, P1
02:23, 2, Line restart 50%, P1
04:15, 3, Line speed 100%, P1
08:00, 4, Start of shift 2, P1
09:00, 4, Start of shift 2, P2
11:43, 5, End of production,P1
11:43, 5, End of production,P2
];

OrderLog:
LOAD * INLINE
[
Start, End, Order, ProductionLine
01:00, 03:35, A,P1
02:30, 07:58, B,P1
03:04, 10:27, C,P1
07:23, 11:43, D,P2
];

//Link the field Time to the time intervals defined by the fields Start and End and match the values
// to the key ProductionLine.

Inner Join
IntervalMatch ( Time, ProductionLine )
LOAD Start, End, ProductionLine
Resident OrderLog;

A table box could now be created as below:

ProductionLine Time Event Comment Order Start End
P1 00:00 0 Start of shift 1 - - -
P2 01:00 0 Start of shift 1 - - -
P1 01:18 1 Line stop A 01:00 03:35
P1 02:23 2 Line restart 50% A 01:00 03:35
P1 04:15 3 Line speed 100% B 02:30 07:58
P1 04:15 3 Line speed 100% C 03:04 10:27
P1 08:00 4 Start of shift 2 C 03:04 10:27
P2 09:00 4 Start of shift 2 D 07:23 11:43
P1 11:43 5 End of production - - -
P2 11:43 5 End of production D 07:23 11:43

Join

The join prefix joins the loaded table with an existing named table or the last previously created data table.

[inner | outer | left | right] Join [(tablename)] (loadstatement | selectstatement)

The join is a natural join made over all the common fields. The join statement may be preceded by one of the prefixes inner, outer, left or right.

Argument Description
tablename The named table to be compared to the loaded table.
loadstatement or selectstatement The LOAD or SELECT statement for the loaded table.

Example:

Join LOAD * from abc.csv;

Join SELECT * from table1;

tab1:
LOAD * from file1.csv;
tab2:
LOAD * from file2.csv;
.. .. ..
join (tab1) LOAD * from file3.csv;

Keep

The keep prefix is similar to the join prefix. Just as the join prefix, it compares the loaded table with an existing named table or the last previously created data table, but instead of joining the loaded table with an existing table, it has the effect of reducing one or both of the two tables before they are stored, based on the intersection of table data. The comparison made is equivalent to a natural join made over all the common fields, i.e. the same way as in a corresponding join. However, the two tables are not joined and will be kept as two separately named tables.

(inner | left | right) keep [(tablename)] (loadstatement | selectstatement)

The keep prefix must be preceded by one of the prefixes inner, left or right.

The explicit join prefix in the script language performs a full join of the two tables. The result is one table. In many cases such joins will result in large tables. One of the main features of the Qlik Engine is its ability to make associations between multiple tables instead of joining them, which greatly reduces memory usage, increases processing speed and offers enormous flexibility. Explicit joins should therefore generally be avoided in scripts. The keep functionality was designed to reduce the number of cases where explicit joins need to be used.

Argument Description
tablename The named table to be compared to the loaded table.
loadstatement or selectstatement The LOAD or SELECT statement for the loaded table.

Example:

Inner Keep LOAD * from abc.csv;
Left Keep SELECT * from table1;
tab1:
LOAD * from file1.csv;
tab2:
LOAD * from file2.csv;
.. .. ..
Left Keep (tab1) LOAD * from file3.csv;

Left

The Join and Keep prefixes can be preceded by the prefix left.

If used before join it specifies that a left join should be used. The resulting table will only contain combinations of field values from the raw data tables where the linking field values are represented in the first table. If used before keep, it specifies that the second raw data table should be reduced to its common intersection with the first table, before being stored.

Left ( Join | Keep ) [ (tablename) ]( loadstatement | selectstatement)

Argument Description
tablename The named table to be compared to the loaded table.
loadstatement or selectstatement The LOAD or SELECT statement for the loaded table.

Example:

Table1
A B
1 aa
2 cc
3 ee
Table2
A C
1 xx
4 yy
QVTable:
SELECT * From table1;
Left Join SELECT * From table2;
QVTable
A B C
1 aa xx
2 cc -
3 ee -
QVTab1:
SELECT * From Table1;
QVTab2:
Left Keep SELECT * From Table2;
QVTab1
A B
1 aa
2 cc
3 ee
QVTab2
A C
1 xx

The two tables in the keep example are, of course, associated via A.

tab1:
LOAD * From file1.csv;
tab2:
LOAD * From file2.csv;
.. .. ..
Left Keep (tab1) LOAD * From file3.csv;

Mapping

The mapping prefix is used to create a mapping table that can be used, for example, to replace field values and field names during script execution.

Mapping( loadstatement | selectstatement )

The mapping prefix can be put in front of a LOAD or a SELECT statement and will store the result of the loading statement as a mapping table. Mapping provides an efficient way of substituting field values during script execution, e.g. replacing US, U.S. or America with USA. A mapping table consists of two columns, the first containing comparison values and the second containing the desired mapping values.

Mapping tables are stored temporarily in memory and dropped automatically after script execution.

The content of the mapping table can be accessed using e.g. the Map … Using statement, the Rename Field statement, the Applymap() function or the Mapsubstring() function.

Example:

In this example we load a list of salespersons with a country code representing their country of residence. We use a table mapping a country code to a country to replace the country code with the country name. Only three countries are defined in the mapping table; other country codes are mapped to 'Rest of the world'.

// Load mapping table of country codes:
map1:
mapping LOAD *
Inline [
CCode, Country
Sw, Sweden
Dk, Denmark
No, Norway
] ;
// Load list of salesmen, mapping country code to country
// If the country code is not in the mapping table, put Rest of the world
Salespersons:
LOAD *,
ApplyMap('map1', CCode,'Rest of the world') As Country
Inline [
CCode, Salesperson
Sw, John
Sw, Mary
Sw, Per
Dk, Preben
Dk, Olle
No, Ole
Sf, Risttu] ;
// We don't need the CCode anymore
Drop Field 'CCode';

The resulting table looks like this:

Salesperson Country
John Sweden
Mary Sweden
Per Sweden
Preben Denmark
Olle Denmark
Ole Norway
Risttu Rest of the world

NoConcatenate

The NoConcatenate prefix forces two loaded tables with identical field sets to be treated as two separate internal tables, when they would otherwise be automatically concatenated.

NoConcatenate ( loadstatement | selectstatement )

Example

LOAD A,B from file1.csv;
NoConcatenate LOAD A,B from file2.csv;

Outer

The explicit Join prefix can be preceded by the prefix Outer in order to specify an outer join. In an outer join, all combinations between the two tables are generated: the resulting table contains field value combinations from the raw data tables where the linking field values are represented in one or both tables. The Outer keyword is optional and is the default join type used when a join prefix is not specified.

Outer Join [(tablename)](loadstatement |selectstatement)

Argument Description
tablename The named table to be compared to the loaded table.
loadstatement or selectstatement The LOAD or SELECT statement for the loaded table.
Table1
A B
1 aa
2 cc
3 ee
Table2
A C
1 xx
4 yy
SQL SELECT * from table1;
join SQL SELECT * from table2;

OR

SQL SELECT * from table1;
outer join SQL SELECT * from table2;
Joined table
A B C
1 aa xx
2 cc -
3 ee -
4 - yy

Replace

The replace prefix is used to drop the entire table and replace it with a new table that is loaded or selected.

Replace [only] (loadstatement | selectstatement | map...usingstatement)

The replace prefix can be added to any LOAD, SELECT or map...using statement in the script. The replace LOAD/replace SELECT statement has the effect of dropping the entire table for which a table name is generated by the replace LOAD/replace SELECT statement, and replacing it with a new table containing the result of the statement. The effect is the same during partial reload and full reload. The replace map...using statement causes mapping to take place also during partial script execution.

Argument Description
only An optional qualifier denoting that the statement should be disregarded during normal (non-partial) reloads.
Example Result
Tab1:
Replace LOAD * from File1.csv;
During both normal and partial reload, the table Tab1 is initially dropped. Thereafter new data is loaded from File1.csv and stored in Tab1.
Tab1:
Replace only LOAD * from File1.csv;
During normal reload, this statement is disregarded.
During partial reload, any table previously named Tab1 is initially dropped. Thereafter new data is loaded from File1.csv and stored in Tab1.
Tab1:
LOAD a,b,c from File1.csv;
Replace LOAD a,b,c from File2.csv;
During normal reload, the file File1.csv is first read into the table Tab1, but then immediately dropped and replaced by new data loaded from File2.csv. All data from File1.csv is lost.
During partial reload, the entire table Tab1 is initially dropped. Thereafter it is replaced by new data loaded from File2.csv.
Tab1:
LOAD a,b,c from File1.csv;
Replace only LOAD a,b,c from File2.csv;
During normal reload, data is loaded from File1.csv and stored in the table Tab1. File2.csv is disregarded.
During partial reload, the entire table Tab1 is initially dropped. Thereafter it is replaced by new data loaded from File2.csv. All data from File1.csv is lost.

Right

The Join and Keep prefixes can be preceded by the prefix right.

If used before join it specifies that a right join should be used. The resulting table will only contain combinations of field values from the raw data tables where the linking field values are represented in the second table. If used before keep, it specifies that the first raw data table should be reduced to its common intersection with the second table, before being stored.

Right (Join | Keep) [(tablename)] (loadstatement | selectstatement)

Argument Description
tablename The named table to be compared to the loaded table.
loadstatement or selectstatement The LOAD or SELECT statement for the loaded table.
Table1
A B
1 aa
2 cc
3 ee
Table2
A C
1 xx
4 yy
QVTable:
SQL SELECT * from table1;
right join SQL SELECT * from table2;
QVTable
A B C
1 aa xx
4 - yy
QVTab1:
SQL SELECT * from Table1;
QVTab2:
right keep SQL SELECT * from Table2;
QVTab1
A B
1 aa
QVTab2
A C
1 xx
4 yy

The two tables in the keep example are, of course, associated via A.

tab1:
LOAD * from file1.csv;
tab2:
LOAD * from file2.csv;
.. .. ..
right keep (tab1) LOAD * from file3.csv;

Sample

The sample prefix to a LOAD or SELECT statement is used for loading a random sample of records from the data source.

Sample p ( loadstatement | selectstatement )

Argument Description
p An arbitrary expression which evaluates to a number larger than 0 and lower than or equal to 1. The number indicates the probability for a given record to be read.
All records will be read but only some of them will be loaded.

Example:

Sample 0.15 SQL SELECT * from Longtable;
Sample(0.15) LOAD * from Longtab.csv;

Note

The parentheses are allowed but not required.

Semantic

Tables containing relations between records can be loaded through a semantic prefix. These can, for example, be self-references within a table, where one record points to another, such as parent, belongs to or predecessor.

Semantic( loadstatement | selectstatement)

The semantic load will create semantic fields that can be displayed in filter panes to be used for navigation in the data.

Tables loaded through a semantic statement cannot be concatenated.

Example:

Semantic LOAD * from abc.csv;
Semantic SELECT Object1, Relation, Object2, InverseRelation from table1;
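
As a sketch with hypothetical inline data (the field and relation names are assumptions), a semantic table typically holds three or four fields: a key to one object, the name of the relation, a key to the related object and, optionally, the name of the inverse relation:

// Hypothetical predecessor/successor style relations between employees.
Semantic LOAD * Inline [
EmployeeID, Relation, OtherEmployeeID, InverseRelation
1, Manages, 2, Reports to
1, Manages, 3, Reports to
2, Manages, 4, Reports to
];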

Unless

The unless prefix and suffix are used for creating a conditional clause which determines whether a statement or exit clause should be evaluated or not. It may be seen as a compact alternative to the full if..end if statement.

(Unless condition statement | exitstatement Unless condition )

The statement or the exitstatement will only be executed if the condition evaluates to False.

The unless prefix may be used on statements which already have one or several other prefixes, including additional when or unless prefixes.

Argument Description
condition A logical expression evaluating to True or False.
statement Any script statement except control statements.
exitstatement An exit for, exit do or exit sub clause or an exit script statement.

Examples:

exit script unless A=1;
unless A=1 LOAD * from myfile.csv;
unless A=1 when B=2 drop table Tab1;

When

The when prefix and suffix are used for creating a conditional clause which determines whether a statement or exit clause should be executed or not. It may be seen as a compact alternative to the full if..end if statement.

(when condition statement | exitstatement when condition )

The statement or the exitstatement will only be executed if the condition evaluates to True.

The when prefix may be used on statements which already have one or several other prefixes, including additional when or unless prefixes.

Argument Description
condition A logical expression evaluating to True or False.
statement Any script statement except control statements.
exitstatement An exit for, exit do or exit sub clause or an exit script statement.

Example 1:

exit script when A=1;

Example 2:

when A=1 LOAD * from myfile.csv;

Example 3:

when A=1 unless B=2 drop table Tab1;