DB Commands
Queries for DB using CLI
When we run the CLI, it connects to the default database. The database name can be provided as a command-line argument when running the CLI; otherwise "mydb" is used as the database to connect to.
BangDB runs only one database at a time; however, we can switch to a different database from the CLI.
Once we run the CLI, we can perform activities in several different areas; this page focuses on DB-related activities
To see help
bangdb > help db
To see all the tables
bangdb > show tables
To see all the user tables
bangdb > show tables only user
Similarly to see all the sys tables
bangdb > show tables only sys
To describe the database, i.e. to get the details of the db
This will print a JSON document on the screen.
bangdb > describe database
To see the JSON in pretty format
bangdb > describe database pretty
To describe any table, let's say ml_bucket_info, which is a sys table
bangdb > describe table ml_bucket_info pretty
To create a table
BangDB provides plenty of different options to create a table. Therefore table creation is a workflow, which gets initiated when we issue the "create table ..." command.
To accept the default values, simply press Enter when prompted
bangdb > create table mytable
Table Type [ NORMAL_TABLE, KV (0) | WIDE_TABLE, Documents (1) | LARGE_TABLE, large objects/files (2) | PRIMITIVE_TABLE, like column (3) ] (or Enter for default (0)):
Here as we see we need to select the table type.
NORMAL_TABLE (0) : select this if you want to store opaque values, the KV store model. Secondary indexes can't be created on this table
WIDE_TABLE (1) : to store JSON docs. Secondary indexes can be created on this table
LARGE_TABLE (2) : when you wish to deal with large files or objects; "large" here could range from a few hundred KBs to several GBs per file or object
PRIMITIVE_TABLE (3) : essentially a KV store, but for values of fixed size, e.g. int, long, double
The next set of workflow prompts depends on the selection here.
Let's say we select 1 here, WIDE_TABLE
Is it a SW(Sliding Window) table? y/n (or Enter for default (n)):
We can put this table in a sliding window if we wish; this is good for use cases where we want to discard data after some time
Let's say we select NO (n)
allow reverse index as well? y/n (or Enter for default (n)):
This asks whether to enable the reverse index for the table; let's select Yes (y) here
allow duplicate primary keys as well? y/n(or Enter for default (n)):
Primary keys are always indexed within BangDB, for all kinds of tables. This helps in querying using the primary keys.
However, we can allow duplicate primary keys as well
Let's select Yes (y) here
Key type [NORMAL_KEY(string type) (0) | COMPOSITE_KEY(also string type) (1) | NORMAL_KEY_LONG (long type) (2)] (or Enter for default (0)):
We can define different kinds of keys for primary key
NORMAL_KEY : String type; needs the key length to be defined. The user should not use a very long value here as it affects performance, but it should be long enough to cover the use case. Note that if the key size is set too low, an insert may be rejected when the actual key is longer than the defined size. The default value is 24 bytes
COMPOSITE_KEY : Again string type, hence the key length needs to be defined as well. In some cases composite keys can be very efficient from a storage and scan perspective
NORMAL_KEY_LONG : select this when the key is of long type. This is a fixed-size key and it's very efficient
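To see why composite keys can be efficient for scans, here is a generic, BangDB-independent sketch in Python: when key parts are concatenated (the ":" delimiter and key layout below are illustrative assumptions, not BangDB's actual format) and kept sorted, all rows sharing a prefix sit contiguously, so a prefix scan is just a binary search plus a sequential read.

```python
import bisect

# Illustrative only: compose keys as "org:city:user" (delimiter is an assumption)
keys = sorted([
    "bangdb:bangalore:user1",
    "bangdb:bangalore:user2",
    "bangdb:delhi:user3",
    "acme:nyc:user4",
])

def prefix_scan(sorted_keys, prefix):
    # Binary-search to the first key >= prefix, then read sequentially
    i = bisect.bisect_left(sorted_keys, prefix)
    out = []
    while i < len(sorted_keys) and sorted_keys[i].startswith(prefix):
        out.append(sorted_keys[i])
        i += 1
    return out

print(prefix_scan(keys, "bangdb:bangalore:"))
# ['bangdb:bangalore:user1', 'bangdb:bangalore:user2']
```

This is the general idea behind sorted composite keys; the actual on-disk arrangement in BangDB may differ.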
Let's select NORMAL_KEY (0)
key size in num of bytes (at least 8 bytes, max 128 bytes, as low as possible but high enough for the key) (or Enter for default (24)):
Let's go with default 24 bytes
primary key arrangement (index) type [ BTREE (2) | EXTHASH (1) ] (or Enter for default (2)):
Primary keys could be arranged in sorted (BTREE, actually B+ExtTree) manner or hashed (EXTHASH).
Let's go with BTREE as it's a good choice most of the time
Method for key sort [ Lexicographically (1) | Quasi Lexicographically (2) ] (or Enter for default (2)):
If sorted, we can further specify how to arrange the keys; let's go with the default
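The difference between the two sort methods can be illustrated with a short, BangDB-independent Python sketch. This assumes "quasi lexicographic" means shortlex-style ordering (shorter keys first, ties broken lexicographically); that interpretation is an assumption made for illustration, not BangDB's documented comparator.

```python
keys = ["user10", "user2", "user1", "user20"]

# Plain lexicographic (byte-wise) ordering
lex = sorted(keys)

# Quasi-lexicographic (shortlex) ordering: shorter keys first,
# ties broken lexicographically -- an assumption about what the
# CLI option means, used here purely for illustration
quasi = sorted(keys, key=lambda s: (len(s), s))

print(lex)    # ['user1', 'user10', 'user2', 'user20']
print(quasi)  # ['user1', 'user2', 'user10', 'user20']
```

Note how shortlex keeps numeric-suffixed keys such as user2 and user10 in natural order, which is often desirable for generated keys.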
Direction for key sort [ Ascending (3) | Descending (4) ] (or Enter for default (3)):
Let's go with ascending, default
Now it will print our selections on the terminal and ask for confirmation
table config set is as follows;
db type = 1, idx type = 2, table type = 1, key type = 1, table_sub type = 7, allow_duplicate = 1, allow_rev_idx = 1, sort_method = 2, sort_direction = 3, key_sz = 24, wal_enabled = 1, log_sz_mb = 128
Please type 'a' for abort or 'c' for commit [ a | c ]:
Press 'c' to commit, and the table should be created.
You can check the details of the table by using "describe table mytable" command
To create index on the table
Let's create an index on the firstname field of the mytable table
bangdb > create index mytable.firstname
Index creation is very similar to table creation; it also initiates a similar workflow
Key Type [NORMAL_KEY (1) | NORMAL_KEY_LONG (2)] (or Enter to set default (1)):
Let's select NORMAL_KEY (1)
Key size (or Enter to set default (24)):
Select 24 as key size
Sort direction [SORT_ASCENDING(3) | SORT_DESCENDING(4)] (or Enter to set default (3)):
Select default 3
Allow duplicate indexes as well? y/n: (or Enter for default (n))
Select Yes (y) to allow duplicate index entries
Finally, it asks for confirmation before committing
Please type 'a' for abort or 'c' for commit [ a | c ]:
Select 'c' for commit and it will create the index
Insert some data into the table
insert into mytable values "user1" {"firstname":"sachin", "org":"bangdb","city":"bangalore"}
Select data from the table now
select * from mytable
It will return something like this;
+---------+------------------------------------------------------------------------------------------------------------+
|key | val |
+---------+------------------------------------------------------------------------------------------------------------+
|user1 | {"firstname":"sachin","org":"bangdb","city":"bangalore","_pk":"user1","_v":1} |
+---------+------------------------------------------------------------------------------------------------------------+
total rows retrieved = 1 (1)
We can also scan using firstname
bangdb > select * from mytable where firstname = "sachin"
+---------+------------------------------------------------------------------------------------------------------------+
|key | val |
+---------+------------------------------------------------------------------------------------------------------------+
|user1 | {"firstname":"sachin","org":"bangdb","city":"bangalore","_pk":"user1","_v":1} |
+---------+------------------------------------------------------------------------------------------------------------+
total rows retrieved = 1 (1)
Even though we didn't create an index on "org", we can still scan on this field
select * from mytable where org = "bangdb"
+---------+------------------------------------------------------------------------------------------------------------+
|key | val |
+---------+------------------------------------------------------------------------------------------------------------+
|user1 | {"firstname":"sachin","org":"bangdb","city":"bangalore","_pk":"user1","_v":1} |
+---------+------------------------------------------------------------------------------------------------------------+
total rows retrieved = 1 (1)
We can use primary keys for select along with other filters
select * from mytable where _pk > "user" and org = "bangdb"
If we wish to limit the number of rows returned, we use "limit n", where n is the number of rows
The default value of limit is 10
select * from mytable where _pk > "user" and org = "bangdb" limit 20
Let's use the reverse index now, since we enabled it during table creation
But first, let's insert a few docs and reverse-index a few keys/fields
insert into mytable values "user1" {"firstname":"sachin", "org":"bangdb","city":"bangalore","fav_quote":"Truth is ever to be found in simplicity, and not in the multiplicity and confusion of things"} revidx fav_quote
Note that we added "revidx fav_quote" at the end. This tells the db to reverse-index this field.
We can have multiple fields here, separated by commas
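For instance, to reverse-index both fav_quote and city in a single insert (the document values here are hypothetical; the comma-separated field list follows the note above):

```
insert into mytable values "user2" {"firstname":"arjun", "org":"bangdb", "city":"pune", "fav_quote":"Simplicity is the ultimate sophistication"} revidx fav_quote,city
```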
Now we will use reverse index based search, the query again looks similar
We wish to select all the rows where "fav_quote" field contains "Truth", "confusion" and "simplicity" tokens
select * from mytable where fav_quote = "Truth, confusion, simplicity"
scanning for pk range [null : null] and query = {"query":[{"match_words":"Truth, confusion, simplicity","joinop":1,"field":"fav_quote"}]}
+---------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
|key |val |
+---------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
|user1    |{"firstname":"sachin","org":"bangdb","city":"bangalore","fav_quote":"Truth is ever to be found in simplicity, |
| |and not in the multiplicity and confusion of things","_pk":"user1","_v":1} |
+---------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
total rows retrieved = 1 (1)
Count the number of records
bangdb > select count(*) from mytable
We can add all those filters that we can for select query, for example
bangdb > select count(*) from mytable where _pk > "user"
Update the record
bangdb > update mytable set val = {"name":"sachin sinha","city":"delhi"} where _pk = "user1" and city = "bangalore"
To delete a key
We can use all the filters that we can use with a select query
bangdb > delete from mytable where _pk = "user1"
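For example, combining a primary-key range with a field filter (this assumes delete accepts the same filter syntax shown for select above):

```
bangdb > delete from mytable where _pk > "user" and org = "bangdb"
```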
Dump the table to disk (force it to be written to disk)
bangdb > dump table mytable
table mytable dumped successfully
Drop the index
bangdb > drop index mytable.firstname
you are going to permanently drop and delete the index files
do you still wish to drop the index...? [ yes | no ]: yes
dropping index mytable.firstname ...
Index mytable.firstname dropped successfully
Drop the table now; this will permanently delete the table
bangdb > drop table mytable
you are going to permanently drop and delete the table files
you may close the table and move the table files as archive instead
do you still wish to drop the table...? [ yes | no ]: yes
dropping table mytable ...
table mytable dropped successfully