Creating Your First Braineous Data Pipeline
Learn how to create a Braineous Data Pipeline.
This guide covers:

- Getting the source data
- Registering a data pipe and sending source data to the configured target MongoDB database
- Verifying that the target database received the data
1. Prerequisites
To complete this guide, you need:

- Roughly 15 minutes
- An IDE
- JDK 11+ installed with JAVA_HOME configured appropriately
- Apache Maven 3.9.5
Verify Maven is using the Java you expect:
if you have multiple JDKs installed, Maven may not pick up the expected Java,
and you could end up with unexpected results.
You can verify which JDK Maven uses by running mvn --version.
Download the Braineous-1.0.0-CR3 zip archive
This tutorial is located under: braineous-1.0.0-cr3/tutorials/get-started
2. Initialize
Get an instance of the DataPlatformService. Set up your API_KEY and API_SECRET.
Please refer to 'Step 9' of the Getting Started guide:
https://bugsbunnyshah.github.io/braineous/get-started/
DataPlatformService dataPlatformService = DataPlatformService.getInstance();
String apiKey = "ffb2969c-5182-454f-9a0b-f3f2fb0ebf75";
String apiSecret = "5960253b-6645-41bf-b520-eede5754196e";
3. Get the Source Data
Let’s start with a simple JSON array, used as the data source to be ingested by the Braineous Data Ingestion Engine.
Source Data
[
{
"id" : 1,
"name": "name_1",
"age": 46,
"addr": {
"email": "name_1@email.com",
"phone": "123"
}
},
{
"id": "2",
"name": "name_2",
"age": 55,
"addr": {
"email": "name_2@email.com",
"phone": "1234"
}
}
]
Java Code
String datasetLocation = "dataset/data.json";
String json = Util.loadResource(datasetLocation);
A dataset can be loaded from any data source, such as a database, a legacy production data store,
a live data feed, a third-party data source, a Kafka stream, etc. In this example, the dataset is loaded from a classpath
resource located at src/main/resources/dataset/data.json.
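The Util.loadResource helper ships with the tutorial code; its exact implementation is not shown here. A minimal sketch of such a classpath loader, assuming a UTF-8 text resource and stdlib-only I/O, might look like:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ResourceLoader {

    // Load a classpath resource (e.g. "dataset/data.json") as a UTF-8 string.
    // This is a hypothetical stand-in for the tutorial's Util.loadResource helper.
    public static String loadResource(String location) throws IOException {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        try (InputStream in = cl.getResourceAsStream(location)) {
            if (in == null) {
                throw new IOException("Resource not found on classpath: " + location);
            }
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```

With this sketch, any file placed under src/main/resources ends up on the runtime classpath, so the same relative path works in the IDE, in Maven builds, and inside the packaged jar.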
4. Register a data pipe and send source data to configured target MongoDB database
Register a data pipe with the Braineous Data Ingestion Engine using the Java Braineous Data Ingestion Client SDK.
Pipe Configuration
{
"pipeId": "yyya",
"entity": "abc",
"configuration": [
{
"stagingStore" : "com.appgallabs.dataplatform.targetSystem.core.driver.MongoDBStagingStore",
"name": "yyya",
"config": {
"connectionString": "mongodb://localhost:27017",
"database": "yyya",
"collection": "data",
"jsonpathExpressions": []
}
}
]
}
- pipeId: As a data source provider, this id uniquely identifies this data pipe with the Braineous Data Pipeline Engine.
- entity: The business/domain entity that this dataset should be associated with.
- configuration.stagingStore: The Staging Store driver
- configuration.name: A user-friendly way to identify the target store
- configuration.config.connectionString: MongoDB connection string for your target store
- configuration.config.database: MongoDB database on your target store
- configuration.config.collection: MongoDB collection on your target store
- configuration.config.jsonpathExpressions: Data transformation based on the JSONPath specification: https://www.ietf.org/archive/id/draft-goessner-dispatch-jsonpath-00.html
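For example, jsonpathExpressions could carry JSONPath expressions that select which parts of each record are shaped for the target store. A hypothetical fragment (the expressions below are illustrative, not from the tutorial) might look like:

```json
"jsonpathExpressions": ["$.name", "$.addr.email"]
```

An empty array, as used in this tutorial's pipe_config.json, means the records are delivered without transformation.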
A data pipe can be configured with multiple target stores/systems, all associated with the same data pipe, for data delivery.
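As a sketch, the same pipe could fan out to two targets by listing two entries in the configuration array. The names and the second database below are illustrative, not from the tutorial:

```json
{
  "pipeId": "yyya",
  "entity": "abc",
  "configuration": [
    {
      "stagingStore": "com.appgallabs.dataplatform.targetSystem.core.driver.MongoDBStagingStore",
      "name": "primary_store",
      "config": {
        "connectionString": "mongodb://localhost:27017",
        "database": "yyya",
        "collection": "data",
        "jsonpathExpressions": []
      }
    },
    {
      "stagingStore": "com.appgallabs.dataplatform.targetSystem.core.driver.MongoDBStagingStore",
      "name": "replica_store",
      "config": {
        "connectionString": "mongodb://localhost:27017",
        "database": "yyya_replica",
        "collection": "data",
        "jsonpathExpressions": []
      }
    }
  ]
}
```

Each entry in the configuration array names its own staging-store driver and target settings, so every ingested record is delivered to each configured target.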
The current release supports the following target stores:

- Snowflake
- MySQL
- ElasticSearch
- MongoDB
- ClickHouse
In future releases, the Braineous team will add support for more target stores and systems, such as:

- PostgreSQL
- Oracle
- Amazon Redshift
Java Code - Register Pipe
String configLocation = "pipe_config/pipe_config.json";
String pipeConfigJson = Util.loadResource(configLocation);
JsonObject configJson = JsonUtil.validateJson(pipeConfigJson).getAsJsonObject();
String pipeId = configJson.get("pipeId").getAsString();
String entity = configJson.get("entity").getAsString();
System.out.println("*****PIPE_CONFIGURATION******");
JsonUtil.printStdOut(configJson);
//configure the DataPipeline Client
Configuration configuration = new Configuration().
ingestionHostUrl("http://localhost:8080/").
apiKey(apiKey).
apiSecret(apiSecret).
streamSizeInObjects(0);
dataPlatformService.configure(configuration);
//register pipe
dataPlatformService.registerPipe(configJson);
System.out.println("*****PIPE_REGISTRATION_SUCCESS******");
Pipe configuration can be provided dynamically at runtime. The source can be a
database, a configuration system, the local file system, a network file system, etc.
In this example, the configuration is loaded from a classpath
resource located at src/main/resources/pipe_config/pipe_config.json.
Java Code - Send Data for ingestion
//parse the source data and send it through the pipeline
JsonElement datasetElement = JsonUtil.validateJson(json);
dataPlatformService.sendData(pipeId, entity, datasetElement.toString());
System.out.println("*****DATA_INGESTION_SUCCESS******");
5. Run the Tutorial
cd braineous-1.0.0-cr3/tutorials/get-started
./run.sh
Expected Output:
[INFO] ------------------------------------------------------------------------
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
*****DATA_SET******
******ARRAY_SIZE: 2**********
[
{
"id": 1,
"name": "name_1",
"age": 46,
"addr": {
"email": "name_1@email.com",
"phone": "123"
}
},
{
"id": "2",
"name": "name_2",
"age": 55,
"addr": {
"email": "name_2@email.com",
"phone": "1234"
}
}
]
**********************
*****PIPE_CONFIGURATION******
{
"pipeId": "yyya",
"entity": "abc",
"configuration": [
{
"stagingStore": "com.appgallabs.dataplatform.targetSystem.core.driver.MongoDBStagingStore",
"name": "yyya",
"config": {
"connectionString": "mongodb://localhost:27017",
"database": "yyya",
"collection": "data",
"jsonpathExpressions": []
}
}
]
}
**********************
*****PIPE_REGISTRATION_SUCCESS******
***SENDING_DATA_START*****
*****DATA_INGESTION_SUCCESS******
6. Verify all target collections receive the data
To verify the success of the ingestion and delivery to the configured target database, use the following MongoDB commands.

Expected Result: You should see two records added to a collection called "data"
in a database called "yyya", corresponding to the configured value of configuration.config.database.

mongosh
use yyya
show collections
db.data.find()
db.data.count()