This tutorial demonstrates how to centralize logging for Snowflake by sending your logs to Coralogix.
Snowflake is a cloud-based data warehousing platform designed for storing, processing, and analyzing large volumes of data. Its highly scalable service allows organizations to store and query data efficiently in the cloud.
Snowflake has no direct built-in integration with Coralogix for exporting logs and audit-related data. This step-by-step guide shows you how to set up an AWS S3 bucket to which you can export your Snowflake logs and audit data. Coralogix then ingests the data and presents it for optimized visualization and analysis in our platform.
STEP 1. Create an IAM policy.
Configure access permissions for your S3 bucket by creating an IAM policy that grants Snowflake the permissions required to read from and write to the bucket.
STEP 2. Configure an IAM role.
Create an IAM role that Snowflake can assume to unload data into the S3 bucket, and attach the IAM policy from Step 1 to it.
STEP 3. Create an S3 storage integration.
create or replace storage integration s3_integration
  type = external_stage
  storage_provider = s3
  enabled = true
  storage_aws_role_arn = '<PUT_HERE_AWS_ROLE_ARN>'
  storage_allowed_locations = ('<PUT_HERE_AWS_S3_BUCKET_PATH>');
STEP 4. Create a JSON file format.
create or replace file format my_json_format
  type = json
  compression = 'gzip'
  null_if = ('NULL', 'null');
STEP 5. Create an S3 stage.
use database <PUT_HERE_DB_NAME>;
use schema <PUT_HERE_DB_SCHEMA_NAME_TO_USE>;
create or replace stage my_s3_stage
  storage_integration = s3_integration
  url = '<PUT_HERE_AWS_S3_BUCKET_PATH>'
  file_format = my_json_format;
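Before unloading any data, it can help to confirm that the stage can actually reach the bucket. One quick sanity check is to list the stage's contents (empty output is expected if no files have been unloaded yet; an error here usually points to a problem with the integration or IAM setup):

list @my_s3_stage;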
STEP 6. To get the AWS IAM user details recorded for your Snowflake account, run this command:
DESC INTEGRATION s3_integration;
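The output includes, among other properties, the STORAGE_AWS_IAM_USER_ARN and STORAGE_AWS_EXTERNAL_ID values, which go into the trust relationship of the IAM role from Step 2. If the unload commands will be run under a non-admin role, that role also needs usage on the integration; a sketch, assuming a hypothetical role named etl_role:

grant usage on integration s3_integration to role etl_role;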
STEP 7. Execute an unload command to push data from the tables to the stage and, in turn, to AWS S3.
use database <PUT_HERE_DB_NAME>;
use warehouse <PUT_HERE_WAREHOUSE_NAME>;

copy into @my_s3_stage/login_history
from (
  select object_construct(
    'application', 'snowflake',
    'environment', '<PUT_HERE_ENV_NAME>',
    'log_type', 'login_history',
    'EVENT_TIMESTAMP', EVENT_TIMESTAMP,
    'EVENT_TYPE', EVENT_TYPE,
    'USER_NAME', USER_NAME,
    'CLIENT_IP', CLIENT_IP,
    'REPORTED_CLIENT_TYPE', REPORTED_CLIENT_TYPE,
    'FIRST_AUTHENTICATION_FACTOR', FIRST_AUTHENTICATION_FACTOR,
    'IS_SUCCESS', IS_SUCCESS,
    'ERROR_CODE', ERROR_CODE,
    'ERROR_MESSAGE', ERROR_MESSAGE
  )
  from snowflake.account_usage.login_history
)
file_format = (type = json);

copy into @my_s3_stage/access_history
from (
  select object_construct(
    'application', 'snowflake',
    'environment', '<PUT_HERE_ENV_NAME>',
    'log_type', 'access_history',
    'QUERY_START_TIME', QUERY_START_TIME,
    'USER_NAME', USER_NAME,
    'DIRECT_OBJECTS_ACCESSED', DIRECT_OBJECTS_ACCESSED,
    'BASE_OBJECTS_ACCESSED', BASE_OBJECTS_ACCESSED,
    'OBJECTS_MODIFIED', OBJECTS_MODIFIED
  )
  from snowflake.account_usage.access_history
)
file_format = (type = json);
Notes:
- Use the copy into command to unload / extract data from tables such as login_history, access_history, query_history, sessions, etc.
- Use OBJECT_CONSTRUCT to selectively pull data from columns and pair each value with the specified name, producing a JSON-formatted file.
STEP 8. [Optional] Repeat the process, writing similar scripts for all the tables below in which Snowflake stores logs and audit-related data.
All these tables and views are present in the snowflake.account_usage schema.
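As a sketch of what one of those additional scripts might look like, the following unload of query_history mirrors the pattern used above; the column selection here is illustrative, so adjust it to the fields you need:

copy into @my_s3_stage/query_history
from (
  select object_construct(
    'application', 'snowflake',
    'environment', '<PUT_HERE_ENV_NAME>',
    'log_type', 'query_history',
    'QUERY_ID', QUERY_ID,
    'QUERY_TEXT', QUERY_TEXT,
    'USER_NAME', USER_NAME,
    'START_TIME', START_TIME,
    'EXECUTION_STATUS', EXECUTION_STATUS
  )
  from snowflake.account_usage.query_history
)
file_format = (type = json);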
Once the above commands have run, you should see log files created in your S3 bucket.
Coralogix provides multiple methods for collecting logs from Amazon S3. Send us the data from your Amazon S3 bucket using an AWS Lambda function, following one of two event-driven design patterns, or use any of our customized log collection options to allow Coralogix to ingest the logs stored in your Amazon S3 bucket and process them for further analysis and monitoring.
In your Coralogix toolbar, navigate to Explore > Logs to view your logs in your Coralogix dashboard.
Need help?
Our world-class customer success team is available 24/7 to walk you through your setup and answer any questions that may come up.
Feel free to reach out to us via our in-app chat or by sending us an email at [email protected].