Using Talend to Extract Data from an AWS S3 Bucket to Postgres

Jordan Valdespino
May 5, 2020

Today we will be using Talend to extract data from an AWS S3 bucket and load it into a new table in PostgreSQL.

I have approximately five years of ETL and database experience, but until a few weeks ago I had never used Talend, AWS, or Postgres. Through a series of videos on Pluralsight and Talend’s curriculum, I was able to navigate this challenge in a matter of hours.

The first step is to find your AWS S3 connection and file paths. After you have logged into AWS, type “S3” into the “Find Services” section of the management console and click “S3 Scalable Storage in Cloud”. There you will find all the buckets that house your files and data sets. For this blog, I am working with a bucket labeled “Covid19”. Inside this bucket are two folders, “Confirmed” and “Deaths”, each separated into US and Global numbers for the Covid19 pandemic. We will only be working with the “Confirmed” folder and the “Global” file.

With our files in hand, let’s open Talend and create a new job design.

For extracting the data, we will be using the “tS3Connection” component. This establishes the AWS connection so we can gather our files. All you need to do for this connection to succeed is provide the “Access Key” and “Secret Key” supplied by your AWS admin, embedded in double quotes. No other settings need to be changed.
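If it helps to see what this component boils down to outside of Talend, here is a minimal sketch in Python using boto3; the key values are placeholders for the credentials from your AWS admin, not real ones.

```python
import boto3

# Minimal sketch of what tS3Connection establishes: an S3 client
# authenticated with the access key and secret key from your AWS admin.
# Both values below are placeholders.
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)
```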

The next component is needed to locate the bucket and file through our connection; it is called “tS3Get”. Before we get into the settings, you need to link the tS3Connection component to the tS3Get: right-click the connection, go to Trigger, and select “OnSubJobOk”, then take the line and click tS3Get. This tells Talend to move to the next step of the job flow once the connection succeeds. In the settings, check the “Use an existing connection” box and select the connection you created in the last step. Then in the “Bucket” box, supply the folder path, in this case “nuwave-covid19/Confirmed”. The “Key” box takes the name of the file in that folder, “ts_conf_global.csv”. Lastly, supply the target destination; the file will be downloaded to “C:/Users/jv/Documents/TS_CONF_GLOBAL.csv”. Then run the job from the “Run” tab.
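For comparison, the same fetch in the boto3 sketch above would look roughly like this. Note one difference from the Talend boxes: boto3 expects only the bucket name in the bucket argument, with the “Confirmed” folder folded into the key.

```python
# Rough equivalent of tS3Get: download the object to a local file.
# Unlike the Talend "Bucket" box, boto3 takes the bucket name alone,
# and the folder becomes part of the key.
s3.download_file(
    Bucket="nuwave-covid19",
    Key="Confirmed/ts_conf_global.csv",
    Filename="C:/Users/jv/Documents/TS_CONF_GLOBAL.csv",
)
```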

Once our file is downloaded, we will use it to define our metadata schema.

  • Navigate to the “Metadata” section.
    • Right-click “File Delimited”.
    • Select “Create File Delimited”.
    • Name the metadata “TS_CONF_GLOBAL”.
  • Browse for the file and confirm the Format is “UNIX”.
  • In the “File Settings”, navigate to “Field Separator” and select “Comma” in the dropdown.
    • Under “Rows To Skip”, check the header box.
    • In the “Preview” pane, check “Set heading row as column names”.
      • Select “Refresh”.
    • After the preview loads, click “Next”.

The final step of the metadata is confirming the data type of each column. I have found Talend’s guesses to sometimes be off, so to be safe let’s change “Type”, “Province_State”, and “Country_Region” to a length of 100, and make sure to change the schema name to “TS_Conf”.
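Since Talend’s type guesses can be off, a quick local preview of the file is a cheap sanity check. Here is one way to do that with pandas (assuming it is installed); compare the inferred dtypes against what Talend proposes.

```python
import pandas as pd

# Preview the downloaded file: column names and inferred types.
# Useful for spotting columns Talend may have guessed wrong.
df = pd.read_csv("C:/Users/jv/Documents/TS_CONF_GLOBAL.csv")
print(df.dtypes)
print(df.head())
```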

The next component is “tFileInputDelimited”. This reads the file you downloaded based on the schema you previously set up. First, you need to connect the tS3Get to the tFileInputDelimited: right-click the tS3Get, go to Trigger, select “On Component Ok”, and place the connection on the new tFileInputDelimited component.

In the tFileInputDelimited settings you will (a short sketch of what these settings amount to follows the list):

  • Go to the “Schema” drop-down and select “Repository”.
  • Select the “…” next to “Schema”.
  • Navigate to the “File Delimited” section and keep clicking the down arrow until you see “TS_Conf”.
    • This tells Talend the structure of the file.
  • In the “File name/Stream” box, provide the path and filename of your downloaded file.
  • Replace the semicolon with a comma in the “Field Separator”.
  • Select “CSV Options” and make sure the “Header” box has the number 1 in it.
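To make those last two settings concrete, here is a small sketch with Python’s csv module showing what “Field Separator = comma” and “Header = 1” amount to: one leading row of column names, then comma-separated records.

```python
import csv

# "Field Separator" = comma, "Header" = 1: one leading row of column
# names, then the data records.
with open("C:/Users/jv/Documents/TS_CONF_GLOBAL.csv", newline="") as f:
    reader = csv.reader(f, delimiter=",")
    header = next(reader)     # the single header row
    first_row = next(reader)  # first data record
print(header[:4])
print(first_row[:4])
```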

For the next step, we will connect to Postgres and load our file. The database component we are using is “tDBOutput”. After it is selected, we need to connect it to our tFileInputDelimited: right-click the tFileInputDelimited, select “Row” followed by “Main”, and select the new tDBOutput. In the tDBOutput settings, under Database, select “PostgreSQL” and click Apply. Provide all the connection information as supplied by your DBA. Then select “Repository” in the “Schema” box and choose the “TS_Conf” schema. Notice how the Table name changes to match the name of the file. Since you are using this file to create a new table, make sure to change the “Action on table” box to “Create table if it does not exist” and confirm that “Action on data” is set to “Insert”. If a warning symbol appears on the tDBOutput component referencing the schema, click the “Sync columns” button near the bottom of the settings box.
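If you want to see the same read-and-load outside Talend, here is a rough Python equivalent of the tFileInputDelimited-plus-tDBOutput pair using pandas and SQLAlchemy (with a Postgres driver such as psycopg2 installed). The connection string is a placeholder for the details from your DBA, and the lowercase table name is an assumption; adjust it to match what Talend creates.

```python
import pandas as pd
from sqlalchemy import create_engine

# Rough equivalent of tFileInputDelimited + tDBOutput: read the CSV
# and load it into Postgres, creating the table if it does not exist.
# The connection string is a placeholder; use your DBA's details.
engine = create_engine("postgresql://user:password@localhost:5432/mydb")

df = pd.read_csv("C:/Users/jv/Documents/TS_CONF_GLOBAL.csv")
# if_exists="append" creates the table when missing and inserts rows,
# mirroring "Create table if it does not exist" + "Insert".
df.to_sql("ts_conf", engine, if_exists="append", index=False)
```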

The final and most important step is running the completed job. When it finishes, you should see a new table in the Postgres database labeled “TS_Conf” with 264 rows.
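A quick way to confirm that row count from outside Talend (table name again assumed lowercase; adjust if Talend created it with different casing):

```python
from sqlalchemy import create_engine, text

# Confirm the load: the row count should match the source file.
engine = create_engine("postgresql://user:password@localhost:5432/mydb")
with engine.connect() as conn:
    rows = conn.execute(text("SELECT COUNT(*) FROM ts_conf")).scalar()
print(rows)  # 264 expected for this file
```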
