Rustam Mehmandarov is a passionate computer scientist, a Java Champion, and a Google Developers Expert (GDE) for Cloud. He is a JavaOne Rockstar, a public speaker, and a former leader of JavaZone and the Norwegian JUG, javaBin.
[Speaker photo: https://voxxedromania.ams3.cdn.digitaloceanspaces.com/2020-03-VDBUH/speakers/speakers/rustam-700.jpg]
A few years ago, moving data between applications and datastores meant buying expensive, inflexible monolithic stacks from large software vendors.
Now, with frameworks such as Apache Beam and Apache Airflow, we can schedule and run data processing jobs for both streaming and batch with the same underlying code.
This presentation demonstrates how these frameworks can glue your applications together. We will run a data pipeline from Apache Kafka through Flink on Hadoop to Hive, then move it to Pub/Sub, Dataflow, and BigQuery by changing just a few lines of Java in our Apache Beam code. We will also look at how such a pipeline can be deployed to different cloud platforms, like Oracle Cloud or any other cloud out there.
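For illustration, here is a minimal sketch of what such a runner-portable Beam pipeline might look like in Java. The broker address, topic name, and output path are placeholders, and the bounded read (withMaxNumRecords) is only there so the sketch can also run as a batch job:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.Values;
import org.apache.beam.sdk.values.TypeDescriptors;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaToTextPipeline {
  public static void main(String[] args) {
    // The runner is chosen on the command line (--runner=FlinkRunner,
    // --runner=DataflowRunner, ...); the pipeline code itself does not change.
    PipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().create();
    Pipeline pipeline = Pipeline.create(options);

    pipeline
        .apply("ReadFromKafka",
            KafkaIO.<String, String>read()
                .withBootstrapServers("kafka:9092")  // placeholder broker address
                .withTopic("events")                 // placeholder topic name
                .withKeyDeserializer(StringDeserializer.class)
                .withValueDeserializer(StringDeserializer.class)
                .withMaxNumRecords(1_000)            // bound the read so the sketch runs as batch
                .withoutMetadata())
        .apply(Values.create())                      // keep only the record values
        .apply("Uppercase",
            MapElements.into(TypeDescriptors.strings())
                .via((String value) -> value.toUpperCase()))
        // Moving to Google Cloud is mostly a matter of swapping the endpoints:
        // replace KafkaIO with PubsubIO.readStrings().fromTopic(...) and the
        // sink with BigQueryIO.write(); the transforms in between stay put.
        .apply("WriteOutput", TextIO.write().to("output/events"));

    pipeline.run().waitUntilFinish();
  }
}
```

Running the same class with --runner=FlinkRunner targets a Flink cluster on Hadoop, while --runner=DataflowRunner, together with the Pub/Sub and BigQuery IO swap noted in the comments, targets Google Cloud; that swap is roughly the "few lines of Java" the abstract refers to.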