Develop and test AWS Glue 5.0 jobs locally using a Docker container

12 March 2025


AWS Glue is a serverless data integration service that lets you process and integrate data coming from different data sources at scale. AWS Glue 5.0, the latest version of AWS Glue for Apache Spark jobs, provides a performance-optimized Apache Spark 3.5 runtime experience for batch and stream processing. With AWS Glue 5.0, you get improved performance, enhanced security, support for the next generation of Amazon SageMaker, and more. AWS Glue 5.0 enables you to develop, run, and scale your data integration workloads and get insights faster.

AWS Glue accommodates various development preferences through multiple job creation approaches. For developers who prefer direct coding, Python or Scala development is available using the AWS Glue ETL library.

Building production-ready data platforms requires robust development processes and continuous integration and delivery (CI/CD) pipelines. To support diverse development needs, whether on local machines, Docker containers on Amazon Elastic Compute Cloud (Amazon EC2), or other environments, AWS provides an official AWS Glue Docker image through the Amazon ECR Public Gallery. The image allows developers to work efficiently in their preferred environment while using the AWS Glue ETL library.

In this post, we show how to develop and test AWS Glue 5.0 jobs locally using a Docker container. This post is an updated version of the post Develop and test AWS Glue version 3.0 and 4.0 jobs locally using a Docker container, and uses AWS Glue 5.0.

Available Docker images

The following Docker images are available on the Amazon ECR Public Gallery:

  • AWS Glue version 5.0 – public.ecr.aws/glue/aws-glue-libs:5

AWS Glue Docker images are compatible with both x86_64 and arm64.

In this post, we use public.ecr.aws/glue/aws-glue-libs:5 and run the container on a local machine (Mac, Windows, or Linux). This container image has been tested for AWS Glue 5.0 Spark jobs and bundles the libraries needed to run and test them locally.

To set up your container, you pull the image from the ECR Public Gallery and then run the container. We demonstrate how to run your container with the following methods, depending on your requirements:

  • spark-submit
  • REPL shell (pyspark)
  • pytest
  • Visual Studio Code

Prerequisites

Before you start, make sure that Docker is installed and the Docker daemon is running. For installation instructions, see the Docker documentation for Mac, Windows, or Linux. Also make sure that you have at least 7 GB of disk space for the image on the host running Docker.
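For example, you can quickly confirm both conditions from a terminal. These checks are a minimal sketch and not part of the original walkthrough:

docker --version      # confirms Docker is installed
docker info           # returns an error if the Docker daemon is not running
docker system df      # shows how much disk space Docker images and containers use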

Configure AWS credentials

To enable AWS API calls from the container, set up your AWS credentials with the following steps:

  1. Create an AWS named profile.
  2. Open cmd on Windows or a terminal on Mac/Linux, and run the following command:
PROFILE_NAME="profile_name"

In the following sections, we use this AWS named profile.
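If you haven't created a named profile yet, one way to do it is with the AWS CLI. The profile name is arbitrary as long as it matches the PROFILE_NAME value above:

aws configure --profile profile_name
# prompts for the access key ID, secret access key, default Region, and output format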


Pull the image from the ECR Public Gallery

If you're running Docker on Windows, choose the Docker icon (right-click) and choose Switch to Linux containers before pulling the image.

Run the following command to pull the image from the ECR Public Gallery:

docker pull public.ecr.aws/glue/aws-glue-libs:5

Run the container

Now you can run a container using this image. You can choose any of the following methods based on your requirements.

spark-submit

You can run an AWS Glue job script by running the spark-submit command on the container.

Write your job script (sample.py in the following example) and save it under the /local_path_to_workspace/src/ directory using the following commands:

$ WORKSPACE_LOCATION=/local_path_to_workspace
$ SCRIPT_FILE_NAME=sample.py
$ mkdir -p ${WORKSPACE_LOCATION}/src
$ vim ${WORKSPACE_LOCATION}/src/${SCRIPT_FILE_NAME}

These variables are used in the following docker run command. The sample code (sample.py) used in the spark-submit command is included in the appendix at the end of this post.

Run the following command to run spark-submit on the container and submit a new Spark application:

$ docker run -it --rm \
    -v ~/.aws:/home/hadoop/.aws \
    -v $WORKSPACE_LOCATION:/home/hadoop/workspace/ \
    -e AWS_PROFILE=$PROFILE_NAME \
    --name glue5_spark_submit \
    public.ecr.aws/glue/aws-glue-libs:5 \
    spark-submit /home/hadoop/workspace/src/$SCRIPT_FILE_NAME

REPL shell (pyspark)

You can run a REPL (read-eval-print loop) shell for interactive development. Run the following command to run the pyspark command on the container and start the REPL shell:

$ docker run -it --rm \
    -v ~/.aws:/home/hadoop/.aws \
    -e AWS_PROFILE=$PROFILE_NAME \
    --name glue5_pyspark \
    public.ecr.aws/glue/aws-glue-libs:5 \
    pyspark

You will see the following output:

Python 3.11.6 (main, Jan  9 2025, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 3.5.2-amzn-1
      /_/

Using Python version 3.11.6 (main, Jan  9 2025 00:00:00)
Spark context Web UI available at None
Spark context available as 'sc' (master = local[*], app id = local-1740643079929).
SparkSession available as 'spark'.
>>> 

With this REPL shell, you can code and test interactively.
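For example, you can reproduce the Appendix A logic interactively, assuming the same Amazon S3 read permissions described in Appendix A:

>>> from awsglue.context import GlueContext
>>> glue_context = GlueContext(sc)
>>> dyf = glue_context.create_dynamic_frame.from_options(
...     connection_type="s3",
...     connection_options={"paths": ["s3://awsglue-datasets/examples/us-legislators/all/persons.json"], "recurse": True},
...     format="json")
>>> dyf.printSchema()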

pytest

For unit testing, you can use pytest for AWS Glue Spark job scripts.

Run the following commands for preparation:

$ WORKSPACE_LOCATION=/local_path_to_workspace
$ SCRIPT_FILE_NAME=sample.py
$ UNIT_TEST_FILE_NAME=test_sample.py
$ mkdir -p ${WORKSPACE_LOCATION}/tests
$ vim ${WORKSPACE_LOCATION}/tests/${UNIT_TEST_FILE_NAME}

Now let's invoke pytest using docker run:

$ docker run -i --rm \
    -v ~/.aws:/home/hadoop/.aws \
    -v $WORKSPACE_LOCATION:/home/hadoop/workspace/ \
    --workdir /home/hadoop/workspace \
    -e AWS_PROFILE=$PROFILE_NAME \
    --name glue5_pytest \
    public.ecr.aws/glue/aws-glue-libs:5 \
    -c "python3 -m pytest --disable-warnings"

When pytest finishes executing unit tests, your output will look something like the following:

============================= test session starts ==============================
platform linux -- Python 3.11.6, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/hadoop/workspace
plugins: integration-mark-0.2.0
collected 1 item

tests/test_sample.py .                                                   [100%]

======================== 1 passed, 1 warning in 34.28s =========================

Visual Studio Code

To set up the container with Visual Studio Code, complete the following steps:

  1. Install Visual Studio Code.
  2. Install Python.
  3. Install Dev Containers.
  4. Open the workspace folder in Visual Studio Code.
  5. Press Ctrl+Shift+P (Windows/Linux) or Cmd+Shift+P (Mac).
  6. Enter Preferences: Open Workspace Settings (JSON).
  7. Press Enter.
  8. Enter the following JSON and save it:
{
    "python.defaultInterpreterPath": "/usr/bin/python3.11",
    "python.evaluation.extraPaths": [
        "/usr/lib/spark/python/lib/py4j-0.10.9.7-src.zip:/usr/lib/spark/python/:/usr/lib/spark/python/lib/",
    ]
}

Now you're ready to set up the container.

  1. Run the Docker container:
$ docker run -it --rm \
    -v ~/.aws:/home/hadoop/.aws \
    -v $WORKSPACE_LOCATION:/home/hadoop/workspace/ \
    -e AWS_PROFILE=$PROFILE_NAME \
    --name glue5_pyspark \
    public.ecr.aws/glue/aws-glue-libs:5 \
    pyspark

  1. Start Visual Studio Code.
  2. Choose Remote Explorer in the navigation pane.
  3. Choose the container public.ecr.aws/glue/aws-glue-libs:5 (right-click) and choose Attach in Current Window.

  1. If the following dialog appears, choose Got it.

  1. Open /home/hadoop/workspace/.

  1. Create an AWS Glue PySpark script and select Run.

You should see the successful run of the AWS Glue PySpark script.

Changes between the AWS Glue 4.0 and AWS Glue 5.0 Docker images

The following are the major changes between the AWS Glue 4.0 and AWS Glue 5.0 Docker images:

  • In AWS Glue 5.0, there is a single container image for both batch and streaming jobs. This differs from AWS Glue 4.0, where there was one image for batch and another for streaming.
  • In AWS Glue 5.0, the default user name of the container is hadoop. In AWS Glue 4.0, the default user name was glue_user.
  • In AWS Glue 5.0, several additional libraries, including JupyterLab and Livy, have been removed from the image. You can install them manually.
  • In AWS Glue 5.0, the Iceberg, Hudi, and Delta libraries are all pre-loaded by default, and the environment variable DATALAKE_FORMATS is no longer needed. Up to AWS Glue 4.0, the environment variable DATALAKE_FORMATS was used to specify which table formats to load, as shown in the sketch after this list.
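The following comparison is a rough sketch of that difference. The AWS Glue 4.0 image tag shown here is illustrative, so check the Glue 4.0 container documentation for the exact name:

# AWS Glue 4.0: the table format had to be opted in through DATALAKE_FORMATS
docker run -it --rm \
    -e DATALAKE_FORMATS=iceberg \
    --name glue4_pyspark \
    public.ecr.aws/glue/aws-glue-libs:glue_libs_4.0.0_image_01 \
    pyspark

# AWS Glue 5.0: Iceberg, Hudi, and Delta are already on the classpath, no variable needed
docker run -it --rm \
    --name glue5_pyspark \
    public.ecr.aws/glue/aws-glue-libs:5 \
    pyspark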

These changes are specific to the Docker image. To learn more about AWS Glue 5.0 updates, see Introducing AWS Glue 5.0 for Apache Spark and Migrating AWS Glue for Spark jobs to AWS Glue version 5.0.

Considerations

Keep in mind that the following features are not supported when using the AWS Glue container image to develop job scripts locally:

Conclusion

In this post, we explored how the AWS Glue 5.0 Docker images provide a flexible foundation for developing and testing AWS Glue job scripts in your preferred environment. These images, readily available in the Amazon ECR Public Gallery, streamline the development process by offering a consistent, portable environment for AWS Glue development.

To learn more about how to build an end-to-end development pipeline, see End-to-end development lifecycle for data engineers to build a data integration pipeline using AWS Glue. We encourage you to explore these capabilities and share your experiences with the AWS community.


Appendix A: AWS Glue job sample code for testing

This appendix provides sample AWS Glue job scripts for testing purposes. You can use any of them in the tutorial.

The following sample.py code uses the AWS Glue ETL library with an Amazon Simple Storage Service (Amazon S3) API call. The code requires Amazon S3 permissions in AWS Identity and Access Management (IAM). You must grant the IAM managed policy arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess or an IAM custom policy that allows you to make ListBucket and GetObject API calls for the S3 path.

import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions


class GluePythonSampleTest:
    def __init__(self):
        params = []
        if '--JOB_NAME' in sys.argv:
            params.append('JOB_NAME')
        args = getResolvedOptions(sys.argv, params)

        self.context = GlueContext(SparkContext.getOrCreate())
        self.job = Job(self.context)

        if 'JOB_NAME' in args:
            jobname = args['JOB_NAME']
        else:
            jobname = "take a look at"
        self.job.init(jobname, args)

    def run(self):
        dyf = read_json(self.context, "s3://awsglue-datasets/examples/us-legislators/all/persons.json")
        dyf.printSchema()

        self.job.commit()


def read_json(glue_context, path):
    dynamicframe = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={
            'paths': [path],
            'recurse': True
        },
        format="json"
    )
    return dynamicframe


if __name__ == '__main__':
    GluePythonSampleTest().run()
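For reference, a minimal custom IAM policy for the sample dataset used above might look like the following. This is a sketch only; scope the resources to your own S3 paths when you adapt the script:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::awsglue-datasets"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::awsglue-datasets/examples/us-legislators/*"
        }
    ]
}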

The following test_sample.py code is a sample unit test for sample.py:
import pytest
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
import sys
from src import sample


@pytest.fixture(scope="module", autouse=True)
def glue_context():
    sys.argv.append('--JOB_NAME')
    sys.argv.append('test_count')

    args = getResolvedOptions(sys.argv, ['JOB_NAME'])
    context = GlueContext(SparkContext.getOrCreate())
    job = Job(context)
    job.init(args['JOB_NAME'], args)

    yield context


def test_read_json(glue_context):
    # Minimal illustrative test: the sample dataset should load as a non-empty DynamicFrame.
    dyf = sample.read_json(
        glue_context,
        "s3://awsglue-datasets/examples/us-legislators/all/persons.json")
    assert dyf.count() > 0

Appendix B: Adding JDBC drivers and Java libraries

To add a JDBC driver not currently available in the container, you can create a new directory under your workspace with the JAR files you need and mount the directory to /opt/spark/jars/ in the docker run command. JAR files found under /opt/spark/jars/ within the container are automatically added to the Spark classpath and are available for use during the job run.

For example, you can use the following docker run command to add JDBC driver JARs to a PySpark REPL shell:

$ docker run -it --rm \
    -v ~/.aws:/home/hadoop/.aws \
    -v $WORKSPACE_LOCATION:/home/hadoop/workspace/ \
    -v $WORKSPACE_LOCATION/jars/:/opt/spark/jars/ \
    --workdir /home/hadoop/workspace \
    -e AWS_PROFILE=$PROFILE_NAME \
    --name glue5_jdbc \
    public.ecr.aws/glue/aws-glue-libs:5 \
    pyspark

As highlighted earlier, the customJdbcDriverS3Path connection option can't be used to import a custom JDBC driver from Amazon S3 in AWS Glue container images.
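After the container starts with the extra JARs mounted, you can use the driver from the PySpark shell through the standard Spark JDBC reader. The connection details below are placeholders for illustration only; replace them with your own endpoint, credentials, and driver class:

# Minimal sketch of reading over JDBC inside the pyspark shell started above
df = spark.read.format("jdbc") \
    .option("url", "jdbc:postgresql://your-host:5432/your_db") \
    .option("dbtable", "public.your_table") \
    .option("user", "your_user") \
    .option("password", "your_password") \
    .option("driver", "org.postgresql.Driver") \
    .load()
df.show(5)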

Appendix C: Adding Livy and JupyterLab

The AWS Glue 5.0 container image doesn't have Livy installed by default. You can create a new container image that extends the AWS Glue 5.0 container image as the base. The following Dockerfile demonstrates how you can extend the Docker image to include additional components you may need to enhance your development and testing experience.

To get started, create a directory on your workstation and place the Dockerfile.livy_jupyter file in the directory:

$ mkdir -p $WORKSPACE_LOCATION/jupyterlab/
$ cd $WORKSPACE_LOCATION/jupyterlab/
$ vim Dockerfile.livy_jupyter

The following code is Dockerfile.livy_jupyter:

FROM public.ecr.aws/glue/aws-glue-libs:5 AS glue-base

ENV LIVY_SERVER_JAVA_OPTS="--add-opens java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED"

# Download Livy
ADD --chown=hadoop:hadoop https://dlcdn.apache.org/incubator/livy/0.8.0-incubating/apache-livy-0.8.0-incubating_2.12-bin.zip ./

# Install and configure Livy
RUN unzip apache-livy-0.8.0-incubating_2.12-bin.zip && \
    rm apache-livy-0.8.0-incubating_2.12-bin.zip && \
    mv apache-livy-0.8.0-incubating_2.12-bin livy && \
    mkdir -p livy/logs

RUN cat <<EOF >> livy/conf/livy.conf
livy.server.host = 0.0.0.0
livy.server.port = 8998
livy.spark.master = local
livy.repl.enable-hive-context = true
livy.spark.scala-version = 2.12
EOF

RUN cat <<EOF >> livy/conf/log4j.properties
log4j.rootCategory=INFO,console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
log4j.logger.org.eclipse.jetty=WARN
EOF

# Switch to the root user temporarily to install dev dependency packages
USER root
RUN dnf update -y && dnf install -y krb5-devel gcc python3.11-devel
USER hadoop

# Install SparkMagic and JupyterLab (numpy is pinned below 2 via a constraint file)
RUN export PATH=$HOME/.local/bin:$HOME/livy/bin/:$PATH && \
    printf "numpy<2\n" > /tmp/constraint.txt && \
    pip3.11 --no-cache-dir install --constraint /tmp/constraint.txt --user pytest boto==2.49.0 jupyterlab==3.6.8 IPython==7.14.0 ipykernel==5.5.6 ipywidgets==7.7.2 sparkmagic==0.21.0 jupyterlab_widgets==1.1.11 && \
    jupyter-kernelspec install --user $(pip3.11 --no-cache-dir show sparkmagic | grep Location | cut -d" " -f2)/sparkmagic/kernels/sparkkernel && \
    jupyter-kernelspec install --user $(pip3.11 --no-cache-dir show sparkmagic | grep Location | cut -d" " -f2)/sparkmagic/kernels/pysparkkernel && \
    jupyter server extension enable --user --py sparkmagic

# Create the entrypoint script that starts Livy and JupyterLab
RUN cat <<EOF >> /home/hadoop/.local/bin/entrypoint.sh
#!/usr/bin/env bash
mkdir -p /home/hadoop/workspace/
livy-server start
sleep 5
jupyter lab --no-browser --ip=0.0.0.0 --allow-root --ServerApp.root_dir=/home/hadoop/workspace/ --ServerApp.token='' --ServerApp.password=''
EOF

# Set up the entrypoint script
RUN chmod +x /home/hadoop/.local/bin/entrypoint.sh

# Add default SparkMagic config
ADD --chown=hadoop:hadoop https://raw.githubusercontent.com/jupyter-incubator/sparkmagic/refs/heads/master/sparkmagic/example_config.json .sparkmagic/config.json

# Update PATH
ENV PATH=/home/hadoop/.local/bin:/home/hadoop/livy/bin/:$PATH

ENTRYPOINT ["/home/hadoop/.local/bin/entrypoint.sh"]

Run the docker build command to build the image:

docker build \
    -t glue_v5_livy \
    --file $WORKSPACE_LOCATION/jupyterlab/Dockerfile.livy_jupyter \
    $WORKSPACE_LOCATION/jupyterlab/

When the image build is complete, you can use the following docker run command to start the newly built image:

docker run -it --rm \
    -v ~/.aws:/home/hadoop/.aws \
    -v $WORKSPACE_LOCATION:/home/hadoop/workspace/ \
    -p 8998:8998 \
    -p 8888:8888 \
    -e AWS_PROFILE=$PROFILE_NAME \
    --name glue5_jupyter \
    glue_v5_livy
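After the container is up, you can confirm that both services are reachable from your host. This is a quick check based on the ports published above:

# Livy REST API on port 8998; a fresh server returns an empty session list
curl http://localhost:8998/sessions
{"from":0,"total":0,"sessions":[]}

# JupyterLab is served on port 8888; open http://localhost:8888/lab in a browser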

Appendix D: Adding extra Python libraries

In this section, we discuss adding extra Python libraries and installing Python packages using pip.

Local Python libraries

To add local Python libraries, place them under a directory and assign the path to $EXTRA_PYTHON_PACKAGE_LOCATION:

$ docker run -it --rm \
    -v ~/.aws:/home/hadoop/.aws \
    -v $WORKSPACE_LOCATION:/home/hadoop/workspace/ \
    -v $EXTRA_PYTHON_PACKAGE_LOCATION:/home/hadoop/workspace/extra_python_path/ \
    --workdir /home/hadoop/workspace \
    -e AWS_PROFILE=$PROFILE_NAME \
    --name glue5_pylib \
    public.ecr.aws/glue/aws-glue-libs:5 \
    -c 'export PYTHONPATH=/home/hadoop/workspace/extra_python_path/:$PYTHONPATH; pyspark'

To validate that the path has been added to PYTHONPATH, you can check for its existence in sys.path:

Python 3.11.6 (main, Jan  9 2025, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 3.5.2-amzn-1
      /_/

Using Python version 3.11.6 (main, Jan  9 2025 00:00:00)
Spark context Web UI available at None
Spark context available as 'sc' (master = local[*], app id = local-1740719582296).
SparkSession available as 'spark'.
>>> import sys
>>> "/home/hadoop/workspace/extra_python_path" in sys.path
True
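From there you can import your own modules directly. The module name below is hypothetical and stands in for whatever you placed under $EXTRA_PYTHON_PACKAGE_LOCATION:

>>> import my_helpers  # hypothetical module placed in the mounted directory
>>> my_helpers.__file__
'/home/hadoop/workspace/extra_python_path/my_helpers.py'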

Installing Python packages using pip

To install packages from PyPI (or any other artifact repository) using pip, you can use the following approach:

docker run -it --rm \
    -v ~/.aws:/home/hadoop/.aws \
    -v $WORKSPACE_LOCATION:/home/hadoop/workspace/ \
    --workdir /home/hadoop/workspace \
    -e AWS_PROFILE=$PROFILE_NAME \
    -e SCRIPT_FILE_NAME=$SCRIPT_FILE_NAME \
    --name glue5_pylib \
    public.ecr.aws/glue/aws-glue-libs:5 \
    -c 'pip3 install snowflake==1.0.5; spark-submit /home/hadoop/workspace/src/$SCRIPT_FILE_NAME'
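The same pattern works with a requirements file if you prefer to pin several dependencies at once. This variant assumes a requirements.txt at the root of your mounted workspace:

docker run -it --rm \
    -v ~/.aws:/home/hadoop/.aws \
    -v $WORKSPACE_LOCATION:/home/hadoop/workspace/ \
    --workdir /home/hadoop/workspace \
    -e AWS_PROFILE=$PROFILE_NAME \
    -e SCRIPT_FILE_NAME=$SCRIPT_FILE_NAME \
    --name glue5_pylib \
    public.ecr.aws/glue/aws-glue-libs:5 \
    -c 'pip3 install -r requirements.txt; spark-submit /home/hadoop/workspace/src/$SCRIPT_FILE_NAME'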


About the Authors

Subramanya Vajiraya is a Sr. Cloud Engineer (ETL) at AWS Sydney, specializing in AWS Glue. He is passionate about helping customers solve issues related to their ETL workloads and implementing scalable data processing and analytics pipelines on AWS. Outside of work, he enjoys going on bike rides and taking long walks with his dog Ollie.

Noritaka Sekiyama is a Principal Big Data Architect on the AWS Glue team. He is based in Tokyo, Japan, and is responsible for building software artifacts to help customers. In his spare time, he enjoys cycling with his road bike.
