Using Amazon S3 for Object Storage


Image by Mohamed Hassan from Pixabay

Introduction

It is the 21st century and we are obsessed with data. Data is everywhere, and companies in every industry hold huge amounts of it.

This brings us to the problem of storing data in a way that allows it to be accessed, processed, and used efficiently. Before cloud solutions, companies had to spend a lot of money on physical storage and infrastructure to support all the data they held.

Nowadays, the more popular choice is to store data in the cloud, as this is often the cheaper and more accessible solution. One of the most popular storage options on the market is Amazon S3, and in this article you are going to learn how to get started with it using Python (and the boto3 library).

So let’s first see how AWS S3 works.

AWS S3 Object Storage

As mentioned before, AWS S3 is a cloud storage service, and the name S3 stands for Simple Storage Service. Additionally, AWS S3 is unstructured object storage. What does that mean?

This means that data is stored as objects without any explicit hierarchy such as folders or subdirectories. Each object is stored directly in a bucket together with its metadata.

This approach has a lot of advantages. The flat structure allows fast data retrieval and offers high scalability (if more data needs to be added it is easy to add new nodes). The metadata information helps with searchability and allows faster analysis. 

These characteristics make object storage an attractive solution for companies both large and small. Additionally, it may be the most cost-effective option for storing data nowadays.

So, now that you have a bit of background on AWS S3 and object storage solutions, you can get started and create an AWS account.

Creating an Account with AWS and Initiating Setup

In order to create an account, head to the AWS website and click on Create an AWS account. You will be prompted to fill in a form similar to the one below.

(Screenshot: the AWS account sign-up form)

There are currently five steps in the setup, and you will need to complete all of them (including your address, personal, and billing information). When asked to choose a plan, you can select the free one. After completing the signup process, you should see a success message and be able to access the AWS Management Console.

Once you are in the AWS Management Console you will be able to access S3 from there.

(Screenshot: the S3 service in the AWS Management Console)

Now you can open S3 and create your first bucket. The name that you choose for the bucket has to be unique across the entire S3 platform. I suggest you use only lowercase letters for the name (e.g. konkibucket). For the AWS region, choose the location closest to you and leave the rest of the settings as default.

Once the bucket is created you should be able to see it from S3. 

(Screenshot: the newly created bucket listed in S3)

Now you can add and delete files by accessing the bucket via the AWS S3 console. 

The next step is to learn how to interact with the bucket via Python and the boto3 library. 

Installing boto3

The first step is to use pip to install boto3. In order to do this you can run the following command:

pip3 install boto3

Setup Credentials File with AWS Keys

The next step is to set up a file with AWS credentials that boto3 will use to connect to your S3 storage. In order to do this, create a file called ~/.aws/credentials and fill in its contents with your own access keys.

[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

In order to create keys, you will need to head to the AWS Management Console again and select IAM under Security, Identity & Compliance.

(Screenshot: IAM under Security, Identity & Compliance in the AWS Management Console)

Once in IAM, select the users option as shown in the screenshot below:

(Screenshot: the Users option in IAM)

This will lead you to the user management platform where you will be able to add a new user. Make sure you grant a new user programmatic access.

(Screenshot: granting the new user programmatic access)

Also, make sure that the user has AmazonS3FullAccess permission.

(Screenshot: attaching the AmazonS3FullAccess permission)

Once the user creation process is successful you will see the access keys for the user you have created.

(Screenshot: the access keys shown after user creation)

You can now use your credentials to fill in the ~/.aws/credentials file.

Once the credentials file is set up you can get access to S3 via this Python code:

import boto3
s3_resource = boto3.resource('s3')

If you want to see all the buckets in your S3 you can use the following snippet:

for bucket in s3_resource.buckets.all():
    print(bucket.name)
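
You can also create a bucket from code instead of the console. A minimal sketch, reusing the s3_resource object from above, looks like this; the bucket name and region are just example values, and outside us-east-1 the CreateBucketConfiguration argument is required:

# Bucket names must be unique across all of S3.
s3_resource.create_bucket(
    Bucket='konkibucket',
    CreateBucketConfiguration={'LocationConstraint': 'eu-west-1'}
)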

Uploading a File to S3

It’s time to learn how to upload a file to S3 with boto3.

In order to send a file to S3, you create an Object where you specify the S3 bucket, the object name (e.g. my_file.txt), and the path on your local machine from which the file will be uploaded:

s3_resource.Object('<bucket_name>', '<my_file.txt>').upload_file(Filename='<local_path_file>')

Yes, it’s that simple! If you now look at the S3 Management Console the bucket should have a new file there.

Downloading a File from S3

Just like uploading a file, you can download one once it exists in your S3 bucket. To do so, use the download_file function and specify the bucket name, the object name, and your local destination path.

s3_resource.Object('<bucket_name>', '<my_file.txt>').download_file('<local_path_file>')
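
If the object you request does not exist in the bucket (or your credentials lack permission), download_file raises a ClientError from botocore. An optional guard, using the same placeholder names as above, looks like this:

from botocore.exceptions import ClientError

try:
    s3_resource.Object('<bucket_name>', '<my_file.txt>').download_file('<local_path_file>')
except ClientError as error:
    # A 404 error code means the object does not exist in the bucket.
    if error.response['Error']['Code'] == '404':
        print('The requested object does not exist.')
    else:
        raise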

You have just learned how to upload and download files to S3 with Python. As you can see the process is very straightforward and easy to set up.

Summary

In this article, you have learned about AWS S3 and how to get started with the service. You also learned how to interact with your storage via Python using the boto3 library. 

By now you should be able to upload and download files with Python code. There is still much more to learn but you just got started with AWS S3!


Data Analytics: How it Drives Better Decision-Making


Not one or two, but more than enough studies have shown that for businesses to succeed, insights are vital: insights about customer behavior, market trends, operations, and much more. Now, how do you gain these insights? Data, tons and tons of it, and a technology to help you make sense of it. The former is available in abundance, while the latter's mantle has been taken up by data analytics, which helps companies approach growth and progress with a whole new perspective. As the amount of data we generate continues to grow, analytics has empowered enterprises to understand the dynamics and factors that impact the business.

Once the data is rounded up from all the possible sources, data analytics gets to work. It furnishes detailed insights into all the relevant and critical factors. For example, it helps companies better understand the challenges they face, if any, and also offers solutions and alternatives to deal with those problems. Provided a proper plan and strategy drives the implementation, data analytics can deliver a world of benefits. Some of these advantages are discussed in detail below to help you gain perspective on the utility of data analytics for any company.

  1. Make better decisions: We have already discussed that data analytics draws on the plethora of available data, processing and analyzing it to provide insights into product development, trends, sales, finance, marketing, etc. But that's not all: data analytics also provides the context in which these reports are to be viewed. This allows employees and executives to better understand the information presented to them and then make decisions based on that data. Such data-driven decision making is invaluable for any business's growth.
  2. Better resource allocation: Strategizing is critical to a company's growth, and this assertion extends to resources, be it human resources or IT infrastructure, and their usage as well. Data analytics helps companies understand where and how well these resources are being utilized across their operations. It also helps identify any scope for improvement, enables automation, and more, to ensure more effective and efficient usage.
  3. Improve performance: Performance is yet another factor that serves as the foundation for any business's growth and success. Data analytics can help in this department and assist companies in optimizing their operations. It also helps improve efficiencies via, say, insights into the target audience, price segmentation, product innovation and more. Simply put, data analytics allows companies to determine the problems that plague the business, present solutions, and then measure the efficacy of those solutions.

Data analytics stands to virtually transform a business, when it is driven by an informed strategy, of course. So, if you too wish to gain these advantages and more, we recommend getting in touch with an analytics services company to help you get started on that journey.


Data Center Infrastructure Market is Projected to Reach USD 100 Billion by 2027


According to a recent study from market research firm Global Market Insights, the need among organizations for data center infrastructure management that offers higher energy efficiency will be positively driven by the influx of cloud computing, Big Data, and AI solutions. The surge in internet infrastructure activities has led to the generation of large quantities of data by individuals and connected devices.

The rising levels of data traffic have placed an immense power burden on data centers on account of the significant jump in the usage of IoT devices. This has in turn pushed data center operators to increasingly adopt efficient and cost-effective data center infrastructure solutions.

As per a report by Global Market Insights, Inc., the global data center infrastructure market could reach USD 100 billion in annual revenue by 2027.

Owing to the adoption of data analytics, cloud computing, and emerging technologies such as AI, machine learning, and IoT, hyper-scale data centers have seen huge demand lately. Big tech giants like Facebook, Amazon, and Google are investing heavily in the construction of hyper-scale data center facilities.

These data centers need high-capability, modernized infrastructure to support critical IT equipment and offer enhanced data protection. High-density networking servers in these data centers demand security management and combined power and cooling solutions to enable energy-efficient operation.

Increasing government initiatives regarding the safety of customer data are encouraging businesses to establish their own data center facilities in the Asia Pacific. For instance, China’s Cybersecurity Law states data localization requirements for Critical Information Infrastructure Operators (CIIOs). The Law directs network operators to analyze, store and process customer data within the country. With this, it is estimated that the Asia Pacific data center infrastructure market may register sturdy progress over the forecast period. Multiple initiatives such as Smart Cities, Made in China, and Digital India may also boost the adoption of IoT and cloud computing in the region.

Mentioned below are some of the key trends driving data center infrastructure market expansion:

1) Growing demand for hyperscale data centers

Expansion of hyperscale data centers, owing to the usage of cloud computing, data analytics, and emerging technologies like IoT, AI, and machine learning, is fueling the industry outlook. Hyperscale data centers need high-capability, modernized infrastructure to improve protection and support critical IT equipment.

High-density networking servers in hyperscale data centers demand cooling, security management and power solutions in order to facilitate energy-efficient operation. Major cloud service providers like Facebook Inc., Amazon, and Google LLC are making huge investments in the construction of hyperscale data center facilities.

2) Increasing adoption of data center services

The service segment is anticipated to account for a substantial market share on account of surging demand for scalable infrastructure for supporting high-end applications. Data center services such as monitoring, maintenance, consulting, and design help operators to better manage data centers and their equipment.

Enterprises often need professional, skilled and managed service providers for the management of systems and optimization of data center infrastructure to obtain efficiencies. Professional service providers having the required technical knowledge and expertise in IT management and data center operations allow streamlining of business processes. These services help to significantly decrease the total cost of operations and maintenance of IT equipment.

3) Robust usage of cooling solutions

The proliferation of AI, driverless cars, and robots is encouraging data center service providers to move strategic IT assets nearer to the network edge. These edge data centers are in turn rapidly shifting towards liquid cooling solutions to run real applications on full-featured hardware and lessen energy consumption for high-density applications.

Key companies operating in the data center infrastructure market are Panduit Corporation, Hewlett Packard Enterprise Company, Black Box Corporation, Vertiv Group Co., ClimateWorx International, Eaton Corporation, Huawei Technologies Co., Ltd., Cisco Systems, Inc., ABB Ltd, Schneider Electric SE, Degree Controls, Inc., and Dell, Inc.

Source: https://www.gminsights.com/pressrelease/data-center-infrastructure-…


NetSuite ERP ushering a digital era for SMEs


SMEs contribute significantly to trade, employment and productivity.

The potential of an SME remains buried if digital technologies are not utilized smartly. Small-scale companies and start-ups that avoid digitization run into many problems, such as inefficiencies, higher costs and losses, loss of business, and an overall lack of visibility into business operations.

https://www.advaiya.com/technology/enterprise-resource-planning-wit…

Digital transformation can be critical for SME businesses. Thoughtful adoption of digital technologies can rebuild and re-energize company strategy and its execution by leveraging the power of cloud, data, mobility and AI. Digital transformation is the solution to the problems of a next-gen business enterprise.

Digital transformation and Oracle NetSuite ERP

Digital transformation can be achieved with NetSuite ERP as a business platform. It can lead to radical changes in a company through customized and vibrant process automation. It is the solution for simplifying primary business functions such as accounting, finance management, resource management and inventory management in a single integrated hub.

NetSuite is a robust and scalable solution that helps an SME achieve its business goals by:

Streamlined Processes

NetSuite ERP streamlines all business functions and eliminates the need to have a separate interface for each department. It helps businesses run efficiently by removing silos between operations and automating critical processes.

Workflow automation

NetSuite ERP software helps companies focus on productive work. Manual tasks are eliminated, reducing the human error that comes with repeated data entry. Data is backed up in the cloud, reducing the chance of data loss.

Improved visibility

NetSuite ERP can put relevant information at your fingertips with the right visualizations, which helps you make decisions faster and better. It also enhances business communication and transparency.

Integrated CRM

SMEs can integrate the customer journey with business operations using NetSuite ERP. Comprehensive coverage of the lead-to-cash cycle means that business and customer relationships will be on a much stronger footing.

Business flexibility

NetSuite ERP can support multiple currencies, integration across companies, numerous languages, tax rates, time zones and much more. These additional features make the business ready for global standards and future expansion.

Built-In Business Intelligence

NetSuite ERP can generate meaningful and actionable insights by combining data with visual analytics. Data analysis opens up vast opportunities to turn that data into actionable insights and new business opportunities.

Better security and administration

NetSuite ERP simplifies administration, giving all departments a robust security system without compromising industry data security standards. Being entirely on the cloud, it eliminates the need for an SME to have IT staff and expertise to operate, maintain and manage the ERP.

SMEs must adopt technologies smartly. Advaiya’s approach, in which the unique aspects of a business are understood and a solution is built and implemented using the world-class Oracle NetSuite ERP platform, can be hugely valuable for a forward-looking company. With such a solution in place, businesses can focus on growth and innovation while having technology for managing operations most effectively and efficiently.


DSC Weekly Digest 05 April 2021

A Winter of Discontent has shifted over the last few months to a Spring of hope. Many countries (and in the US, most states) are now actively vaccinating their populace against Covid-19, unemployment is dropping dramatically, and people are beginning to plan for the post-epidemic world.

As I write this, Norwescon, a science fiction conference held annually in Seattle, Washington since 1976, has wrapped up its first virtual incarnation. With a focus on science fiction writing and futurists, this convention has long been a favorite of mine, and a chance to talk with professional authors, media producers, and subject matter experts. This year, that whole process was held across Airmeet, which (a few minor glitches aside) performed admirably in bringing that experience to the computer screen.

It is likely that the year 2020 will be seen in retrospect as the Great Reset. While many of the panelists and audience members expressed a burning desire to see the end of the pandemic and the opportunity to meet in person, what surprised me was the fact that many also expressed the desire to continue with virtual conferences as an adjunct to the experience, rather than simply going back to the way things had been.

This theme is one I’ve been hearing echoed repeatedly outside this venue as well. It is almost certain that post-Covid, the business place will change dramatically to a point where work becomes a place to meet periodically, where the workweek of 9-5 days will likely transform into a 24-7 work environment where some times are considered quieter than others, and where the obsession with butts-in-seats gets replaced by a more goal-oriented view where reaching demonstrable objectives becomes more important than attendance.

This conference also laid bare another point: your audience, whether as a writer, a creative, a business, or any other organization, is no longer geographically bound. For the first time, people attended this convention from everywhere in the world, even as people who traditionally hadn’t been able to attend in person because of health issues were able to participate this year. The requirements of geo-physicality have long been a subtle but significant barrier for many such people, and the opportunity for this segment of the population to attend this year meant that many more points of view were presented than would have been otherwise. This too illustrates that, for all the previous talk of inclusiveness, the rise of the digital society may be the first time that such unacknowledged barriers are actively being knocked over.

Finally, a theme that seemed to permeate the convention this year is that increasingly we are living in the future we visualized twenty years ago. Science fiction is not about “predicting” the future, despite the perception to the contrary. It is, instead, a chance to explore what-if scenarios, to look at the very real human stories as they are impacted by changes in technology. During one (virtual) hallway conversation I had, noted Philip K. Dick Award-winning author PJ Manney made the point that ultimately the technology, while important in science fiction, is not the central focus of writers in this genre. Rather, the stories being written explore the hard questions about what it means to be human in a world of dramatic change, something that anyone who works in this space should always keep in the back of their mind.

This is why we run Data Science Central, and why we are expanding its focus to consider the width and breadth of digital transformation in our society. Data Science Central is your community. It is a chance to learn from other practitioners, and a chance to communicate what you know to the data science community overall. I encourage you to submit original articles and to make your name known to the people that are going to be hiring in the coming year. As always let us know what you think.

In media res,
Kurt Cagle
Community Editor,
Data Science Central


A Plethora of Machine Learning Tricks, Recipes, and Statistical Models


Source: See article #5, in section 1

Part 2 of this short series focused on fundamental techniques, see here. In this Part 3, you will find several machine learning tricks and recipes, many with a statistical flavor. These are articles that I wrote in the last few years. The whole series will feature articles related to the following aspects of machine learning:

  • Mathematics, simulations, benchmarking algorithms based on synthetic data (in short, experimental data science)
  • Opinions, for instance about the value of a PhD in our field, or the use of some techniques
  • Methods, principles, rules of thumb, recipes, tricks
  • Business analytics 
  • Core Techniques 

My articles are always written in simple English and are accessible to professionals with typically one year of calculus or statistical training at the undergraduate level. They are geared towards people who use data but are interested in gaining more practical analytical experience. Managers and decision makers are part of my intended audience. The style is compact, geared towards people who do not have a lot of free time.

Despite these restrictions, state-of-the-art, off-the-beaten-path results as well as machine learning trade secrets and research material are frequently shared. References to more advanced literature (from myself and other authors) are provided for those who want to dig deeper into the topics discussed.

1. Machine Learning Tricks, Recipes and Statistical Models

These articles focus on techniques that have wide applications or that are otherwise fundamental or seminal in nature.

  1. Defining and Measuring Chaos in Data Sets: Why and How, in Simple W…
  2. Hurwitz-Riemann Zeta And Other Special Probability Distributions
  3. Maximum runs in Bernoulli trials: simulations and results
  4. Moving Averages: Natural Weights, Iterated Convolutions, and Centra…
  5. Amazing Things You Did Not Know You Could Do in Excel
  6. New Tests of Randomness and Independence for Sequences of Observations
  7. Interesting Application of the Poisson-Binomial Distribution
  8. Alternative to the Arithmetic, Geometric, and Harmonic Means
  9. Bernouilli Lattice Models – Connection to Poisson Processes
  10. Simulating Distributions with One-Line Formulas, even in Excel
  11. Simplified Logistic Regression
  12. Simple Trick to Normalize Correlations, R-squared, and so on
  13. Simple Trick to Remove Serial Correlation in Regression Models
  14. A Beautiful Result in Probability Theory
  15. Long-range Correlations in Time Series: Modeling, Testing, Case Study
  16. Difference Between Correlation and Regression in Statistics

2. Free books

  • Statistics: New Foundations, Toolbox, and Machine Learning Recipes

    Available here. In about 300 pages and 28 chapters it covers many new topics, offering a fresh perspective on the subject, including rules of thumb and recipes that are easy to automate or integrate in black-box systems, as well as new model-free, data-driven foundations to statistical science and predictive analytics. The approach focuses on robust techniques; it is bottom-up (from applications to theory), in contrast to the traditional top-down approach.

    The material is accessible to practitioners with a one-year college-level exposure to statistics and probability. The compact and tutorial style, featuring many applications with numerous illustrations, is aimed at practitioners, researchers, and executives in various quantitative fields.

  • Applied Stochastic Processes

    Available here. Full title: Applied Stochastic Processes, Chaos Modeling, and Probabilistic Properties of Numeration Systems (104 pages, 16 chapters.) This book is intended for professionals in data science, computer science, operations research, statistics, machine learning, big data, and mathematics. In 100 pages, it covers many new topics, offering a fresh perspective on the subject.

    It is accessible to practitioners with a two-year college-level exposure to statistics and probability. The compact and tutorial style, featuring many applications (Blockchain, quantum algorithms, HPC, random number generation, cryptography, Fintech, web crawling, statistical testing) with numerous illustrations, is aimed at practitioners, researchers and executives in various quantitative fields.

To receive a weekly digest of our new articles, subscribe to our newsletter, here.

About the author:  Vincent Granville is a data science pioneer, mathematician, book author (Wiley), patent owner, former post-doc at Cambridge University, former VC-funded executive, with 20+ years of corporate experience including CNET, NBC, Visa, Wells Fargo, Microsoft, eBay. Vincent is also self-publisher at DataShaping.com, and founded and co-founded a few start-ups, including one with a successful exit (Data Science Central acquired by Tech Target). You can access Vincent’s articles and books, here.


Reinforcement Learning for Dynamic Pricing


Limitations on physical interactions throughout the world have reshaped our lives and habits. And while the pandemic has been disrupting the majority of industries, e-commerce has been thriving. This article covers how reinforcement learning for dynamic pricing helps retailers refine their pricing strategies to increase profitability and boost customer engagement and loyalty. 

In dynamic pricing, we want an agent to set optimal prices based on market conditions. In terms of RL concepts, the actions are all of the possible prices, and the states are the market conditions, excluding the current price of the product or service.

Usually, it is incredibly problematic to train an agent by interacting with a real-world market. The reason is that an agent needs to gather lots of samples from the environment, which is a very time-consuming process. There is also an exploration-exploitation trade-off: the agent should visit a representative subset of the whole state space while trying out different actions. Consequently, an agent will act sub-optimally while training and could lose lots of money for a company.

An alternative approach is to use a simulation of the environment. Using a forecasting model, we can compute the reward (for example, income) based on the state (market conditions, excluding the current price) and the action (the current price). So, we only need to model transitions between states. This task strongly depends on the state representation, but it tends to require only a few modelling assumptions. The main drawback of this approach is that it is extremely hard to simulate a market accurately.

Sales data

For simplicity, we use simulated sales rather than real ones. Sales data are simulated as the sum of a price-dependent component, a highly seasonal time-dependent component, and a noise term. To get the seasonal component, we use the Google Trends data of a highly seasonal product, a swimming pool. Google Trends provides weekly data for over five years. There is a clear one-year seasonality in the data, so it is easy to extract it and use it as the first additive term for sales. Since this term repeats every year, it is a function of the week number, ranging from 0 to 52.

(Figure: seasonal component of the simulated sales data, extracted from Google Trends)

The second term depends on prices from the current timestamp as well as the previous timestamp to model sales. The overall formula looks like this:

(Formula image: sales as the sum of the seasonal term, the price-dependent term f, and the noise term)

Here, f may be any monotonically decreasing function. The intuition is that if all other features are fixed, increasing the company’s price (either current or previous) decreases sales. On the other hand, increasing competitors’ prices leads to increased sales. The random term is sampled from a zero-mean normal distribution.
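
To make this concrete, here is a minimal sketch of such a simulator in Python. The seasonal profile, the coefficients and the noise scale below are arbitrary placeholder values chosen for illustration, not the ones used in the experiments:

import numpy as np

rng = np.random.default_rng(0)

# Placeholder seasonal profile: one value per week of the year.
seasonality = 100 + 50 * np.sin(2 * np.pi * np.arange(52) / 52)

def simulate_sales(week, price, prev_price, competitor_price,
                   own_coef=-1.5, prev_coef=-0.5, comp_coef=1.0, noise_std=5.0):
    # Sales = seasonal term + linear price-dependent term + Gaussian noise.
    # Own-price coefficients are negative (raising our price lowers sales),
    # the competitor coefficient is positive (higher competitor prices raise sales).
    price_term = own_coef * price + prev_coef * prev_price + comp_coef * competitor_price
    return seasonality[week % 52] + price_term + rng.normal(0.0, noise_std)

# Example: simulated sales for week 10 at a price of 20 against a competitor at 22.
print(simulate_sales(week=10, price=20.0, prev_price=21.0, competitor_price=22.0))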


We set the function f to be linear with a negative coefficient. This allows us to analytically find a greedy policy and compare it with the RL agent’s performance.

Experiments

We treat the dynamic pricing task as an episodic task with a one-year duration, consisting of 52 consecutive steps. We assume that competitors change their prices randomly.
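
A schematic episode loop in that spirit, reusing the simulate_sales sketch above, might look as follows; the price grid, the competitor price range and the income-style reward are assumptions made for illustration:

PRICES = np.linspace(10.0, 30.0, 21)  # assumed discrete grid of candidate prices

def run_episode(choose_price):
    # One 52-week episode; choose_price maps (week, prev_price, competitor_price) to a price.
    prev_price, total_reward = 20.0, 0.0
    for week in range(52):
        competitor_price = rng.uniform(15.0, 25.0)  # competitors change prices randomly
        price = choose_price(week, prev_price, competitor_price)
        sales = simulate_sales(week, price, prev_price, competitor_price)
        total_reward += price * sales               # reward treated here as income
        prev_price = price
    return total_reward

def random_agent(week, prev_price, competitor_price):
    return rng.choice(PRICES)

# Averaging cumulative rewards over many episodes mirrors the 500-run comparison below.
print(np.mean([run_episode(random_agent) for _ in range(500)]))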

We compare different agents by running 500 simulations and collecting cumulative rewards over 52 weeks. The graph below shows the performance of the random and greedy agents.

(Figure: cumulative rewards of the random and greedy agents)

Tabular Q-learning

Q-learning is an off-policy temporal difference control algorithm. Its main purpose is to iteratively find the action values of an optimal policy (optimal action values).

(Formula image: definition of the optimal action values)

Using these action values, we can easily find an optimal policy: it would be any greedy policy with respect to the optimal action values. The following update formula is used:

(Formula image: the Q-learning update rule)

The estimates converge to the optimal action values, independent of the policy used (usually an epsilon-greedy policy with respect to the current estimates). This update formula can also be treated as an iterative way of solving the Bellman optimality equations.

This algorithm assumes a discrete state space and action space. Accordingly, before running this algorithm, we should discretise continuous variables into bins. The name “tabular” means that action values are stored in one huge table. Memory usage and training time grow exponentially with the increase in the number of features in the state representation, making it computationally intractable for complex environments (for example Atari games).
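
As a minimal sketch, a tabular Q-learning update with an epsilon-greedy behaviour policy can be written as follows; the hyperparameters and the way states are discretised into hashable keys are placeholder choices:

import numpy as np
from collections import defaultdict

n_actions = 21                      # e.g. the number of discretised prices
alpha, gamma, epsilon = 0.1, 0.99, 0.1

# One big table of action values, keyed by the discretised state.
Q = defaultdict(lambda: np.zeros(n_actions))

def epsilon_greedy_action(state):
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def q_learning_update(state, action, reward, next_state):
    # Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state][action] += alpha * (td_target - Q[state][action])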

(Figures: performance of the Tabular Q-learning agent compared with the random and greedy agents)

As we can see, this approach outperforms a random agent, but cannot outperform a greedy agent.

Deep Q-network

The deep Q-network (DQN) algorithm is based on the same idea as Tabular Q-learning. The main difference is that DQN uses a parametrised function to approximate optimal action values. More specifically, DQN uses artificial neural networks (ANNs) as approximators. Based on state representation, both convolutional neural networks and recurrent neural networks can be used.

(Formula image: action values approximated by a parametrised function)

The optimisation objective at iteration i looks as follows:

(Formula image: the DQN optimisation objective at iteration i)

where:

(Formula image: definition of the target used in the objective)

The behaviour distribution p is obtained by acting epsilon-greedy with respect to the previous model’s parameters.

The gradient of the objective is as follows:

(Formula image: gradient of the DQN objective)

The loss function is optimised by stochastic gradient descent. Instead of computing the full expectation, a single-sample approximation can be used, leading to an update formula similar to that of Q-learning.

Two problems that make the optimisation process harder are correlated input data and the dependence of the target on the model’s parameters. Both problems are tackled by using an experience replay mechanism. At each step it saves a transition into a buffer; then, instead of using a single transition to update the parameters, a full batch sampled from the buffer is used.
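
A minimal sketch of such a replay buffer, together with the target computation, is shown below; the target_q_fn argument stands in for a frozen copy of the network from a previous iteration and is an assumption of this sketch:

import random
from collections import deque
import numpy as np

class ReplayBuffer:
    # Fixed-size buffer of (state, action, reward, next_state) transitions.
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states = map(np.array, zip(*batch))
        return states, actions, rewards, next_states

def q_targets(rewards, next_states, target_q_fn, gamma=0.99):
    # y_i = r_i + gamma * max_a' Q(s'_i, a'; parameters from a previous iteration)
    next_values = target_q_fn(next_states).max(axis=1)
    return rewards + gamma * next_values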

With DQN, you can use higher-dimensional state spaces (even images can be used). It also tends to be more sample-efficient than the Tabular approach. One reason is that ANNs can generalise to unseen states, even if the agent has not acted from those states, whereas the Tabular approach requires the whole state space to be visited. DQN is also sample-efficient because it uses experience replay, which allows multiple uses of a single sample.

(Figures: performance of the DQN agent compared with the other agents)

As we can see, DQN outperforms all other agents. Also, it was trained on a smaller number of episodes.

Policy Gradients

The policy gradients algorithm uses an entirely different idea to learn an optimal policy. Instead of learning optimal action values and moving greedily with respect to them, policy gradients directly parametrise and optimise a policy. ANNs are often used to parametrise a policy.

(Formula image: the parametrised policy)

The difficulty here is that an optimisation objective, the state-value of the first state, depends on a dynamics function p, which is unknown.

(Formula image: the optimisation objective, the state value of the first state)

That is why policy gradients use the fact that the gradient of the objective is an expected value of a random variable, which is approximated while acting in the environment.

(Formula image: the policy gradient expressed as an expected value)

We can subtract a baseline, which depends only on the state, from action value estimates to reduce variance. This does not affect the mean but can significantly reduce the variance, thus speeding up the learning process.

(Formula image: the gradient with a baseline subtracted)

Usually, state-value estimates are used as a baseline.

(Formula image: the gradient estimate with the state-value baseline)

Stochastic gradient ascent is then performed using this gradient estimate.

This method requires a lot of interaction with an environment in order to converge. 
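
As an illustration, a REINFORCE-with-baseline loss can be written compactly in PyTorch; minimising this loss with any optimiser performs the stochastic gradient ascent step described above (the discount factor is a placeholder value):

import torch

def discounted_returns(rewards, gamma=0.99):
    # G_t = r_t + gamma * G_{t+1}, computed backwards over one episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return torch.tensor(list(reversed(returns)), dtype=torch.float32)

def policy_gradient_loss(log_probs, returns, baselines):
    # log_probs: log pi(a_t | s_t) for the actions actually taken (requires grad)
    # baselines: state-value estimates V(s_t), used only to reduce variance
    advantages = (returns - baselines).detach()   # the baseline gets no policy gradient
    return -(log_probs * advantages).mean()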

(Figures: performance of the policy gradients agent compared with the other agents)

Policy gradients outperform a greedy agent but do not perform as well as DQN. Likewise, policy gradients require far more episodes than DQN.


Please note that the real-world market environment and the dependencies in it are far more complicated. This article covers a basic simulation showing that the approach works and can be applied to real data.

We will be happy to answer your questions. Get in touch with us any time!

Originally published at ELEKS Labs blog


Deep Learning for Autonomous Driving: A Breakthrough in Urban Navigation

(Figure: the six levels of autonomous driving)

‘Autonomous vehicle’ is a buzzword that’s been circulating in recent decades. However, the development of such a vehicle has posed a significant challenge for automotive manufacturers. This article describes how deep learning for autonomous driving and navigation can help to turn the concept into a long-awaited reality.

The low-touch economy in a post-pandemic world is driving the introduction of autonomous technologies that can satisfy our need for contactless interactions. Whether it’s self-driving vehicles delivering groceries or medicines or robo-taxis driving us to our desired destinations, there’s never been a bigger demand for autonomy.

Self-driving vehicles have six different levels of autonomy, from drivers being in full control to full automation. According to Statista, the market for autonomous vehicles in levels 4 and 5 will reach $60 billion by 2030. The same research indicates that 73% of the total number of cars on our roads will have at least some level of autonomy before fully autonomous vehicles are introduced.

Countries and automobile companies around the world are working on bringing a higher level of unmanned driving to a wider audience. South Korea has recently announced it is to invest around $1 billion in autonomous vehicle technologies and introduce a level 4 car by 2027.

Machine learning and deep learning are among the technologies that enable more sophisticated autonomous vehicles. Applications of deep learning techniques in self-driving cars include:

  • Scene classification
  • Path planning
  • Scene understanding
  • Lane and traffic sign recognition
  • Obstacle and pedestrian detection
  • Motion control

Deep learning for autonomous navigation

Deep learning methods can help to address the challenges of perception and navigation in autonomous vehicle manufacturing. When a driver navigates between two locations, they drive using their knowledge of the road, what the streets look like, the traffic lights, and so on. It is a simple task for a human driver, but quite a challenge for an autonomous vehicle.

Here at ELEKS, we’ve created a demo model that can help vehicles to navigate the environment as humans already do – using eyesight and previous knowledge. We came up with a solution that offers autonomous navigation without GPS and vehicle telemetry by using modern deep learning methods and other data science possibilities.

We used only an on-dash camera and street view dataset of the city of Lviv, Ukraine; we used no GPS or sensors. Below is an overview of the techniques applied and our key findings.

1. Image segmentation task

We used the Cityscapes dataset with 19 classes, which focuses on buildings, roads, signs, etc., and an already trained model from DeepLab. The model we used is based on an Xception backbone. Other models with different mAP/IoU scores are also available.

The final layers were the semantic probabilities, with dimensions of roughly classes * output image dimensions (without channels), so they could be filtered and used as the input to the similarity model. It is recommended to transform them into an embedding layer, or to find a more suitable layer before the outputs. However, even after such a transformation, an object's position in the frame (higher or lower) and its distance from the camera may have influenced the robustness of the embedding.
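
As an illustration of this idea, the (classes, height, width) probability map could be restricted to static classes and pooled over a coarse grid to obtain a more position-tolerant embedding; the class indices and grid size below are placeholder choices, not the ones used in the demo:

import numpy as np

STATIC_CLASSES = [2, 3, 7, 11]   # placeholder indices for classes such as building, wall, road, traffic sign

def probs_to_embedding(semantic_probs, grid=(4, 4)):
    # semantic_probs: array of shape (classes, H, W) with per-pixel class probabilities.
    classes, h, w = semantic_probs.shape
    kept = semantic_probs[STATIC_CLASSES]
    gh, gw = grid
    # Crop so the frame divides evenly into gh x gw cells, then average within each cell.
    kept = kept[:, : (h // gh) * gh, : (w // gw) * gw]
    pooled = kept.reshape(len(STATIC_CLASSES), gh, h // gh, gw, w // gw).mean(axis=(2, 4))
    return pooled.flatten()

# Example with a random 19-class probability map.
embedding = probs_to_embedding(np.random.rand(19, 512, 1024))
print(embedding.shape)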


2. Gathering additional data and labelling

We then downloaded raw photos of the streets from the web, together with road names and locations (coordinates, etc.), and we also obtained a Street View API key for downloading images. We added labels in a semi-automated way based on the names and locations and verified them manually. We created pairs of images for similarity model training.


Finally, we applied image augmentation (also adding photos from different times of day and seasons) and model-assisted labelling (for example, adding negative samples that the model recognises as similar but that are not located on the same street, according to GPS, street names, etc.). As a result, we created a dataset containing approximately 8-12K augmented images.


3. Similarity models ideation and validation

We tested a few street view comparison approaches, from classical descriptor and template matching to modern SOTA deep learning algorithms like QATM. The most accurate was an inference model with a representation for each segmented image in a pair (such as VGG, ResNet or EfficientNet), followed by a binary classifier (xgb or rf). Validated accuracy was approximately 82.5% (whether the right street was found or not), measured on Lviv’s best-known streets between 2011 and 2019 and with augmentation (changing image shapes, lighting, etc.).
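
As a small sketch of this setup, the embeddings of the two images in a pair can be combined into pair features and fed to a random forest classifier (one of the rf/xgb options mentioned above); the embedding dimension and the random data here are placeholders standing in for the real representations:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(emb_a, emb_b):
    # Symmetric features for an image pair: elementwise distance and product.
    return np.concatenate([np.abs(emb_a - emb_b), emb_a * emb_b])

# Placeholder training data: 200 pairs of 64-dimensional embeddings,
# labelled 1 for "same street" and 0 for "different street".
rng = np.random.default_rng(0)
embeddings_a = rng.random((200, 64))
embeddings_b = rng.random((200, 64))
X = np.array([pair_features(a, b) for a, b in zip(embeddings_a, embeddings_b)])
y = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

# At inference time: probability that a dash-cam frame matches a given street view image.
print(clf.predict_proba(X[:1]))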


4. Outcome and performance features

We segmented every tenth frame, which was helpful for near real-time calculation, and because there would not be any huge changes in the environment within 10 frames (about 1/3 s). The DeepLab models showed >70 mIoU (Cityscapes, third semantic map: buildings); prediction time for the Xception-based model ranged from 15 s to more than 10 min on CPU, and was under 1 s on GPU.

Similarity prediction took about 1 min per 100 pairs (inference on a GPU with 4 GB VRAM plus a classifier on 6 CPU cores). After the first estimated positions, it can be optimised by limiting the search to nearby street views only, because the vehicle cannot move more than about 1 km within 10-50 frames.

Not all of the city’s streets were covered, so we found videos with a drive around the city centre. For the map positioning, we used wiki maps; however, other maps can be used if needed. We got the vehicle coordinates from street image metadata (lat/long, street name).

Some street segments are available in a few different versions (the same location in 2011, 2015 or 2019, photos from different sources, etc.), so the classifier can find any of them. We mostly used weak affine transformations for the street augmentation, with no flipping or strong colour and shape changes.

Some of the estimations may be inaccurate for the following reasons:

  • Street and road estimation – the static object area is low, street noise is quite high (vehicles, pedestrians) or seasonality changes (trees, snow, rain, etc.)
  • Vehicle position and speed errors – the same street position and different street step or Euclidean distance for curved streets can be viewed with a different focus (distance to an object), etc.

You can check out a video sample of prerecorded navigation with post-processing here.

Want to learn more about our demo? We will be happy to answer your questions!

Originally published at ELEKS Labs blog


DataOps: Building an Efficient Data Ecosystem
Data is more present and more powerful than ever. It can be tapped to tailor products, services and experiences. It contains insights on all manner of things; from shopping and travel habits, to music preferences, to clinical drug trial efficiency. And, critically for businesses, it can improve operational efficiency, customer conversion and brand loyalty. DataOps can help developers facilitate data management to add real value to businesses and customers alike.

Because data can come in many different forms and there is so much of it, it is a messy mass to handle.

Modern data analytics requires a high level of automation in order to test validity, monitor the performance and behaviour of data pipelines, track data lineage, detect anomalies that might indicate a quality issue and much more besides.

DataOps is a methodology created to tackle the problem of repeated, mundane data processing tasks, thus making analytics easier and faster, while enabling transparency and quality detection within data pipelines. Medium describes DataOps’ aim as “to reduce the end-to-end cycle time of data analytics, from the origin of ideas to the literal creation of charts, graphs and models that create value”.

So, what are the DataOps principles that can boost your business value?

What’s the DataOps Manifesto?

DataOps relies on much more than just automating parts of the data lifecycle and establishing quality procedures. It’s as much about the innovative people using it, as it is a tool in and of itself. That’s where the DataOps Manifesto comes in. It was devised to help businesses facilitate and improve their data analytics processes. The Manifesto lists 18 core components, which can be summarised as the following:

  • Enabling end-to-end orchestration and monitoring
  • Focus on quality
  • Introducing an Agile way of working
  • Building a long-lasting data ecosystem that will continuously deliver and improve at a sustainable pace (based on customer feedback and team input)
  • Developing efficient communication between (and throughout) different teams and their customers
  • Viewing analytics delivery as ‘lean manufacturing’ which strives for improvements and facilitates the reuse of components and approaches
  • Choosing simplicity over complexity

DataOps enables businesses to transform their data management and data analytics processes. By implementing intelligent DataOps strategies, it’s possible to deploy massive disposable data environments where it would have been impossible otherwise. Additionally, following this methodology can have huge benefits for companies in terms of regulatory compliance. For example, DataOps combined with migration to a hybrid cloud allows companies to safeguard the compliance of protected and sensitive data, while taking advantage of cloud cost savings for non-sensitive data.

DataOps and the data pipeline

It is common to imagine the data pipeline as a conveyor belt-style manufacturing process, where raw data enters one end of the pipeline and is processed into usable forms by the time it reaches the other end. Much like a traditional manufacturing line, there are stringent quality and efficiency management processes in place along the way. In fact, because this analogy is so apt, the data pipeline is often referred to as a “data factory”.

This refining process delivers quality data in the form of models and reports, which data analysts can use for the myriad reasons mentioned earlier, and far more beyond those. Without the data pipeline, the raw information remains illegible.

The key benefits of DataOps

The benefits of DataOps are many. It creates a much faster end-to-end analytics process for a start. With the help of Agile development methodologies, the release cycle can occur in a matter of seconds instead of days or weeks. When used within the environment of DataOps, Agile methods allow businesses to flex to changing customer requirements—particularly vital nowadays—and deliver more value, quicker.

A few other important benefits are:

  • Allows businesses to focus on important issues. With improved data accuracy and less time spent on mundane tasks, analytics teams can focus on more strategic issues.
  • Enables instant error detection. Tests can be executed to catch data that’s been processed incorrectly, before it’s passed downstream; see the small sketch after this list.
  • Ensures high-quality data. Creating automated, repeatable processes with automatic checks and controlled rollouts reduces the chances that human error will end up being distributed.
  • Creates a transparent data model. Tracking data lineage, establishing data ownership and sharing the same set of rules for processing different data sources creates a semantic data model that’s easy for all users to understand—thus, data can be used to its full potential.
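
As a tiny illustration of such a check, a pipeline step can validate a batch of records before handing it to the next stage; the column names and rules below are placeholders for whatever contract a real pipeline step has to satisfy:

import pandas as pd

def validate_orders(df: pd.DataFrame) -> pd.DataFrame:
    # Fail fast so incorrectly processed data never moves downstream.
    errors = []
    if df['order_id'].duplicated().any():
        errors.append('duplicate order_id values')
    if (df['amount'] < 0).any():
        errors.append('negative amounts')
    if df['customer_id'].isna().any():
        errors.append('missing customer_id values')
    if errors:
        raise ValueError('Data quality check failed: ' + ', '.join(errors))
    return df

clean_batch = validate_orders(pd.DataFrame({
    'order_id': [1, 2, 3],
    'customer_id': ['a', 'b', 'c'],
    'amount': [10.0, 25.5, 3.2],
}))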

So how is DataOps implemented within an organisation? There are four key stages within the DataOps roadmap.


Forewarn: Business growth with current situation of AI in Construction Market


In these uncertain and unprecedented times due to the COVID-19 outbreak, more and more businesses are witnessing a slow-down in their operations. However, the construction market continues to be resilient in spite of the tremendous challenges brought about by the COVID-19 pandemic.

When it comes to construction sites, drive-thru strategies and work from home are not feasible, as companies need to run job sites. Artificial intelligence (AI) is one technology in the construction industry that is helping firms sustain operations in these trying times. According to a published report from Research Dive, the COVID-19 pandemic has negatively impacted the global AI in construction market.

Transforming the Construction Industry

Artificial intelligence in the realm of the engineering & construction industry is renovating construction into ‘artificial construction.’ In this pandemic situation, AI is leading to real-time VR (Virtual Reality) construction models and reducing errors. The major market players are implementing several business strategies in the AI in construction market to sustain themselves in these trying times. For instance, Vinnie, a construction-trained AI engine by smartvid.io, can see if workers are close to each other in this COVID-19 pandemic through the introduction of its novel ‘People in Group’ analytics.

Today, AI in the construction industry has become a common tool for carrying out many construction activities. In addition, many big companies in the construction industry all across the globe are increasingly adopting AI, as it boasts a multitude of applications. AI has the ability to accurately evaluate the cost overrun of a project on the basis of factors such as the type of contract, the size, and the level of competence of the managers, and to moderate risk via self-driving machinery and equipment. Thus, there’s no reason for AI not to be a part of the toolset of any construction firm.

The activities detected by artificial intelligence in construction in every image include:

  • Excavation
  • Foundation
  • Demolition
  • Trench Work
  • Concrete Pour
  • Structural Steel
  • Mechanical, Electrical, and Plumbing
  • Finish work

New COVID-19 Tags of AI in Construction

The novel tags for artificial intelligence in construction activities, introduced due to the coronavirus pandemic, provide opportunities for improved and efficient workflows. AI can automatically assign risk ratings on the basis of the hazards detected in combination with contextual tags. In addition, during an ongoing demolition, you can dive more deeply into the images, find any demolition photo with a quick search, and create an observation. The rules to follow during this pandemic situation can also be established within these contexts.

Artificial intelligence has already created a benchmark in the construction industry. Unlike humans, who would lose focus and tire after identifying hazards in a large number of images, AI equipment never fades in finding the hazards quickly and accurately. Besides, with the new tags for COVID-19, AI can now pick up some of the context that people grasp without even thinking. The better AI gets at construction activities, the more information will be obtained, such as identifying risks and preventing hazardous incidents from happening.

The new tags of AI in construction for COVID-19 include:

Worker in Cab – These photos show someone operating a machine or driving a vehicle.

Worker at Height – These images show someone on a ladder, in a lift, or near an unprotected edge.

Workers in Groups – This is a way to identify whether workers are standing too close to each other in groups and are not practising “social distancing” in the new era of coronavirus.

Today, the vast majority of construction firms, big or small, are relying more on technologies such as artificial intelligence to sustain their businesses. The key players in the AI in construction industry are advancing AI-trained equipment and engines to effectively run their businesses amid the coronavirus chaos. Thus, in reality, it has become a necessity for firms to innovate with technology in order to keep the business afloat amid these uncertain times.

About Us:
Research Dive is a market research firm based in Pune, India. Maintaining the integrity and authenticity of its services, the firm provides services that are solely based on its exclusive data model, compelled by the 360-degree research methodology, which guarantees comprehensive and accurate analysis. With unprecedented access to several paid data resources, a team of expert researchers, and a strict work ethic, the firm offers insights that are extremely precise and reliable. Scrutinizing relevant news releases, government publications, decades of trade data, and technical & white papers, Research Dive delivers the required services to its clients well within the required timeframe. Its expertise is focused on examining niche markets, targeting their major driving factors, and spotting threatening hindrances. Complementarily, it also has seamless collaborations with major industry aficionados that further give its research an edge.

