Formulating your problem as a reinforcement learning problem

This blog is the first part of a three-part series on the basics of reinforcement learning (RL) and how to formulate a given problem as a reinforcement learning problem.

The blog is based on my teaching at the University of Oxford and insights from our book. I also wish to thank my co-authors Phil Osborne and Dr Matt Taylor for their feedback on my work.

In this blog, we introduce reinforcement learning and the idea of an autonomous agent.

In the next blog, we will discuss the RL problem in the context of other similar techniques – specifically multi-armed bandits and contextual bandits.

Finally, we will look at various applications of RL in the context of an autonomous agent.

Thus, across these three blogs, we consider RL not as an algorithm in itself but rather as a mechanism for creating autonomous agents (and their applications).

This series will help you understand the core concepts of reinforcement learning and encourage you to frame your own problem as an RL problem.

What is Reinforcement Learning?

Reinforcement learning is a field of artificial intelligence in which a machine learns in an environment by trial and error. The machine is referred to as an agent: it performs actions, and for each valuable action it receives a reward. Reinforcement learning algorithms focus on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).

Understanding with an example . . .

Let’s go with the most common and easiest example for understanding the basic concept of reinforcement learning: training a new dog. Here, the dog is the agent and its surroundings become the environment. When you throw a frisbee, you expect your dog to run after it and bring it back. The thrown frisbee defines the state, and whether or not the dog runs after it is its action. If the dog chooses to run after the frisbee (an action) and bring it back, you reward it with a biscuit to reinforce the positive response. Otherwise, a mild punishment can be given to indicate the negative response. That’s exactly what happens in reinforcement learning.

This interactive method of learning stands on four pillars, also called “The Elements of Reinforcement Learning” –

  • Policy – A policy is the agent’s way of choosing its behaviour at a given instance. In more generic language, it is the strategy the agent uses to work towards its end goal.
  • Reward – In RL, training the agent amounts to steering it with reward points. For every right decision the agent makes, it is rewarded with positive points, whereas for every wrong decision it receives a punishment in the form of negative points.
  • Value – The value function estimates the long-term reward the agent can expect. It determines whether the current action in a given state will yield, or help yield, the best cumulative reward.
  • Model (optional) – RL can be either model-free or model-based. Model-based reinforcement learning connects the agent to the environment through prior knowledge, i.e. the agent plans its policy using an internal, functional model of the environment.
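To make these four elements concrete, here is a minimal, hypothetical sketch of the dog-training example in Python. The state and action labels, the reward values, and the learning settings are all invented for illustration; this is a toy trial-and-error loop, not a production RL implementation.

import random

# Toy dog-training example. States, actions, and rewards are hypothetical.
STATES = ["frisbee_thrown", "idle"]
ACTIONS = ["chase", "ignore"]

def reward(state, action):
    # Reward element: +1 (a biscuit) for chasing the thrown frisbee, -1 otherwise.
    return 1.0 if (state == "frisbee_thrown" and action == "chase") else -1.0

# Value element: a table of state-action values, learned by trial and error.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2  # learning rate and exploration rate

for episode in range(500):
    state = random.choice(STATES)
    # Policy element: epsilon-greedy, i.e. mostly exploit, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    # One-step update (no successor state in this simplified setting).
    Q[(state, action)] += alpha * (reward(state, action) - Q[(state, action)])

print(max(ACTIONS, key=lambda a: Q[("frisbee_thrown", a)]))  # -> "chase"

After enough episodes, the learned values steer the policy towards chasing the frisbee, mirroring how the dog’s behaviour is shaped by biscuits and scoldings.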

Formulating an RL problem . . .

Reinforcement learning is a general paradigm for interacting, learning, predicting, and decision-making. It can be applied wherever the problem can be treated as a sequential decision-making problem. To do so, we first formulate the problem by defining the environment, the agent, states, actions, and rewards.

A summary of the steps involved, from formulating an RL problem to modelling it and finally deploying the system, is given below –

  • Define the RL problem – Define environment, agent, states, actions, and rewards.
  • Collect data – Prepare data from interactions with the environment and/or a model/simulator.
  • Feature engineering – This is often a manual task informed by domain knowledge.
  • Choose modelling method – Decide the best representation and model/algorithm. It can be online/offline, on-/off-policy, model-free/model-based, etc.
  • Backtrack and refine – Iterate and refine the previous steps based on experiments.
  • Deploy and monitor – Monitor the deployed system.
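As an illustration of the first step, here is a minimal, hypothetical environment definition in Python. The one-dimensional grid task and every name in it are invented for this sketch; the reset/step interface loosely mirrors the convention popularized by Gym-style libraries.

# A minimal environment skeleton for the "define the RL problem" step.
class GridWorldEnv:
    """The agent starts at cell 0 and must reach the last cell of a 1-D grid."""

    def __init__(self, size=5):
        self.size = size       # states: cells 0 .. size-1
        self.goal = size - 1
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Actions: 0 = move left, 1 = move right.
        move = 1 if action == 1 else -1
        self.state = max(0, min(self.size - 1, self.state + move))
        done = self.state == self.goal
        reward = 1.0 if done else -0.01  # a small step cost encourages short paths
        return self.state, reward, done

env = GridWorldEnv()
state, done = env.reset(), False
while not done:
    state, reward, done = env.step(1)  # a trivial always-go-right policy

Defining the environment this way pins down the states, actions, and rewards explicitly, which is exactly what the first formulation step asks for.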

RL framework – Markov Decision Processes (MDPs)

Typically, reinforcement learning problems are formalized as Markov Decision Processes, which act as a framework for modelling a decision-making situation. They follow the Markov property, i.e. any future state depends only on the current state and is independent of past states, hence the name Markov decision process. Mathematically, an MDP consists of the following elements –

  • Actions ‘A’
  • States ‘S’
  • Reward function ‘R’
  • Value ‘V’
  • Policy ‘π’

where the end goal is to learn the value of each state, V(s), or the value of each state-action pair, Q(s,a), as the agent continuously interacts with the environment.
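As a hedged illustration, here is how V(s) might be computed for the toy grid environment sketched earlier, using simple Bellman backups (value iteration). The discount factor and reward values are assumptions made for this example.

# Value iteration on the 5-cell grid above, assuming deterministic moves.
size, goal, gamma = 5, 4, 0.9
V = [0.0] * size

def next_state(s, a):
    return max(0, min(size - 1, s + (1 if a == 1 else -1)))

def reward(s, a):
    return 1.0 if next_state(s, a) == goal else -0.01

for _ in range(100):  # repeat Bellman backups until approximately converged
    V = [0.0 if s == goal else
         max(reward(s, a) + gamma * V[next_state(s, a)] for a in (0, 1))
         for s in range(size)]

# State-action values follow directly from V(s).
Q = {(s, a): reward(s, a) + gamma * V[next_state(s, a)]
     for s in range(size) if s != goal for a in (0, 1)}
print([round(v, 3) for v in V])  # values increase towards the goal cell

The values rise monotonically towards the goal, and the policy that acts greedily with respect to Q(s,a) always moves right, as expected.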

In the next blog, we will discuss the RL problem in the context of other similar techniques – specifically multi-armed bandits and contextual bandits. This will expand on the problem of using RL to create autonomous agents. In the final part, we will talk about real-world reinforcement learning applications and how one can apply them in multiple sectors.

About Me (Kajal Singh)

Kajal Singh is a Data Scientist and a Tutor on the Artificial Intelligence – Cloud and Edge Implementations course at the University of Oxford. She is also the co-author of the book “Applications of Reinforcement Learning to Real-World Data: An educational introduction to the fundamentals of Reinforcement Learning with practical examples on real data” (2021).


Data Agility and ‘Popularity’ vs. Data Quality in Self-Serve BI and Analytics

One of the most valuable aspects of self-serve business intelligence is the opportunity it provides for data and analytical sharing among business users within the organization. When business users adopt true self-serve BI tools like Plug n’ Play Predictive Analysis, Smart Data Visualization, and Self-Serve Data Preparation, they can apply the domain knowledge and skill they have developed in their role to create reports, analyze data and make recommendations and decisions with confidence.

It is not uncommon for data shared or created by a particular business user to become popular among other business users because of a particular analytical approach, the clarity of the data and conclusions presented, or other unique aspects of the user’s approach to business intelligence and reporting. In fact, in some organizations, a business user can get a reputation as being ‘popular’ or dependable, and his or her business intelligence analysis and reports might be actively sought to shape opinion and make decisions. That’s right: today there is a social networking aspect even in business intelligence. Think of it as Social Business Intelligence or Collaborative Business Intelligence. It is a new concept, but one we can certainly understand given the modern propensity for socializing: people want to share, discuss, and rate information, and they want to understand the context, views, and opinions of their peers and teammates.

By allowing your team members to easily gather, analyze and present data using sophisticated tools and algorithms (without the assistance of a programmer, data scientist or analyst), you can encourage and adopt a data sharing environment that will help everyone do a better job and empower them with tools they need to make the right decisions.

When considering the advantages of data popularity and sharing, one must also consider that not all popular data will be high-quality data (and vice versa). So, there is definitely a need to provide both approaches in data analysis. Create a balance between data quality and data popularity to provide your organization and business users with the best of both worlds.

You may also wish to improve the context and understanding of data among business users by leveraging the IT curation approach to data and ‘watermarking’ (labeling/tagging) selected data to indicate that this data has been certified and is dependable. Business users can then achieve a better understanding of the credibility and integrity of the integrated data they view and analyze in the business intelligence dashboard and reports.

As the organization builds a portfolio of reports and shared data it can better assess the types of data, formats, analysis and reports that are popular among its users and will provide more value to the team and the enterprise.

Encourage your team members to share their views and ratings with self-serve data preparation and BI tools, and create an environment that will support power business users. While self-serve data prep may not always produce 100% quality data, it can provide valuable insight and food for thought that may prompt further exploration and analysis by an analyst, or a full-blown Extract, Transform and Load (ETL) or Data Warehouse (DWH) inquiry and report.

There are many times when the data extracted and analyzed through self-serve data preparation is all you will need; times when the organization, user, or team needs solid information without a guarantee of 100% accuracy. In these times, the agility of self-serve data prep provides real value to the business because it allows your team to move forward, ask questions, make decisions, share information, and remain competitive without waiting for valuable skilled resources to get around to creating a report or performing a unique inquiry or search for data.

If you build a team of power business users and transform your business user organization into Citizen Data Scientists, your ‘social network’ of data sharing and rating will evolve and provide a real benefit to the organization. Those ‘popular’, creative business users will emerge, and other users will benefit from their unique approach to data analysis and gain additional insight. This collaborative environment turns dry data analysis and tedious reporting into a dynamic tool that can be used to find the real ‘nuggets’ of information that will change your business.

When you need 100% accuracy – by all means seek out your IT staff, your data scientists and your analysts and leverage the skilled resources to get the crucial data you need. For much of your organization, your data analysis needs and your important tasks, the data and analysis gleaned from a self-serve data preparation and business intelligence solution will serve you very well, and your business users will become more valuable, knowledgeable assets to your organization.

By balancing agility and data ‘popularity’ and democratization with high quality, skilled data analysis, you can better leverage all of your resources and create an impressive, world-class business ‘social network’ to conquer the market and improve your business. To achieve a balance between data quality and data popularity, your organization may wish to create a unique index within the business intelligence analytics portal, to illustrate and balance data popularity and data quality, and thereby expand user understanding and improve and optimize analytics at every level within the enterprise.


Unlocking e-commerce growth for CPG with data and analytics

CPG opportunities in the new normal
The COVID-19 pandemic has compelled businesses to shift to virtual marketplaces, and CPG has been no different. Consumers increasingly prefer online portals to brick-and-mortar stores, and the time is ripe for CPG companies to extend their digital reach. Some reports suggest that nearly 60% of consumers feared getting infected from visiting a physical store. This led to more than 50% ordering products online that they would otherwise normally purchase directly from stores. As per the latest reports, the average spend per grocery order shot up to an all-time high of US$95 per order in August 2020, with the intent to repeat purchases monthly peaking at 75%, signifying that online shopping is set to become one of the reigning CPG trends going forward.

Prior to the coronavirus pandemic, e-commerce accounted for approximately 4% of all grocery sales, a tiny portion of the overall volume. During the pandemic, however, the share of grocery spending going online increased to as high as 20%, according to Sigmoid analysis; the figure is expected to settle at about 10-12% by 2022. A boost in digital sales of essential goods and personal care products, which were purchased more frequently online during the pandemic, has driven CPG spend growth. Consequently, digital ad spending in the US consumer packaged goods (CPG) industry will increase 5.2% to $19.40 billion in 2020. With marketers relying on data to guide their digital advertising spend, ML-driven Multi-Touch Attribution provides them with customer journey insights to optimize campaigns.

Boosting CPG data insights with e-commerce analytics
With the surge in online grocery shopping, copious amounts of user data are being generated, presenting online CPG businesses with unique opportunities. Utilizing e-commerce analytics will yield significant benefits and can surely be a game-changer for CPG companies in a highly competitive market. In fact, more than half (52%) of the CPG respondents in a recent survey reported needing resources to react quicker and analyze faster. Another 7% of the respondents predicted their analytics spending would reach 25% of their total IT expenses by 2023. Organizations are clearly inclined to bolster their data analytics initiatives. However, they also need to plan and execute their data strategy for CPG carefully.

CPG companies must invest more in analytics to align their strategies and business models with evolving consumer trends and requirements. The first step toward unearthing actionable data insights is to outline the type of data to be considered. Usually, there is no single set of data used across all business types; data requirements vary with the specific requirements of the industry, the market, or even the individual business entity. However, datasets can be broadly categorized into product-based data and consumer behavior data. Product-based data involves tracking and logging product-specific trends and statistics. Some product-specific datasets are:

  1. Individual product sales trends
  2. Sales analysis of products within a category
  3. Distribution
  4. Price analytics

Customer behavior data, on the other hand, involves tracking and logging the purchase behavior, preferences, and trends of online shoppers. Customer-specific datasets are:

  1. Frequency of making purchases
  2. Cart abandonment to transaction completion analysis
  3. Brand/ Store loyalty
  4. Consumer demographics

Once the required data has been made available, the next step is to glean insights out of the available data. Specific analysis needs to be done keeping the end goal in mind. The data obtained can be utilized in various ways, such as:

Personalized marketing: This involves understanding consumer behavior to determine preferences and generate recommendations. Learn how personalized recommendations driven by advanced analytics improved customer experience and product sales for a popular cosmetic brand. As a forward-looking example, a leading online retailer has patented a feature that enables smart speakers to detect when a user is under the weather and generate recommendations accordingly, including specific dietary choices from their pantry.

Order fulfilment: The surge in online CPG retail is redefining the traditional order fulfilment process. With online retail, CPG players can now cater to a wider demographic and a larger geographic footprint, while short-term trends such as bulk-buying behavior are also compelling them to mold their business approach. In this new business paradigm, they need to build capabilities to capture data from omni-channel sources and create data lakes to ingest and analyze data from disparate sources.

Product launches: Today, CPG companies mostly depend on retailers for consumer data generated from POS transactions and sales performance figures. In the new normal, the proliferation of online retail will generate significantly larger and substantially more diverse data streams, providing CPG companies with new opportunities to leverage user data. This will help them redefine personalized recommendations with newer perspectives and offerings.

Category-specific decision-making: CPG analytics output can objectively highlight the strengths, weaknesses, inefficiencies, and opportunities within a particular product category, giving granular visibility into each product type. Businesses that have successfully adopted data-analytics-enabled decision-making have seen up to a 22% increase in demand for specific products.

What CPG firms require to build an e-commerce strategy:
Data culture and automation: Culture of data as an asset, predictive analytics, and AI fully embedded in day-to-day operations and embraced by company leadership to swiftly address shifts in e-commerce demand, supply chain, and consumer preferences. Reduction in manual labor by automating processes across functionalities for demand forecasting.

Digital infrastructure: Connected data platforms, IT, and infrastructure that enable full visibility of the customer’s path to purchase and e-commerce dashboards that provide real-time insights into changes in demand. Prioritize customer-centricity across critical touchpoints to improve conversion rates and drive revenue growth.

Partnerships and ecosystem: Forge strategic alliances to establish ecosystems that differentiate customer services. CPGs partnering with 3PLs and digital natives is a vital element in the exploration of new revenue streams and operating models. Acquire or partner with digital specialists to contain costs by expediting and optimizing processes.

The need to bolster data engineering capabilities
In the current scheme of things, it is a business imperative for CPG players to leverage data analytics to enable quicker yet informed decision-making and achieve consistent business gains. But how do CPG organizations gain the most from consumer and product data? Building data engineering proficiency and the ability to collect and utilize comprehensive data pertaining to the customer journey will become highly relevant for CPGs, rather than relying only on external inputs. Data engineering is fundamental to achieving quantifiable gains from analytics. It can help CPG companies create the interfaces and mechanisms that dictate the flow of and access to data.

Building out the overall strategy for data engineering requires scoping out the data needed, in line with business objectives and data availability.

When it comes to building a solid data foundation for ecommerce, CPG companies should consider the following:

Building data pipelines: It is important to build a scalable data pipeline that can be queried at high speed and hosted in a cloud environment. This encompasses collecting data from different sources, storing the data (including in data lakes), and in-memory processing.

Data warehousing: Data pipelines collect data from multiple sources and store it in a data warehouse in a structured format via ETL. This acts as a single source of truth, simplifying the company’s analysis and reporting processes (a minimal ETL sketch follows these considerations).

Data governance: Establishing processes for data availability, integrity, visibility to users, and security.

ML models in production: Integrating production ready machine learning models into the workflows can be the key to optimizing the available data and gaining significant business benefits.

Embrace AI/ML: Leverage predictive analytics and AI to improve financial metrics and the overall customer experience. Develop predictive analytics and AI use cases to transform processes, optimize operations, and enhance customer experiences.

Customer data platform: Enables brands to collect, unify, enrich, and activate their customer data effectively, and to manage and enhance the first-party data required to better know and engage consumers, driving increased margins and revenue.
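To make the pipeline and warehousing points concrete, here is a minimal, hypothetical ETL sketch in Python. The file name, table name, and column names are invented for illustration, and SQLite stands in for a real cloud warehouse.

import csv
import sqlite3

def extract(path):
    # Extract: read raw order records from a source file.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Transform: normalize types and drop incomplete records before loading.
    return [(r["order_id"], r["sku"], float(r["amount"]))
            for r in rows if r.get("amount")]

def load(records, conn):
    # Load: append structured records to a warehouse fact table.
    conn.execute("CREATE TABLE IF NOT EXISTS fact_orders "
                 "(order_id TEXT, sku TEXT, amount REAL)")
    conn.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", records)
    conn.commit()

conn = sqlite3.connect("warehouse.db")  # stand-in for a real data warehouse
load(transform(extract("orders.csv")), conn)

In practice each step would be scheduled and monitored by an orchestration tool, but the extract-transform-load structure stays the same.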

Uncovering insights from granular user data and other data types such as transactional data, operations data, etc. will allow CPG companies to develop a more personalized approach to reaching and engaging with shoppers. More importantly, companies need to create an effective data strategy that is aligned with the business goals in order to derive value from the data while driving ROI.

New age e-commerce players have already challenged the status quo and successfully exhibited the perks of effectively leveraging data. Even though CPG companies have been investing in data analytics, they need to revamp their approaches to fit the current paradigm. CPG companies with data-driven, customer-centric strategies will gain more traction due to the demand for more personalized, convenient, and safe shopping experiences.

Advanced analytics can drive incremental revenue growth by up to 10% by helping companies launch new lines or modify products based on customer preferences. It can also improve profitability by 1% – 2% by helping companies optimize their manufacturing and supply chain processes.

Conclusion
Fundamental shifts in shopping and consumer attitudes have changed the grocery landscape forever. The CPG sector, which depends heavily on what happens in grocery retail, will have to adapt to the new models. E-commerce sales are accelerating as CPG firms focus on business sustainability and customer engagement. For a CPG business sized at USD 635 billion in 2019 and growing at 2% annually, a 10% share of total revenue coming from e-commerce represents a significant business opportunity for the future.

While CPGs have been conservative in leveraging emerging technologies due to the need for upfront investment, the pandemic is compelling them to rapidly adopt and integrate digital technologies. The ability to harness data around the rapidly shifting environment has become an important differentiator. CPG companies that move into action quickly to enhance their e-commerce capabilities and leverage data analytics to address consumer needs will emerge winners.

About the author
Jayant is Director of Marketing and Pre-sales at Sigmoid and is passionate about applying data & analytics to solve business problems. He has helped CPG and Retail companies globally to leverage IT for business transformation.


Building A Cyber-Physical Grid for Energy Transition (Part 4 of 4)

The new distributed energy market imposes new data and analytics architectures

Introduction

Part 1 provided a conceptual-level reference architecture of a traditional Data and Analytics (D&A) platform.

Part 2 provided a conceptual-level reference architecture of a modern D&A platform.

Part 3 highlighted the strategic objectives and described the business requirements of a TSO that modernizes its D&A platform as an essential step in the roadmap of implementing its cyber-physical grid for energy transition. It also described the toolset used to define the architecture requirements and develop the future state architecture of TSO’s D&A platform.

This part maps the business requirements described in part 3 into architectural requirements. It also describes the future state architecture and the implementation approach of the future state D&A platform.

TRANSPOWER Future state Architectural Requirements

In order to develop the future state architecture, the business requirements described in part 3 are first mapped into high-level architectural requirements. These architectural requirements represent the architectural building blocks that are missing or need to be improved in each domain of TRANSPOWER enterprise architecture in order to realize the future state architecture. Table 1 shows TRANSPOWER high-level architectural requirements.

Table 1: TRANSPOWER high-level architectural requirements

The Future State Architecture of TRANSPOWER Data and Analytics Platform

Figure 1 depicts the conceptual-level architecture of TRANSPOWER digital business platform. Modernizing the existing D&A platform is one of the prerequisites for TRANSPOWER to build its digital business platform. Therefore, TRANSPOWER used the high-level architectural requirements shown in Table 1 and the modern data and analytics platform reference architecture described in Part 2 to develop the future state architecture of its D&A platform. Table 2 shows some examples of TRANSPOWER business requirements and their supporting digital business platform applications, as well as the D&A platform architectural building blocks that support these applications. These D&A architectural building blocks are highlighted in red in Figure 2.

Figure 1. Conceptual-level architecture of TRANSPOWER digital business platform

Table 2: Examples of TRANSPOWER business requirements and their supporting digital business platform applications and D&A platform architectural building blocks

Figure 2: Examples of the new architectural building blocks

The Implementation Approach

After establishing the new human capital capabilities required for the implementation of the digital business transformation program, TRANSPOWER started to partner with relevant ecosystem players and deliver the program.

The implementation phase of the D&A platform modernization was based on the Unified Analytics Framework (UAF) described in Part 3. The new D&A applications and architectural building blocks described in Table 2 are planned and delivered using Part II of the UAF (including Inmon’s Seven Streams Approach). According to Inmon’s Seven Streams Approach, stream 3 is the “driver stream” that sets the priorities for the other six streams, and the business discovery process should be driven by the “burning questions” that the business has put forward as its high-priority questions. These are questions for which the business community needs answers so that decisions can be made and actions can be taken that will effectively put money on the company’s bottom line. Such questions can be grouped into topics, such as customer satisfaction, profitability, risk, and so forth. The information required to answer these questions is identified next. Finally, the data essential to manufacture the information that answers the burning questions (or even automates actions) is identified. It is worth noting that the Information Factory Development Stream is usually built topic by topic or application by application. Topics are usually grouped into applications, a topic contains several burning questions, and a topic often spans multiple data subject areas.

Figure 3 depicts the relationship between Burning Questions, Applications, Topics, Data Subject Areas, and the Information Factory Development Stream.

Figure 3. Relationship between Burning Questions, Applications, Topics, Data Subject Areas, and the Information Factory Development Stream

 

Conclusion

In many cases, modernizing the traditional D&A platform is one of the essential steps an enterprise should take in order to build its digital business platform, thereby enabling its digital business transformation and gaining a sustainable competitive advantage. This four-part series introduced a step-by-step approach and a toolkit that can be used to determine which parts of the existing traditional D&A capabilities are missing or need to be improved and modernized in order to build the enterprise digital business platform. The use of the approach and the toolkit was illustrated with the example of a power utility company; however, the approach and the toolkit can easily be adapted to other vertical industries such as petroleum, transportation, mining, and so forth.


The Evolution of the Enterprise Data Management Industry: Five Years Out

The Enterprise Data Management industry is predicted to grow at a CAGR of 9.3% over the forecast period, generating revenue of $126.9 billion by 2026.

An Enterprise Data Management program collates all the data involved in making major decisions and building a strategy for the organization. Enterprise Data Management helps identify compliance issues, operating efficiencies, and risks, and build client relationships, resulting in better data quality, control over the data, and information storage.

The rising use of data management applications in many organizations is predicted to drive the Enterprise Data Management industry over the forecast period. The demand for data management has increased due to the handling of large data sets through data integration, data profiling, data quality checks, metadata management, and other data-related tasks. Moreover, enterprise data management supports the sharing, consistency, reliability, and governance of information across the organization for major decisions, which is predicted to be the industry’s major driving factor.

Data privacy concerns are predicted to hamper the growth of the industry during the forecast period. Many companies handle data with open-source applications comprising various processes and algorithms. Because these processes and algorithms run on open source code, hackers can obtain the source code without difficulty if the data are not highly protected. This is the biggest restraint on the growth of the industry over the forecast period.


The major players in the Enterprise Data Management industry are Amazon Web Services, Inc., TierPoint, LLC, VMware, Inc., Microsoft, HP Development Company, L.P., Cloudera, Inc., SAS Institute Inc., SAP SE, IBM Corporation, and NTT Communications Corporation, among others.


Your eCommerce Pros Can Easily Use Augmented Analytics

eCommerce and online shopping businesses employ professionals in many roles, including sales managers, marketing professionals, social media experts, product and service professionals, and others. Together, these roles are designed to create a team that will ensure business success and, with eCommerce exploding, it is easy to think that the right people can get the job done.

But there is another component to success, especially today. Given the competitive environment, with thousands of eCommerce sites and apps, it is imperative that the business have a measurable, fact-based view of results and enable its team members (no matter their role) to access tools and solutions that give them the information they need to succeed: to improve results, come up with new ideas for products and services, target customers appropriately, bundle products, shift pricing and marketing approaches, and more!

But eCommerce business users do not have the time or the inclination to adopt new technology and software. They are often overwhelmed with tasks and responsibilities, so making it easier for them to understand and analyze results is crucial. An augmented analytics solution that integrates with an eCommerce platform like Shopify can provide pre-built templates, reports, KPIs, and in-depth analysis of customer lifetime value, customer cohorts, trends, sales results, and other important aspects of the eCommerce business.

It is easy to implement analytical capability using a solution that integrates Shopify with augmented analytics, and provide your users and business a meaningful way to compete and succeed.

Contact us today to find out how SmartenApps for Shopify can help you achieve your goals.


Statistical Hypothesis Testing: Step by Step

 


What is hypothesis testing?

In statistics, we may divide statistical inference into two major parts: one is estimation and the other is hypothesis testing. Before testing hypotheses, we must know what a hypothesis is, so we can define it as below –

A statistical hypothesis is a statement about a population which we want to verify on the basis of information contained in a sample.

Examples of statistical hypotheses

A few examples of statistical hypotheses related to our daily life are given below –

  • The court assumes that the indicted person is innocent.
  • A teacher assumes that 80% of the students at his college are from lower-middle-class families.
  • A doctor assumes that 3D (Diet, Dose, Discipline) is 95% effective for diabetes patients.
  • A beverage company claims that its new cold drink is superior to the other drinks available in the market, etc.

 

A statistical test mainly involves four steps:

  • Deriving a test statistic
  • Knowing the sampling distribution of the test statistic
  • Setting hypothesis-testing conventions (e.g., a significance level)
  • Establishing a decision rule that leads to an inductive inference about the probable truth.

 

Types of statistical hypotheses

  • Null hypothesis
  • Alternative hypothesis

 

Null hypothesis

 

A null hypothesis is a statement, which tells us that no difference exists between the parameter and the statistic being compared to it. According to Fisher, any hypothesis tested for its possible rejection is called a null hypothesis and is denoted by H0.

Alternative hypothesis

 

The alternative hypothesis is the logical opposite of the null hypothesis. The rejection of the null hypothesis leads to the acceptance of the alternative hypothesis. It is denoted by H1.

For example, in a coin-tossing experiment, the null and alternative hypotheses may be formed as,

H0: the coin is unbiased.

H1: the coin is biased.
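As an illustration, here is a minimal, self-contained Python sketch that tests these hypotheses with an exact two-sided binomial test. The data (60 heads in 100 tosses) and the 0.05 significance level are assumptions chosen for this example.

from math import comb

# Hypothetical data: 60 heads observed in 100 tosses.
# Under H0 (the coin is unbiased), heads ~ Binomial(n=100, p=0.5).
n, k, p = 100, 60, 0.5

def binom_pmf(x):
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Two-sided p-value: total probability of all outcomes at least as
# extreme (i.e., no more likely under H0) as the observed one.
p_value = sum(binom_pmf(x) for x in range(n + 1) if binom_pmf(x) <= binom_pmf(k))

alpha = 0.05  # conventional significance level
print(f"p-value = {p_value:.4f}")  # about 0.057 for these numbers
print("Reject H0: the coin is biased" if p_value < alpha
      else "Fail to reject H0: no evidence the coin is biased")

Here the p-value narrowly exceeds 0.05, so at this significance level we fail to reject the null hypothesis of an unbiased coin.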

 

Depending on the population distribution, statistical hypotheses are of two types:

  • Simple hypothesis: when a hypothesis completely specifies the distribution of the population, it is called a simple hypothesis. For example, for a normal population with known variance, H0: μ = 50 is simple.
  • Composite hypothesis: when a hypothesis does not completely specify the distribution of the population, it is called a composite hypothesis; for example, H0: μ > 50 is composite.

