Elegant Entrepreneur: I QUIT! What Happens After You Resign

This is the second and final article in the ‘I Quit’ series. These articles are meant to provide you with some insight into the resignation process from an employer perspective. In this article, I will discuss what happens (or should happen) after you tell your manager you are leaving the company.

As I mentioned in the first article, entitled 'I QUIT! The When and How of Resignation', I have accepted numerous resignations in my time as a business owner and manager, and the nature and tone of those resignations has been as varied as the people working in and with my business over the past twenty years. After years of experience, I have concluded that, while there are many ways to quit your job, there are definitely right ways and wrong ways to take on this process. If you haven't already read the first article in this series, you will want to do so. It covers the 'when' and the 'how' of the resignation process.

So, let's talk about what happens during the time after you resign and before you leave the company. Once you deliver your resignation letter and meet with your manager, you might think the important steps are finished. But there is much to be done before you leave, and those tasks matter not only for your employer but also for your reputation and your professional future.

Before You Go

Once you have given notice, you can discuss your remaining time at the company. Those notes you took before the meeting will come in handy here. Make suggestions on how you might ease the transition. Can you train a particular person or team to take over responsibilities while your manager searches for a replacement? Is there someone in particular your manager might consider as your replacement? Could you train them? What projects do you need to complete? Outline the tasks and tell your manager what you expect to complete before your departure.

After you talk to your manager, and to HR and your team, resist the temptation to kick back and wait for your departure date. Get the work done. Do not disrupt the work process. If you are leaving because you are unhappy, do not engage in poisonous rhetoric in the break room or encourage others to leave. Do your job and impress others with your professionalism.

Before your last day, be sure all exit paperwork is complete and that you have taken care of all benefits, experience certificates, relieving letters, etc.

After You Leave

Here is the point where I tell you why you want your employer to be happy! Whether your relationship with your prior employer was good or bad, your ex-manager, your teammates and colleagues will remember your professionalism.

You finished your last day at the company and you are moving on – hopefully to a great new job in a new company. Whether you know it or not, everyone was watching you and judging how you handled the resignation process. Younger team members will learn from your behavior. Your old colleagues may leave the company, and you may wish to hire them or have them considered for employment in your new company. They will WANT to work with you because they perceive you as being professional. Your ex-manager may become a client when you start your new business. You may be nominated for an industry panel or position by someone with whom you used to work. All of those things are possible and some are probable. Most of us work in a small professional community, and our behavior and attitude follow us from one company and career experience to another. THAT is why you want your ex-manager to be grateful.

Oh, and one last thing before you update your resume and put that previous job in the rearview mirror! Keep in touch with your previous team, your ex-manager, your professional contacts. Call them and ask if they are going to an industry event and arrange to meet them there for a cup of coffee. Call to ask how they are doing. Call your manager a month or two after you leave and ask how things are going. Offer to answer any questions they might have. In short, be mature and professional, and make your old colleagues miss you.

That’s the way to resign!

Source Prolead brokers usa

What Does the Future of Machine Learning Look Like?

With machine learning now being behind many technologies, from Netflix’s recommendation algorithm to self-driving cars, it’s time for businesses to start taking a closer look. In this article, we will discuss the future of machine learning and its value throughout industries.

Machine learning solutions continue to incorporate changes into businesses’ core processes and are becoming more prevalent in our daily lives. The global machine learning market is predicted to grow from $8.43 billion in 2019 to $117.19 billion by 2027.

Despite being a trending topic, the term ‘machine learning’ is often used interchangeably with the concept of artificial intelligence. In fact, machine learning is a subfield of artificial intelligence based on algorithms that can learn from data and make decisions with minimal or no human intervention.

Many companies have already begun using machine learning algorithms due to their potential to make more accurate predictions and business decisions. In 2020, $3.1 billion in funding was raised for machine learning companies. Machine learning has the power to bring transformative changes across industries.

With machine learning being so prominent in our lives today, it’s hard to imagine a future without it. Here are our predictions for the development of machine learning in 2021 and beyond.

Quantum computing can define the future of machine learning

Quantum computing is one advancement that has the potential to boost machine learning capabilities. Quantum computing allows the performance of simultaneous multi-state operations, enabling faster data processing. In 2019, Google’s quantum processor performed a task in 200 seconds that would take the world’s best supercomputer 10,000 years to complete.

Quantum machine learning can improve data analysis and yield deeper insights. Such increased performance can help businesses get better results than with traditional machine learning methods.

So far, no commercially ready quantum computer is available. However, a handful of big tech companies are investing in the technology, and the rise of quantum machine learning is not that far off.

AutoML will facilitate the end-to-end model development process

Automated machine learning, or AutoML, is the process of automating the application of machine learning algorithms to real-life tasks. AutoML simplifies the process so that a person or business can apply complex machine learning models and techniques without being an expert in machine learning.

AutoML enables a wider audience to use machine learning, which indicates its potential to change the technology landscape. A data scientist can also use AutoML to their benefit, for instance, to quickly find suitable algorithms or to check whether any algorithms have been missed. Here are some stages of the development and deployment of a machine learning model that AutoML can automate (a minimal code sketch follows the list):

  • Data pre-processing – improve data quality, transform unstructured data into structured data with the help of data cleaning, data transformation and data reduction, etc.
  • Feature engineering – use automation with machine learning algorithms to help create more adaptable features based on the input data.
  • Feature extraction – use different features or datasets to produce new features that will improve results and reduce the size of data processed.
  • Feature selection – choose only useful features for processing.
  • Algorithm selection and hyperparameter optimisation – automatically choose the best possible hyperparameters and algorithms.
  • Model deployment and monitoring – deploy a model based on the framework and monitor the condition of the model via dashboards.
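
To make the algorithm selection and hyperparameter optimisation step concrete, here is a minimal sketch using scikit-learn's GridSearchCV. It only illustrates the underlying search idea, not a full AutoML framework (tools such as auto-sklearn or TPOT automate far more of the pipeline), and the dataset and search space are purely illustrative.

```python
# Minimal sketch: automated algorithm/hyperparameter search with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression(max_iter=1000))])

# Candidate algorithms and hyperparameters, searched automatically via cross-validation.
param_grid = [
    {"clf": [LogisticRegression(max_iter=1000)], "clf__C": [0.1, 1.0, 10.0]},
    {"clf": [RandomForestClassifier(random_state=42)], "clf__n_estimators": [100, 300]},
]

search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)
print("Best pipeline:", search.best_params_)
print("Held-out accuracy:", search.best_estimator_.score(X_test, y_test))
```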

Source Prolead brokers usa

How Big Data Makes Digital Marketing Campaigns More Efficient

Data has gained relevance in almost every sphere of business activity. In this article, we will discuss the key aspects of how data influences the digital marketing space.

You may not yet have realized that data can guide the strategic efforts of marketing. With data, marketers can understand their target audience and run personalized campaigns.

During the last few years, big data has gained importance as the business world shifted to a digital environment. 

Today, all businesses need to understand what big data is and how it can help them run successful digital marketing campaigns and maximize ROI.

What is Big Data?

The concept of big data is simple. You can regard it as a large volume of structured and unstructured information that a business receives daily.

The importance of big data arises not from the amount but from how businesses organize the data to figure out the actions, needs, wants, and buying habits of their consumers.

Big data is commonly described along four dimensions: Volume, Velocity, Variety, and Veracity.

Volume

Volume is the amount of data that gets collected from a variety of different sources. The sources can be social media, online forms, online transactions, machine-to-machine data, etc. 

As the number of consumer characteristics has multiplied manifold, the volume of structured and unstructured data has also grown substantially. As such, the traditional methods of handling data no longer work properly.

New and advanced methods have come up today with which businesses can easily process big data.

Velocity

The speed at which data gets generated, stored, analyzed, and archived is called velocity. Businesses should ensure that appropriate methods are available to handle the inflow of data.

Variety

Variety refers to the different forms of data that a business receives. As data comes from different sources, its structure will differ. Depending on the source, data can be structured or unstructured. Variety also depends on the format, such as videos, written documents, images, etc.

Veracity

Veracity refers to the discrepancies and noise in the data. Businesses should ensure that they do not store irrelevant information, which can negatively impact any analysis. Data has to be meaningful to be fit for analysis.

Influence of Big Data on Digital Marketing

With big data, businesses can get the insights they need to understand their target audience; as a result, big data has become an integral part of any digital marketing strategy.

Big data helps organize data, segment the market, and create consumer personas based on characteristics such as behaviours, purchase patterns, hobbies, geolocation, etc. Moreover, it helps to improve the user experience.

As such, it eliminates much of the guesswork, which makes marketing more effective.

Big Data helps digital marketing in the following ways:

Consumer Insights

Interpretation of data has become a crucial component of executing marketing strategies in today’s digital age.

Big data helps to elicit customer insights in real-time. Therefore, marketers can understand the tastes and preferences of their target audience. 

When businesses interact with consumers through social media, they can figure out what consumers expect from them. 

You can therefore structure your digital marketing campaign to distinguish it from that of your competitors. 

Personalisation

In today's competitive business landscape, businesses cannot escape personalization. Big data can help businesses personalize their digital marketing campaigns. With knowledge of consumer insights, businesses can understand their customers' tastes and preferences and structure their digital marketing campaigns in a targeted and personalized manner.

Digital marketing is all about delivering the right message at the right time. Targeted emails and ads can help to personalize digital marketing campaigns. 

With targeted emails, businesses can create a stronger bond with their consumers. Email marketing can help marketers create more personalised and effective campaigns by delivering the right message.

Businesses can target these emails based on browsing history, behaviours, purchase history, etc.

Big data can help businesses create more effective targeted ads. Marketers can use third-party sources to display ads to users. As a result, businesses can increase brand awareness, revenue, and brand loyalty.

Boost Sales

With big data, businesses can predict the demand for a product or service. Information on user behaviour can help businesses answer many types of questions, such as what types of products consumers buy, how often they purchase or search for a product or service, and what payment methods they prefer.

Obviously, not every visitor to your website will make a purchase. But if businesses have answers to these questions, they can identify and address users' pain points and create a seamless user experience.

Campaign Efficiency

Big data can bring more efficiency to digital marketing campaigns, and at the same time, optimize your costs.  

Digital marketers should always have answers to the following questions, regarding their target audience:

  • Who is the contact?
  • When will the prospect be available to contact?
  • How will they contact the prospect?
  • What should they offer to the contact?

When marketers have answers to these questions, they can segment their target audience and construct predictive models to predict future behaviours.

Also, when marketers know when users will be online on their preferred platforms, they can target customers in a familiar digital environment and focus on the platforms with the highest conversion rates. As a result, they see increased revenue and hence higher ROI, and they can manage their budgets better. A simple segmentation sketch follows below.
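
As a hedged, minimal sketch of the segmentation idea, here is a k-means example with scikit-learn; the behavioural features and values are invented for illustration and not drawn from any real campaign.

```python
# Minimal sketch: segmenting customers by behavioural features with k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-customer features: [monthly visits, average order value, email opens]
customers = np.array([
    [2, 15.0, 1],
    [30, 120.0, 20],
    [5, 40.0, 3],
    [28, 95.0, 18],
    [1, 10.0, 0],
    [33, 150.0, 25],
])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(customers)
)
print(segments)  # e.g., two clusters: low-engagement vs high-value customers
```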

Analyse Campaign Results

Measurement of your digital marketing campaign is essential to figure out the results. And with big data, you can easily measure the performance of your campaign. 

Marketers can use reports to detect any negative changes in marketing KPIs. If they see a deviation from the desired results, they can re-orient their marketing strategy. As such, they can maximise revenue and make their marketing efforts more scalable.

Conclusion

Big data has made deep inroads into the digital marketing sphere and has become a crucial part of business strategy. You should know effective ways to target your audience on digital channels, and you should structure your content around the right keywords and user information. The result will be an increase in the effectiveness of your digital marketing campaigns, and hence an increase in your ROI.

Source Prolead brokers usa

What is the most robust binary-classification performance metric?

Accuracy, F1, and TPR (a.k.a. recall or sensitivity) are well-known and widely used metrics for evaluating and comparing the performance of machine learning-based classification.

But, are we sure we evaluate classifiers’ performance correctly? Are they or others such as BACC (Balanced Accuracy), CK (Cohen’s Kappa), and MCC (Matthews Correlation Coefficient) robust?

My latest research on benchmarking classification performance metrics (BenchMetrics) has just been published by Springer Nature in the journal Neural Computing and Applications (SCI, Q1).

Read here: https://rdcu.be/cvT7d

Highlights

  • A benchmarking method is proposed for binary-classification performance metrics.
  • Meta-metrics (metric about metric) and metric-space concepts are introduced.
  • The method (BenchMetrics) tested 13 available and two recently proposed metrics.
  • Critical issues are revealed in common metrics while MCC is the most robust one.
  • Researchers should use MCC for performance evaluation, comparison, and reporting.

Abstract

This paper proposes a systematic benchmarking method called BenchMetrics to analyze and compare the robustness of binary-classification performance metrics based on the confusion matrix for a crisp classifier. BenchMetrics, introducing new concepts such as meta-metrics (metrics about metrics) and metric-space, has been tested on fifteen well-known metrics including Balanced Accuracy, Normalized Mutual Information, Cohen’s Kappa, and Matthews Correlation Coefficient (MCC), along with two recently proposed metrics, Optimized Precision and Index of Balanced Accuracy in the literature. The method formally presents a pseudo universal metric-space where all the permutations of confusion matrix elements yielding the same sample size are calculated. It evaluates the metrics and metric-spaces in a two-staged benchmark based on our proposed eighteen new criteria and finally ranks the metrics by aggregating the criteria results. The mathematical evaluation stage analyzes metrics’ equations, specific confusion matrix variations, and corresponding metric-spaces. The second stage, including seven novel meta-metrics, evaluates the robustness aspects of metric-spaces. We interpreted each benchmarking result and comparatively assessed the effectiveness of BenchMetrics with the limited comparison studies in the literature. The results of BenchMetrics have demonstrated that widely used metrics have significant robustness issues, and MCC is the most robust and recommended metric for binary-classification performance evaluation.

A critical question for the research community who wish to derive objective research outcomes

The chosen performance metric is the only instrument to determine which machine learning algorithm is the best.

So, for any specific classification problem domain in the literature:

Question: If we evaluate the performance of algorithms based on MCC, will the comparisons and rankings change?

Answer: I think so. At least, we should try and see.

Question: But how?

Answer:

Please, share the results with me.
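
As a starting point, here is a minimal sketch of my own (not taken from the paper) that computes MCC alongside accuracy and F1 with scikit-learn, so that model rankings under the different metrics can be compared; the labels below are illustrative.

```python
# Minimal sketch: comparing common metrics with MCC for two hypothetical classifiers.
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

y_true  = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]   # imbalanced ground truth (illustrative)
model_a = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]   # predicts the majority class only
model_b = [1, 1, 1, 1, 1, 1, 0, 1, 0, 0]   # makes one mistake but handles both classes

for name, y_pred in [("A", model_a), ("B", model_b)]:
    print(name,
          "accuracy:", accuracy_score(y_true, y_pred),
          "F1:", round(f1_score(y_true, y_pred), 3),
          "MCC:", round(matthews_corrcoef(y_true, y_pred), 3))
```

Model A looks respectable on accuracy and F1 despite ignoring the minority class entirely, while its MCC is 0; whether such differences change your own rankings is exactly the question worth testing.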

Citation for the article:

Canbek, G., Taskaya Temizel, T. & Sagiroglu, S. BenchMetrics: a systematic benchmarking method for binary classification performance metrics. Neural Comput & Applic (2021). https://doi.org/10.1007/s00521-021-06103-6

Source Prolead brokers usa

Big Data Technology Importance in Human Resource Management

Big data in human resource management refers to the use of several data sources to evaluate and improve practices including recruitment, training and development, performance, compensation, and end-to-end business performance.

It has attracted the attention of human resource professionals, who can analyze huge amounts of data to answer important questions regarding employee productivity, the impact of training on business performance, employee attrition, and much more. Using sophisticated HR software that provides robust data analytics, professionals can make smarter and more accurate decisions.

In this article, let's dive further and understand the role big data technology plays in HR in today's fast-paced world, where massive quantities of diverse information need to be analyzed.

Recruiting top talent is the primary task of HR departments. They must screen candidates' resumes and interview suitable applicants until they find the right person. Big data offers a broader platform for the recruitment process: the Internet.

By integrating recruitment with social networking, HR recruiters can find more information about candidates, such as personal videos and pictures, living conditions, social relationships, abilities, etc., so that the applicant's profile becomes more vivid and they can recruit the right fit. Moreover, candidates can learn more about the organization they will be interviewing with, and the recruitment process becomes more open and transparent.

Compensation is an essential factor in attracting potential applicants, and earning a salary is one of the main reasons employees work. Traditional performance management systems often rely on qualitative rather than quantitative measures, and compensation ends up disconnected from performance results.

Data analytics solutions identify meaningful patterns in a set of data. They help quantify the performance of a firm, product, operation, team, or even individual employees to support business decisions. Data compilation, manipulation, and statistical analysis are the core elements of big data analysis.

With the help of big data technology in performance management, professionals can record the daily workload, the specific content of the work, and the task achievement of each employee. Professionals who have completed talent management certification programs will have an advantage in receiving better compensation. Sophisticated HR management software that performs these operations enhances work efficiency and reduces enterprise investment in human capital. It is also useful for calculating salaries automatically and getting better insights into performance standards.

Employers can gather health-related data on their employees, so more attractive and beneficial packages can be created. Certified HR professionals will get to enjoy more perks too. It is crucial to note that organizations must be transparent about what they are doing, revealing how they collect and use this data, to avoid legal concerns related to discriminatory practices.

Workforce training is an important part of enabling the sustainable development of a business. Successful training can enhance employees' knowledge and improve their performance, so that firms can retain their human resource advantages in fierce competition and increase their profitability.

Traditional employee training requires a lot of manpower, material, and financial resources. With the advent of big data, information access and sharing have become more convenient: employees can easily search and find the information they want to learn through the network at any time and anywhere. Workforce big data analysis uses software to apply statistical models to employee-related data to optimize human resource management (HRM). It also helps record data on each employee's learning behaviors, so employees can not only use the online system to analyze their own training needs but also choose their preferred form of training.

The role of big data in human resource management has become more prominent. The value of data has certainly accelerated the way a business functions. This rapidly-growing technology enables HR professionals to effectively manage employees so that business goals can be accomplished more quickly and efficiently.

Source Prolead brokers usa

OpenAI Codex Challenge Seen by the Participants

On the 12th of August, OpenAI hosted a hackathon for all those interested in trying out Codex. Codex is a new generation of their GPT-3 model that can translate plain-English commands into code.

We at Serokell thought it would be interesting to try this out: right now, free access to the beta is limited to a small group of people. One of our teammates got access after being on the waiting list for over a year.

What was the format?  

The point of the challenge was to solve five small tasks, the same for everyone, to test the system. To be fair, they were quite simple – maybe because Codex can't solve complex problems. To give an example, one of the tasks was to use pandas' functionality to calculate the number of days between two dates given as strings. There was also a simple algorithm task: given a binary tree, participants had to restore the original message.

Our main motivation was to see what Codex can do, how well it understands tasks, and to observe the logic of its decisions. Spoiler alert: not everything was as great and smooth as during the OpenAI demo!

What was the problem?

The first problem was server lag – maybe the company wasn't ready for such a huge number of participants (a couple of thousand). Because of that, we wasted a lot of time trying to reconnect. Interestingly enough, the leaderboard had weird logic: solutions were ranked by the time of completion, not by the time needed to solve the problem. So people who were late to the start of the challenge were a priori low on the scoreboard.

To us, Codex did not seem like a very smart coder. First of all, it made quite a few syntax mistakes: it can easily forget a closing bracket or introduce extra columns, which makes the code incorrect. It really takes time and effort to catch these errors!

Secondly, it seems that Codex doesn’t know how to work with data types. You as a programmer have to be very careful, or the model will mess things up. 

For instance, in the date-counting task mentioned above, Codex messed up the sequence of actions for us: it forgot to convert the string to a date and tried to operate on it as-is.
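
For reference, here is a minimal sketch of how that date task can be handled correctly with pandas; the dates are made up, and this is simply one straightforward approach rather than the challenge's official solution.

```python
# Minimal sketch: days between two dates given as strings, using pandas.
import pandas as pd

start, end = "2021-01-01", "2021-08-12"   # illustrative dates

# Convert the strings to datetimes first, then subtract; the result is a Timedelta.
days = (pd.to_datetime(end) - pd.to_datetime(start)).days
print(days)  # 223
```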

Finally, the solutions that Codex proposes are not optimal. A huge part of being a good programmer is understanding the task, breaking it down into realizable pieces, and implementing the best solution in terms of execution time and memory. Codex does come up with solutions, but they are far from optimal. For example, when working with the tree, it wrote a while loop instead of a for loop and added extra conditions that weren't in the initial task. Everyone knows that writing a while loop where a for loop will do is kind of a big no-no.

Conclusion

All that said, Codex can't be used as a no-code alternative to real programming. It's unclear who OpenAI is targeting with this solution. Non-programmers can't use it, for the reasons mentioned above, and programmers would rather write code from scratch than sit and edit brackets in Codex's output.

It was previously said that Codex would be the engine behind Copilot, the initiative built together with GitHub. But it doesn't work as an autocomplete tool the way PyCharm does, and the majority of our team doesn't write code in GitHub, using it simply for project management. So it's unclear what OpenAI is going to do with Codex.

Anyhow, it’s an interesting initiative that has the potential to greatly improve the more people use it. So perhaps in the future, it will become a super user-friendly alternative to no-code solutions for non-programmers.

Source Prolead brokers usa

Scaling Data Science And AI To Boost Business Growth

Data science and AI have become a requirement for business growth. The technology has advanced enough to predict customers' choices and satisfy their needs.

The volume of data generated per day is predicted to reach 463 exabytes by 2025. On the Internet, the world spends about $1 million per minute on goods. This huge amount of data, known as big data, has increased the need for qualified data science workers. 

According to the US Bureau of Labor Statistics, the employment of data scientists is predicted to grow 15% by 2029, much faster than the 4% average for all occupations. Graduates with exceptional data management skills are preferred both in large corporations and small businesses.

Almost 79% of CEOs believe that implementing artificial intelligence in companies will make their jobs more efficient and easier. AI results in 50% more sales, up to 60% cost reductions, and more. 

Leveraging AI helps you to reach out to your target audience more efficiently, nurture their needs and increase revenue. For instance, you can improve your landing page relevancy and improve the ad quality score for increased conversions. 

Sales and Marketing

AI-powered apps may soon be able to manage all of your everyday tasks. AI-powered predictive content tools enable marketers to be more strategic while also reducing workload. Spending less time on less profitable possibilities and more time on your most profitable segments will allow your sales force to enhance its win rate, cover more ground, and eventually increase revenue.

AI-powered marketing applications crawl your website for blog articles, case studies, white papers, ebooks, videos, and other content. For a complete omnichannel approach, insights can be leveraged to engage visitors across email, online, social, and mobile channels. AI can assist with everything from managing your calendar to scheduling meetings to appraising a sales team’s pipeline by automatically executing these things for you or making them significantly easier by making recommendations based on your prior usage data. 

As a result, companies can now engage in one-to-one value marketing, which was previously impossible to deliver at scale. This enables the system to predict what each user wants to buy. AI knows what to market because it closely monitors customer behaviour; salespeople devote hours to research and interviews to discover what AI already knows.

One such tool to ease your work is Finteza. It offers analytics and an advertising engine. The analytics side aids in collecting data about your site, such as user behaviour, traffic quality, weak points, and inefficient advertising channels. The advertising engine allows you to manage online campaigns, generate banners, and place targeted adverts, all from a single interface.

Accurate Customer Demand Prediction

Today, predictive analytics is all about linking disparate systems and data sets to conduct adequate analysis and extract meaningful information from seemingly chaotic data. Advanced analytics-powered solutions can significantly cut costs associated with failures, customer attrition, and other factors.

Data collection and analysis on a bigger scale can help you detect emerging trends in your market. Purchase data, celebrities and influencers, and search engine queries can all be used to determine which goods individuals are interested in. Clothing upcycling, for example, is becoming more popular as an environmentally conscientious approach to updating one's wardrobe. According to Nielsen research, 81% of consumers strongly believe that businesses should assist in improving the environment.

Demand prediction is an important part of building a thriving business. A firm that is growing typically has a growth strategy in place. In demand prediction, you analyze a firm's past sales and then estimate its potential growth rate.

Data processing, mining, analysis, and manipulation assist in an organization's demand forecasting. You can analyse historical data on competitors' performance and on the firm's ROI or attrition rate, and predict the future with the help of AI and machine learning, as in the simple sketch below.
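
Here is a hedged, minimal sketch of that idea: fitting a linear trend to past monthly sales with scikit-learn and projecting the next month. The sales figures are invented, and real demand forecasting would use far richer features and models.

```python
# Minimal sketch: fit a linear trend to past monthly sales and project the next month.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)   # months 1..12
sales = np.array([100, 104, 110, 115, 121, 128, 133, 140, 146, 153, 159, 166])

model = LinearRegression().fit(months, sales)
next_month_forecast = model.predict(np.array([[13]]))
print(round(float(next_month_forecast[0]), 1))   # projected demand for month 13
```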

Acxiom, IBM, Information Builders, Microsoft, SAP, SAS Institute, Tableau Software, and Teradata are among the major predictive analytics software and service companies.

By remaining updated on the behaviours of your target market, you can make business decisions that will put you ahead of the competition.

Final Thoughts

In business management, the trend is toward customer-centricity, personalization, and data-driven decision-making. Regardless of how organisations do this, being open and adaptable to change puts them one step closer to remaining competitive. Start leveraging the power of data science and AI to elevate your business growth. 

Source Prolead brokers usa

The Growing Importance of Data and AI Literacy – Part 2

This is the second part of a two-part series on the growing importance of teaching Data and AI literacy to our students. It will be included in a module I am teaching at Menlo College, but I wanted to share the blog to help validate the content before presenting it to my students.

In part 1 of this 2-part series “The Growing Importance of Data and AI Literacy”, I talked about data literacy, third-party data aggregators, data privacy, and how organizations monetize your personal data.  I started the blog with a discussion of Apple’s plans to introduce new iPhone software that uses artificial intelligence (AI) to detect and report child sexual abuse.  That action by Apple raises several personal privacy questions including:

  • How much personal privacy is one willing to give up trying to halt this abhorrent behavior?
  • How much do we trust the organization (Apple in this case) in their use of the data to stop child pornography?
  • How much do we trust that the results of the analysis won't get into unethical players' hands and be used for nefarious purposes?

In particular, let's be sure that we have thoroughly vetted the costs associated with the AI model's False Positives (accusing an innocent person of child pornography) and False Negatives (missing people who are guilty of child pornography). That is the focus of Part 2!

AI literacy starts by understanding how an AI model works (See Figure 1).

Figure 1: "Why Utility Determination Is Critical to Defining AI Success"

AI models learn through the following process:

  1. The AI Engineer (in very close collaboration with the business stakeholders) defines the AI Utility Function, which comprises the KPIs against which the AI model's progress and success will be measured.
  2. The AI model operates and interacts within its environment, using the AI Utility Function to gain feedback in order to continuously learn and adapt its performance (using backpropagation and stochastic gradient descent to constantly tweak the model's weights and biases).
  3. The AI model seeks to make the "right" or optimal decisions, as framed by the AI Utility Function, as it interacts with its environment.

Bottom-line: the AI model seeks to maximize "rewards" based upon the definitions of "value" as articulated in the AI Utility Function (Figure 2).

Figure 2: "Will AI Force Humans to Become More Human?"

To create a rational AI model that understands how to make the appropriate decisions, the AI programmer must collaborate with a diverse cohort of stakeholders to define a wide range of sometimes-conflicting value dimensions that comprise the AI Utility Function.  For example, increase financial value, while reducing operational costs and risks, while improving customer satisfaction and likelihood to recommend, while improving societal value and quality of life, while reducing environmental impact and carbon footprint.

Defining the AI Utility Function is critical because as much credit as we want to give AI systems, they are basically dumb systems that will seek to optimize around the variables and metrics (the AI Utility Function) that are given to them.

To summarize, the AI model’s competence to take “intelligent” actions is based upon “value” as defined by the AI Utility Function.
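
To make this concrete, here is a hedged, minimal sketch of what a multi-dimensional AI Utility Function could look like in code; the value dimensions, weights, and KPI values are illustrative, not a prescribed formula.

```python
# Minimal sketch: an AI Utility Function as a weighted blend of competing value dimensions.
# The model is "rewarded" for financial and customer value and penalized for cost,
# risk, and environmental impact; the weights encode stakeholder priorities.
WEIGHTS = {
    "financial_value": 0.30,
    "operational_cost": -0.20,
    "operational_risk": -0.15,
    "customer_satisfaction": 0.20,
    "environmental_impact": -0.15,
}

def utility(kpis: dict) -> float:
    """Score a candidate decision from its KPI outcomes (all KPIs normalized to 0..1)."""
    return sum(WEIGHTS[k] * kpis.get(k, 0.0) for k in WEIGHTS)

decision_a = {"financial_value": 0.9, "operational_cost": 0.7, "operational_risk": 0.4,
              "customer_satisfaction": 0.5, "environmental_impact": 0.6}
decision_b = {"financial_value": 0.6, "operational_cost": 0.3, "operational_risk": 0.2,
              "customer_satisfaction": 0.8, "environmental_impact": 0.2}
print(utility(decision_a), utility(decision_b))  # the model prefers the higher-utility option
```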

One of the biggest challenges in AI Ethics has nothing to do with the AI technology and has everything to do with Confirmation Bias.  AI model Confirmation Bias is the tendency for an AI model to identify, interpret, and present recommendations in a way that confirms or supports the AI model’s preexisting assumptions. AI model confirmation bias feeds upon itself, creating an echo chamber effect with respect to the biased data that continuously feeds the AI models. As a result, the AI model continues to target the same customers and the same activities thereby continuously reinforcing preexisting AI model biases.

As discussed in Cathy O'Neil's book "Weapons of Math Destruction", the confirmation biases built into many of the AI models used to approve loans, hire job applicants, and accept university admissions are yielding unintended consequences that severely impact individuals and society. Creating AI models that overcome confirmation bias takes upfront work and some creativity. That work starts by 1) understanding the costs associated with the AI model's False Positives and False Negatives and 2) building a feedback loop (instrumenting) so that the AI model continuously learns and adapts from its False Positives and False Negatives.

Instrumenting and measuring False Positives – the job applicant you should not have hired, the student you should not have admitted, the consumer you should not have given the loan – is fairly easy, because there are operational systems to identify and understand the ramifications of those decisions. The challenge is identifying the ramifications of the False Negatives.

“In order to create AI models that can overcome model bias, organizations must address the False Negatives measurement challenge. Organizations must be able to 1) track and measure False Negatives to 2) facilitate the continuously-learning and adapting AI models that mitigates AI model biases.”

The instrumenting and measuring of False Negatives – the job applicants you did not hire, the student applicants you did not admit, the customers to whom you did not grant a loan – is hard, but possible. Think about how an AI model learns: you label the decision outcomes (success or failure), and the AI model continuously adjusts the variables that are predictors of those outcomes. If you don't feed the model's False Positives and False Negatives back to it, then the model never learns, never adapts, and in the long run misses market opportunities. See my blog "Ethical AI, Monetizing False Negatives and Growing Total Address" for more details.
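
As a minimal illustration of my own (using scikit-learn), here is how the False Positive and False Negative counts needed to instrument such a feedback loop can be pulled from a confusion matrix; the labels are invented.

```python
# Minimal sketch: pull False Positives and False Negatives out of a confusion matrix
# so they can be logged and fed back into model retraining.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # illustrative outcomes (e.g., "should have hired")
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]   # model decisions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"False Positives: {fp}, False Negatives: {fn}")
# False Positives -> decisions you made and should not have (easy to observe downstream)
# False Negatives -> opportunities you rejected (hard to observe; needs deliberate tracking)
```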

Organizations need a guide for creating AI models that are transparent, unbiased, continuously-learn and adapt, and exist in support of societal laws and norms.  That’s the role of the Ethical AI Application Pyramid (Figure 3).

Figure 3: The Ethical AI Application Pyramid

The Ethical AI Application Pyramid embraces the aspirations of Responsible AI by ensuring the ethical, transparent, and accountable application of AI models in a manner consistent with user expectations, organizational values and societal laws and norms.  See my blog “The Ethical AI Application Pyramid” for more details about the Ethical AI Application Pyramid.

AI run amuck is a favorite movie topic.  Let’s review a few of them (each of these is on my rewatchable list):

  • Eagle Eye: An AI super brain (ARIIA) uses Big Data and IOT to nefariously influence humans’ decisions and actions.
  • I, Robot: Way cool looking autonomous robots continuously learn and evolve empowered by a cloud-based AI overlord (VIKI).
  • The Terminator: An autonomous human killing machine stays true to its AI Utility Function in seeking out and killing a specific human target, no matter the unintended consequences.
  • Colossus: The Forbin Project: An American AI supercomputer learns to collaborate with a Russian AI supercomputer to protect humans from killing themselves, much to the chagrin of humans who are intent on killing themselves.
  • War Games: The WOPR (War Operation Plan Response) AI system learns through game playing that the only smart nuclear war strategy is “not to play”.
  • 2001: The AI-powered HAL supercomputer optimizes its AI Utility Function to accomplish its prime directive, again no matter the unintended consequences.

There are some common patterns in these movies – that AI models will seek to optimize their AI Utility Function (their prime directive) no matter the unintended consequences. 

But here’s the real-world AI challenge: the AI models will not be perfect out of the box.  The AI models, and their human counterparts, will need time to learn and adapt. Will people be patient enough to allow the AI models to learn? And do normal folks understand that AI models are never 100% accurate and while they will improve over time with more data, models can also drift over time as the environment in which they are working changes? And are we building “transparent” models so folks can understand the rationale behind the recommendations that AI models make?

These questions are the reason why Data and AI literacy education must be a top priority if AI is going to reach its potential…without the unintended consequences…

Source Prolead brokers usa

Opportunities and Risks of Foundation Models

This month, the Center for Research on Foundation Models at Stanford University published an insightful paper called On the Opportunities and Risks of Foundation Models.

From the abstract (emphasis mine)

  • AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.
  • The paper calls these models foundation models to underscore their critically central yet incomplete character.
  • The paper provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations).
  • Though foundation models are based on conventional deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization.
  • Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream.
  • Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties.

To expand further, here are my notes and comments from the (long!) paper:

  • Foundation models have the potential to accentuate harms, and their characteristics are in some ways poorly understood.
  • Foundation models are enabled by transfer learning and scale. They will drive the next wave of developments in NLP based on BERT, RoBERTa, BART, GPT and ELMo.
  • But the impact of foundation models will extend beyond NLP itself.
  • The report is divided into four parts: capabilities, applications, technology, and society.

Capabilities

  • The report looks at five potential capabilities of foundation models, including the ability to process different modalities (e.g., language, vision), to affect the physical world (robotics), to perform reasoning, and to interact with humans.
  • The paper explores the underlying architecture behind foundation models and identifies five key attributes: expressivity of the computational model, scalability, multimodality, memory capacity, and compositionality.
  • The ecosystem surrounding foundation models requires a multi-faceted approach:

More compute-efficient models, hardware, and energy grids may all mitigate the carbon burden of these models. Environmental cost should be a clear factor that informs how foundation models are evaluated, so that they can be more comprehensively juxtaposed with more environment-friendly baselines.

  • The cost-benefit analysis surrounding environmental impact necessitates greater documentation and measurement across the community.

Language

  • The study of foundation models has led to many new research directions for the community, including understanding generation as a fundamental aspect of language and studying how to best use and understand foundation models.
  • The researchers also examined whether foundation models can satisfactorily encompass linguistic variation and diversity, and looked for ways to draw on human language learning dynamics.

Vision

  • In the longer-term, the potential for foundation models to reduce dependence on explicit annotations may lead to progress on essential cognitive skills which have proven difficult in the current paradigm.

Robotics

  • Using strategies based on transfer learning, robots can learn new but similar tasks through foundation models, enabling generalist behaviour.

Reasoning and search

  • Foundation models should play a central role in general reasoning as vehicles for tapping into the statistical regularities of unbounded search spaces (generativity) and exploiting the grounding of knowledge in multi-modal environments (grounding).
  • Researchers have applied these language model-based approaches to various applications, such as predicting protein structures and proving formal theorems. Foundation models offer a generic way of modeling the output space as a sequence.

Applications

  • Foundation models can act as a central store of medical knowledge, trained on diverse sources and modalities of medical data.
  • For example, a model trained on natural language could be adapted for protein fold prediction.
  • Pretrained models can also help lawyers conduct legal research, draft legal language, or assess how judges evaluate their claims. A minimal adaptation sketch follows this list.
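
As a hedged, minimal sketch of the adaptation idea, here is how a pretrained language model can be reused for a downstream task via the Hugging Face transformers pipeline API; the task and example texts are illustrative and not taken from the Stanford report.

```python
# Minimal sketch: reuse a pretrained language model for a downstream task
# without training from scratch (downloads a default pretrained model on first run).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

# Illustrative legal-flavoured texts, scored by the adapted model.
reviews = [
    "The contract terms were clear and favourable.",
    "The judge's reasoning was impossible to follow.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(review, "->", result["label"], round(result["score"], 3))
```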

Technology

  • The emerging paradigm of foundation models has attained impressive achievements in AI over the last few years.
  • The paper identifies and discusses five properties – expressivity, scalability, multimodality, memory capacity, and compositionality – that the authors believe are essential for a foundation model to be successful.

 

Society

Finally, the impact on society will be most profound.

  • The paper asks what fairness-related harms relate to foundation models, what sources are responsible for these harms, and how we can intervene to address them.
  • The issues relate to broader questions of algorithmic fairness and AI ethics – but foundation models are at the forefront of impact and scale
  • People can be underrepresented or entirely erased, e.g., when LGBTQ+ identity terms are excluded in training data.
  • The relationship between the training data and the intrinsic biases acquired by the foundation model remains unclear. Establishing scaling laws for bias, akin to those for accuracy metrics, may enable systematic study at smaller scales to inform data practices at larger scales.
  • Foundation models will allow for the creation of content that is indistinguishable from content created by humans – which poses risks
  • Also, even seemingly minuscule decisions, like reducing the number of layers a model has, may lead to significant environmental cost reductions at scale.
  • Even if foundation models increase average productivity or income, there is no economic law that guarantees everyone will benefit because not all tasks will be affected to the same extent.

Finally, a view that I very much agree with:

The widespread adoption of foundation models poses ethical, social, and political challenges.

OpenAI’s GPT-3 was at least partly an experiment in scale, showing that major gains could be achieved by scaling up the model size, amount of data, and training time, without major modelling innovations. If scale does turn out to be critical to success, the organizations most capable of producing competitive foundation models will be the most well-resourced: venture-funded start-ups, already-dominant tech giants, and state governments.

To conclude, this is a must-read paper from a number of perspectives: ethics, the future of AI, foundation models, and more.

Source Prolead brokers usa

Data Science Trends of the Future 2022

Photo credit: Unsplash.

Data Science is an exciting field for knowledge workers because it increasingly intersects with the future of how industries, society, governance and policy will function. While it’s one of those vague terms thrown around a lot for students, it’s actually fairly simple to define.

Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data, and apply knowledge and actionable insights from data across a broad range of application domains. Data science is thus related to an explosion of Big Data and optimizing it for human progress, machine learning and AI systems.

I’m not an expert in the field by any means, just a futurist analyst, and what I see is an explosion in data science jobs globally and new talent getting into the field, people who will build the companies of tomorrow. Many of those jobs will actually be in companies that do not exist yet in South and South-East Asia and China.

Data science is thus where science meets AI, a holy grail for career aspirants and students alike. Data science continues to evolve as one of the most promising and in-demand career paths for skilled professionals. Today, successful data professionals understand that they must advance past the traditional skills of analyzing large amounts of data, data mining, and programming.

This article will attempt to outline a brief overview of some of what’s going on and is by no means exhaustive or a final take on the topic. It’s also going to focus more on policy futurism rather than technical aspects of data science, since those are readily covered in our other articles on an on-going basis.

Augmented Data Management

In a future AI-human hybrid workforce, how people deal with data will be more integrated. Gartner sees this as a pervasive trend. For example, augmented data management uses ML and AI techniques to optimize and improve operations. It also converts metadata from being used in auditing, lineage and reporting to powerful dynamic systems.

Essentially augmented data management will enable active metadata to simplify and consolidate architectures and also increase automation in redundant data management tasks. As Big Data optimization takes place, automation will become more possible in several human fields, reducing task loads and creating AI-human architectures of human activity.

Hybrid Forms of Automation

Automation is one of those buzzwords, but things like RPA are really moving quite swiftly in the evolution of software. To put it another way, in terms of data, this has a predictable path to optimization. High-value business outcomes start with high data quality. But with the scale and complexity of modern data, the only way to truly harness its value is to automate the process of data discovery, preparation, and the blending of disparate data.

This digitally transforms the very way industries function and do their business. You can see this in nearly every sector where efficiency with data is key. Industries such as manufacturing, retail, financial services and travel and hospitality are benefiting from this trend. The retail industry, for example, has undergone multiple pivots in recent years. What happens at the intersection of consumer behavior with something like DeFi and the future of data on the blockchain?

Automation produces human convenience from the consumer side, but also new machine learning systems that become more important in certain industries. The rise of E-commerce, video streaming, FinTech and tons of other meta-trends in business all depend upon this kind of automation and data-optimization processes.

Scalable AI

As data science evolves, AI and machine learning begin to influence every sector. According to Nvidia, there are around 12,000 AI startups in the world. This is important to recognize in the 2020s: there is an explosion of AI potential that will lead to scalable AI, and to behavior modification at scale as humans adjust to this new reality.

The new kinds of capitalism around scalable AI can be called augmented surveillance capitalism. This is important because the way we relate to data is transformative. The Internet of Things becomes embedded in all human systems and activities.

Big Data and “small data” unite and as AI integrates with many new and old aspects of society, something new emerges in how we are able to monitor the flow of data in real time and predict for outcomes instantaneously. It doesn’t just create a more connected society, but a living lens into the data of everything that also stimulates innovation, new companies and, potentially, incredible economic growth. Scalable AI is one of the reasons students get into data science. They realize the end game is beautiful and does improve human existence.

Augmented Consumer Interfaces

When I say ACI, I mean how the consumer processes data while shopping in the future, either online or in a physical store. In the future, it's highly unlikely you will be served by human staff. Instead, there will be an AI agent that you relate to, or a new form of interface for the exchange, such as buying and reviewing products in VR, getting a spoken product synopsis in your earbuds, or other augmented consumer interfaces.

The rise of the augmented consumer takes many forms, from AR on mobile to new methods of potential communication such as a Brain-Computer Interface (BCI). All of these interfaces have consumer, retail and global implications for the future of AI in consumerism.

As companies like Facebook, Microsoft and Amazon race to create a metaverse for the workplace or retail space, even what we think of Zoom meetings or E-commerce interfaces of today will be replaced by new augmented consumer interfaces. Things like a video meeting or an E-commerce platform front page may become outdated. This is also because the data they provide may be sub-optimal for the future of work productivity and data based consumerism.

Essentially VR, IoT, BCI, AR, AI speakers, AI agents, chatbots and so forth all evolve into a new paradigm of augmented consumer interfaces where artificial intelligence is the likely intermediary and people live in the real world but also in a corporate and retail metaverse with different layers. As Facebook tries to bring the workplace into its conception of the metaverse, other companies will create new interfaces for ACI efficiency.

Amazon recently announced that it is planning to open large physical retail stores in the United States that will function as department stores and sell a variety of goods, including clothes, household items, and electronics. Yet you can expect Amazon's various physical spaces to capture more consumer data to incentivize purchasing, or even to be completely automated, as in its famously small Amazon Go stores. A more seamless shopping experience creates new expectations for the consumer that finally lead to new augmented consumer interfaces, which will simply become the new normal.

AI-As-a-Service Platforms

With data science growing and machine learning evolving, more B2B and AI-as-a-Service platforms and services will now become possible. This will gradually help democratize artificial intelligence expertise and capabilities, so even smaller entrepreneurs will have access to incredible tools.

Platforms like Shopify, Square, Lightspeed, and others are heading in this direction to enable new small businesses, optimized with AI, to grow faster. Meanwhile, more big technology firms are entering the B2B market with their own spin on AI products that other businesses might need.

China's ByteDance recently opened BytePlus, which enables a wide variety of intelligent platform services for other businesses. Market leaders like Google, Baidu, Microsoft, and Amazon in the cloud have a significant ability to produce AI-as-a-Service at scale for customers in new ways that keep evolving with the needs of industry. The progress of NLP-as-a-Service firms is another example of this movement.

As I see it, AI-as-a-Service platform growth in the 2020s is one of the biggest growth curves in data science for years to come.

The Democratization of AI

As data science talent becomes more common in highly populated countries, a slight re-balancing of the business benefits takes place across more countries. The democratization of AI will take many decades, but eventually data science will be more equally distributed around the world, leading to more social equality, wealth equality, access to business and economic opportunities, and AI for Good. We are, however, a long way from this goal.

Wages for data science and machine learning knowledge workers are vastly different in different regions of the planet. A company in Africa or South America may not have the same access to data capabilities, AI and talent as one in “more developed” regions of the Earth. This however will slowly change.

Democratization is the idea that everyone gets the opportunities and benefits of a particular resource. AI is currently fairly centralized; however, ideas of decentralization, especially in finance, crypto and blockchain, may trickle down into how AI is eventually managed and distributed more fairly. AI as a tool is a resource of significant national importance, and not all countries have similar per capita budgets to invest in it, in its R&D, and in producing companies that harness it best.

Yet for humanity, the democratization of AI is one of the most important collective goals of the 21st century. It is pivotal if we want to live in a world where social justice, wealth equality and opportunity for all matter in fields such as healthcare, education, commerce and fundamental human rights in an increasingly technological and data-driven world.

The most basic element of the democratization of AI, however, is that anybody, anywhere, can become a student of data science and has a path to becoming a programmer or a knowledge worker who works with data, machine learning, AI and related disciplines. In this sense, it all starts with access to education.

Improved Data Regulation By Design

If data science is fueling a world full of data, analytics, predictive analytics and Big Data optimization, then the way we handle data needs to improve, and that means better cybersecurity, stronger data privacy protection and a whole range of related safeguards.

China's Big Tech regulatory crackdown also has to do with better data regulation. For instance, in August 2021 China passed the Personal Information Protection Law (PIPL), which lays out for the first time a comprehensive set of rules around data collection.

In an age of augmented surveillance capitalism, human rights need to be protected by design, and that means better protection of consumer data, especially as AI moves into healthcare, sensitive EMR and patient data. Overall, the trend of data privacy by design encourages a safer and more proactive approach to collecting and handling user data while training machine learning models on it.
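
As a rough illustration of what privacy by design can mean in practice before any model training begins, here is a minimal Python sketch that drops direct identifiers and replaces them with a salted, hashed key. The column names, salt handling and hashing choice are illustrative assumptions, not a compliance recipe.

```python
# A small "privacy by design" step before training: remove direct identifiers
# and derive a stable pseudonymous key with a salted hash, so records can still
# be joined without exposing who they belong to. Columns and salt are examples.
import hashlib
import pandas as pd

SALT = "replace-with-a-secret-salt"  # keep out of source control in practice

def pseudonymize(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["patient_key"] = df["patient_id"].astype(str).apply(
        lambda pid: hashlib.sha256((SALT + pid).encode()).hexdigest()
    )
    # Drop direct identifiers before the data reaches any training pipeline.
    return df.drop(columns=["patient_id", "name", "email"])

records = pd.DataFrame({
    "patient_id": [101, 102],
    "name": ["Ada", "Grace"],
    "email": ["ada@example.com", "grace@example.com"],
    "blood_pressure": [120, 135],
})
print(pseudonymize(records))
```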

How we move to and build in the cloud also needs scrutiny from a policy and regulation standpoint. Currently, data science is moving faster than we can regulate data privacy and ensure that the rights of individuals, consumers and patients are respected in the process. Even as data science and Big Data explode, the rule of law needs to be maintained; otherwise, our AI systems and data architectures could lead to rather drastic consequences.

Top Programming Language of Data Science is Still Python

Even though I do not code, I'm pretty interested in how programming languages are used and how they evolve. Python continues to be the de facto winner here. Data science and machine learning professionals have driven adoption of the Python programming language.

Python's libraries, community and online support system are incredible to behold and show how data science is a global community of learners and practitioners. This fosters the collaborative spirit of the internet toward improved data and AI systems in society.

Python, as such, is not just a tool; it's also a culture. Python comes stacked with integrations for numerous programming languages and libraries and is thus the likely entry point into data science and the AI world as a whole. What we have to realize is that this programming culture is agile but also highly collaborative, making it incrementally easier for new programmers and data science talent to emerge.
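
For readers new to the ecosystem, a minimal sketch using the widely adopted scikit-learn library and its built-in Iris dataset shows how few lines separate raw data from a trained, evaluated model; the specific model and dataset are just convenient choices for illustration.

```python
# A small taste of why Python dominates data science: a few battle-tested
# libraries cover the whole loop from data to trained model in a handful of lines.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Swap in your own tabular data and the same few calls still apply, which is a large part of why the language keeps attracting newcomers.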

Cloud Computing with Exponential AI

It is hard to overstate how much the move of companies into the cloud is accelerating the emergence of the AI revolution in the 21st century. It is, frankly, leading to an astonishing expansion and improvement in the services offered by cloud computing providers.

Think of the impact of the AWS Marketplace and its equivalents from Azure, Google, Alibaba, Huawei and others, and you get some sense of how closely cloud computing and machine learning are intertwined.

This creates countless jobs, adds value and harnesses the power of data science, machine learning and Big Data for businesses all around the world. The intersection of the cloud, data science and AI is truly not just a business but a technological convergence point, its own kind of micro-singularity if you will.

The depth of the features offered by AWS and Azure keeps expanding, and a provider like Google Cloud could eventually harness quantum computing for a whole new era of data science, AI, predictive analytics and value creation.

Cloud automation, hybrid cloud solutions, edge intelligence and so much of the technical work happening in data science today take place in the cloud and its happy marriage with AI and data science services. The increased use of NLP in business, quantum computing at scale, next-generation reinforcement learning – it's all possible because the cloud is evolving at such a rapid pace.
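
As a small illustration of how routine the cloud-plus-ML pairing has become, here is a minimal sketch using the boto3 SDK to list recent SageMaker training jobs, assuming AWS credentials are already configured; the region and result count are arbitrary choices for the example.

```python
# Minimal sketch: with credentials configured, a few lines of boto3 surface the
# most recent SageMaker training jobs in an account. Region is an assumption.
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

response = sagemaker.list_training_jobs(
    MaxResults=10, SortBy="CreationTime", SortOrder="Descending"
)
for job in response["TrainingJobSummaries"]:
    print(job["TrainingJobName"], "-", job["TrainingJobStatus"])
```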

The Analytics Revolution

For data to become truly ubiquitous in society and business, one thing is needed: the automation and accessibility that become possible when analytics is a core business function. Better data and analytics mean better business decisions. This is roughly what Square enables for small businesses, with augmented analytics at the point of sale and real-time analysis of a small business's finances, supporting long-term survival and faster adaptation than would be possible without that data, analytics and insight.
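
To make that concrete, here is a tiny sketch of turning raw point-of-sale records into a daily revenue view a merchant can act on; the data is made up and the aggregation is deliberately simple.

```python
# A tiny illustration of "analytics as a core business function": raw
# point-of-sale records become a daily revenue and transaction-count view.
import pandas as pd

sales = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2021-08-01 09:15", "2021-08-01 13:40",
        "2021-08-02 10:05", "2021-08-02 17:30",
    ]),
    "item": ["coffee", "sandwich", "coffee", "salad"],
    "amount": [3.50, 8.00, 3.50, 9.25],
})

daily = sales.set_index("timestamp").resample("D")["amount"].agg(["sum", "count"])
daily.columns = ["revenue", "transactions"]
print(daily)
```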

The analytics revolution allows data to become actionable in society in real time and unlocks the true value of data science for merchants, businesses, smart cities, countries and public institutions at scale. Imagine if our healthcare, education and governance systems all embodied analytics and data the way our entertainment, e-commerce and mobile systems do today. Imagine the human good that would accomplish.

When data analytics becomes a core part of a business, it unlocks gains at every stage of the business cycle. Many businesses that once approached analytics as a nice-to-have support function are now gradually embracing it as mission critical. This is what FinTech can do for consumers in a way that traditional banks cannot, and that ultimately makes it a disruptive force. The analytics revolution is data science at work in society, with AI driving new value chains and optimizing existing ones.

The Impact of Natural Language Processing in the Future of Data Science

In 2021 there has been a lot of buzz about NLP systems at scale. It's not just happening at OpenAI with GPT-3; it's occurring all over the world. NLP and conversational analytics will also one day allow AI to be more human-like, and this opens the door to a more living world of machine learning that's not just routine algorithms, but more personalized and humanized.

Language is the key for people, so NLP will make AI more lifelike. Your smart speaker and smart assistant, chatbots and automated customer service are only going to get smarter. The smart OS depicted in films such as Her will soon be closer to reality. AI as a form of human companionship will exist within our lifetimes, probably a lot sooner.

NLP has hugely helped us progress into an era where computers and humans can communicate in a common natural language, enabling a constant and fluent conversation between the two. Voice-based search is becoming more common, smart appliances have voice options, and IoT and NLP will increasingly converge. The future won't be about redundant chatbots and digital assistants you never use, but about actionable NLP in the real world that actually works.
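
As a small taste of how accessible this has become, here is a minimal sketch using the open-source Hugging Face Transformers library's pretrained sentiment pipeline to turn free-form customer feedback into structured labels; the library choice and example sentences are my own for illustration.

```python
# A concrete NLP example: a pretrained sentiment pipeline turns free-form
# customer feedback into structured labels. The model downloads on first run.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

feedback = [
    "The voice assistant understood me on the first try.",
    "The chatbot kept repeating itself and never solved my issue.",
]
for text, result in zip(feedback, classifier(feedback)):
    print(f"{result['label']} ({result['score']:.2f}): {text}")
```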

While I believe using an AI assistant to help you code is still a bit far-fetched, it's interesting to see what Microsoft is doing with OpenAI and GitHub. OpenAI's Codex is a good water-cooler topic, and the AI-human hybrid buddy system may eventually be coming to the future of coding and programming. One thing is certain in 2021: NLP in data science is a hot topic, brimming with potential for knowledge workers and keen entrepreneurs.

When Microsoft acquired Nuance, you knew NLP was coming to healthcare at scale. Nuance is a pioneer and a leading provider of conversational AI and cloud-based ambient clinical intelligence for healthcare providers. The synergies between NLP companies and the cloud are obvious.

The application of NLP in the cloud and in society is one of the greatest explosions of AI entering the human world we may ever see. What happens to society when AI becomes more human-like and capable of conversation? At Tesla's AI Day, the company declared it was building a humanoid AI called Optimus; I suppose it will have pretty sophisticated NLP capabilities.

Conclusion and Future Ideals of the Impact of Data Science 

I hope you enjoyed the article. I have tried to bring a few fresh points of view to the wonderful future of data science, and to highlight how mission critical it is for a new wave of talented knowledge workers to enter data science, programming and machine learning to help transform human systems and uplift the quality of life for all of us on planet Earth.
