
AI and Efficiency

We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore’s Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.


Algorithmic improvement is a key factor driving the advance of AI. It’s important to search for measures that shed light on overall algorithmic progress, even though it’s harder than measuring such trends in compute.

[Chart: 44x less compute required to get to AlexNet performance 7 years later. Total amount of compute in teraflops/s-days used to train to AlexNet-level performance; lowest compute points at any given time shown in blue, all measured points shown in gray.]


Measuring efficiency

Algorithmic efficiency can be defined as reducing the compute needed to train a specific capability. Efficiency is the primary way we measure algorithmic progress on classic computer science problems like sorting. Efficiency gains on traditional problems like sorting are more straightforward to measure than in ML because they have a clearer measure of task difficulty. However, we can apply the efficiency lens to machine learning by holding performance constant. Efficiency trends can be compared across domains like DNA sequencing (10-month doubling), solar energy (6-year doubling), and transistor density (2-year doubling).
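The doubling-time figures quoted here follow from simple arithmetic. A minimal sketch, assuming improvement is a constant exponential (which the data only approximately supports):

```python
import math

def doubling_time_months(improvement_factor: float, elapsed_months: float) -> float:
    """Months per 2x efficiency gain, assuming a constant exponential rate."""
    return elapsed_months / math.log2(improvement_factor)

# 44x less compute to reach AlexNet-level performance over ~7 years:
print(f"{doubling_time_months(44.0, 7 * 12):.1f}-month doubling")  # ~15.4, i.e. roughly 16 months
```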

For our analysis, we primarily leveraged open-source re-implementations to measure progress on AlexNet level performance over a long horizon. We saw a similar rate of training efficiency improvement for ResNet-50 level performance on ImageNet (17-month doubling time). We saw faster rates of improvement over shorter timescales in Translation, Go, and Dota 2:

  1. Within translation, the Transformer surpassed seq2seq performance on English to French translation on WMT’14 with 61x less training compute 3 years later.
  2. We estimate AlphaZero took 8x less compute to get to AlphaGoZero level performance 1 year later.
  3. OpenAI Five Rerun required 5x less training compute to surpass OpenAI Five (which beat the world champions, OG) 3 months later.
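The implied doubling times for these three results can be back-computed from the quoted factors and timescales (a rough sketch; it assumes constant exponential improvement over each interval):

```python
import math

def doubling_time_months(improvement_factor, elapsed_months):
    """Months per 2x compute-efficiency gain over the interval."""
    return elapsed_months / math.log2(improvement_factor)

results = [
    ("Transformer vs. seq2seq", 61, 36),    # 61x less compute, 3 years
    ("AlphaZero vs. AlphaGoZero", 8, 12),   # 8x less compute, 1 year
    ("OpenAI Five Rerun vs. Five", 5, 3),   # 5x less compute, 3 months
]
for name, factor, months in results:
    print(f"{name}: ~{doubling_time_months(factor, months):.1f}-month doubling")
```

All three come out well under the ~16-month ImageNet figure, matching the observation that progress was faster on these shorter timescales.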

It can be helpful to think of compute in 2012 not being equal to compute in 2019 in a similar way that dollars need to be inflation-adjusted over time. A fixed amount of compute could accomplish more in 2019 than in 2012. One way to think about this is that some types of AI research progress in two stages, similar to the “tick tock” model of development seen in semiconductors; new capabilities (the “tick”) typically require a significant amount of compute expenditure to obtain, then refined versions of those capabilities (the “tock”) become much more efficient to deploy due to process improvements.
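The inflation analogy can be made concrete with a compute "deflator". A sketch, using the ~16-month doubling time measured above (a rounded figure, so the 7-year result lands near, rather than exactly at, the empirical 44x):

```python
def effective_compute(raw_compute, months_elapsed, doubling_months=16):
    """Convert a raw compute budget into 'efficiency-adjusted' units,
    analogous to inflation-adjusting dollars across years."""
    return raw_compute * 2 ** (months_elapsed / doubling_months)

# A fixed budget buys ~38x more effective training compute after 7 years:
print(f"{effective_compute(1.0, 7 * 12):.1f}x")
```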

Increases in algorithmic efficiency allow researchers to do more experiments of interest in a given amount of time and money. In addition to being a measure of overall progress, algorithmic efficiency gains speed up future AI research in a way that’s somewhat analogous to having more compute.

Other measures of AI progress

In addition to efficiency, many other measures shed light on overall algorithmic progress in AI. Training cost in dollars is related, but less narrowly focused on algorithmic progress because it’s also affected by improvement in the underlying hardware, hardware utilization, and cloud infrastructure. Sample efficiency is key when we’re in a low data regime, which is the case for many tasks of interest. The ability to train models faster also speeds up research and can be thought of as a measure of the parallelizability of learning capabilities of interest. We also find increases in inference efficiency in terms of GPU time, parameters, and FLOPs meaningful, but mostly as a result of their economic implications rather than their effect on future research progress. ShuffleNet achieved AlexNet-level performance with an 18x inference efficiency increase in 5 years (15-month doubling time), which suggests that training efficiency and inference efficiency might improve at similar rates. The creation of datasets/environments/benchmarks is a powerful method of making specific AI capabilities of interest more measurable.

Primary limitations

  1. We have only a small number of algorithmic efficiency data points on a few tasks. It’s unclear the degree to which the efficiency trends we’ve observed generalize to other AI tasks. Systematic measurement could make it clear whether an algorithmic equivalent to Moore’s Law in the domain of AI exists, and if it exists, clarify its nature. We consider this a highly interesting open question. We suspect we’re more likely to observe similar rates of efficiency progress on similar tasks. By similar tasks, we mean tasks within these sub-domains of AI, on which the field agrees we’ve seen substantial progress, and that have comparable levels of investment (compute and/or researcher time).
  2. Even though we believe AlexNet represented a lot of progress, this analysis doesn’t attempt to quantify that progress. More generally, the first time a capability is created, algorithmic breakthroughs may have reduced the resources required from totally infeasible to merely high. We think new capabilities generally represent a larger share of overall conceptual progress than observed efficiency increases of the type shown here.
  3. This analysis focuses on the final training run cost for an optimized model rather than total development costs. Some algorithmic improvements make it easier to train a model by making the space of hyperparameters that will train stably and get good final performance much larger. On the other hand, architecture searches increase the gap between the final training run cost and total training costs.
  4. We don’t speculate on the degree to which we expect efficiency trends will extrapolate in time, we merely present our results and discuss the implications if the trends persist.

Measurement and AI policy

We believe that policymaking related to AI will be improved by a greater focus on the measurement and assessment of AI systems, both in terms of technical attributes and societal impact. We think such measurement initiatives can shed light on important questions in policy; our AI and Compute analysis suggests policymakers should increase funding for compute resources for academia, so that academic research can replicate, reproduce, and extend industry research. This efficiency analysis suggests that policymakers could develop accurate intuitions about the cost of deploying AI capabilities—and how these costs are going to alter over time—by more closely assessing the rate of improvements in efficiency for AI systems.

Tracking efficiency going forward

If large-scale compute continues to be important to achieving state-of-the-art (SOTA) overall performance in domains like language and games, then it’s important to put effort into measuring notable progress achieved with smaller amounts of compute (contributions often made by academic institutions). Models that achieve state-of-the-art training efficiency on meaningful capabilities are promising candidates for scaling up and potentially achieving overall top performance. Additionally, tracking algorithmic efficiency improvements is straightforward, since they are just a particularly meaningful slice of the learning curves that all experiments generate.

We also think that measuring long run trends in efficiency SOTAs will help paint a quantitative picture of overall algorithmic progress. We observe that hardware and algorithmic efficiency gains are multiplicative and can be on a similar scale over meaningful horizons, which suggests that a good model of AI progress should integrate measures from both.
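Because the two kinds of gains are multiplicative, the combined improvement over 2012–2019 is the product of the hardware and algorithmic factors quoted in this post:

```python
hardware_gain = 11     # Moore's Law cost improvement over 2012-2019 (from this post)
algorithmic_gain = 44  # training-efficiency improvement to AlexNet level
print(f"Combined cost reduction: ~{hardware_gain * algorithmic_gain}x")  # ~484x
```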

Our results suggest that for AI tasks with high levels of investment (researcher time and/or compute) algorithmic efficiency might outpace gains from hardware efficiency (Moore’s Law). Moore’s Law was coined in 1965 when integrated circuits had a mere 64 transistors (6 doublings) and naively extrapolating it out predicted personal computers and smartphones (an iPhone 11 has 8.5 billion transistors). If we observe decades of exponential improvement in the algorithmic efficiency of AI, what might it lead to? We’re not sure. That these results make us ask this question is a modest update for us towards a future with powerful AI services and technology.

For all these reasons, we’re going to start tracking efficiency SOTAs publicly. We’ll start with vision and translation efficiency benchmarks (ImageNet and WMT14), and we’ll consider adding more benchmarks over time. We believe there are efficiency SOTAs on these benchmarks we’re unaware of and encourage the research community to submit them here (we’ll give credit to original authors and collaborators).

Industry leaders, policymakers, economists, and potential researchers are all trying to better understand AI progress and decide how much attention they should invest and where to direct it. Measurement efforts can help ground such decisions. If you’re interested in this type of work, consider applying to work at OpenAI’s Foresight or Policy team!


Source: https://openai.com/blog/ai-and-efficiency/


A Quick Guide to Conversational AI and Its Working Process


Customer support is an integral part of every business; without offering support services, it is difficult to achieve maximum customer satisfaction. To ensure this, businesses hire professionals who work round the clock to deliver support services. Yet no matter how efficiently a business handles this segment, it may still face problems such as delays in responding to customers’ queries or making customers wait to connect with support professionals.

Conversational AI is a perfect solution to this common challenge, which manufacturing, FMCG, retail, e-commerce, and other industries are all facing. Never heard of the term?

Well, it is the latest trend in almost all industries that already use artificial intelligence technology or want to adopt it in their business operations. Let’s read about it in detail.

What is Conversational AI?

Conversational AI is a specific kind of artificial intelligence that makes software interact and converse with users in a highly intuitive way. It uses natural language processing (NLP) to understand and interact in human language.

Conversational AI, when integrated into chatbots, messengers, and voice assistants, enables businesses to deliver a personalized customer experience and achieve high customer satisfaction. Google Home and Amazon Echo are two popular examples.

Applications of Conversational AI

Conversational AI is a new, automated way of offering customer support services. Healthcare, adtech, logistics, insurance, travel, hospitality, finance, and other industries are using the technology in the following areas:

Messaging applications

Conversational AI can be used in a messaging application to offer personalized support services through chat. Your customers can choose the “chat support” option and talk to the chatbot to get the support.

Speech-based virtual assistants

A conversational AI can use speech recognition technology so that you can offer a speech-based virtual assistant for your customers. These are the type of chatbots where users can get any information through voice commands.

Virtual customer assistants

This type of conversational AI helps in offering online support to customers; you can develop the same to offer support through Web, SMS, messaging applications, and more.

Virtual personal assistants

A virtual personal assistant, powered by conversational AI, minimizes the need of hiring a huge team to offer dedicated support services to each of your customers.

Virtual employee assistants

Employees working in big organizations might need various types of assistance. A conversational AI used to build a virtual employee assistant can be the single point of contact for all such employees. They can find the required information just by interacting with that assistant.

Robotic process automation

Robotic process automation that uses the potential of conversational AI helps a machine understand human conversations and their intent in order to perform automated tasks.

How conversational AI works

Machine learning, deep learning, and natural language processing are the three main technologies behind conversational AI. Here is how it works:

  1. Collection of unstructured data from various sources
  2. Data preprocessing and feature engineering
  3. Creating an AI model
  4. Training the model to automatically improve from experiences
  5. Testing the model
  6. Detecting patterns and making decisions
  7. AI deployment
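As a deliberately toy illustration of steps 2 through 6, the sketch below classifies a user’s intent by word overlap with labeled training phrases. A production conversational AI would use trained NLP models; the intents and example phrases here are invented for illustration:

```python
from collections import Counter

# Hypothetical labeled training data (step 1 would collect this from real logs).
TRAINING = {
    "order_status": ["track my package", "when will my order arrive"],
    "payment_help": ["my payment failed", "cannot apply discount code"],
}

def features(text):                       # step 2: preprocessing / feature extraction
    return Counter(text.lower().split())

# Steps 3-4: "train" a model by merging each intent's word counts.
MODEL = {intent: sum((features(p) for p in phrases), Counter())
         for intent, phrases in TRAINING.items()}

def predict(text):                        # steps 5-6: score inputs, pick the best intent
    bag = features(text)
    return max(MODEL, key=lambda intent: sum((bag & MODEL[intent]).values()))

print(predict("discount code not working"))   # payment_help
```

Real systems replace the word-count "model" with learned embeddings and sequence models, but the pipeline shape (preprocess, train, predict, deploy) is the same.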

What benefits can businesses get by using conversational AI?

Apart from helping businesses to deliver an unmatched customer experience, conversational AI can offer a plethora of other benefits that include:

Saves Time

Having an automated chatbot would help you save a considerable amount of time. The saved time can be used to perform other tasks or focus on your business’ marketing and promotion.

Helps in providing Real-Time Support Services

A conversational AI can handle multiple queries at one time, without even letting other customers wait. In short, every customer will feel like they are getting dedicated support services in real-time.

Improves business efficiency

With a conversational AI, you can have the assurance that your customers are being taken care of properly. In short, their queries are being handled and resolved immediately. You can focus on other segments of your business and improve efficiency.

Helps in Reducing Customers’ Complaints

Since a conversational AI can immediately respond to queries of the customers, it can help to reduce the number of complaints. Resolving customers’ complaints without making them call a support professional would increase customer loyalty and increase your brand reputation.

Increases chances of sales

By providing a persistent communication channel that carries context forward, you can encourage your customers to explore more and shop more. Moreover, a conversational AI can also help reduce the cart abandonment rate, as it can provide immediate assistance with issues a user faces while making a payment, applying a discount code, or at any other point.

The user would not even have to contact the support center. To understand this better, consider an example: a user has added one or more products to the cart but is unable to find his or her preferred mode of payment. The customer can then either visit the contact-us section to get the customer support number and connect to a professional (a time-consuming process) or simply get support through the conversational AI, which would automatically message the customer, ask about the issue, and provide a resolution.

Now that you know everything about conversational AI, if you want to build one for your business, contact Quytech, a trusted AI app development company. Quytech has more than a decade of experience working with artificial intelligence, machine learning, and other emerging technologies.

Final Words

This article has covered the definition and working of conversational AI in detail, along with the reasons this technology is becoming the talk of the town among businesses of all sizes and types. If, after reading it, you want to develop a tailor-made conversational AI for your business, reach out to a reliable and experienced AI development company or hire AI developers with considerable experience.

Source: https://www.quytech.com/blog/what-is-conversational-ai-and-how-does-it-work/


Are Legal chatbots worth the time and effort?


KLoBot — the Best AI-Chatbot builder platform

The legal industry has always been known for its resistance to change, but technology in the legal landscape has seen rapid growth over the past few years. The global coronavirus pandemic has also accelerated the pace of investments in legal technology, which is likely to transform the legal marketplace.

Several law firms are focusing heavily on the adoption of innovative technology, which has the capability to modernize the practice of law. Innovative technologies, including artificial intelligence, analytics, and blockchain, among others, prioritize the speed and efficiency of legal services.

Technology in the legal sector is an enabler that empowers attorneys and paralegals to perform their jobs better.

The use of AI and its applications in the legal industry is growing and is becoming the next big thing for legal firms. The legal AI software market is expected to reach $1,236 million by 2024 and is forecast to grow at a CAGR of 31.3% during 2019–2024. [2]

Incorporating AI into legal practice can augment workflows and streamline work processes. AI-powered chatbots are disrupting the legal industry and are poised to become a preferred mode of communication for internal as well as external users. Leveraging the power of NLP and NLU algorithms, two of the prominent fields in AI, chatbots can understand intents and contexts and handle end-to-end human-machine interactions.

Law firms, as well as corporate legal departments, continue to look for new ways to enhance efficiency and drive productivity. AI-enabled chatbots are one such way that has the potential to revolutionize the law firm operations. These chatbots are a new approach for law firms to imitate human conversations and automatically respond to clients as well as attorneys’ queries.

Legal chatbots have the capability to make better and quicker decisions than human agents. They reduce the burden on attorneys and paralegals of repetitively answering the same queries, which also brings consistency to the responses users receive.


Internal Chatbots

Internal chatbots are nothing but the chatbots for internal operations and communications, helping law firms manage enterprise collaboration. Internal legal chatbots help law firms automate the contract review process, which is one of the most tedious tasks for attorneys and in-house counsel.

Legal chatbots for attorneys come with a predefined set of policies to review & analyze documents, perform due diligence, and automate other monotonous tasks that attorneys perform. Other basic tasks comprising scheduling meetings, setting up reminders, and searching relevant matter information can also be performed by legal chatbots. Internal legal chatbots empower attorneys to reduce the risk of human errors by automating the monotonous administrative chores and allow them to focus more on higher value and complicated tasks that need attorney’s intervention.

External Chatbots

External chatbots, on the other hand, are client-facing legal chatbots. These chatbots can draft legal documents, including UCC filings, divorce forms, and non-disclosure agreements, based on client inputs.

External legal chatbots empower law firms to handle the client intake process efficiently and generate leads, which further reduces an attorney’s time spent on these activities.

In the current scenario of receiving information instantly at a fingertip, legal chatbots serve as the best solution to handle client queries and provide legal advice.

External, as well as internal legal chatbots with their 24/7 supporting abilities, facilitate law firms to manage operational costs and meet the evolving client expectations.

Although chatbots are taking time to augment legal services, they are worth the effort.

KLoBot is an incredibly intelligent AI-chatbot builder platform that allows legal firms to create text- and voice-based chatbots within minutes. KLoBot’s easy drag-and-drop skill interface helps law firms design no-code chatbots that can be deployed across an organization’s favorite channels. Chatbots built on the KLoBot platform help law firms perform simple and complex routine tasks, including Q&A and knowledge-repository search. A few other jobs, including scheduling meetings, setting up reminders, completing actions on behalf of attorneys, finding colleagues, and assisting attorneys, are also performed by KLoBot-enabled chatbots.

KLoBot-enabled chatbots act as personal assistants and enhance attorney as well as client experiences. These chatbots empower law firms to simplify internal and external communications and streamline business processes.

KLoBot AI chatbots, with their feature-rich admin console, provide law firms with robust security controls. To know more about KLoBot, click here.

References

[1] The Law Society Capturing Technological Innovation in Legal Services Report

[2] https://www.marketsandmarkets.com/

Source: https://chatbotslife.com/are-legal-chatbots-worth-the-time-and-effort-5f44936f7e89?source=rss—-a49517e4c30b—4


Inbenta Announces Partnership with IntelePeer to Deliver Smarter Workflows to Customers


New Product Offering Defines How Voice, Messaging and Chatbots Interoperate

Inbenta Technologies, a global leader in Symbolic AI-based customer interaction applications (artificial intelligence (AI) and natural language processing (NLP) products), announced today a new partnership with IntelePeer, a leading Communications Platform as a Service (CPaaS) provider. The partnership will empower users to build smarter and more powerful workflows so organizations can provide more innovative, agile, and scalable customer and employee support processes.

The Inbenta platform, integrated with Atmosphere SmartFlows, enables organizations to easily configure, automate, measure, and improve business interactions across multiple channels such as voice, SMS, social media, and enterprise collaboration platforms. Customers will be able to use intuitive drag-and-drop features without any complicated coding. This will offload agent workload and create superior digital experiences.

“We are really happy to see this partnership going forward,” said Inbenta CEO Jordi Torras. “Combining the IntelePeer easy-to-use omni-channel platform with our Symbolic AI will empower our customers to build workflows across very different channels in a cohesive way, providing intelligence along the way.”

“With customer expectations on the rise and a constantly changing business climate, companies must stay ahead of the market with agile solutions,” said Jeremy Jones, IntelePeer’s Chief Commercial Officer. “We look forward to a successful partnership with Inbenta. With growing demand for more intelligent interactions, Inbenta’s AI helps detect customer’s intents, and respond with SmartFlows across different channels including voice, SMS, and social messaging to consistently stay ahead of the curve in a fast-changing digital business.”

About Inbenta

Inbenta was founded by Jordi Torras in Barcelona and is now headquartered in Silicon Valley. Inbenta empowers the world’s largest enterprise and e-commerce companies to improve customer satisfaction rates and reduce support costs with best-in-class functionality.

About IntelePeer

IntelePeer powers the new customer experience. Our Atmosphere® CPaaS  enables companies to communicate better – driving more revenue, improving their customer experience, and making better business decisions – leveraging omni-channel Automation & Self-Service, AI, and Analytics, all delivered through a single easy-to-use cloud platform that works seamlessly with your existing business solutions. For more information visit: www.intelepeer.com
