

Image GPT




We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with top convolutional nets in the unsupervised setting.


Unsupervised and self-supervised learning, or learning without human-labeled data, is a longstanding challenge of machine learning. Recently, it has seen incredible success in language, as transformer models like BERT, GPT-2, RoBERTa, T5, and other variants have achieved top performance on a wide array of language tasks. However, the same broad class of models has not been successful in producing strong features for image classification. Our work aims to understand and bridge this gap.

Transformer models like BERT and GPT-2 are domain agnostic, meaning that they can be directly applied to 1-D sequences of any form. When we train GPT-2 on images unrolled into long sequences of pixels, which we call iGPT, we find that the model appears to understand 2-D image characteristics such as object appearance and category. This is evidenced by the diverse range of coherent image samples it generates, even without the guidance of human-provided labels. As further proof, features from the model achieve state-of-the-art performance on a number of classification datasets and near state-of-the-art unsupervised accuracy on ImageNet.

| Evaluation | Dataset | Our Result | Best non-iGPT Result |
| --- | --- | --- | --- |
| Logistic regression on learned features (linear probe) | CIFAR-10 | iGPT-L 32×32 w/ 1536 features | SimCLR w/ 8192 features |
| | CIFAR-100 | iGPT-L 32×32 w/ 1536 features | SimCLR w/ 8192 features |
| | STL-10 | iGPT-L 32×32 w/ 1536 features | AMDIM w/ 8192 features |
| | ImageNet | iGPT-XL 64×64 w/ 15360 features | SimCLR w/ 8192 features |
| Full fine-tune | CIFAR-10 | iGPT-L 32×32, trained on ImageNet | GPipe, trained on ImageNet |
| | ImageNet 32×32 | iGPT-L 32×32 | Isometric Nets |

To highlight the potential of generative sequence modeling as a general purpose unsupervised learning algorithm, we deliberately use the same transformer architecture as GPT-2 in language. As a consequence, we require significantly more compute in order to produce features competitive with those from top unsupervised convolutional nets. However, our results suggest that when faced with a new domain where the correct model priors are unknown, a large GPT-2 can learn excellent features without the need for domain-specific architectural design choices.


Model Input → Completions

Model-generated completions of human-provided half-images. We sample the remaining halves with temperature 1 and without tricks like beam search or nucleus sampling. While we showcase our favorite completions in the first panel, we do not cherry-pick images or completions in all following panels.


Model-generated image samples. We sample these images with temperature 1 and without tricks like beam search or nucleus sampling. All of our samples are shown, with no cherry-picking. Nearly all generated images contain clearly recognizable objects.

From language GPT to image GPT

In language, unsupervised learning algorithms that rely on word prediction (like GPT-2 and BERT) have been extremely successful, achieving top performance on a wide array of language tasks. One possible reason for this success is that instances of downstream language tasks appear naturally in text: questions are often followed by answers (which could help with question-answering) and passages are often followed by summaries (which could help with summarization). In contrast, sequences of pixels do not clearly contain labels for the images they belong to.

Even without this explicit supervision, there is still a reason why GPT-2 on images might work: a sufficiently large transformer trained on next pixel prediction might eventually learn to generate diverse samples with clearly recognizable objects. Once it learns to do so, an idea known as “Analysis by Synthesis” suggests that the model will also know about object categories. Many early generative models were motivated by this idea, and more recently, BigBiGAN was an example which produced encouraging samples and features. In our work, we first show that better generative models achieve stronger classification performance. Then, through optimizing GPT-2 for generative capabilities, we achieve top-level classification performance in many settings, providing further evidence for analysis by synthesis.

Towards general unsupervised learning

Generative sequence modeling is a universal unsupervised learning algorithm: since all data types can be represented as sequences of bytes, a transformer can be directly applied to any data type without additional engineering. Our work tests the power of this generality by directly applying the architecture used to train GPT-2 on natural language to image generation. We deliberately chose to forgo hand coding any image specific knowledge in the form of convolutions or techniques like relative attention, sparse attention, and 2-D position embeddings.

As a consequence of its generality, our method requires significantly more compute to achieve competitive performance in the unsupervised setting. Indeed, contrastive methods are still the most computationally efficient methods for producing high quality features from images. However, in showing that an unsupervised transformer model is competitive with the best unsupervised convolutional nets, we provide evidence that it is possible to trade off hand coded domain knowledge for compute. In new domains, where there isn’t much knowledge to hand code, scaling compute seems an appropriate technique to test.


We train iGPT-S, iGPT-M, and iGPT-L, transformers containing 76M, 455M, and 1.4B parameters respectively, on ImageNet. We also train iGPT-XL, a 6.8 billion parameter transformer, on a mix of ImageNet and images from the web. Due to the large computational cost of modeling long sequences with dense attention, we train at the low resolutions of 32×32, 48×48, and 64×64.

While it is tempting to work at even lower resolutions to further reduce compute cost, prior work has demonstrated that human performance on image classification begins to drop rapidly below these sizes. Instead, motivated by early color display palettes, we create our own 9-bit color palette to represent pixels. Using this palette yields an input sequence length 3 times shorter than the standard (R, G, B) palette, while still encoding color faithfully.
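One way to build such a palette is to cluster (R, G, B) values into 512 centroids, so each pixel maps to a single 9-bit index rather than three color values. A minimal sketch, with random stand-in pixels in place of a natural-image corpus:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy "dataset" of RGB pixels standing in for natural-image pixels.
pixels = rng.integers(0, 256, size=(10000, 3)).astype(np.float32)

# Cluster (R, G, B) values into 512 centroids -> a 9-bit palette.
kmeans = KMeans(n_clusters=512, n_init=1, random_state=0).fit(pixels)

# Encode a 32x32 image: each pixel becomes ONE palette index instead of
# three (R, G, B) values, so the pixel sequence is 3x shorter.
image = rng.integers(0, 256, size=(32, 32, 3)).astype(np.float32)
tokens = kmeans.predict(image.reshape(-1, 3))
print(tokens.shape)  # a 1024-token sequence for one 32x32 image
```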

Experimental results

There are two methods we use to assess model performance, both of which involve a downstream classification task. The first, which we refer to as a linear probe, uses the trained model to extract features from the images in the downstream dataset, and then fits a logistic regression to the labels. The second method fine-tunes the entire model on the downstream dataset.
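A minimal sketch of the linear probe, with random arrays standing in for the frozen model's features and the downstream labels (in practice the features would come from a fixed layer of the pre-trained model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for features extracted by a frozen pre-trained model:
# 1000 images, 1536-dimensional features, 10 classes.
features = rng.normal(size=(1000, 1536))
labels = rng.integers(0, 10, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)

# The probe: only this logistic regression is trained;
# the feature extractor itself stays frozen.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = probe.score(X_te, y_te)
print(f"probe accuracy: {acc:.3f}")
```

With random features the accuracy is near chance; the point of the probe is that better pre-trained features yield higher accuracy under exactly this protocol.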

Since next pixel prediction is not obviously relevant to image classification, features from the final layer may not be the most predictive of the object category. Our first result shows that feature quality is a sharply increasing, then mildly decreasing function of depth. This behavior suggests that a transformer generative model operates in two phases: in the first phase, each position gathers information from its surrounding context in order to build a contextualized image feature. In the second phase, this contextualized feature is used to solve the conditional next pixel prediction task. The observed two stage performance of our linear probes is reminiscent of another unsupervised neural net, the bottleneck autoencoder, which is manually designed so that features in the middle are used.

Feature quality depends heavily on the layer we choose to evaluate. In contrast with supervised models, the best features for these generative models lie in the middle of the network.

Our next result establishes the link between generative performance and feature quality. We find that both increasing the scale of our models and training for more iterations result in better generative performance, which directly translates into better feature quality.


Each line tracks a model throughout generative pre-training: the dotted markers denote checkpoints at steps 131K, 262K, 524K, and 1000K. The positive slopes suggest a link between improved generative performance and improved feature quality. Larger models also produce better features than smaller models. iGPT-XL is not included because it was trained on a different dataset.

When we evaluate our features using linear probes on CIFAR-10, CIFAR-100, and STL-10, we outperform features from all supervised and unsupervised transfer algorithms. Our results are also compelling in the full fine-tuning setting.

Pre-trained on ImageNet (check marks indicate whether pre-training used labels)

| Evaluation | Model | Accuracy | w/o labels | w/ labels |
| --- | --- | --- | --- | --- |
| CIFAR-10 linear probe | ResNet-152 | 94.0 | | ✓ |
| | SimCLR | 95.3 | ✓ | |
| | iGPT-L 32×32 | 96.3 | ✓ | |
| CIFAR-100 linear probe | ResNet-152 | 78.0 | | ✓ |
| | SimCLR | 80.2 | ✓ | |
| | iGPT-L 32×32 | 82.8 | ✓ | |
| STL-10 linear probe | AMDIM-L | 94.2 | ✓ | |
| | iGPT-L 32×32 | 95.5 | ✓ | |
| CIFAR-10 fine-tune | AutoAugment | 98.5 | | |
| | SimCLR | 98.6 | ✓ | |
| | GPipe | 99.0 | | ✓ |
| | iGPT-L | 99.0 | ✓ | |
| CIFAR-100 fine-tune | iGPT-L | 88.5 | ✓ | |
| | SimCLR | 89.0 | ✓ | |
| | AutoAugment | 89.3 | | |
| | EfficientNet | 91.7 | | ✓ |

A comparison of linear probe and fine-tune accuracies between our models and top performing models which utilize either unsupervised or supervised ImageNet transfer. We also include AutoAugment, the best performing model trained end-to-end on CIFAR.

Given the resurgence of interest in unsupervised and self-supervised learning on ImageNet, we also evaluate the performance of our models using linear probes on ImageNet. This is an especially difficult setting, as we do not train at the standard ImageNet input resolution. Nevertheless, a linear probe on the 1536 features from the best layer of iGPT-L trained on 48×48 images yields 65.2% top-1 accuracy, outperforming AlexNet.

Contrastive methods typically report their best results on 8192 features, so we would ideally evaluate iGPT with an embedding dimension of 8192 for comparison. However, training such a model is prohibitively expensive, so we instead concatenate features from multiple layers as an approximation. Unfortunately, our features tend to be correlated across layers, so we need more of them to be competitive. Taking 15360 features from 5 layers in iGPT-XL yields 72.0% top-1 accuracy, outperforming AMDIM, MoCo, and CPC v2, but still underperforming SimCLR by a decent margin.
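The layer-concatenation trick is plain feature stacking; a minimal sketch with placeholder arrays in place of the actual layer activations:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for 3072-dim features from 5 different layers of one model.
layer_feats = [rng.normal(size=(100, 3072)) for _ in range(5)]

# Concatenate along the feature axis: 5 x 3072 = 15360 dims per image.
# A single linear probe is then fit on this combined representation.
combined = np.concatenate(layer_feats, axis=1)
print(combined.shape)  # (100, 15360)
```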

| Method | Input Resolution | Features | Parameters | Accuracy |
| --- | --- | --- | --- | --- |
| Rotation | original | 8192 | 86M | 55.4 |
| iGPT-L | 32×32 | 1536 | 1362M | 60.3 |
| BigBiGAN | original | 16384 | 86M | 61.3 |
| iGPT-L | 48×48 | 1536 | 1362M | 65.2 |
| AMDIM | original | 8192 | 626M | 68.1 |
| MoCo | original | 8192 | 375M | 68.6 |
| iGPT-XL | 64×64 | 3072 | 6801M | 68.7 |
| SimCLR | original | 2048 | 24M | 69.3 |
| CPC v2 | original | 4096 | 303M | 71.5 |
| iGPT-XL | 64×64 | 3072 × 5 | 6801M | 72.0 |
| SimCLR | original | 8192 | 375M | 76.5 |

A comparison of linear probe accuracies between our models and state-of-the-art self-supervised models. We achieve competitive performance while training at much lower input resolutions, though our method requires more parameters and compute.

Because masked language models like BERT have outperformed generative models on most language tasks, we also evaluate the performance of BERT on our image models. Instead of training our model to predict the next pixel given all preceding pixels, we mask out 15% of the pixels and train our model to predict them from the unmasked ones. We find that though linear probe performance on BERT models is significantly worse, they excel during fine-tuning:


Comparison of generative pre-training with BERT pre-training using iGPT-L at an input resolution of 32² × 3. Bold colors show the performance boost from ensembling BERT masks. We see that generative models produce much better features than BERT models after pre-training, but BERT models catch up after fine-tuning.
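The BERT-style objective above amounts to hiding 15% of the pixel tokens and predicting only those; a minimal sketch of the mask construction (the token range and mask id are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len = 1024                      # e.g. one 32x32 image, one token per pixel
tokens = rng.integers(0, 512, size=seq_len)   # 9-bit palette indices

# Choose 15% of positions to mask, as in BERT-style pre-training.
n_mask = int(0.15 * seq_len)
masked_pos = rng.choice(seq_len, size=n_mask, replace=False)

MASK_ID = 512                       # an extra "mask" token id
inputs = tokens.copy()
inputs[masked_pos] = MASK_ID

# The training loss is computed only at the masked positions:
targets = tokens[masked_pos]
print(n_mask, int((inputs == MASK_ID).sum()))
```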

While unsupervised learning promises excellent features without the need for human-labeled data, significant recent progress has been made under the more forgiving framework of semi-supervised learning, which allows for limited amounts of human-labeled data. Successful semi-supervised methods often rely on clever techniques such as consistency regularization, data augmentation, or pseudo-labeling, and purely generative-based approaches have not been competitive for years. We evaluate iGPT-L on a competitive benchmark for this sub-field and find that a simple linear probe on features from non-augmented images outperforms Mean Teacher and MixMatch, though it underperforms FixMatch.

| Model | 40 labels | 250 labels | 4000 labels |
| --- | --- | --- | --- |
| Improved GAN | | | 81.4 ± 2.3 |
| Mean Teacher | | 67.7 ± 2.3 | 90.8 ± 0.2 |
| MixMatch | 52.5 ± 11.5 | 89.0 ± 0.9 | 93.6 ± 0.1 |
| iGPT-L | 73.2 ± 1.5 | 87.6 ± 0.6 | 94.3 ± 0.1 |
| UDA | 71.0 ± 5.9 | 91.2 ± 1.1 | 95.1 ± 0.2 |
| FixMatch RA | 86.2 ± 3.4 | 94.9 ± 0.7 | 95.7 ± 0.1 |
| FixMatch CTA | 88.6 ± 3.4 | 94.9 ± 0.3 | 95.7 ± 0.2 |

A comparison of performance on low-data CIFAR-10. By leveraging many unlabeled ImageNet images, iGPT-L is able to outperform methods such as Mean Teacher and MixMatch but still underperforms the state of the art methods. Our approach to semi-supervised learning is very simple since we only fit a logistic regression classifier on iGPT-L’s features without any data augmentation or fine-tuning—a significant difference from specially designed semi-supervised approaches.


While we have shown that iGPT is capable of learning powerful image features, there are still significant limitations to our approach. Because we use the generic sequence transformer used for GPT-2 in language, our method requires large amounts of compute: iGPT-L was trained for roughly 2500 V100-days while a similarly performing MoCo model can be trained in roughly 70 V100-days.

Relatedly, we model low-resolution inputs using a transformer, while most self-supervised results use convolution-based encoders which can easily consume inputs at high resolution. A new architecture, such as a domain-agnostic multiscale transformer, might be needed to scale further. Given these limitations, our work primarily serves as a proof-of-concept demonstration of the ability of large transformer-based language models to learn excellent unsupervised representations in novel domains, without the need for hardcoded domain knowledge. However, the significant resource cost to train these models and the greater accuracy of convolutional neural-network based methods precludes these representations from practical real-world applications in the vision domain.

Finally, generative models can exhibit biases that are a consequence of the data they’ve been trained on. Many of these biases are useful, like assuming that a combination of brown and green pixels represents a branch covered in leaves, then using this bias to continue the image. But some of these biases will be harmful, when considered through a lens of fairness and representation. For instance, if the model develops a visual notion of a scientist that skews male, then it might consistently complete images of scientists with male-presenting people, rather than a mix of genders. We expect that developers will need to pay increasing attention to the data that they feed into their systems and to better understand how it relates to biases in trained models.


We have shown that by trading off 2-D knowledge for scale and by choosing predictive features from the middle of the network, a sequence transformer can be competitive with top convolutional nets for unsupervised image classification. Notably, we achieved our results by directly applying the GPT-2 language model to image generation. Our results suggest that due to its simplicity and generality, a sequence transformer given sufficient compute might ultimately be an effective way to learn excellent features in many domains.

If you’re excited to work with us on this area of research, we’re hiring!



A Quick Guide to Conversational AI and Its Working Process


The post A Quick Guide to Conversational AI and Its Working Process appeared first on Quytech Blog.



Customer support is an integral part of every business; without offering support services, it is difficult to achieve maximum customer satisfaction. To ensure this, businesses hire professionals who work round the clock to deliver support services. Yet no matter how efficiently a business handles this segment, it may still face problems such as delays in responding to customers' queries or making customers wait to connect with support professionals.

Conversational AI is a solution to this common challenge faced by manufacturing, FMCG, retail, e-commerce, and other industries. Never heard of the term?

Well, it is the latest trend across industries that already use artificial intelligence or want to adopt it in their business operations. Let's look at it in detail.

What is Conversational AI?

Conversational AI is a specific kind of artificial intelligence that makes software interact and converse with users in a highly intuitive way. It uses natural language processing (NLP) to understand and interact in human language.

Conversational AI, when integrated into chatbots, messengers, and voice assistants, enables businesses to deliver personalized customer experiences and improve customer satisfaction. Google Home and Amazon Echo are two popular examples.

Applications of Conversational AI

Conversational AI is a new and automated way of offering customer support services. Healthcare, adtech, logistics, insurance, travel, hospitality, finance, and other industries are using the technology in areas such as the following:

Messaging applications

Conversational AI can be used in a messaging application to offer personalized support services through chat. Your customers can choose the “chat support” option and talk to the chatbot to get the support.

Speech-based virtual assistants

Conversational AI can use speech recognition technology so that you can offer a speech-based virtual assistant to your customers. With these chatbots, users can get information through voice commands.

Virtual customer assistants

This type of conversational AI helps in offering online support to customers; you can develop the same to offer support through Web, SMS, messaging applications, and more.

Virtual personal assistants

A virtual personal assistant, powered by conversational AI, minimizes the need to hire a huge team to offer dedicated support services to each of your customers.

Virtual employee assistants

Employees working in big organizations might need various types of assistance. A conversational AI used to build a virtual employee assistant can be the point of contact for all such employees, who can find the required information just by interacting with it.

Robotic process automation

Robotic process automation powered by conversational AI helps a machine understand human conversations and their intent in order to perform automated tasks.

Working of conversational AI

Machine learning, deep learning, and natural language processing are the three main technologies behind conversational AI. Here is how it works:

  1. Collection of unstructured data from various sources
  2. Data preprocessing and feature engineering
  3. Creating an AI model
  4. Training the model to automatically improve from experiences
  5. Testing the model
  6. Detecting patterns and making decisions
  7. AI deployment
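As a toy illustration of steps 1–6 for the language-understanding core, a tiny intent classifier can be built from labeled utterances (all phrases and intent names here are made up; production systems use far larger datasets and models):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1-2. Collect and featurize a small labeled dataset of user utterances.
utterances = [
    "where is my order", "track my package", "has my order shipped",
    "i want a refund", "return this item", "money back please",
    "talk to a human", "connect me to an agent", "customer support please",
]
intents = ["track"] * 3 + ["refund"] * 3 + ["agent"] * 3

# 3-4. Create and train the model (TF-IDF features + logistic regression).
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

# 5-6. Test the model: predict the intent behind a new message.
print(model.predict(["i would like a refund please"])[0])
```

Once the predicted intent is known, the bot dispatches to a matching response or workflow, which is step 7 (deployment) in the list above.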

What benefits can businesses get from conversational AI?

Apart from helping businesses to deliver an unmatched customer experience, conversational AI can offer a plethora of other benefits that include:

Saves Time

Having an automated chatbot would help you save a considerable amount of time. The saved time can be used to perform other tasks or focus on your business’ marketing and promotion.

Helps in providing Real-Time Support Services

Conversational AI can handle multiple queries at a time without making other customers wait. In short, every customer feels like they are getting dedicated support in real time.

Improves business efficiency

With a conversational AI, you can have the assurance that your customers are being taken care of properly. In short, their queries are being handled and resolved immediately. You can focus on other segments of your business and improve efficiency.

Helps in Lowering Customer Complaints

Since conversational AI can immediately respond to customers' queries, it can help reduce the number of complaints. Resolving customers' complaints without making them call a support professional increases customer loyalty and improves your brand reputation.

Increases chances of sales

By providing a persistent communication channel that preserves context, you can encourage your customers to explore more and shop more. Moreover, conversational AI can also help reduce the cart abandonment rate, as it can provide immediate assistance with issues a user faces while making a payment, applying a discount code, or at any other point.

The user would not even have to contact the support center. To understand this better, consider an example: a user has added one or more products to the cart but is unable to find a preferred mode of payment. The customer can either visit the Contact Us section, find the customer support number, and connect with a professional (a time-consuming process), or simply get help through conversational AI, which automatically messages the customer, asks about the issue, and provides a resolution.

Now that you know about conversational AI and want to build one, contact Quytech, a trusted AI app development company. Quytech has more than a decade of experience working with artificial intelligence, machine learning, and other emerging technologies.

Final Words

Curious about conversational artificial intelligence? This article has covered the definition and working of conversational AI in detail, along with the reasons this technology is becoming the talk of the town among businesses of all sizes and types. If you want to develop a tailor-made conversational AI for your business, reach out to a reliable and experienced AI development company or hire AI developers with considerable experience.




Are Legal chatbots worth the time and effort?



KLoBot — the Best AI-Chatbot builder platform

The legal industry is always known for its resistance to change, but technology in the legal landscape has seen rapid growth from the past few years. The global Coronavirus pandemic has also accelerated the pace of investments in legal technology, which is likely to transform the legal marketplace.

Several law firms are focusing on the adoption of innovative technology that has the capability to modernize the practice of law. Innovative technologies, including Artificial Intelligence, Analytics, and Blockchain, among others, prioritize the speed and efficiency of legal services.

Technology in the legal sector is an enabler that empowers attorneys and paralegals to perform their jobs better.

The use of AI and its applications in the legal industry is growing and is becoming the next big thing for legal firms. The legal AI software market is expected to reach $1,236 million by 2024, growing at a CAGR of 31.3% during 2019–2024. (2)

Incorporating AI into legal practice can augment workflows and streamline work processes. AI-powered chatbots are disrupting the legal industry and are poised to become a preferred mode of communication for internal as well as external users. Leveraging the power of NLP and NLU, two prominent fields within AI, chatbots can understand intents and contexts and handle end-to-end human-machine interactions.

Law firms, as well as corporate legal departments, continue to look for new ways to enhance efficiency and drive productivity. AI-enabled chatbots are one such way, with the potential to revolutionize law firm operations. These chatbots are a new approach for law firms to imitate human conversations and automatically respond to clients' as well as attorneys' queries.

Legal chatbots have the capability to make better and quicker decisions than human agents. They reduce the burden on attorneys and paralegals of repetitively answering the same queries, which also brings consistency to responses.


Internal Chatbots

Internal chatbots serve internal operations and communications, helping law firms manage enterprise collaboration. They help law firms automate the contract review process, one of the most tedious tasks for attorneys and in-house counsel.

Legal chatbots for attorneys come with a predefined set of policies to review and analyze documents, perform due diligence, and automate other monotonous tasks that attorneys perform. Other basic tasks, such as scheduling meetings, setting up reminders, and searching relevant matter information, can also be performed by legal chatbots. Internal legal chatbots reduce the risk of human error by automating monotonous administrative chores, allowing attorneys to focus on higher-value, complicated tasks that need an attorney's intervention.

External Chatbots

External chatbots, on the other hand, are the client-facing legal chatbots. These chatbots can draft legal documents, including UCC filings, divorce forms, and non-disclosure agreements, based on client inputs.

External legal chatbots empower law firms to handle the client intake process efficiently and generate leads, which further reduces an attorney’s time spent on these activities.

In an era when information is available instantly at one's fingertips, legal chatbots serve as an effective way to handle client queries and provide legal advice.

External, as well as internal legal chatbots with their 24/7 supporting abilities, facilitate law firms to manage operational costs and meet the evolving client expectations.

Although chatbots are taking time to augment legal services, they are worth the effort.

KLoBot is an incredibly intelligent AI chatbot builder platform that allows legal firms to create text- and voice-based chatbots within minutes. KLoBot's easy drag-and-drop skill interface helps law firms design no-code chatbots that can be deployed across an organization's favorite channels. The chatbots built on the KLoBot platform help law firms perform simple and complex routine tasks, including QnA and knowledge repository search. A few other jobs, including scheduling meetings, setting up reminders, completing actions on behalf of attorneys, finding colleagues, and assisting attorneys, are also performed by KLoBot-enabled chatbots.

KLoBot-enabled chatbots act as personal assistants and enhance attorney as well as client experiences. These chatbots empower law firms to simplify internal as well as external communications and streamline business processes.

KLoBot's AI chatbots, with their feature-rich admin console, provide law firms with robust security controls.


[1] The Law Society Capturing Technological Innovation in Legal Services Report





Inbenta Announces Partnership with IntelePeer to Deliver Smarter Workflows to Customers


The post Inbenta Announces Partnership with IntelePeer to Deliver Smarter Workflows to Customers appeared first on Inbenta.



New Product Offering Defines How Voice, Messaging and Chatbots Interoperate

Inbenta Technologies, a global leader in Symbolic AI-based customer interaction applications (artificial intelligence (AI) and natural language processing (NLP) products), announced today a new partnership with IntelePeer, a leading Communications Platform as a Service (CPaaS) provider. The partnership will empower users to build smarter and more powerful workflows so organizations can provide more innovative, agile, and scalable customer and employee support processes.

The Inbenta platform, integrated with Atmosphere SmartFlows, enables organizations to easily configure, automate, measure, and improve business interactions across multiple channels such as voice, SMS, social media, and enterprise collaboration platforms. Customers will be able to use intuitive drag-and-drop features without any complicated coding, which will offload agent workload and create superior digital experiences.

“We are really happy to see this partnership going forward,” said Inbenta CEO Jordi Torras. “Combining the IntelePeer easy-to-use omni-channel platform with our Symbolic AI will empower our customers to build workflows across very different channels in a cohesive way, providing intelligence along the way.”

“With customer expectations on the rise and a constantly changing business climate, companies must stay ahead of the market with agile solutions,” said Jeremy Jones, IntelePeer’s Chief Commercial Officer. “We look forward to a successful partnership with Inbenta. With growing demand for more intelligent interactions, Inbenta’s AI helps detect customer’s intents, and respond with SmartFlows across different channels including voice, SMS, and social messaging to consistently stay ahead of the curve in a fast-changing digital business.”

About Inbenta

Inbenta was founded by Jordi Torras in Barcelona and is now headquartered in Silicon Valley. Inbenta empowers the world’s largest enterprise and e-commerce companies to improve customer satisfaction rates and reduce support costs with best-in-class functionality.

About IntelePeer

IntelePeer powers the new customer experience. Our Atmosphere® CPaaS  enables companies to communicate better – driving more revenue, improving their customer experience, and making better business decisions – leveraging omni-channel Automation & Self-Service, AI, and Analytics, all delivered through a single easy-to-use cloud platform that works seamlessly with your existing business solutions. For more information visit:
