Optimizing I/O for GPU performance tuning of deep learning training in Amazon SageMaker

GPUs can significantly speed up deep learning training, and have the potential to reduce training time from weeks to just hours. However, to fully benefit from the use of GPUs, you should consider the following aspects:

  • Optimizing code to make sure that underlying hardware is fully utilized
  • Using the latest high-performance libraries and GPU drivers
  • Optimizing I/O and network operations to make sure that the data is fed to the GPU at the rate that matches its computations
  • Optimizing communication between GPUs during multi-GPU or distributed training

Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models at any scale. In this post, we focus on general techniques for improving I/O to optimize GPU performance when training on Amazon SageMaker, regardless of the underlying infrastructure or deep learning framework. You can typically see performance improvements up to 10-fold in overall GPU training by just optimizing I/O processing routines.

The basics

A single GPU can perform trillions of floating point operations per second (TFLOPS), allowing it to run operations 10–1,000 times faster than a CPU. For a GPU to perform these operations, the data must be available in GPU memory. The faster you load data into the GPU, the quicker it can perform its work. The challenge is to optimize I/O and network operations so that the GPU never has to wait for data to perform its computations.

The following diagram illustrates the architecture of optimizing I/O.

The general steps usually involved in getting the data into the GPU memory are the following:

  • Network operations – Download the data from Amazon Simple Storage Service (Amazon S3).
  • Disk I/O – Read data from local disk into CPU memory. Local disk refers to an instance store, where storage is located on disks that are physically attached to the host computer. Amazon Elastic Block Store (Amazon EBS) volumes aren’t local resources, and involve network operations.
  • Data preprocessing – The CPU generally handles any data preprocessing such as conversion or resizing. These operations might include converting images or text to tensors or resizing images.
  • Data transfer into GPU memory – Copy the processed data from the CPU memory into the GPU memory.

The following sections look at optimizing these steps.

Optimizing data download over the network

In this section, we look at tips to optimize data transfer via network operations, such as downloading data from Amazon S3 and using file systems such as Amazon EBS and Amazon Elastic File System (Amazon EFS).

Optimizing file sizes

You can store large amounts of data in Amazon S3 at low cost, whether it is data extracted from application databases through an ETL process into JSON or CSV format, or image files. One of the first steps in an Amazon SageMaker training job is to download the files from Amazon S3, which is the default input mode, called File mode.

Downloading or uploading very small files, even in parallel, is slower than transferring larger files of the same total size. For instance, if you have 2,000,000 files of 5 KB each (total size = 2,000,000 × 5 KB ≈ 10 GB), downloading these many tiny files can take a few hours, compared to a few minutes when downloading 2,000 files of 5 MB each (total size = 2,000 × 5 MB ≈ 10 GB), even though the total download size is the same.

One of the primary reasons for this is the read/write block size. Assume that the total volume and the number of threads used for transfer is roughly the same for the large and the small files. If the transfer block size is 128 KB and the file size is 2 KB, instead of transferring 128 KB at one time, you only transfer 2 KB.

On the other hand, if the files are too large, you can’t take advantage of parallel processing to upload or download data to make it faster unless you use options such as Amazon S3 range gets to download different blocks in parallel.

Formats like MXNet RecordIO and TFRecord allow you to compress and densely pack multiple image files into a single file to avoid this trade-off. For instance, MXNet RecordIO for images recommends that images are reduced in size so you can fit at least a batch of images into CPU/GPU memory and multiple images are densely packed into a single file, so I/O operations on a tiny file don’t become a bottleneck.
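
To make the trade-off concrete, the following is a minimal sketch, assuming TensorFlow is installed, of packing many small image files into a single TFRecord file; the function name and feature keys are illustrative, not from the original post.

import tensorflow as tf

def write_tfrecord(image_paths, labels, output_path):
    # Pack many small image files into one TFRecord to avoid tiny-file I/O overhead
    with tf.io.TFRecordWriter(output_path) as writer:
        for path, label in zip(image_paths, labels):
            raw = tf.io.read_file(path).numpy()  # raw encoded image bytes
            example = tf.train.Example(features=tf.train.Features(feature={
                'image': tf.train.Feature(bytes_list=tf.train.BytesList(value=[raw])),
                'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
            }))
            writer.write(example.SerializeToString())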

As a general rule, the optimal file size ranges from 1–128 MB.

Amazon SageMaker ShardedByS3Key Amazon S3 data distribution for large datasets

During distributed training, you can also shard very large datasets across instances. You can achieve this in an Amazon SageMaker training job by setting the parameter S3DataDistributionType to ShardedByS3Key. In this mode, if the Amazon S3 input dataset has M total objects and the training job has N instances, each instance handles M/N objects. For more information, see S3DataSource. In this use case, model training on each machine uses only its subset of the training data.
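
As a sketch of how this looks with the SageMaker Python SDK (v2 is assumed; the S3 prefix and estimator are placeholders):

from sagemaker.inputs import TrainingInput

train_input = TrainingInput(
    s3_data='s3://my-bucket/train/',   # hypothetical dataset prefix
    distribution='ShardedByS3Key',     # each of N instances receives ~M/N objects
)
estimator.fit({'train': train_input})  # estimator assumed defined elsewhere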

Amazon SageMaker Pipe mode for large datasets

Compared to File mode, Pipe mode streams data directly to your training instances from Amazon S3 instead of downloading it to disk first, which lets your code access the data without waiting for the entire download. Because data is never written to disk and only a relatively small footprint is maintained in memory, data is continuously downloaded from Amazon S3 throughout each epoch. This makes Pipe mode a great fit for very large datasets that can't fit into CPU memory. To take advantage of the partial raw bytes as they become available, your code needs to decode the bytes according to the record format (such as CSV) and find the end of each record in order to convert the partial bytes into logical records. Amazon SageMaker TensorFlow provides built-in Pipe mode dataset readers for common formats such as text files and TFRecord. For more information, see Amazon SageMaker Adds Batch Transform Feature and Pipe Input Mode for TensorFlow Containers. If you use frameworks or libraries that don't have built-in data readers, you can use the ML-IO library or write your own data readers to make use of Pipe mode.
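
As a minimal sketch, assuming the SageMaker Python SDK v2, Pipe mode is enabled through the estimator's input_mode parameter (the image URI, role, and S3 path below are placeholders):

from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri='<training-image-uri>',   # placeholder
    role='<execution-role-arn>',        # placeholder
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    input_mode='Pipe',                  # stream from Amazon S3 instead of downloading first
)
estimator.fit({'train': 's3://my-bucket/train/'})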

Another consequence of Pipe mode streaming is that shuffling works differently: you use ShuffleConfig to shuffle the results of the Amazon S3 key prefix matches, or the lines in a manifest file or augmented manifest file. If you have one large file, you can't rely on Amazon SageMaker to do the shuffling; you have to prefetch N batches and write your own shuffling code, depending on your ML framework.
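
A minimal sketch of configuring that shuffle with the SageMaker Python SDK v2 (the seed and S3 prefix are illustrative):

from sagemaker.inputs import TrainingInput, ShuffleConfig

train_input = TrainingInput(
    s3_data='s3://my-bucket/train/',         # hypothetical prefix
    input_mode='Pipe',
    shuffle_config=ShuffleConfig(seed=123),  # shuffle matched S3 keys at the start of each epoch
)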

If the entire dataset fits into CPU memory, File mode can be more efficient than Pipe mode. With File mode, you download the entire dataset to disk once, load it into memory once, and repeatedly read from memory across all epochs. Reading from memory is typically much faster than network I/O, which yields better performance.

The following section discusses how to deal with very large datasets.

Amazon FSx for Lustre or Amazon EFS for large datasets

For very large datasets, you can reduce Amazon S3 download times by using a distributed file system.

You can reduce startup times using Amazon FSx for Lustre on Amazon SageMaker while maintaining the data in Amazon S3. For more information, see Speed up training on Amazon SageMaker using Amazon FSx for Lustre and Amazon EFS file systems.

The first time you run a training job, FSx for Lustre automatically copies data from Amazon S3 and makes it available to Amazon SageMaker. Additionally, you can use the same FSx for Lustre file system for subsequent iterations of training jobs on Amazon SageMaker, which prevents repeated downloads of common Amazon S3 objects. Because of this, FSx for Lustre has the most benefit for training jobs that have training sets in Amazon S3 and in workflows where training jobs must be run several times using different training algorithms or parameters to see which gives the best result.
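
As a hedged sketch (SageMaker Python SDK assumed; the file system ID and directory path are placeholders, and the training job must also be configured with VPC settings that can reach the file system), you can point a training job at an FSx for Lustre file system like this:

from sagemaker.inputs import FileSystemInput

fsx_input = FileSystemInput(
    file_system_id='fs-0123456789abcdef0',  # placeholder FSx for Lustre file system ID
    file_system_type='FSxLustre',
    directory_path='/<mount-name>/train',   # placeholder mount path
    file_system_access_mode='ro',
)
estimator.fit({'train': fsx_input})         # estimator assumed defined elsewhere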

If you already have your training data on Amazon Elastic File System (Amazon EFS), you can also use Amazon EFS with Amazon SageMaker. For more information, see Speed up training on Amazon SageMaker using Amazon FSx for Lustre and Amazon EFS file systems.

One thing to consider while using this option is file size. If the file sizes are too small, the I/O performance is likely to be slower due to factors such as transfer block size.

Amazon SageMaker instances with local NVMe-based SSD storage

Some Amazon SageMaker GPU instances, such as ml.p3dn.24xlarge and ml.g4dn, provide local NVMe-based SSD storage instead of EBS volumes. For instance, ml.p3dn.24xlarge instances have 1.8 TB of local NVMe-based SSD storage. After the training data is downloaded from Amazon S3 to local NVMe-based SSD storage, disk I/O is much faster than reading from network resources such as EBS volumes or Amazon S3. This allows you to achieve faster training times when the training data fits into the local NVMe-based storage.

Optimizing data loading and preprocessing

In the preceding section, we described how to download data from sources like Amazon S3 efficiently. In this section, we discuss how to increase parallelism and make commonly used functions as lean as possible to make data loading more efficient.

Multiple workers for loading and processing data

TensorFlow, MXNet Gluon, and PyTorch provide data loader libraries for loading data in parallel. In the following PyTorch example, increasing the number of workers allows more workers to process items in parallel. As a general rule, you may scale up from a single worker to approximately one less than the number of CPUs. Generally, each worker represents one process and uses Python multiprocessing, although the implementation details can vary from framework to framework. The use of multiprocessing sidesteps the Python Global Interpreter Lock (GIL) to fully use all the CPUs in parallel, but it also means that memory utilization increases proportionally to the number of workers because each process has its own copy of the objects in memory. You might see out of memory exceptions as you start to increase the number of workers, in which case you should use an instance that has more CPU memory where applicable.

To understand the effect of using workers, we present the following example dataset. In this dataset, the __getitem__ operation sleeps for 1 second to emulate some latency in reading the next record:

import datetime
import time

import numpy as np
import torch
from torch.utils.data import Dataset


class MockDatasetSleep(Dataset):
    """Simple mock dataset to understand the use of workers."""

    def __init__(self, num_cols, max_records=32):
        super(MockDatasetSleep, self).__init__()
        self.max_records = max_records
        self.num_cols = num_cols
        # Initialising mock x and y
        self.x = np.random.uniform(size=self.num_cols)
        self.y = np.random.normal()
        print("Initialised")

    def __len__(self):
        return self.max_records

    def __getitem__(self, idx):
        curtime = datetime.datetime.now()
        # Emulate a slow operation
        sleep_seconds = 1
        time.sleep(sleep_seconds)
        print("{}: retrieving item {}".format(curtime, idx))
        return self.x, self.y

As an example, create a data loader instance with only a single worker:

# One worker
batch_size = 4  # batch_size and num_cols values are illustrative, not from the original
num_workers = 1
torch.utils.data.DataLoader(MockDatasetSleep(num_cols=10), batch_size=batch_size,
                            shuffle=True, num_workers=num_workers)

When you use a single worker, you see items retrieved one by one, with a 1-second delay to retrieve each item:

15:39:58.833644: retrieving item 0
15:39:59.834420: retrieving item 6
15:40:00.834861: retrieving item 8
15:40:01.835350: retrieving item 5

Next, increase the number of workers to three, on an instance that has at least four CPUs, to ensure maximum parallel processing. See the following code:

import os

# You may need to lower the number of workers if you encounter out of memory
# exceptions, or move to an instance with more memory
num_workers = os.cpu_count() - 1
torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=num_workers)

In this example dataset, you can see that the three workers attempt to retrieve three items in parallel; the operation takes approximately 1 second to complete, after which the next three items are retrieved:

16:03:21.980084: retrieving item 8
16:03:21.981769: retrieving item 10
16:03:21.981690: retrieving item 25
16:03:22.980437: retrieving item 0
16:03:22.982118: retrieving item 7
16:03:22.982339: retrieving item 21

In this demo notebook example, we use the Caltech-256 dataset, which has approximately 30,600 images, with ResNet 50. In the Amazon SageMaker training job, we use a single ml.p3.2xlarge instance, which comes with 1 GPU and 8 vCPUs. With just one worker, an epoch took 260 seconds, processing approximately 100 images per second on the single GPU. With seven workers, an epoch took 96 seconds, processing approximately 300 images per second, roughly a threefold improvement.

The following graph shows the metric GPUUtilization for a single worker, with a peak utilization of 50%.

The following graph shows the metric GPUUtilization for multiple workers, which has an average utilization of 95%.

Minor changes to num_workers can speed up data loading and therefore allow the GPUs to train faster because they spend less time waiting for data. This shows how optimizing the I/O performance in data loaders can improve GPU utilization.

You should move to multi-GPU or multi-host distributed GPU training only after you have optimized usage on a single GPU. It is therefore critical to measure and maximize utilization on a single GPU before scaling out.

Optimizing frequently used functions

Minimizing expensive operations while retrieving each record item where possible can improve training performance regardless of the GPU or CPU. You can optimize frequently used functions in many ways, such as using the right data structures.

In the demo notebook example, the naive implementation loads the image file and resizes it during each __getitem__ call, as shown in the following code. We optimize this function by preprocessing the Caltech-256 dataset to resize the images ahead of time and save a pickled version of the image files. The __getitem__ function then only randomly crops the image, which keeps it quite lean. The GPU spends less time waiting for the CPU to preprocess the data, which makes the data available to the GPU faster. See the following code:

# Naive implementation
def __getitem__(self, idx):
    curtime = datetime.datetime.now()
    self.logger.debug("{}: retrieving item {}".format(curtime, idx))
    image, label = self.images[idx], self.labels[idx]
    # Convert to PIL image to apply transformations
    # This could be faster if handled in a preprocessing step
    image = Image.open(image)
    if image.getbands()[0] == 'L':
        image = image.convert('RGB')
    # Apply transformations at each get item, including resize and random crop
    image = self.transformer(image)
    self.logger.debug("{}: completed item {}".format(datetime.datetime.now(), idx))
    return image, label


# Optimised implementation
def __getitem__(self, idx):
    curtime = datetime.datetime.now()
    self.logger.debug("{}: retrieving item {}".format(curtime, idx))
    image, label = self.images[idx], self.labels[idx]
    # Apply transformation at each get item - random crop only
    image = self.transformer(image)
    self.logger.debug("{}: completed item {}".format(datetime.datetime.now(), idx))
    return image, label

Even with this simple change, we could complete an epoch in 96 seconds, with approximately 300 images per second, which is three times faster than the unoptimized dataset with a single worker. If we increase the number of workers, it makes very little difference to the GPU utilization because the data loading process is no longer the bottleneck.

In some use cases, you may have to increase the number of workers and optimize the code to maximize the GPU utilization.

The following graph shows GPU utilization with a single worker using the optimized dataset.

The following graph shows GPU utilization with the unoptimized dataset.

Know your ML framework

The data loading libraries for each deep learning framework provide additional options to optimize data loading, including the TensorFlow data loader, the MXNet data loader, and the PyTorch data loader. You should understand which parameters of the data loaders and libraries work best for your use case and the trade-offs involved. Some of these options include:

  • CPU pinned memory – Allows you to accelerate data transfer from the CPU (host) memory to the GPU (device) memory. This performance gain comes from directly allocating page-locked (or pinned) memory instead of allocating paged memory first and then copying data from CPU paged memory to CPU pinned memory before transferring it to the GPU. Enabling CPU pinned memory in the data loader is available in PyTorch and MXNet (see the sketch after this list). The trade-off is that out of memory exceptions are more likely to occur when requesting pinned CPU memory instead of paged memory.
  • Modin – This lightweight parallel processing data frame allows you to perform Pandas dataframe-like operations in parallel so you can fully utilize all the CPUs on your machine. Modin can use different types of parallel processing frameworks such as Dask and Ray.
  • CuPy – This open-source matrix library, similar to NumPy, provides GPU accelerated computing with Python.
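
As referenced in the first item above, here is a minimal sketch of enabling CPU pinned memory in a PyTorch DataLoader; train_dataset, batch_size, and num_workers are assumed to be defined as in the earlier examples:

import torch

loader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=batch_size,
    shuffle=True,
    num_workers=num_workers,
    pin_memory=True,  # allocate page-locked CPU memory for faster host-to-GPU copies
)

for x, y in loader:
    # non_blocking=True lets the host-to-GPU copy overlap with computation
    # when the source tensor lives in pinned memory
    x = x.to('cuda', non_blocking=True)
    y = y.to('cuda', non_blocking=True)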

Heuristics to identify I/O bottlenecks

Amazon SageMaker provides Amazon CloudWatch metrics such as GPU, CPU, and disk utilization during training. For more information, see Monitor Amazon SageMaker with Amazon CloudWatch.

The following heuristics identify I/O-related performance issues using the out-of-the-box metrics:

  • If your training job takes a very long time to start, most of that time is likely being spent downloading the data. Look at ways to optimize downloading from Amazon S3, as detailed earlier.
  • If the GPU utilization is low but the disk or the CPU utilization is high, data loading or preprocessing could be potential bottlenecks. You might want to preprocess the data well ahead of training, if possible. You could also optimize the most frequently used functions, as demonstrated earlier.
  • If the GPU utilization is low and the CPU and disk utilization is continuously low but not zero, despite having a large enough dataset, it could mean that your code isn’t utilizing the underlying resources effectively. If you notice that the CPU memory utilization is also low, a quick way to potentially boost performance is to increase the number of workers in the data loader API of your deep learning framework.

Conclusion

In summary, you have seen how the foundations of data loading and processing affect GPU utilization, and how you can improve GPU performance by resolving I/O- and network-related bottlenecks. It's important to address these bottlenecks before moving on to advanced topics such as multi-GPU or distributed training.

For more information to help you get started with Amazon SageMaker, see the Amazon SageMaker documentation.


About the Author

Aparna Elangovan is an Artificial Intelligence & Machine Learning Prototyping Engineer at AWS, where she helps customers develop deep learning applications.

Source: https://aws.amazon.com/blogs/machine-learning/optimizing-i-o-for-gpu-performance-tuning-of-deep-learning-training-in-amazon-sagemaker/


How to Get the Best Start at Sports Betting


If you are looking into getting into sports betting, then you might be hesitant about how to start, and the whole idea of it can be quite daunting. There are many techniques to get the best possible start at sports betting and, in this article, we will have a look at some of the best tips for that.

Mental preparation

This sounds a bit pretentious, but it is very important to understand some things about betting before starting so you can not only avoid nasty surprises but also avoid losing too much money. Firstly, you need to know that, in the beginning, you will not be good at betting. It is through experience and learning from your mistakes that you will get better. It is imperative that you do not convince yourself that you are good at betting, especially if you win some early bets, because I can guarantee it will have been luck – and false confidence is not your friend. 

It is likely that you will lose some money at first, but this is to be expected. Almost any hobby that you are interested in will cost you some money so, instead, look at it as an investment. However, do not invest ridiculous amounts; rather, wait until you are confident in your betting ability to start placing larger stakes. 

Set up different accounts

This is the best way to start with sports betting, as the welcome offers will offset a lot of the risk. These offers are designed to be profitable to entice you into betting with the bookie, but it is completely legal to just profit from the welcome offer and not bet with the bookie again. 

If you do this with as many bookies as you can, you are minimising the risk involved with your betting and maximising possible returns, so it really is a no-brainer.

As well as this clear advantage, different betting companies offer different promotions. Ladbrokes offer a boost every day, for example, where you can choose your bet and boost it a little bit, and the Parimatch betting website chooses a bet for big events and doubles the odds. 

If you are making sure you stay aware of the best offers across these platforms, then you will be able to use the most lucrative ones and, as such, you will be giving yourself the best chance of making money. The house always wins, as they say, but if you use this tip, you are skewing the odds back in your favour. 

Remember, the house wins because of gamblers that do not put in the effort and do not bet smart. Avoid those mistakes and you will massively increase your chances of making money.

Tipsters

On Twitter especially, but also on other social media platforms, there are tipsters who offer their bets for free. It is not so much the bets themselves that you are interested in, but rather the reasoning behind them. It is important to find tipsters who know what they are doing, though, because there are a lot of tipsters who are essentially scamming their customers. It is quite easy to find legitimate tipsters because they are not afraid to show their mistakes.

Once you have found good tipsters, then you need to understand the reasoning behind their bets. When you have done that, you can start placing these bets yourself, and they will likely be of better value since some tipsters influence the betting markets considerably. You can also follow their bets as they are likely to be sensible bets, although this does not necessarily translate to success.

Source: https://1reddrop.com/2020/10/20/how-to-get-the-best-start-at-sports-betting/?utm_source=rss&utm_medium=rss&utm_campaign=how-to-get-the-best-start-at-sports-betting


Natural Language Processing in Production: 27 Fast Text Pre-Processing Methods


Estimates state that 70%–85% of the world’s data is text (unstructured data) [1]. New deep learning language models (transformers) have caused explosive growth in industry applications [5,6,11].

This blog is not an article introducing you to Natural Language Processing. Instead, it assumes you are familiar with noise reduction and normalization of text. It covers text preprocessing up to producing tokens and lemmas from the text.

We stop at feeding the sequence of tokens into a Natural Language model; feeding that sequence of tokens into a model to accomplish a specific task is not covered here.

In production-grade Natural Language Processing (NLP), fast text pre-processing (noise cleaning and normalization) is critical.

  1. I discuss packages we use for production-level NLP;
  2. I detail the production-level NLP text pre-processing tasks with Python code and packages;
  3. Finally, I report benchmarks for NLP text pre-processing tasks.

Dividing NLP Processing into Two Steps

We segment NLP into two major steps (for the convenience of this article):

  1. Text pre-processing into tokens. We clean (noise removal) and then normalize the text. The goal is to transform the text into a corpus that any NLP model can use, a goal rarely achieved before the introduction of the transformer [2].
  2. Feeding the corpus (text pre-processed into a sequence of tokens) into NLP models for training or prediction.

The rest of this article is devoted to noise removal and normalization of text into tokens/lemmas (Step 1: text pre-processing). Noise removal deletes or transforms things in the text that degrade the NLP task model, and it is usually NLP task-dependent. For example, e-mail addresses may or may not be removed, depending on whether the task is text classification or text redaction. We'll cover both replacement and removal of noise.

Normalization of the corpus is transforming the text into a common form. The most frequent example is normalization by transforming all characters to lowercase. In follow-on blogs, we will cover different deep learning language models and Transformers (Steps 2-n) fed by the corpus token/lemma stream.

NLP Text Pre-Processing Package Factoids

There are many NLP packages available. We use spaCy [2], textacy [4], Hugging Face transformers [5], and regex [7] in most of our NLP production applications. The following are some of the “factoids” we used in our decision process.

Note: The following “factoids” may be biased. That is why we refer to them as “factoids.”

NLTK [3]

  • NLTK is a string processing library. All the tools take strings as input and return strings or lists of strings as output [3].
  • NLTK is a good choice if you want to explore different NLP techniques with a corpus whose length is less than a million words.
  • NLTK is a bad choice if you want to go into production with your NLP application [3].

Regex

The use of regex is pervasive throughout our text pre-processing code. Regex is a fast string processor. Regex, in various forms, has been around for over 50 years. Regex support is part of the standard library of Java and Python, and is built into the syntax of others, including Perl and ECMAScript (JavaScript).

spaCy [2]

  • spaCy is a moderate choice if you want to research different NLP models with a corpus whose length is greater than a million words.
  • If you use a selection from spaCy [2], Hugging Face [5], fast.ai [13], and GPT-3 [6], then you are performing SOTA (state-of-the-art) research of different NLP models (my opinion at the time of writing this blog).
  • spaCy is a good choice if you want to go into production with your NLP application.
  • spaCy is an NLP library implemented both in Python and Cython. Because of the Cython, parts of spaCy are faster than if implemented in Python [3].
  • spaCy is the fastest package we know of for NLP operations.
  • spaCy is available for MS Windows, macOS, and Ubuntu [3].
  • spaCy runs natively on NVIDIA GPUs [3].
  • explosion/spaCy has 16,900 stars on GitHub (7/22/2020).
  • spaCy has 138 public repository implementations on GitHub.
  • spaCy comes with pre-trained statistical models and word vectors.
  • spaCy transforms text into document objects, vocabulary objects, word-token objects, and other useful objects resulting from parsing the text.
  • The Doc class has several useful attributes and methods. Significantly, you can create new operations on these objects as well as extend a class with new attributes (adding to the spaCy pipeline).
  • spaCy features tokenization for 50+ languages.


Creating long_s Practice Text String

We create long_s, a long string that has extra whitespace, emoji, email addresses, $ symbols, HTML tags, punctuation, and other text that may or may not be noise for the downstream NLP task and/or model.

MULTIPLIER = int(3.8e3)
text_l = 300

%time long_s = ':( 😻 😈 #google +1 608-444-0000 08-444-0004 608-444-00003 ext. 508 '
long_s += ' 888 eihtg DoD Fee https://medium.com/ #hash ## Document Title</title> '
long_s += ':( cat- \n nip'
long_s += ' immed- \n natedly <html><h2>2nd levelheading</h2></html> . , '
long_s += '# [email protected] [email protected] can\'t Be a ckunk. $4 $123,456 won\'t seven '
long_s += ' $Shine $$beighty?$ '
long_s *= MULTIPLIER
print('size: {:g} {}'.format(len(long_s), long_s[:text_l]))

output =>

CPU times: user 3 µs, sys: 1 µs, total: 4 µs
Wall time: 8.11 µs
size: 1.159e+06 :( 😻 😈 #google +1 608-444-0000 08-444-0004 608-444-00003 ext. 508 888 eihtg DoD Fee https://medium.com/ #hash ## Document Title</title> :( cat- nip immed- natedly <html><h2>2nd levelheading</h2></html> . , # [email protected] [email protected] can't Be a ckunk. $4 $123,456 won't seven $Shine $$beigh

A string, long_s, of 1.159 million characters is created in 8.11 µs.

Python String Corpus Pre-processing Step and Benchmarks

All benchmarks are run within a Docker container on macOS Version 14.0.

Model Name: Mac Pro
Processor Name: 12-Core Intel Xeon E5
Processor Speed: 2.7 GHz
Total Number of Cores: 24
L2 Cache (per Core): 256 KB
L3 Cache: 30 MB
Hyper-Threading Technology: Enabled
Memory: 64 GB

Note: Corpus/text pre-processing is dependent on the end-point NLP analysis task. Sentiment Analysis requires different corpus/text pre-processing steps than document redaction. The corpus/text pre-processing steps given here are for a range of NLP analysis tasks. Usually, a subset of the given corpus/text pre-processing steps is needed for each NLP task. Also, some of the required corpus/text pre-processing steps may not be given here.

1. NLP text preprocessing: Replace Twitter Hash Tags

from textacy.preprocessing.replace import replace_hashtags
%time text = replace_hashtags(long_s,replace_with= 'HASH')
print('size: {:g} {}'.format(len(text),text[:text_l]))

output =>

CPU times: user 223 ms, sys: 66 µs, total: 223 ms
Wall time: 223 ms
size: 1.159e+06 :( 😻 😈 _HASH_ +1 608-444-0000 08-444-0004 608-444-00003 ext. 508 888 eihtg DoD Fee https://medium.com/ _HASH_ ## Document Title</title> :( cat- nip immed- natedly <html><h2>2nd levelheading</h2></html> . , # [email protected] [email protected] can't Be a ckunk. $4 $123,456 won't seven $Shine $$beigh

Notice that #google and #hash are swapped with _HASH_, and ## and _# are untouched. A million characters were processed in about 200 ms, fast enough for a big corpus of a billion characters (example: a web server log).

2. NLP text preprocessing: Remove Twitter Hash Tags

from textacy.preprocessing.replace import replace_hashtags
%time text = replace_hashtags(long_s,replace_with= '')
print('size: {:g} {}'.format(len(text),text[:text_l]))

output =>

CPU times: user 219 ms, sys: 0 ns, total: 219 ms
Wall time: 220 ms
size: 1.1134e+06 :( 😻 😈 +1 608-444-0000 08-444-0004 608-444-00003 ext. 508 888 eihtg DoD Fee https://medium.com/ ## Document Title</title> :( cat- nip immed- natedly <html><h2>2nd levelheading</h2></html> . , # [email protected] [email protected] can't Be a ckunk. $4 $123,456 won't seven $Shine $$beighty?$

Notice that #google and #hash are removed, and ## and _# are untouched. A million characters were processed in about 200 ms.

3. NLP text preprocessing: Replace Phone Numbers

from textacy.preprocessing.replace import replace_phone_numbers
%time text = replace_phone_numbers(long_s,replace_with= 'PHONE')
print('size: {:g} {}'.format(len(text),text[:text_l]))

output =>

CPU times: user 384 ms, sys: 1.59 ms, total: 386 ms
Wall time: 383 ms
size: 1.0792e+06
:( 😻 😈 PHONE 08-PHONE 608-444-00003 ext. 508 888 eihtg

Notice that the phone numbers 08-444-0004 and 608-444-00003 ext. 508 were not transformed.

4. NLP text preprocessing: Replace Phone Numbers – better

import re
from typing import Pattern

RE_PHONE_NUMBER: Pattern = re.compile(
    # core components of a phone number
    r"(?:^|(?<=[^\w)]))(\+?1[ .-]?)?(\(?\d{2,3}\)?[ .-]?)?(\d{2,3}[ .-]?\d{2,5})"
    # extensions, etc.
    r"(\s?(?:ext\.?|[#x-])\s?\d{2,6})?(?:$|(?=\W))",
    flags=re.UNICODE | re.IGNORECASE)

%time text = RE_PHONE_NUMBER.sub('_PHoNE_', long_s)
print('size: {:g} {}'.format(len(text),text[:text_l]))

output =>

CPU times: user 353 ms, sys: 0 ns, total: 353 ms
Wall time: 350 ms
size: 1.0108e+06 :( 😻 😈 _PHoNE_ _PHoNE_ _PHoNE_ 888 eihtg DoD Fee https://medium.com/ ## Document Title</title> :( cat- nip immed- natedly <html><h2>2nd levelheading</h2></html> . , # [email protected] [email protected] can't Be a ckunk. $4 $123,456 won't seven $Shine $$beighty?$

Notice that the phone numbers 08-444-0004 and 608-444-00003 ext. 508 were transformed. A million characters were processed in 350 ms.

5. NLP text preprocessing: Remove Phone Numbers

Using the improved RE_PHONE_NUMBER pattern, we substitute '' for '_PHoNE_' to remove phone numbers from the corpus.

%time text = RE_PHONE_NUMBER.sub('', long_s)
print('size: {:g} {}'.format(len(text),text[:text_l]))

output =>

CPU times: user 353 ms, sys: 459 µs, total: 353 ms
Wall time: 351 ms
size: 931000 :( 😻 😈 888 eihtg DoD Fee https://medium.com/ ## Document Title</title> :( cat- nip immed- natedly <html><h2>2nd levelheading</h2></html> . , # [email protected] [email protected] can't Be a ckunk. $4 $123,456 won't seven $Shine $$beighty?$

A million characters were processed in about 350 ms.

6. NLP text preprocessing: Removing HTML metadata

I admit removing HTML metadata is my favorite. Not because I like the task, but because I screen-scrape frequently. There is a lot of useful data that resides on an IBM mainframe, VAX-780 (huh?), or whatever terminal-emulation that results in an HTML-based report.

These web-scraped reports generate text with HTML tags. HTML tags are typically considered noise, as they are parts of the text with little or no value for the follow-on NLP task.

Remember, we created a test string (long_s) a little over a million characters long with some HTML tags. We remove the HTML tags using BeautifulSoup.

from bs4 import BeautifulSoup
%time long_s = BeautifulSoup(long_s,'html.parser').get_text()
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))

output =>

CPU times: user 954 ms, sys: 17.7 ms, total: 971 ms
Wall time: 971 ms
size: 817000 :( 😻 😈 888 eihtg DoD Fee https://medium.com/ ## Document Title :( cat- nip immed- natedly 2nd levelheading 

The result is that BeautifulSoup is able to remove over 7,000 HTML tags in a million-character corpus in one second. Scaling linearly, a billion-character corpus, about 200 million words or approximately 2,000 books, would require about 200 seconds.

The rate for HTML tag removal by BeautifulSoup is about 0.1 seconds per book, an acceptable rate for our production requirements.

I only benchmark BeautifulSoup. If you know of a competitive alternative method, please let me know.

Note: The compute times you get may be multiples of time longer or shorter if you are using the cloud or Spark.

7. NLP text preprocessing: Replace currency symbol

The currency symbols [$¢£¤¥ƒ֏؋৲৳૱௹฿៛ℳ元円圆圓﷼\u20A0-\u20C0] are replaced with _CUR_ using the textacy package:

%time textr = textacy.preprocessing.replace.replace_currency_symbols(long_s)
print('size: {:g} {}'.format(len(textr),textr[:text_l]))

output =>

CPU times: user 31.2 ms, sys: 1.67 ms, total: 32.9 ms
Wall time: 33.7 ms
size: 908200 :( 😻 😈 888 eihtg DoD Fee https://medium.com/ ## Document Title :( cat- nip immed- natedly 2nd levelheading . , # [email protected] [email protected] can't Be a ckunk. _CUR_4 _CUR_123,456 won't seven _CUR_Shine _CUR__CUR_beighty?_CUR_

Note: The textacy replace_<something> functions enable you to specify the replacement text. _CUR_ is the default substitution text for replace_currency_symbols.

You may have the currency symbol $ in your text. In this case you can use a regex:

%time text = re.sub(r'\$', '_DOL_', long_s)
print('size: {:g} {}'.format(len(text),text[:250]))

output =>

CPU times: user 8.06 ms, sys: 0 ns, total: 8.06 ms
Wall time: 8.25 ms
size: 1.3262e+06 :( 😻 😈 #google +1 608-444-0000 08-444-0004 608-444-00003 ext. 508 888 eihtg DoD Fee https://medium.com/ #hash ## <html><title>Document Title</title></html> :( cat- nip immed- natedly <html><h2>2nd levelheading</h2></html> . , # [email protected] [email protected] can't Be a ckunk. _DOL_4 _DOL_123,456 won't seven _DOL_Shine _DOL__DOL_beighty?_DOL_ :

Note: All $ symbols in your text will be replaced. Don't use this if you have LaTeX or any text where multiple $ symbols are used for other purposes.

8. NLP text preprocessing: Replace URL String

from textacy.preprocessing.replace import replace_urls
%time text = replace_urls(long_s,replace_with= '_URL_')
print('size: {:g} {}'.format(len(text),text[:text_l]))

output =>

CPU times: user 649 ms, sys: 112 µs, total: 649 ms
Wall time: 646 ms
size: 763800
:( 😻 😈 888 eihtg DoD Fee _URL_ ## Document Title :(

9. NLP text preprocessing: Remove URL String

from textacy.preprocessing.replace import replace_urls
%time text = replace_urls(long_s,replace_with= '')
print('size: {:g} {}'.format(len(text),text[:text_l]))

output =>

CPU times: user 633 ms, sys: 1.35 ms, total: 635 ms
Wall time: 630 ms
size: 744800
:( 😻 😈 888 eihtg DoD Fee ## Document Title :(

The rate for URL replace or removal is about 4,000 URLs per 1 million characters per second. Fast enough for 10 books in a corpus.

10. NLP text preprocessing: Replace E-mail string

%time text = textacy.preprocessing.replace.replace_emails(long_s)
print('size: {:g} {}'.format(len(text),text[:text_l]))

output =>

CPU times: user 406 ms, sys: 125 µs, total: 406 ms
Wall time: 402 ms
size: 725800
:( 😻 😈 888 eihtg DoD Fee ## Document Title :( cat-
nip immed-
natedly 2nd levelheading . , # _EMAIL_ _EMAIL_ can't Be a ckunk. $4 $123,456 won't seven $Shine $$beighty?$

The rate for email reference replacement is about 8,000 emails per 1.7 million characters per second. Fast enough for 17 books in a corpus.

11. NLP text pre-processing: Remove E-mail string

from textacy.preprocessing.replace import replace_emails

%time text = textacy.preprocessing.replace.replace_emails(long_s,replace_with= '')
print('size: {:g} {}'.format(len(text),text[:text_l]))

output =>

CPU times: user 413 ms, sys: 1.68 ms, total: 415 ms
Wall time: 412 ms
size: 672600 :( 😻 😈 888 eihtg DoD Fee ## Document Title :( cat-
nip immed-
natedly 2nd levelheading . , # can't Be a ckunk. $4 $123,456 won't seven $Shine $$beighty?$

The rate for email reference removal is about 8,000 emails per 1.1 million characters per second. Fast enough for 11 books in a corpus.

12. NLP text preprocessing: normalize_hyphenated_words

from textacy.preprocessing.normalize import normalize_hyphenated_words
%time long_s = normalize_hyphenated_words(long_s)
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))

output =>

CPU times: user 186 ms, sys: 4.58 ms, total: 191 ms
Wall time: 190 ms
size: 642200 :( 😻 😈 888 eihtg DoD Fee ## Document Title :( catnip immednatedly

Approximately 8,000 hyphenated words, cat-nip and immed-natedly (misspelled), were corrected in a corpus of 640,000 characters in 190 ms, or about 3 million characters per second.

13. NLP text preprocessing: Convert all characters to lower case

# all characters to lower case
%time long_s = long_s.lower()
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))

output =>

CPU times: user 4.82 ms, sys: 953 µs, total: 5.77 ms
Wall time: 5.97 ms
size: 642200
:( 😻 😈 888 eihtg dod fee ## document title :( catnip immednatedly 2nd levelheading . , # can't be a ckunk. $4 $123,456 won't seven $shine $$beighty?$

I only benchmark the .lower() Python function. The rate for lower-case transformation by .lower() of a Python string of a million characters is about 6 ms, a rate that far exceeds our production rate requirements.

14. NLP text preprocessing: Whitespace Removal

%time text = re.sub(' +', ' ', long_s)
print('size: {:g} {}'.format(len(text),text[:text_l]))

output =>

CPU times: user 44.9 ms, sys: 2.89 ms, total: 47.8 ms
Wall time: 47.8 ms
size: 570000
:( 😻 😈 888 eihtg dod fee ## document title :( catnip immednatedly 2nd levelheading . , # can't be a ckunk. $4 $123,456 won't seven $shine $$beighty?$

The rate is about 0.1 seconds for 1 million characters.

15. NLP text preprocessing: Whitespace Removal (slower)

from textacy.preprocessing.normalize import normalize_whitespace

%time text= normalize_whitespace(long_s)
print('size: {:g} {}'.format(len(text),text[:text_l]))

output =>

CPU times: user 199 ms, sys: 3.06 ms, total: 203 ms
Wall time: 201 ms
size: 569999
:( 😻 😈 888 eihtg dod fee ## document title :( catnip immednatedly 2nd levelheading . , # can't be a ckunk. $4 $123,456 won't seven $shine $$beighty?$

normalize_whitespace is 5x slower but more general. For safety in production, we use normalize_whitespace. To date, we have not seen any problems with the faster regex.

16. NLP text preprocessing: Remove Punctuation

from textacy.preprocessing.remove import remove_punctuation

%time text = remove_punctuation(long_s, marks=',.#$?')
print('size: {:g} {}'.format(len(text),text[:text_l]))

output =>

CPU times: user 34.5 ms, sys: 4.82 ms, total: 39.3 ms
Wall time: 39.3 ms
size: 558599
:( 😻 😈 888 eihtg dod fee document title :( catnip immednatedly 2nd levelheading can't be a ckunk 4 123 456 won't seven shine beighty

spaCy

Creating the spaCy pipeline and Doc

In order to pre-process text with spaCy, we transform the text into a corpus Doc object. We can then use the sequence of word-token objects of which a Doc object consists. Each token has attributes (discussed above) that we use later in this article to pre-process the corpus.

Our text pre-processing end goal (usually) is to produce tokens that feed into our NLP models.

  • spaCy reverses the usual order of pre-processing text and then transforming it into tokens: spaCy first creates a Doc of tokens, and you then pre-process the tokens by their attributes.

The result is that parsing text into a Doc object is where the majority of computation lies. As we will see, pre-processing the sequence of tokens by their attributes is fast.

Adding emoji cleaning in the spaCy pipeline

import en_core_web_lg
from spacymoji import Emoji  # spaCy pipeline extension for emoji

nlp = en_core_web_lg.load()
do = nlp.disable_pipes(["tagger", "parser"])
%time emoji = Emoji(nlp)
nlp.max_length = len(long_s) + 10
%time nlp.add_pipe(emoji, first=True)
%time long_s_doc = nlp(long_s)
print('size: {:g} {}'.format(len(long_s_doc),long_s_doc[:text_l]))

output =>

CPU times: user 303 ms, sys: 22.6 ms, total: 326 ms
Wall time: 326 ms
CPU times: user 23 µs, sys: 0 ns, total: 23 µs
Wall time: 26.7 µs
CPU times: user 7.22 s, sys: 1.89 s, total: 9.11 s
Wall time: 9.12 s
size: 129199
:( 😻 😈 888 eihtg dod fee document title :( catnip immednatedly 2nd levelheading can't be a ckunk 4 123 456 won't seven shine beighty

Creating the token sequence proceeded at about 14,000 tokens per second. We expect quite a speedup when we use an NVIDIA GPU.

nlp.pipe_names

output => ['emoji', 'ner']

Note: The tokenizer is a “special” component and isn’t part of the regular pipeline. It also doesn’t show up in nlp.pipe_names. The reason is that there can only be one tokenizer, and while all other pipeline components take a Doc and return it, the tokenizer takes a string of text and turns it into a Doc. You can still customize the tokenizer. You can either create your own Tokenizer class from scratch, or even replace it with an entirely custom function.

spaCy Token Attributes for Doc Token Preprocessing

As we saw earlier, spaCy provides convenience methods for many other pre-processing tasks. For example, to remove stop words you can reference the .is_stop attribute.

dir(token[0])

output=> 'ancestors', 'check_flag', 'children', 'cluster', 'conjuncts', 'dep', 'dep_', 'doc', 'ent_id', 'ent_id_', 'ent_iob', 'ent_iob_', 'ent_kb_id', 'ent_kb_id_', 'ent_type', 'ent_type_', 'get_extension', 'has_extension', 'has_vector', 'head', 'i', 'idx', 'is_alpha', 'is_ancestor', 'is_ascii', 'is_bracket', 'is_currency', 'is_digit', 'is_left_punct', 'is_lower', 'is_oov', 'is_punct', 'is_quote', 'is_right_punct', 'is_sent_end', 'is_sent_start', 'is_space', 'is_stop', 'is_title', 'is_upper', 'lang', 'lang_', 'left_edge', 'lefts', 'lemma', 'lemma_', 'lex_id', 'like_email', 'like_num', 'like_url', 'lower', 'lower_', 'morph', 'n_lefts', 'n_rights', 'nbor', 'norm', 'norm_', 'orth', 'orth_', 'pos', 'pos_', 'prefix', 'prefix_', 'prob', 'rank', 'remove_extension', 'right_edge', 'rights', 'sent', 'sent_start', 'sentiment', 'set_extension', 'shape', 'shape_', 'similarity', 'string', 'subtree', 'suffix', 'suffix_', 'tag', 'tag_', 'tensor', 'text', 'text_with_ws', 'vector', 'vector_norm', 'vocab', 'whitespace_'

Attributes added by the emoji pipeline extension:

dir(long_s_doc[0]._)

output => ['emoji_desc', 'get', 'has', 'is_emoji', 'set', 'trf_alignment', 'trf_all_attentions', 'trf_all_hidden_states', 'trf_d_all_attentions', 'trf_d_all_hidden_states', 'trf_d_last_hidden_state', 'trf_d_pooler_output', 'trf_end', 'trf_last_hidden_state', 'trf_pooler_output', 'trf_separator', 'trf_start', 'trf_word_pieces', 'trf_word_pieces_']

I show spaCy performing preprocessing that results in a Python string corpus. The corpus is used to create a new sequence of spaCy tokens (Doc).

There is a faster way to accomplish spaCy preprocessing with spaCy pipeline extensions [2], which I show in an upcoming blog.

17. EMOJI Sentiment Score

EMOJI Sentiment Score is not a text preprocessor in the classic sense.

However, we find that emoji are almost always the dominant text in a document.

For example, here are two similar phrases from a legal-notes e-mail with opposite sentiment:

The client was challenging. :(
The client was difficult. :)

We calculate sentiment only from the emoji when they are present in a note or e-mail.
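
EMOJI_TO_SENTIMENT_VALUE is one of our own lookup tables. As a hypothetical sketch, it is a dict mapping emoji and emoticon strings to sentiment scores; the entries and values below are illustrative only, not our production table:

EMOJI_TO_SENTIMENT_VALUE = {
    '😻': 0.9,   # illustrative scores, not production values
    '😈': -0.3,
    ':)': 0.7,
    ':(': -0.7,
}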

%time scl = [EMOJI_TO_SENTIMENT_VALUE[token.text] for token in long_s_doc if (token.text in EMOJI_TO_SENTIMENT_VALUE)]
len(scl), sum(scl), sum(scl)/len(scl)

output =>

CPU times: user 179 ms, sys: 0 ns, total: 179 ms
Wall time: 178 ms
(15200, 1090.7019922523152, 0.07175671001659968)

The sentiment was 0.07 (neutral) for 0.5 million character “note” with 15,200 emojis and emojicons in 178 ms. A fast sentiment analysis calculation!

18. NLP text preprocessing: Removing emoji

You can remove emoji using the spaCy pipeline add-on:

%time long_s_doc_no_emojicon = [token for token in long_s_doc if token._.is_emoji == False]
print('size: {:g} {}'.format(len(long_s_doc_no_emojicon),long_s_doc_no_emojicon[:int(text_l/5)]))

output =>

CPU times: user 837 ms, sys: 4.98 ms, total: 842 ms
Wall time: 841 ms
size: 121599
[:(, 888, eihtg, dod, fee, , document, title, :(, catnip, immednatedly, 2nd, levelheading, , ca, n't, be, a, ckunk, , 4, , 123, 456, wo, n't, seven, , shine, , beighty, , :(, 888, eihtg, dod, fee, , document, title, :(, catnip, immednatedly, 2nd, levelheading, , ca, n't, be, a, ckunk, , 4, , 123, 456, wo, n't, seven, , shine, , beighty, , :(, 888, eihtg, dod, fee, ]

The emoji spaCy pipeline addition detected the emojicons, 😻 😈, but missed :) and :(.

19. NLP text pre-processing: Removing emoji (better)

We developed EMOJI_TO_PHRASE to detect the emojicons, 😻 😈, and emoticons, such as :) and :(, and removed them [8,9].
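
As a hypothetical sketch, EMOJI_TO_PHRASE is a dict mapping emojicons and emoticons to descriptive phrases. The entries below follow the phrases visible in the output of section 21; the full table is much larger:

EMOJI_TO_PHRASE = {
    '😻': 'SMILING CAT FACE WITH HEART-SHAPED EYES',
    '😈': 'SMILING FACE WITH HORNS',
    ':(': 'FROWNING FACE',
    ':)': 'SMILING FACE',  # illustrative; not shown in the output below
}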

%time text = [token.text if (token.text in EMOJI_TO_PHRASE) == False \
else '' for token in long_s_doc]
%time long_s = ' '.join(text)
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))

output =>

CPU times: user 242 ms, sys: 3.76 ms, total: 245 ms
Wall time: 245 ms
CPU times: user 3.37 ms, sys: 73 µs, total: 3.45 ms
Wall time: 3.46 ms
size: 569997
888 eihtg dod fee document title catnip immednatedly 2nd levelheading ca n't be a ckunk 4 123 456 wo n't seven shine beighty 888 eihtg dod fee document title catnip immednatedly 2nd levelheading ca n't be a ckunk 4 123 456 wo n't seven shine beighty 888 eihtg dod fee document title catnip imm

20. NLP text pre-processing: Replace emojis with a phrase

We can translate emojicons into a natural language phrase.

%time text = [token.text if token._.is_emoji == False else token._.emoji_desc for token in long_s_doc]
%time long_s = ' '.join(text)
print('size: {:g} {}'.format(len(long_s),long_s[:250]))

output =>

CPU times: user 1.07 s, sys: 7.54 ms, total: 1.07 s
Wall time: 1.07 s
CPU times: user 3.78 ms, sys: 0 ns, total: 3.78 ms
Wall time: 3.79 ms
size: 794197
:( smiling cat face with heart-eyes smiling face with horns 888 eihtg dod fee document title :( catnip immednatedly 2nd levelheading ca n't be a ckunk 4 123 456 wo n't seven shine beighty

The emoji spaCy pipeline addition detected the emojicons, 😻 😈, but missed :) and :(.

21. NLP text pre-processing: Replace emojis with a phrase (better)

We can translate emojicons into a natural language phrase.

%time text = [token.text if (token.text in EMOJI_TO_PHRASE) == False \
else EMOJI_TO_PHRASE[token.text] for token in long_s_doc]
%time long_s = ' '.join(text)
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))

output =>

CPU times: user 251 ms, sys: 5.57 ms, total: 256 ms
Wall time: 255 ms
CPU times: user 3.54 ms, sys: 91 µs, total: 3.63 ms
Wall time: 3.64 ms
size: 904397
FROWNING FACE SMILING CAT FACE WITH HEART-SHAPED EYES SMILING FACE WITH HORNS 888 eihtg dod fee document title FROWNING FACE catnip immednatedly 2nd levelheading ca n't be a ckunk 4 123 456 wo n't seven shine beighty FROWNING FAC

Again, EMOJI_TO_PHRASE detected the emojicons, 😻 😈, and emoticons, such as :) and :(, and substituted a phrase.

22. NLP text preprocessing: Correct Spelling

We will use symspell for spelling correction [14].

SymSpell, based on the Symmetric Delete spelling correction algorithm, just took 0.000033 seconds (edit distance 2) and 0.000180 seconds (edit distance 3) on an old MacBook Pro [14].
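
The helpers sym_spell_setup and check_spelling used in the next cell are our own wrappers. A minimal sketch of what they could look like, assuming the symspellpy package and its bundled English frequency dictionary:

import pkg_resources
from symspellpy import SymSpell, Verbosity

sym_spell = None

def sym_spell_setup(max_edit_distance=2):
    # Build the SymSpell index from the frequency dictionary bundled with symspellpy
    global sym_spell
    sym_spell = SymSpell(max_dictionary_edit_distance=max_edit_distance,
                         prefix_length=7)
    path = pkg_resources.resource_filename(
        'symspellpy', 'frequency_dictionary_en_82_765.txt')
    sym_spell.load_dictionary(path, term_index=0, count_index=1)

def check_spelling(word, max_edit_distance=2):
    # Return the closest dictionary suggestion, or the word itself if none is found
    suggestions = sym_spell.lookup(word, Verbosity.CLOSEST,
                                   max_edit_distance=max_edit_distance,
                                   include_unknown=True)
    return suggestions[0].term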

%time sym_spell_setup() 
%time tk = [check_spelling(token.text) for token in long_s_doc[0:99999]]
%time long_s = ' '.join(tk)
print('size: {:g} {}'.format(len(long_s),long_s[:250]))

output =>

CPU times: user 5.22 s, sys: 132 ms, total: 5.35 s
Wall time: 5.36 s
CPU times: user 25 s, sys: 12.9 ms, total: 25 s
Wall time: 25.1 s
CPU times: user 3.37 ms, sys: 42 µs, total: 3.41 ms
Wall time: 3.42 ms
size: 528259 FROWNING FACE SMILING CAT FACE WITH HEART a SHAPED EYES SMILING FACE WITH HORNS 888 eight do fee document title FROWNING FACE catnip immediately and levelheading a not be a chunk a of 123 456 to not seven of shine of eighty

Spell correction was accomplished for immednatedly, ckunk, and beighty. Correcting misspelled words is our largest computation; it required 30 seconds for 0.8 million characters.

23. NLP text preprocessing: Replacing Currency Symbol (spaCy)

%time token = [token.text if token.is_currency == False else '_CUR_' for token in long_s_doc]
%time long_s = ' '.join(token)
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))

Note: spaCy removes all punctuation, including the :) emoji and emoticon. You can protect the emoticon with:

%time long_s_doc = [token for token in long_s_doc if token.is_punct == False or token._.is_emoji == True]
print('size: {:g} {}'.format(len(long_s_doc),long_s_doc[:50]))

However, replace_currency_symbols and regex ignore context and replace any currency symbol. You may have multiple uses of $ in your text and thus cannot ignore context. In this case, you can use spaCy.

%time tk = [token.text if token.is_currency == False else '_CUR_' for token in long_s_doc]
%time long_s = ' '.join(tk)
print('size: {:g} {}'.format(len(long_s),long_s[:250]))

output =>

CPU times: user 366 ms, sys: 13.9 ms, total: 380 ms
Wall time: 381 ms
CPU times: user 9.7 ms, sys: 0 ns, total: 9.7 ms
Wall time: 9.57 ms
size: 1.692e+06 😻 👍 🏿 < title > Document Title</title > :( < html><h2>2nd levelheading</h2></html > [email protected] [email protected] [email protected] ca n't bc$$ ef$4 5 66 _CUR_ wo nt seven eihtg _CUR_ nine _CUR_ _CUR_ zer$ 😻 👍 🏿 < title > Document Title</title > :( < html><h2>2nd leve

24. NLP text preprocessing: Removing e-mail address (spaCy)

%time tokens = [token for token in long_s_doc if not token.like_email]
print('size: {:g} {}'.format(len(tokens),tokens[:int(text_l/3)]))

output =>

CPU times: user 52.7 ms, sys: 3.09 ms, total: 55.8 ms
Wall time: 54.8 ms
size: 99999

About 0.06 seconds for 1 million characters.

25. NLP text preprocessing: Remove whitespace and punctuation (spaCy)

%time tokens = [token.text for token in long_s_doc if (token.pos_ not in ['SPACE','PUNCT'])]
%time text = ' '.join(tokens)
print('size: {:g} {}'.format(len(text),text[:text_l]))

26. NLP text preprocessing: Removing stop-words

NLP models (e.g., logistic regression and transformers) and NLP tasks (e.g., Sentiment Analysis) continue to be added. Some benefit from stopword removal, and some do not. [2]

Note: We now only use different deep learning language models (transformers) and do not remove stopwords.

%time tokens = [token.text for token in long_s_doc if token.is_stop == False]
%time long_s = ' '.join(tokens)
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))

27. NLP text pre-processing: Lemmatization

Lemmatization looks beyond word reduction and considers a language’s full vocabulary to apply a morphological analysis to words.

Lemmatization looks at the surrounding text to determine a given word’s part of speech. It does not categorize phrases.

%time tokens = [token.lemma_ for token in long_s_doc]
%time long_s = ' '.join(tokens)
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))


Note: spaCy does not have stemming. You can add it if you want. Stemming does not work as well as lemmatization because stemming does not consider context [2] (which is why some researchers consider spaCy "opinionated").

Note: If you do not know what stemming is, you can still be on the Survivor show. (my opinion)

Conclusion

Whatever the NLP task, you need to clean (pre-process) the data (text) into a corpus (document or set of documents) before it is input into any NLP model.

I adopt a text pre-processing framework that has three major categories of NLP text pre-processing:

  1. Noise Removal
  • Transform Unicode characters into text characters;
  • Convert a document image into segmented image parts and text snippets [10];
  • Extract data from a database and transform it into words;
  • Remove markup and metadata in HTML, XML, JSON, .md, etc.;
  • Remove extra whitespace;
  • Remove emoji or convert emoji into phrases;
  • Remove or convert currency symbols, URLs, email addresses, phone numbers, hashtags, and other identifying tokens;
  • Correct mis-spellings of words (tokens) [7];
  • Remove remaining unwanted punctuation;

2. Tokenization

  • Splitting strings of text into smaller pieces, or "tokens." Paragraphs segment into sentences, and sentences tokenize into words (see the sketch below).
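
A minimal sketch of this step with spaCy, assuming the en_core_web_sm model is installed:

import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp('Paragraphs segment into sentences. Sentences tokenize into words.')
sentences = [sent.text for sent in doc.sents]  # sentence segmentation
tokens = [token.text for token in doc]         # word tokenization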

3. Normalization

  • change all characters to lower case;
  • remove English stop words (or the stop words of whatever language the text is in);
  • perform lemmatization or stemming.
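
To make these three categories concrete, below is a minimal sketch of such a pipeline using spaCy and plain regular expressions. The function name, the regex patterns, and the en_core_web_sm model are illustrative assumptions, not the exact implementation benchmarked above.

import re
import spacy

nlp = spacy.load("en_core_web_sm")  # assumption: any spaCy pipeline with a lemmatizer works

def preprocess(raw_text: str) -> list:
    # 1. Noise Removal: strip markup, e-mail addresses, URLs, and extra whitespace
    text = re.sub(r"<[^>]+>", " ", raw_text)       # HTML/XML tags
    text = re.sub(r"\S+@\S+", " ", text)           # e-mail addresses
    text = re.sub(r"http\S+|www\.\S+", " ", text)  # URLs
    text = re.sub(r"\s+", " ", text).strip()       # extra whitespace
    # 2. Tokenization: the spaCy pipeline splits the cleaned string into tokens
    doc = nlp(text)
    # 3. Normalization: lower-case, lemmatize, drop punctuation
    #    (stop-words are deliberately kept, per the notes below)
    return [token.lemma_.lower() for token in doc if not token.is_punct]

print(preprocess("<h2>2nd level heading</h2> Write to someone@example.com ASAP!"))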

Note: The tasks listed in Noise Removal and Normalization can move back and forth. The categorical assignment is for explanatory convenience.

Note: We do not remove stop-words anymore. We found that our current NLP models have higher F1 scores when we leave in stop-words.

Note: Stop-word removal is expensive computationally. We found the best way to achieve faster stop-word removal was not to do it.

Note: We saw no significant change in Deep Learning NLP models’ speed with or without stop-word removal.

Note: The Noise Removal and Normalization lists are not exhaustive. These are some of the tasks I have encountered.

Note: The latest NLP Deep Learning models are more accurate than older models. However, Deep Learning models can be impractically slow to train and are still too slow for prediction. We show in a follow-on article how we speed-up such models for production.

Note: Stemming algorithms drop the beginning or the end of a word, using a list of common prefixes and suffixes, to create a base root word.

Note: Lemmatization uses linguistic knowledge bases to get the correct roots of words. Lemmatization performs morphological analysis of each word, which requires the overhead of creating a linguistic knowledge base for each language.

Note: Stemming is faster than lemmatization.

Note: Intuitively and in practice, lemmatization yields better results than stemming in an NLP Deep Learning model. Stemming generally reduces precision and increases recall, because it injects semi-random noise when it is wrong.
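
As a quick illustration of that trade-off, here is a small comparison sketch. It assumes NLTK’s PorterStemmer for the stemming side (since spaCy offers none) and spaCy’s lemmatizer for the other; the commented outputs are approximate.

import spacy
from nltk.stem import PorterStemmer  # NLTK supplies stemming; spaCy does not

nlp = spacy.load("en_core_web_sm")
stemmer = PorterStemmer()

sentence = "The cats were running after the flies"

# Stemming chops common suffixes with no knowledge of context or part of speech
print([stemmer.stem(w) for w in sentence.split()])
# e.g. ['the', 'cat', 'were', 'run', 'after', 'the', 'fli']

# Lemmatization uses vocabulary and part-of-speech tags to return dictionary roots
print([token.lemma_ for token in nlp(sentence)])
# e.g. ['the', 'cat', 'be', 'run', 'after', 'the', 'fly']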

Read more in How and Why to Implement Stemming and Lemmatization from NLTK.

Text preprocessing Action benchmarks

Our own implementations, spaCy, and textacy are our current choices for fast short-text preprocessing in production. Given the large gap in performance, I recommend using them for production purposes over NLTK’s implementation of Stanford’s NER.

In the next blogs, we will see how performance changes when using multiprocessing, multithreading, Nvidia GPUs, and pySpark. I will also write about how and why we built implementations such as EMOJI_TO_PHRASE and EMOJI_TO_SENTIMENT_VALUE, and how to add any emoji, emoticon, or Unicode symbol.

References

[1] How Much Data Do We Create Every Day? The Mind-Blowing Stats Everyone Should Read.

[2] Industrial-Strength Natural Language Processing; Turbo-charge your spaCy NLP pipeline.

[3] NLTK 3.5 Documentation.

[4] Textacy: Text (Pre)-processing.

[5] Hugging Face.

[6] Language Models are Few-Shot Learners.

[7] re — Regular expression operations.

[8] Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm.

[9] How I Built Emojitracker.

[10] Classifying e-commerce products based on images and text.

[11] DART: Open-Domain Structured Data Record to Text Generation.

[12] Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT.

[13] fast.ai.

[14] 1000x faster Spelling Correction.

This article was originally published on Medium and re-published to TOPBOTS with permission from the author. Read more technical guides by Bruce Cottman, Ph.D. on Medium.



AI

Microsoft BOT Framework: Building Blocks

I wrote an article last week introducing the “Microsoft BOT Framework”. The highlight of the article was to educate the readers on how to…

Published

on

Photo by Tincho Franco on Unsplash

I wrote an article some weeks ago introducing the “Microsoft BOT Framework”. The highlight of that article was educating readers on how to develop a basic chatbot. My workmates acknowledged the effort but were interested in knowing more. In this article, I dig a little deeper into the various concepts involved in the Microsoft BOT Framework.

I will be touching on the following concepts in this article.

  • Channel
  • State
  • Prompt
  • Dialog
  • Waterfall
  • Connector
  • Activity
  • Turn

Channel

A channel is an application used to interact with the BOT. Some of the current integrations are available with Teams, Slack, Workplace, Skype, Facebook, Telegram, Line, Webchat, etc.

Some channels are also available as an adapter. Check here for more details.

State

State, in the context of chatbots, means persisting metadata about the conversation between the BOT and the user at a given moment. State management makes the conversation more meaningful (i.e., responses can be saved and accessed at a later point in time).
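
The sample project in this article is C#, but as a rough sketch of the idea, the Bot Framework Python SDK (botbuilder) exposes the same state concepts; the property name "UserProfile" below is only an illustrative choice.

from botbuilder.core import MemoryStorage, ConversationState, UserState

# In-memory storage is fine for local testing; a production bot would use a
# durable store (for example, the Cosmos DB or Blob storage adapters).
storage = MemoryStorage()
conversation_state = ConversationState(storage)  # metadata scoped to one conversation
user_state = UserState(storage)                  # metadata scoped to one user across conversations

# A property accessor reads and writes one named piece of state during a turn
profile_accessor = user_state.create_property("UserProfile")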

Prompt

During a conversation between the user and the BOT, a prompt is the event in which the BOT asks the user a question. The question could be in the form of text, a button, a dropdown, etc.

Dialog

Dialogs give the conversation its flow. A Dialog comprises two steps:

  1. A prompt from the BOT requesting information
  2. User Response to the BOT

If the user’s response is valid, the BOT sends a new prompt requesting further information; otherwise, it re-sends the same prompt.


Waterfall

A waterfall is formed from a combination of Dialogs. It is a sequence of dialogs that determines the complete flow of the conversation.
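
Before the C# walkthrough below, here is a rough sketch of how a Prompt, a Dialog, and a Waterfall fit together, written with the Bot Framework Python SDK (botbuilder-dialogs). The dialog ids, step methods, and taxi questions are illustrative assumptions, not the SuperTaxiBot sample code.

from botbuilder.core import MessageFactory
from botbuilder.dialogs import (
    ComponentDialog, DialogTurnResult, WaterfallDialog, WaterfallStepContext,
)
from botbuilder.dialogs.prompts import PromptOptions, TextPrompt

class BookTaxiDialog(ComponentDialog):
    def __init__(self):
        super().__init__("BookTaxiDialog")
        # The waterfall runs these steps in order; each step prompts or consumes an answer
        self.add_dialog(TextPrompt("TextPrompt"))
        self.add_dialog(WaterfallDialog("Waterfall", [self.pickup_step, self.destination_step, self.confirm_step]))
        self.initial_dialog_id = "Waterfall"

    async def pickup_step(self, step: WaterfallStepContext) -> DialogTurnResult:
        prompt = PromptOptions(prompt=MessageFactory.text("Where should the taxi pick you up?"))
        return await step.prompt("TextPrompt", prompt)

    async def destination_step(self, step: WaterfallStepContext) -> DialogTurnResult:
        step.values["pickup"] = step.result  # the answer to the previous prompt
        prompt = PromptOptions(prompt=MessageFactory.text("Where are you going?"))
        return await step.prompt("TextPrompt", prompt)

    async def confirm_step(self, step: WaterfallStepContext) -> DialogTurnResult:
        await step.context.send_activity(f"Booking a taxi from {step.values['pickup']} to {step.result}.")
        return await step.end_dialog()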

Let’s look at all of these concepts in a diagrammatic representation.

Connector

The Connector is the REST API the BOT uses to communicate across multiple channels. The API allows the exchange of messages between the BOT and the user on a specific channel.

Activity

As the name suggests, an activity is any communication between the user and the BOT. The connector API uses the activity object to send useful information back and forth. The most common activity type is the message. For a complete list of all Activity types, see here.

Turn

In any conversation between two parties, each party takes turns responding to an activity (message). In the context of the Microsoft BOT Framework, communication happens between the user and the BOT, so a turn can be thought of as the processing the BOT does to respond to a user request.

Now that we have understood the basic concepts needed to build this sample, let’s have a look at our use case.

We will build a ChatBot application that enables users to book a taxi. The conversational flow looks like this:

Each box in the above diagram represents a Dialog.

Github: https://github.com/tarunbhatt9784/MFTSamples/tree/master/SuperTaxiBot

Step 1: Create a VS2017 project

I set the name of the project to “SuperTaxiBot”.

Step 2: Install Nuget Package

Install the NuGet package Microsoft.Bot.Builder.Dialogs using VS2017.

Step 3: Create a DialogBot.cs

The class contains the bot logic, which processes incoming activities from one or more channels and generates outgoing activities in response.

ActivityHandler defines various handlers for different types of activities. The handlers used in this sample are listed below, followed by a short sketch:

  • OnTurnAsync: Handles any incoming activity.
  • OnMessageActivityAsync: Invoked when a message activity is received from the user. If overridden, this could potentially contain conversational logic. By default, this method does nothing.
  • OnMembersAddedAsync: Invoked when members other than this bot (such as a user) are added to the conversation.
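
As a rough orientation, a handler with these responsibilities looks roughly like the following sketch in the Bot Framework Python SDK (the C# class in this sample overrides the corresponding *Async methods instead); the welcome message is an illustrative assumption.

from typing import List
from botbuilder.core import ActivityHandler, MessageFactory, TurnContext
from botbuilder.schema import ChannelAccount

class DialogBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        # Invoked for every message activity; a real bot would run its dialogs here
        await turn_context.send_activity(MessageFactory.text(f"You said: {turn_context.activity.text}"))

    async def on_members_added_activity(self, members_added: List[ChannelAccount], turn_context: TurnContext):
        # Greet everyone added to the conversation except the bot itself
        for member in members_added:
            if member.id != turn_context.activity.recipient.id:
                await turn_context.send_activity("Welcome to SuperTaxiBot!")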

Source: https://chatbotslife.com/microsoft-bot-framework-building-blocks-377be3d55dab?source=rss—-a49517e4c30b—4
