
Aspiring Toward Provably Beneficial AI Including The Case Of Autonomous Cars



Researchers are trying to get to a provably beneficial AI that would for example prevent a self-driving car from veering into evil-doing. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

As AI systems continue to be developed and fielded, one nagging and serious concern is whether the AI will achieve beneficial results.

Perhaps among the plethora of AI systems are some that will be, or might eventually become, untoward, working in non-beneficial ways and carrying out detrimental acts that in some manner cause irreparable harm, injury, and possibly even death to humans. There is a distinct possibility that there are toxic AI systems among the ones that are aiming to help mankind.

We do not know whether it might be just a scant few that are reprehensible or whether it might be the preponderance that goes that malevolent route.

One crucial twist is that AI systems are often devised to learn while in use; thus, there is a real chance that the original intent will be waylaid over time, overtaken into foul territory, ultimately exceeding any preset guardrails and veering into evil-doing.

Proponents of AI cannot assume that AI will necessarily always be cast toward goodness.

There is the noble desire to achieve AI For Good, and likewise the ghastly underbelly of AI For Bad.

To clarify, even if AI developers had something virtuous in mind, realize that their creation can either transgress into badness on its own as it adjusts on-the-fly via Machine Learning (ML) and Deep Learning (DL), or contain unintentionally seeded errors or omissions that inadvertently generate bad acts when later encountered during use.

Somebody ought to be doing something about this, you might be thinking, likely while wringing your hands worriedly.

For my article about the brittleness of ML/DL see: https://www.aitrends.com/ai-insider/machine-learning-ultra-brittleness-and-object-orientation-poses-the-case-of-ai-self-driving-cars/

For aspects of plasticity and DL see my discussion at: https://aitrends.com/ai-insider/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/

On my discussion of the possibility of AI failings see: https://www.aitrends.com/ai-insider/goto-fail-and-ai-brittleness-the-case-of-ai-self-driving-cars/

To learn about the nature of failsafe AI, see my explanation here: https://aitrends.com/ai-insider/fail-safe-ai-and-self-driving-cars/

Proposed Approach Of Provably Beneficial AI

One such proposed solution is an arising focus on provably beneficial AI.

Here’s the background.

If an AI system could be mathematically modeled, it might be feasible to perform a mathematical proof that would logically indicate whether the AI will be beneficial or not.

As such, anyone embarking on putting an AI system into the world would be able to run the AI through this provability approach and then be confident that their AI will be in the AI For Good camp. Those that endeavor to use the AI, or that become reliant upon it, will be comforted by the fact that the AI was proven to be beneficial.

Voila, we turn the classic syllogism that if A implies B, and B implies C, then A implies C, into a kind of tightly interwoven mathematical logic that can be applied to AI.

For those that look to the future and see a potential for AI that might overtake mankind, perhaps becoming a futuristic version of a frightening Frankenstein, this idea of clamping down on AI by having it undergo a provability mechanism to ensure it is beneficial offers much relief and excitement.

We all ought to rejoice in the goal of being able to provably showcase that an AI system is beneficial.

Well, other than those that are on the foul side of AI, aiming to use AI for devious deeds and purposely seeking to do AI For Bad. They would be likely to eschew any such proofs and offer instead pretenses perhaps that their AI is aimed at goodness as a means of distracting from its true goals (meanwhile, some might come straight out and proudly proclaim they are making AI for destructive aspirations, the so-called Dr. Evil flair).

There seems to be little doubt that overall, the world would be better off if there was such a thing as provably beneficial AI.

We could apply it to AI that is being unleashed into the real world, be heartened that we have done our best to keep AI from doing us in, and accordingly use our remaining energies to keep watch on the non-proven AI that is either potentially afoul or purposely crafted to be adverse.

Regrettably, there is a rub.

The rub is that wanting to have a means for creating or verifying provably beneficial AI is a lot harder than it might sound.

Let’s consider one such approach.

Professor Stuart Russell at the University of California Berkeley is at the forefront of provably beneficial AI and offers in his research that there are three core principles involved (as indicated in his research paper at https://people.eecs.berkeley.edu/~russell/papers/russell-bbvabook17-pbai.pdf):

1) “The machine’s purpose is to maximize the realization of human values. In particular, it has no purposes of its own and no innate desire to protect itself.”

2) “The machine is initially uncertain about what those human values are. The machine may learn more about human values as it goes along, of course, but it may never achieve complete certainty.”

3) “Machines can learn about human values by observing the choices that we humans make.”

Those core principles are then formulated into a mathematical framework, and an AI system is either designed and built according to those principles from the ground up, or an existing AI system might be retrofitted to abide by them (retrofitting would generally be unwise, as it is easier and more parsimonious to start things the right way rather than trying to squeeze a square peg into a round hole later on, as it were).

For those of you that are AI insiders, you might recognize this approach as a Cooperative Inverse Reinforcement Learning (CIRL) scheme, whereby multiple agents work cooperatively. The agents, in this case, are a human and an AI, and the AI attempts to learn from the actions of the human rather than from its own direct actions per se.
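To make the flavor of this concrete, here is a toy sketch (mine, not Russell's actual CIRL formalism) of an agent that starts uncertain about which value function a human holds and updates a Bayesian belief by observing the human's choices; the candidate value functions, actions, and numbers are invented purely for illustration:

```python
import math

# Candidate "human values" the machine entertains (assumed for illustration).
CANDIDATE_VALUES = {
    "values_safety": {"drive_fast": 0.1, "drive_safely": 0.9},
    "values_speed":  {"drive_fast": 0.9, "drive_safely": 0.1},
}

def likelihood(values, chosen, options, temp=0.2):
    # Probability that a noisily-rational human with these values picks `chosen`.
    exps = {a: math.exp(values[a] / temp) for a in options}
    return exps[chosen] / sum(exps.values())

def update_belief(belief, chosen, options):
    # Bayes rule: P(values | choice) is proportional to P(choice | values) * P(values).
    posterior = {name: p * likelihood(CANDIDATE_VALUES[name], chosen, options)
                 for name, p in belief.items()}
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# Uniform prior: the machine is initially uncertain about human values (principle 2).
belief = {"values_safety": 0.5, "values_speed": 0.5}

# The machine observes the human repeatedly choosing to drive safely (principle 3).
for _ in range(3):
    belief = update_belief(belief, "drive_safely", ["drive_fast", "drive_safely"])

print(round(belief["values_safety"], 3))  # belief shifts strongly toward safety
```

The point of the sketch is only the shape of the scheme: the machine holds no goal of its own beyond inferring and maximizing whatever values the observed human appears to hold.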

Some would bluntly say that this particular approach to provably beneficial AI is shaped around making humans happy with the results of the AI efforts.

And making humans happy sure seems like a laudable ambition.

The Complications Involved

It turns out that there is no free lunch in trying to achieve provably beneficial AI.

Consider some of the core principles and what they bring about.

The first stated principle is that the AI is aimed to maximize the realization of human values and that the AI has no purposes of its own, including no desire to protect itself.

Part of the basis for making this rule is that it would seem to do away with the classic paperclip problem or the King Midas problem of AI.

Allow me to explain.

Hypothetically, suppose an AI system was set up to produce paperclips. If the AI is solely devoted to that function, it might opt to do so in ways that are detrimental to mankind. For example, to produce as many paperclips as possible, the AI begins to take over steel production to ensure that there are sufficient materials to make paper clips. Soon, in a draconian way, the AI has marshaled all of the world’s resources to incessantly make those darned paperclips.

Plus, horrifically, humanity might be deemed as getting in the way of the paperclip production, and so the AI then wipes out humanity too.

All in all, this is decidedly not what we would have hoped for as a result of the AI paperclip making system.

This is similar to the fable of King Midas, whereby everything he touched turned to gold. At first, that seemed like a handy way to get rich, but then the water he touched turned to gold, and his food turned to gold, and so on; ultimately he does himself in and realizes that his wish was a curse.

Thus, rather than AI having a goal that it embodies, such as making paper clips, the belief in this version of provably beneficial AI is that it would be preferred that the AI not have any self-beliefs and instead entirely be driven by the humans around it.

Notice too that the principle states that the AI is established such that it has no desire to protect itself.

Why so?

Aha, this relates to another classic AI problem, the off-switch or kill-switch issue.

Assume that any AI we humans craft will have some form of off-switch or kill-switch, meaning that if we wanted to do so, we could stop the AI whenever we deemed it desirable. Certainly, this would be a smart thing for us to do, else we might have that crazed paperclip maker and no means to prevent it from overwhelming the planet with paperclips.

If the AI has any wits about it, which we are kind of assuming it would, the AI would be astute enough to realize that there is an off-switch and that humans could use it. But if the AI is doggedly determined to make those paper clips, the use of an off-switch would prevent it from meeting its overarching goal, and therefore the proper thing to do would be for the AI to disable that kill-switch.

It might be one of the first and foremost acts that the AI would undertake, seeking to preserve its own “lifeblood” by disabling the off switch.

To try and get around this potential loophole, the stated principle in this provably beneficial AI framework indicates that the AI is not going to have that kind of self-preservation cooked into its inherent logic.

Presumably, if the AI is going to seek to maximize the realization of human values, it could be that the AI will itself realize that disabling the off-switch is not in keeping with the needs of society and thus will refrain from doing so.  Furthermore, maybe the AI eventually realizes that it cannot achieve the realization of human values, or that it has begun to violate that key premise, and the AI might overtly turn itself off, viewing that its own “demise” is the best way to accede to human values.
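A back-of-the-envelope version of that reasoning can be put into numbers. In this sketch (loosely in the spirit of the off-switch analyses from Russell's group; the payoff values are invented), an AI that is uncertain whether its planned action helps the human does better, in expectation, by deferring to a human who can hit the off-switch than by acting unilaterally:

```python
# Toy expected-value comparison for the off-switch question.
# Illustrative payoffs: a good action is worth +1 to the human,
# a bad one -1, and being switched off is worth 0.

p_good = 0.6  # the AI's belief that its planned action actually helps

# Acting unilaterally (disabling or ignoring the off-switch):
value_act = p_good * 1.0 + (1 - p_good) * (-1.0)

# Deferring to the human, who approves good actions and switches the
# AI off for bad ones (an idealized, fully informed human):
value_defer = p_good * 1.0 + (1 - p_good) * 0.0

print(value_act, value_defer)
```

Under these assumptions, deferring is worth about 0.6 versus 0.2 for acting unilaterally, so leaving the off-switch intact is the value-maximizing move rather than an externally imposed constraint; the advantage shrinks as the AI becomes certain its action is good.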

This does seem enterprising and perhaps gets us out of the AI doomsday predicaments.

Not everyone sees it that way.

One concern is that if the AI does not have a cornerstone of any semblance of self, it will potentially be readily swayed in directions that are not quite so desirable for humanity.

Essentially, without an ironclad truism at its deepest realm about not harming humans (perhaps Isaac Asimov's famous first rule, that a robot may not injure a human being or, through inaction, allow a human to come to harm), there is no failsafe preventing the AI from going off-kilter.

That being said, the counter-argument is that the core principles of this kind of provably beneficial AI indicate that the AI will learn about human values by observing human acts, and we might assume the AI will thereby inevitably and inextricably discover Asimov's first rule on its own, by the mere act of observing human behavior.

Will it?

A counter to the counter-argument is that the AI might learn that humans do kill each other, somewhat routinely and with at times seemingly little regard for human life, out of which the AI might then divine that it is okay to harm or kill humans.

Since the AI lacks any ingrained precept that precludes harming humans, the AI will be open to whatever it seems to “learn” about humans, including the worst and exceedingly vile of acts.

Additionally, critics of this variant of provably beneficial AI are apt to point out that the word "beneficial" is potentially being used in a misleading and confounding way.

It would seem that the core principles do not mean "beneficial" in the sense of arriving at a decidedly "good" result per se (in any concrete or absolute way); instead, beneficial is intended as relative to whatever humans happen to be exhibiting as seemingly beneficial behavior. This might be construed as a relativistic ethics stance, and in that manner it does not abide by any presumed everlasting or unequivocal rules of how humans ought to behave (even if they do not necessarily behave in such ways).

You can likely see that this topic can indubitably get immersed in and possibly mired into cornerstone philosophical and ethical foundations debates.

This also takes things into the qualms about basing the AI on the behaviors of humans.

We all know that oftentimes humans say one thing and yet do another.

As such, one might construe that it is best to base the AI on what people do, rather than what they say since their actions presumably speak louder than their words. The problem with this viewpoint of humanity is that it seems to omit that words do matter and that inspection of behavior alone might be a rather narrow means of ascribing things like intent, which would seem to be an equally important element for consideration.

There is also the open question about which humans are to be observed.

Suppose the humans are part of a cult that is bent on death and destruction, and in which case, their “happiness” might be shaped around the beliefs that lead to those dastardly results, and the AI would dutifully “learn” those as the thing to maximize as human values.

And so on.

In short, as pointed out earlier, seeking to devise an approach for provably beneficial AI is a lot more challenging than meets the eye.

That being said, we should not cast aside the goal of finding a means to arrive at provably beneficial AI.

Keep on trucking, as they say.

Meanwhile, how might the concepts of provably beneficial AI be applied in a real-world context?

Consider the matter of AI-based true self-driving cars.

For my detailed discussion about the paperclip problem in AI, see: https://aitrends.com/ai-insider/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/

On the topic of AI singularity, see my explanation here: https://aitrends.com/ai-insider/singularity-and-ai-self-driving-cars/

For aspects about AI conspiracy theories, here is my take on the subject: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

When considering the mindset of AI developers, see my discussion here: https://aitrends.com/ai-insider/egocentric-design-and-ai-self-driving-cars/

The Role of AI-Based Self-Driving Cars

True self-driving cars are ones where the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that has been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Provably Beneficial AI

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One hope for true self-driving cars is that they will mitigate the approximately 40,000 deaths and 1.2 million injuries that occur due to human driving in the United States each year. The assumption is that since the AI won't be drinking and driving, for example, it will not incur drunk-driving-related car crashes (which account for nearly a third of all driving fatalities).

Some offer the following “absurdity” instance for those that are considering the notion of provably beneficial AI as an approach based on observing human behavior.

Suppose AI observes the existing driving practices of humans. Undoubtedly, it will witness that humans crash into other cars, and presumably not know that it is due to being intoxicated (in that one-third or so of such instances).

Presumably, we as humans allow those humans to do that kind of driving and cause those kinds of deaths.

We must, therefore, be "satisfied" with the result, else why would we allow it to continue?

The AI then "learns" that it is okay to ram and kill other humans in such car crashes, having no inkling that it is due to drinking and that it is an undesirable act that humans would prefer had not taken place.

Would the AI be able to discern that this is not something it should be doing?

I realize that those of you in the provably beneficial AI camp will be chagrined at this kind of characterization, and indeed there are loopholes in the aforementioned logic, but the point generally is that these are quite complex matters and undoubtedly disconcerting in many ways.

Even the notion of having foundational precepts as absolutes is not so readily viable either.

Take as a quick example the assertion by some that an AI driving system ought to have an absolute rule like Asimov’s about not harming humans and thus this apparently resolves any possible misunderstanding or mushiness on the topic.

But, as I’ve pointed out in an analysis of a recent incident in which a man rammed his car into an active shooter, there are going to be circumstances whereby we might want an AI driving system to undertake harm, and cannot necessarily have one ironclad rule thereof.

Again, there is no free lunch, in any direction, that one takes on these matters.

For why self-driving cars are a moonshot effort, see my discussion here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

For the edge problems and corner cases aspects, see my indication: https://aitrends.com/ai-insider/edge-problems-core-true-self-driving-cars-achieving-last-mile/

On the topic of illegal driving by autonomous cars, read my analysis here: https://aitrends.com/selfdrivingcars/illegal-driving-self-driving-cars/

Conclusion

There is no question that we could greatly benefit from a viable means to provably showcase that AI is beneficial.

If we cannot show that the AI is beneficial, we could at least provide a mathematical proof that the AI will keep to its stated requirements (this opens another can of worms, but at least it sidesteps the notion of "beneficial," rightfully or wrongly so).

Imagine an AI-based self-driving car that was subjected before getting onto the roadways to a provable safety theorem, and that had something similar that worked in real-time as the vehicle navigated our public streets.

Researchers are trying to get there and we can all hope they keep trying.

At this juncture, one thing that is provably the case is that all of the upcoming AI that is rapidly emerging into society is going to be extraordinarily vexing and troublesome, and that’s something we can easily prove.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: https://www.aitrends.com/ai-insider/aspiring-toward-provably-beneficial-ai-including-the-case-of-autonomous-cars/


How does it know?! Some beginner chatbot tech for newbies.


Wouter S. Sligter

Most people will know by now what a chatbot or conversational AI is. But how does one design and build an intelligent chatbot? Let’s investigate some essential concepts in bot design: intents, context, flows and pages.

I like using Google’s Dialogflow platform for my intelligent assistants. Dialogflow has a very accurate NLP engine at a cost structure that is extremely competitive. In Dialogflow there are roughly two ways to build the bot tech. One is through intents and context, the other is by means of flows and pages. Both of these design approaches have their own version of Dialogflow: “ES” and “CX”.

Dialogflow ES is the older version of the Dialogflow platform which works with intents, context and entities. Slot filling and fulfillment also help manage the conversation flow. Here are Google’s docs on these concepts: https://cloud.google.com/dialogflow/es/docs/concepts

Context is what distinguishes ES from CX. It’s a way to understand where the conversation is headed. Here’s a diagram that may help understand how context works. Each phrase that you type triggers an intent in Dialogflow. Each response by the bot happens after your message has triggered the most likely intent. It’s Dialogflow’s NLP engine that decides which intent best matches your message.

(Diagram: Wouter Sligter, 2020)

What’s funny is that even though you typed ‘yes’ in exactly the same way twice, the bot gave you different answers. There are two intents that have been programmed to respond to ‘yes’, but only one of them is selected. This is how we control the flow of a conversation by using context in Dialogflow ES.

Unfortunately the way we program context into a bot on Dialogflow ES is not supported by any visual tools like the diagram above. Instead we need to type this context in each intent without seeing the connection to other intents. This makes the creation of complex bots quite tedious and that’s why we map out the design of our bots in other tools before we start building in ES.
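As a rough illustration of the mechanism, context-gated intent selection can be sketched as below. This is emphatically not Dialogflow's real engine (real matching is scored by an NLP model rather than by exact phrases), and the intent and context names here are invented:

```python
# Minimal sketch of context-gated intent matching, in the spirit of Dialogflow ES.
# Matching here is exact-phrase; the intents and contexts are made up.

INTENTS = [
    {"name": "confirm_order", "phrases": ["yes"],
     "input_context": "awaiting_order_confirmation",
     "response": "Great, your order is placed!"},
    {"name": "confirm_newsletter", "phrases": ["yes"],
     "input_context": "awaiting_newsletter_optin",
     "response": "Thanks, you are subscribed."},
]

def match_intent(message, active_context):
    # Only intents whose input context matches the active context are eligible;
    # Dialogflow's NLP would score candidates, here we simply take the first match.
    for intent in INTENTS:
        if message.lower() in intent["phrases"] and intent["input_context"] == active_context:
            return intent
    return None

# The same 'yes' triggers different intents depending on the active context.
first = match_intent("yes", "awaiting_order_confirmation")
second = match_intent("yes", "awaiting_newsletter_optin")
print(first["response"])
print(second["response"])
```

The context acts as a filter over the candidate intents, which is exactly how two identically worded messages can yield two different bot answers.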

The newer Dialogflow CX allows for a more advanced way of managing the conversation. By adding flows and pages as additional control tools we can now visualize and control conversations easily within the CX platform.

source: https://cloud.google.com/dialogflow/cx/docs/basics

This entire diagram is a ‘flow’ and the blue blocks are ‘pages’. This visualization shows how we create bots in Dialogflow CX. It’s immediately clear how the different pages are related and how the user will move between parts of the conversation. Visuals like this are completely absent in Dialogflow ES.

It then makes sense to use different flows for different conversation paths. A possible distinction in flows might be “ordering” (as seen here), “FAQs” and “promotions”. Structuring bots through flows and pages is a great way to handle complex bots and the visual UI in CX makes it even better.
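Stripped of Dialogflow's NLU, parameters, and fulfillment, a flow is essentially a state machine whose states are pages; this minimal sketch (page names and transitions invented for illustration) shows the idea:

```python
# A flow as a plain state machine: pages are states, matched intents are
# transitions. Page names and intents here are invented for illustration.

ORDERING_FLOW = {
    "Start":              {"new order": "New Order"},
    "New Order":          {"pizza": "Pizza Order", "drink": "Drink Order"},
    "Pizza Order":        {"done": "Order Confirmation"},
    "Drink Order":        {"done": "Order Confirmation"},
    "Order Confirmation": {},
}

def next_page(current, user_intent):
    # Move to the next page if the intent matches a transition, else stay put.
    return ORDERING_FLOW[current].get(user_intent, current)

page = "Start"
for intent in ["new order", "pizza", "done"]:
    page = next_page(page, intent)
print(page)
```

Because the whole conversation path is one explicit data structure, it can be drawn, which is precisely the visualization advantage CX has over the scattered per-intent contexts of ES.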

At the time of writing (October 2020) Dialogflow CX only supports English NLP and its pricing model is surprisingly steep compared to ES. But bots are becoming critical tech for an increasing number of companies and the cost reductions and quality of conversations are enormous. Building and managing bots is in many cases an ongoing task rather than a single, rounded-off project. For these reasons it makes total sense to invest in a tool that can handle increasing complexity in an easy-to-use UI such as Dialogflow CX.

This article aims to give insight into the tech behind bot creation and Dialogflow is used merely as an example. To understand how I can help you build or manage your conversational assistant on the platform of your choice, please contact me on LinkedIn.

Source: https://chatbotslife.com/how-does-it-know-some-beginner-chatbot-tech-for-newbies-fa75ff59651f?source=rss—-a49517e4c30b—4


Who is chatbot Eliza?




Frédéric Pierron

Between 1964 and 1966 Eliza was born, one of the very first conversational agents. Its creator, Joseph Weizenbaum, was a researcher at the famous Artificial Intelligence Laboratory of MIT (Massachusetts Institute of Technology). His goal was to enable a conversation between a computer and a human user. More precisely, the program simulated a conversation with a Rogerian psychoanalyst, whose method consists of reformulating the patient’s words to let the patient explore his own thoughts.

Joseph Weizenbaum (Professor emeritus of computer science at MIT). Location: Balcony of his apartment in Berlin, Germany. By Ulrich Hansen, Germany (Journalist) / Wikipedia.

The program was rather rudimentary at the time. It consisted of recognizing keywords or expressions and displaying in return questions constructed from these keywords. When the program did not have an answer available, it displayed an “I understand” that was quite effective, albeit laconic.
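The keyword-and-template mechanism is simple enough to sketch in a few lines of Python. This is a drastic simplification (Weizenbaum's original, written in MAD-SLIP, used ranked keywords and pronoun-reflection rules), and the patterns below are made up:

```python
import re

# A few Eliza-style rules: a keyword pattern and a question template that
# reuses the captured fragment of the user's message.
RULES = [
    (r"\bI am (.+)", "Why do you say you are {0}?"),
    (r"\bI feel (.+)", "How long have you felt {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

def eliza_reply(message):
    for pattern, template in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "I understand."  # the laconic fallback the article mentions

print(eliza_reply("I am worried about my future"))
print(eliza_reply("The weather is nice"))
```

Reformulating the user's own words back as a question is what gives the illusion of attentive listening, despite there being no understanding at all.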

Weizenbaum explained that his primary intention was to show the superficiality of communication between a human and a machine. He was very surprised when he realized that many users were getting caught up in the game, completely forgetting that the program was without real intelligence and devoid of any feelings and emotions. He even said that his secretary would discreetly consult Eliza to work through her personal problems, forcing the researcher to unplug the program.

Conversing with a computer thinking it is a human being is one of the criteria of Turing’s famous test. Artificial intelligence is said to exist when a human cannot discern whether or not the interlocutor is human. Eliza, in this sense, passes the test brilliantly according to its users.
Eliza thus opened the way (or the voice!) to what has been called chatbots, an abbreviation of chatterbot, itself an abbreviation of chatter robot, literally “talking robot”.

Source: https://chatbotslife.com/who-is-chatbot-eliza-bfeef79df804?source=rss—-a49517e4c30b—4


How to take S3 backups with DejaDup on Ubuntu 20.10



DejaDup is the default backup application for Gnome. It’s a GUI for duplicity, focuses on simplicity, supports incremental encrypted backups and up until recently supported a large number of cloud providers. Unfortunately as of version 42.0, all major cloud providers have been removed. Thus given that Ubuntu 20.10 ships with the specific version, any user who upgrades and has backups on Amazon S3 won’t be able to access them. In this blog post, we will provide a solution that will allow you to continue taking backups on AWS S3 using DejaDup.

The mandatory rant (feel free to skip)

The removal of the cloud providers should not come as a surprise. I’m not exactly sure which version of DejaDup deprecated them but it was around the release of Ubuntu 17.10 when they were all hidden as an option. So for 3 long years, people who had backups on Amazon S3, Google Cloud Storage, Openstack Swift, Rackspace etc could still use the deprecated feature and prepare for the inevitable removal.

So why complain, you might ask? Well, first of all, when you update from an earlier version of Ubuntu to 20.10, you don’t really know that all cloud providers have been removed from DejaDup. Hence if something goes wrong during the update, you won’t be able to easily access your backups and restore your system.

Another big problem is the lack of storage options in the latest version of DejaDup. They decided to change their policy and support only “consumer-targeted cloud services”, but currently they only support Google Drive. So they eliminated all the cost-efficient options for mass storage and kept only one single, very expensive option. I’m not really sure how this is good for the users of the application. Linux was always about having a choice (too much of it, in many cases), so why not maintain multiple storage options to serve both experienced and inexperienced users? Thankfully, because we are on Linux, we have the option to fix this.

How to use Deja Dup v42+ with AWS S3

WARNING: I have not tested thoroughly the following setup so use it at your own risk. If the computer explodes in your face, you lose your data, or your spouse takes your kids and leaves you, don’t blame me.

Installing s3fs-fuse

With that out of the way, let’s proceed to the fix. We will use s3fs-fuse, a program that allows you to mount an S3 bucket via FUSE and effectively make it look like a local disk. Thankfully you don’t have to compile it from source, as it’s in Ubuntu’s repos. To install it, type the following in your terminal:

sudo apt install s3fs

Setting up your AWS credentials file

Next, we need to configure your credentials. s3fs supports two methods of authentication: an AWS credentials file or a custom passwd file. In this tutorial we will use the first method, but if you are interested in the latter, feel free to view the s3fs documentation on GitHub. To set up your credentials, make sure that the file ~/.aws/credentials contains your AWS access key id and secret key. It should look like this:


[default]
aws_access_key_id=YOUR_ACCESS_KEY_ID
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY

Mounting your bucket to your local filesystem

Once you have your credentials file, you are ready to mount your backup bucket. If you don’t remember the bucket name, you can find it by visiting your AWS account. To mount and unmount the bucket to/from a specific location, type:


# mount
s3fs BUCKET_NAME /path/to/location

# unmount
fusermount -u /path/to/location

Mounting the bucket like this is only temporary and will not persist across reboots. You can add it to /etc/fstab, but I believe this only works with the passwd file. If you want to use your AWS credentials file, an easy workaround is to create a shortcut in your Startup Applications Preferences.

Note that you can add a small 10 sec delay to ensure that the WiFi is connected before you try to mount the bucket. Internet access is obviously necessary for mounting it successfully. If you are behind VPNs or have other complex setups, you can also create a bash script that makes the necessary checks before you execute the mount command. Sky is the limit!
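As one possible shape for such a startup helper, here is a hypothetical Python sketch that waits for connectivity before mounting; the bucket name, mount point, and probe host are placeholders to adjust, and this mirrors the shell-based advice above rather than being a tested, production-ready script:

```python
# Hypothetical startup helper: wait for the network, then mount the bucket.
# BUCKET and MOUNT_POINT are placeholders -- substitute your own values.

import socket
import subprocess
import time

BUCKET = "BUCKET_NAME"
MOUNT_POINT = "/path/to/location"

def network_up():
    # Consider the network up once a TCP connection to S3 succeeds.
    try:
        socket.create_connection(("s3.amazonaws.com", 443), timeout=3).close()
        return True
    except OSError:
        return False

def wait_for_network(checker=network_up, attempts=12, delay=5):
    # Poll for roughly a minute before giving up.
    for _ in range(attempts):
        if checker():
            return True
        time.sleep(delay)
    return False

def main():
    if wait_for_network():
        subprocess.run(["s3fs", BUCKET, MOUNT_POINT], check=True)
    else:
        raise SystemExit("Network never came up; not mounting the bucket.")

# In Startup Applications, point the command at this script and call main()
# at the bottom of your deployed copy.
```

A plain bash script with `ping` and `until` loops would work just as well; the point is only that the mount command runs after the checks pass, not before.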

Configuring DejaDup

With the bucket mounted as a local drive, we can now easily configure DejaDup to use it. First of all, we need to change the backend to local. This can be done either with a program like dconf Editor or via the console with the following command:

gsettings set org.gnome.DejaDup backend 'local'

Finally we open DejaDup, go to Preferences and point the storage location to the directory that has your S3 backup files. Make sure you select the subdirectory that contains the backup files; this is typically a subdirectory of your mount point whose name matches your computer’s hostname. Last but not least, make sure that the S3 mount directory is excluded from DejaDup! To do this, check the ignored folders in Preferences.
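If you prefer the console for these last two preferences as well, they can likely be set via gsettings too. Treat this as a sketch: the paths below are placeholders, and the exact schema keys may differ between DejaDup versions, so verify them with dconf Editor first.

```shell
# point the 'local' backend at the backup subdirectory (placeholder path)
gsettings set org.gnome.DejaDup.Local folder "/home/user/s3-backups/my-hostname"

# exclude the whole mount point so DejaDup does not back up its own backups
gsettings set org.gnome.DejaDup exclude-list "['/home/user/s3-backups']"
```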

That’s it! Now go to your restore tab and DejaDup will be able to read your previous backups. You can also take new ones.

Gotchas

There are a few things to keep in mind in this setup:

  1. First of all, you must be connected to the internet when you mount the bucket; if you are not, the mount will fail. So instead of just calling the mount command, I advise you to write a bash script that does the necessary checks before mounting (the internet connection is up, the firewall allows external requests, etc.).
  2. Taking backups this way seems slower than using the old native S3 support, and it is likely to generate more network traffic (mind AWS traffic costs!). This is expected: DejaDup thinks it’s accessing the local file-system, so it makes no effort at aggressive caching or at minimizing operations that cause network traffic.
  3. You should expect stability issues. As we said earlier, DejaDup does not know it is writing data over the wire, so much of the functionality that usually exists in such setups (such as retry-on-fail) is missing. And obviously, if you lose the connection midway through a backup, you will have to delete it and start a new one to avoid corrupting your future backups.
  4. Finally, keep in mind that this is a very experimental setup; if you really want a reliable solution, you should do your own research and select something that meets your needs.

If you have a recommendation for an open-source backup solution that allows locally encrypted incremental backups, supports S3, and has an easy-to-use UI, please leave a comment as I’m more than happy to give it a try.

About Vasilis Vryniotis

My name is Vasilis Vryniotis. I’m a Data Scientist, a Software Engineer, author of Datumbox Machine Learning Framework and a proud geek. Learn more

Source: http://blog.datumbox.com/how-to-take-s3-backups-with-dejadup-on-ubuntu-20-10/
