The notion of "Agent Technology" is supposedly the next big up-and-coming thing. The basic idea is to have a program go out and "do things" for you rather than your having to do them yourself. For instance, a web agent might go off and look up the prices of your favorite stocks, collect the information together, and return it to you.
I've written a number of utilities in Perl that I have used to extract data off the web, including ones that collect stock quotes for my investment portfolio, archive a favorite daily cartoon (which will remain nameless to keep the copyright folk at bay), and look up Dave Letterman's "Top Ten" list for the day. They do not act as mobile agents, as they are configured to run only on my own system. Alas, this sort of thing is rather fragile, and these scripts have all fallen into uselessness over time. That is one of the risks of attempting to use agents.
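The skeleton of such a script is tiny. Here is a minimal sketch, assuming a hypothetical quote server URL and a made-up pattern for the price field; any real service would need its own pattern:

    #!/usr/bin/perl -w
    # Minimal "agent" sketch: fetch a quote page and scrape out a price.
    # The URL and the regular expression are hypothetical.
    use strict;
    use LWP::Simple;

    my $symbol = shift || 'RHAT';
    my $page = get("http://quotes.example.com/q?s=$symbol")
        or die "Could not fetch quote page for $symbol\n";
    if ($page =~ m{Last\s+Trade:\s*\$?([\d.]+)}) {
        print "$symbol: $1\n";
    } else {
        print "No price found for $symbol\n";
    }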
Here are some useful tools:
Perl: Very useful for building scripts to parse and otherwise process web pages;
Lynx: I usually use Lynx to access web objects, via lynx -source URL. It causes no GUI overhead, and is convenient for command line use in this fashion. I've written some "robots" that go off and spawn as many as 50 lynx instances at a time in order to download lists of web objects more efficiently; a sketch of such a robot follows this list. This "floods" my Squid proxy cache with requests, which means it's pretty much guaranteed that my ISP link will be busy 100% of the time while this is running...
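Here is the promised sketch of such a robot, under the assumption of a made-up list of URLs; it forks one lynx -source process per object and lets them all run at once:

    #!/usr/bin/perl -w
    # Sketch: grab a list of web objects in parallel by forking one
    # "lynx -source" process per URL.  The URL list is a placeholder.
    use strict;

    my @urls = map { "http://www.example.com/page$_.html" } 1 .. 50;
    foreach my $url (@urls) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {                       # child process
            my ($file) = $url =~ m{([^/]+)$};  # name output after the URL
            exec "lynx -source '$url' > $file";
            die "exec failed: $!";
        }
    }
    1 while wait() != -1;   # parent: reap all the children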
If you're wondering why your web site stats include a thundering herd of hits from some strangely-named web clients, it is likely that your site was "spidered" by one of the search engines. If you don't want that, there is a robots.txt file you can put on your site that will indicate to any well-mannered web robot that it should leave you alone.
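For instance, a robots.txt like this one, placed at the top level of the site, tells all well-mannered robots to keep out of everything under /private/ (the path is just an example); a bare Disallow: / line would warn them off the entire site:

    # robots.txt - ask well-behaved robots to skip /private/
    User-agent: *
    Disallow: /private/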
This web page lists many Agent Construction tools; common implementation languages include Java, Safe-TCL, Perl, Python, and Scheme.
The "neat idea" that is still pretty much on-the-drawing-board is to have agents that don't just reside on your computer, but can wander out to other places to look for whatever it is that they are looking for, and then wander back home to return the results.
The two primary reasons for using mobile agents are the following:
Users are running on computing platforms that are not very reliable.
For people running on less-than-robust computing platforms, or with slow or intermittent network connections, it may be necessary to run "resource-hungry" agents remotely.
Even when the platform is reliable, it may be very important in a transaction processing system for individual units of work, or "transactions," to be non-repudiable; whence the notion of a Transaction Processing Monitor.
On "Thin client" systems (e.g. Network Computers , or inside web browsers), there may simply not be the memory or CPU power to handle complex data processing.
On the other hand, under Linux, I can reliably fire off 50 Lynx processes to grab stock prices and drop them into a database. I prefer not to have them all running concurrently, but the system can handle that sort of thing...
Improving I/O efficiency
If the process will be doing a lot of interaction with the remote system, and then summarizing this data in a compact fashion, it is preferable to do the "summarizing" someplace near the data repository, as this cuts down on the amount of data that has to travel over the network (see the sketch following the list below).
This amounts to creating a three-tier client/server system involving:
Presentation server/user server (e.g. a web browser)
Application server - where the agent code runs
Hopefully this is closer to the database, thus reducing I/O requirements.
Database server - where the data resides
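To make the "summarize near the data" point concrete, here is a sketch using Perl's DBI; the database name, table, and columns are all made up. The GROUP BY query ships back one small row per stock symbol, where the naive approach would drag every raw quote row across the network just to average them on the client:

    #!/usr/bin/perl -w
    # Sketch: let the database server do the summarizing so that only
    # the summary crosses the network.  Names here are hypothetical.
    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=quotes', 'me', 'secret',
                           { RaiseError => 1 });

    # Good: one compact result row per symbol comes back.
    my $summary = $dbh->selectall_arrayref(
        'SELECT symbol, avg(price) FROM quote GROUP BY symbol');
    printf "%s averaged %.2f\n", @$_ for @$summary;

    # Bad: "SELECT symbol, price FROM quote" would pull every raw
    # row over the wire to compute the same averages client-side.
    $dbh->disconnect;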
The essential idea is that you send out programs that run "somewhere else." Since somewhere else could represent all sorts of "computing systems," it is necessary to have such system features as platform independence, secure execution environments, secure authentication, and the like. Bytecode-compiled languages (e.g. Lucent's Inferno, Java, ...) are excellent candidates for this sort of processing, as bytecode-based systems tend to encourage portability and smallness of object code. Careful system design is necessary to ensure security.
There are a number of proposals for hosting agents; it is a really hard problem to make this work well because it invokes issues of who trusts whom. There needs to be a place to run the agent, which means that:
You must be able to verify that the agent code that is sent out, and the information that comes back, have not been tampered with while in transit.
The use of cryptographic mechanisms such as digital signatures may help resolve this problem, modulo some problems with US law that may affect "agents" that have to travel in or out of the United States. A minimal sketch of digest checking appears after this list.
Agents as well as data transmissions may contain or transmit "secrets" that are not to be disclosed.
Data encryption can resolve this problem for data in transit. Unfortunately, the server has total control over the "agent execution environment," which means that you need to trust the server your program runs on.
There will be limitations on the quantities of CPU, memory, and communications resources that agents can use. There must be a protocol to negotiate these limits.
There will be limitations on what information agents can request. In particular, they will have to execute their code in a "secure" environment such that they cannot interfere with other agents on the system, or with system operations in general. It is probably necessary to set up some economic models so that agents somehow "pay for" the resources that they use.
This also tends to require the use of "safe" execution environments with such languages as Java, Safe-TCL, Perl, Python, Scheme, ...
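As a gesture at the tamper-detection problem mentioned above, here is a minimal sketch using Perl's Digest::SHA: the receiver refuses to run agent code whose digest does not match the one published by the sender. Note that a bare hash only helps if the digest travels by a channel the attacker cannot touch; a real system would digitally sign the digest instead.

    #!/usr/bin/perl -w
    # Sketch: detect in-transit tampering of agent code by checking a
    # SHA-256 digest.  A real system would sign the digest; the bare
    # hash here is only illustrative.
    use strict;
    use Digest::SHA qw(sha256_hex);

    my ($agent_file, $expected_digest) = @ARGV;
    open my $fh, '<', $agent_file or die "open $agent_file: $!";
    my $code = do { local $/; <$fh> };
    close $fh;

    if (sha256_hex($code) eq $expected_digest) {
        print "Digest matches; agent code appears untampered.\n";
        # ... hand $code to the (sandboxed!) execution environment ...
    } else {
        die "Digest mismatch: refusing to run the agent code\n";
    }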
There is a more or less intractable problem vis-a-vis secrecy of executable code.
The problem is that any code that I send out will be susceptible to capture/analysis.
Even if the code is "secured" by data encryption, it must, at some point, exist in decrypted form in order for the code to actually run on the remote server. While it is decrypted, the code is vulnerable to capture by the remote server.
In other words, if I have a "nifty" agent program that has a wonderful algorithm for analyzing stock values, the only way to keep it secret from everyone else is to run it only on hardware that I control. If I run it on your server, you will have the ability to get a copy of the program.
This also has interesting implications for software copyright. Typical software licenses permit me to run programs on my computer. Some new model will be necessary if "my computer" becomes a network of computers that may not all be "at my site," or perhaps not all in the same national jurisdiction.
This is a "problem," for instance, with schemes for playing music or movies (DVDs) on general purpose computers; a document may be kept encrypted in transit, but at the time that it is played back, it must be decrypted so that it may be submitted to the computer's video and audio hardware.
Once decrypted, that data is then necessarily vulnerable to processes on that computer.
This is a dilemma, with no evident way out.
Here are some further references to "Intelligent Agents," "Mobile Agents," ...
ERights.org - home of the E Language.
The E Language can be used to express what happens within an object. It is based on notions from the "Actor Model," the notion of "orthogonal persistence" as implemented in KeyKOS, Concurrent Prolog, and the Joule Language, amongst other systems.
See the E Language Tutorial.
E is implemented atop Java, and resembles some sort of cross between Java, Dylan, and Smalltalk. (Its design also consciously includes features of ML.) Since it sits atop Java, it includes the entire namespace of Java calls available in your Java implementation.
It requires the Cryptix crypto library, as well as Java components including the Swing graphical library, and is specifically intended for use in designing "agent" software that can negotiate contracts of one variety or another.
Programming with Agents, by Michael Travers:
This dissertation investigates new metaphors, environments, and languages that make possible new ways to create programs -- and, more broadly, new ways to think about programs. In particular, it introduces the idea of programming with "agents" as a means to help people create worlds involving responsive, interacting objects. In this context, an agent is a simple mechanism intended to be understood through anthropomorphic metaphors and endowed with certain lifelike properties such as autonomy, purposefulness, and emotional state. Complex behavior is achieved by combining simple agents into more complex structures. While the agent metaphor enables new ways of thinking about programming, it also raises new problems such as inter-agent conflict and new tasks such as making the activity of a complex society of agents understandable to the user.
The goal is to be able to make use of "bulk" computing elements that may be manufactured and assembled very cheaply to build powerful computing engines.
The objective of this research is to create the system-architectural, algorithmic, and technological foundations for exploiting programmable materials. These are materials that incorporate vast numbers of programmable elements that react to each other and to their environment. Such materials can be fabricated economically, provided that the computing elements are amassed in bulk without arranging for precision interconnect and testing. In order to exploit programmable materials we must identify engineering principles for organizing and instructing myriad programmable entities to cooperate to achieve pre-established goals, even though the individual entities are unreliable and interconnected in unknown, irregular, and time-varying ways.