Early methods of transferring information
Before current tools existed, transferring information relied on a handful of formats and methods. Commonly, people would provide what was essentially a one-way feed of information using a file format such as CSV, TXT or, later, XML. Data would be exported from a central database into a static document of one of these types and then made available to whoever needed it, often only as a hard copy. The clear problem with this approach is that while the one true source produced the data, anyone who made use of it had no way to pass anything back to add to or update it. Alongside that, there is the question of how long the data remains reliable and how many versions end up in circulation. Another early approach was to allow ODBC or SQL calls directly against your database. While this allows a lot of flexibility in what you can do with the data, it gives you very little control over it. A third notable approach was the proprietary application or interface: for a large-scale system, someone might build a bespoke application or network interface that allowed data to be managed remotely between locations. It would have its own standards and way of working, accessible only to someone familiar with its producer.
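To make the one-way feed concrete, here is a minimal sketch of that classic export pattern: rows are pulled from a central database and flattened into a CSV snapshot. The `customers` table and its columns are invented for illustration; an in-memory SQLite database stands in for the central source.

```python
import csv
import io
import sqlite3

# Stand-in for the central database (table and data are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Acme Ltd"), (2, "Globex")])

# Flatten the rows into a CSV document.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["id", "name"])  # header row
for row in conn.execute("SELECT id, name FROM customers ORDER BY id"):
    writer.writerow(row)

snapshot = buffer.getvalue()
print(snapshot)
```

Once written out, the snapshot is frozen: consumers cannot push changes back through it, and it drifts out of date as the source database moves on, which is exactly the weakness described above.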
The beginning of a more commonly accepted approach
Following on from the earlier methods described above, more standardised and flexible approaches evolved. Funnily enough, the adoption of a set of standards was essential to this. A good example of moving to a standard was the use of HTTP, the communication protocol already in common use for delivering the world-wide web. Using this protocol to transfer information remotely brought several benefits: it tended to work well through server firewalls, it could be easily deployed on existing systems that already provided web services, and it could use existing encryption mechanisms in the form of SSL without any additional knowledge. The approach was also platform agnostic, meaning it didn't matter whether you were running Windows, Mac or Linux as your operating system. Support for HTTP is also built directly into all common programming languages, e.g. Java, .Net, PHP or Perl.
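The "built directly into the language" point can be shown with a short sketch using only the standard library: a tiny HTTP server and a client exchanging data over plain HTTP, with no third-party frameworks involved. The `/stock` endpoint and its response text are invented for illustration; switching the URL scheme to `https` is what would bring in the SSL layer mentioned above.

```python
import http.server
import threading
import urllib.request

class StockHandler(http.server.BaseHTTPRequestHandler):
    """Toy server answering a hypothetical stock-level query."""

    def do_GET(self):
        body = b"42 widgets in stock"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example's output quiet

# Bind to an ephemeral port and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), StockHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side is equally standard: one call, no extra libraries.
url = f"http://127.0.0.1:{server.server_port}/stock"
with urllib.request.urlopen(url) as resp:
    payload = resp.read().decode()

server.shutdown()
print(payload)
```

Every mainstream language of the era offered an equivalent pair of primitives, which is why HTTP became the common denominator.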
While HTTP opened a door, the introduction of the open standards of SOAP and REST as ways of transferring information over the protocol took things even further towards establishing a widely adopted approach. SOAP was the first of the two to be commonly adopted, allowing two-way management of data using XML and giving a business the ability to maintain that one source of information while distributing the ability to interact with it. This two-way interaction was carried over HTTP (in practice almost always via POST, with the operation described inside the XML message), within a framework that took the heavy lifting away from carrying it out. REST gained popularity more recently as a variation on the theme: it is considered simpler to implement, maps operations directly onto the HTTP methods GET, POST, PUT and DELETE, and can use other data formats for requests and responses, such as JSON. REST has become widely used, but in addition to SOAP rather than as a replacement. As the adoption of these standards grew, open libraries and frameworks were developed to give the integrator simple interfaces in their choice of programming language, thereby removing a significant burden of development.
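The difference in style is easiest to see side by side. Below is a sketch of the same hypothetical "update customer" operation expressed both ways: as a SOAP XML envelope (which would travel in an HTTP POST body) and as a REST request (an HTTP verb, a resource path, and a JSON body). The operation name, fields, and resource path are invented; a real SOAP service would define its operations and namespaces in a WSDL.

```python
import json
import xml.etree.ElementTree as ET

# SOAP style: the request is an XML envelope describing the operation.
NS = "http://schemas.xmlsoap.org/soap/envelope/"
envelope = ET.Element(f"{{{NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{NS}}}Body")
update = ET.SubElement(body, "UpdateCustomer")  # hypothetical operation
ET.SubElement(update, "Id").text = "42"
ET.SubElement(update, "Name").text = "Acme Ltd"
soap_request = ET.tostring(envelope, encoding="unicode")

# REST style: the same intent maps onto an HTTP verb plus a JSON body.
rest_method = "PUT"
rest_path = "/customers/42"  # hypothetical resource path
rest_body = json.dumps({"name": "Acme Ltd"})

print(soap_request)
print(rest_method, rest_path, rest_body)
```

The XML envelope carries the operation name inside the message, which is why SOAP can run everything through POST; REST instead leans on the HTTP method and the URL itself to convey the intent, which is a large part of its perceived simplicity.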