Wednesday, April 24, 2019

JASON (multi-agent systems development platform)

Introduction to JASON

      Jason is a platform for the development of multi-agent systems. An extension of the AgentSpeak agent-oriented programming language is used to program the behavior of individual agents. Jason is developed in Java and allows the customization of most aspects of an agent or a multi-agent system. It comes as a plugin for either jEdit or Eclipse, and it offers different infrastructures for the deployment of a multi-agent system, for example using JADE or SACI as agent-based distributed-system middleware.

      Current trends in computer science, such as the semantic web, ubiquitous computing, and self-* systems, make it increasingly important that programming technology suitable for open, unpredictable, dynamic environments is made available. Many abstractions and techniques that emerged from research in multi-agent systems can have a major impact on the effectiveness of (the development of) such systems. Research into agent-oriented programming languages aims at making such abstractions and techniques readily available at the level of programming languages. In this perspective, agent-oriented programming, combined with ongoing work on agent-oriented software engineering, is likely to lead to a popular new paradigm for the practical development of such complex distributed systems.

      One of the most studied architectures for cognitive agents is the BDI (Beliefs-Desires-Intentions) architecture. In the area of agent-oriented programming languages in particular, AgentSpeak(L) is one of the best-known languages based on the BDI architecture. AgentSpeak(L) is an abstract logic-based agent-oriented programming language introduced by Rao and subsequently extended and formalized in a series of papers by Bordini, Hübner, and various colleagues. Practical BDI agents are implemented as reactive planning systems: they run continuously, reacting to events (e.g., perceived changes in the environment) by executing plans given by the programmer. Plans are courses of action that agents commit to execute so as to achieve their goals. The pro-active behavior of agents is possible through the notion of goals (desired states of the world), which are also part of the language in which plans are written.

      Jason is a Java-based platform for the development of multi-agent systems. At the core of the platform lies an interpreter for an extended version of AgentSpeak (we use ``AgentSpeak'' to refer to the various extensions of the original AgentSpeak(L) language). Various ad hoc implementations of BDI-based (or ``goal-based'') systems exist, but one important characteristic of AgentSpeak is its theoretical foundation: in fact, the implementation of the AgentSpeak interpreter available with Jason is directly based on the operational semantics of the language.

Current Features of the Jason Platform

      The original, abstract version of the language was meant for theoretical work on relating BDI logics to implementations of reactive planning systems that followed the same philosophical principles (on ``practical reasoning'', i.e., reasoning about how to act). The AgentSpeak extensions implemented in Jason were necessary for turning the original abstract language into a practical programming language suitable for multi-agent systems. The language extensions include the following features:

      Strong negation: as is well known in the ALP community, the closed-world assumption is not ideal for open systems where uncertainty cannot be avoided; it helps the modelling of such applications if we are able to refer to things agents believe to be true, believe to be false, or are ignorant about.

      Handling of plan failures: because of the dynamic nature of typical multi-agent environments, plans can fail to achieve the goals they were written to achieve. One important aspect of reactive planning systems is that the choice of the specific plan to achieve a goal is left for as late as possible, so as to take into account the latest information the agent might have; but of course plans can still fail. Jason has a plan-failure handling mechanism consisting of plans that are triggered by such failures, giving the programmer the chance to undo the effects of any actions already executed before the plan failed, if necessary, and then to adopt the (unachieved) goal again, if the conditions are appropriate.

      Speech-act based communication: the philosophical foundation for all the work on inter-agent communication is speech-act theory. Because the mental attitudes that are classically used to give semantics to speech-act based communication are formally defined for AgentSpeak, we can give precise semantics for how agents interpret the basic illocutionary forces, and this has been implemented in Jason. An interesting extension of the language is that beliefs can have ``annotations'', which can be useful for application-specific tasks (note that annotations do not increase the expressive power of the language, but they are an elegant notation that makes the belief base much more readable). One standard annotation is added automatically by Jason: the source of each particular belief. There are essentially three different types of sources of information: percepts (i.e., information obtained by sensing the environment), inter-agent communication (i.e., information obtained from other agents), and ``mental notes'' (i.e., beliefs added by the agent itself, which can facilitate various programming tasks).

      Plan annotations: in the same way that beliefs can have annotations, programmers can add annotations to plan labels, which can be used by elaborate selection functions (e.g., ones using decision-theoretic techniques). Selection functions are user-defined functions used by the interpreter at its various choice points, for example to decide which plan should be given preference when several different plans happen to be applicable for a particular event.

The platform, more generally, has the following features:


      Distribution: the platform makes it easy to define the agents that will take part in the system and also to determine on which machine each will run, if actual distribution is necessary. The infrastructure for actual distribution can be changed (e.g., if a particular application needs to use a particular distribution platform such as JADE). Currently, two types of infrastructure are available: one that runs all agents on the same machine and another which allows distribution using SACI (http://www.lti.pcs.usp.br/saci/).

      Environments: multi-agent systems will normally be deployed in some real-world environment; even then, a simulation of the environment will be needed during development. Jason provides support for developing environments, which are programmed in Java rather than in an agent language: the agent abstractions are often not appropriate for programming environments, so the necessary support is provided for this to be done in Java.
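      As a rough illustration, a simulated environment might be sketched in Java as follows. The base class and method names here are assumed (along the lines of Jason's jason.environment.Environment with addPercept/executeAction hooks) and may differ between Jason versions:

        // Minimal sketch of a simulated environment for Jason agents.
        // The Environment base class and its addPercept/removePercept/executeAction
        // hooks are assumed from Jason's environment API; exact signatures may differ.
        import jason.asSyntax.Literal;
        import jason.asSyntax.Structure;
        import jason.environment.Environment;

        public class RoomEnvironment extends Environment {

            @Override
            public void init(String[] args) {
                // Initial percept visible to the agents: the door starts closed.
                addPercept(Literal.parseLiteral("door(closed)"));
            }

            @Override
            public boolean executeAction(String agentName, Structure action) {
                // A (hypothetical) open_door action changes the simulated state
                // and, consequently, the percepts derived from it.
                if (action.getFunctor().equals("open_door")) {
                    removePercept(Literal.parseLiteral("door(closed)"));
                    addPercept(Literal.parseLiteral("door(open)"));
                    return true;   // the action succeeded
                }
                return false;      // unknown action: report failure to the agent
            }
        }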

      Customization: programmers can customize two important parts of the agent platform by providing application-specific Java methods for certain aspects of an agent and of the agent architecture (note that the AgentSpeak interpreter provides only the reasoning component of the overall agent architecture). By overriding the methods of the agent class, programmers can define the selection functions (which are used by the interpreter), the belief update and revision functions, as well as a ``social'' function which determines from which agents communication can be received. By overriding the methods of the Java class for the overall agent architecture, programmers can customize the way perception of the environment (the agent's ``sensors''), inter-agent communication, and acting on the environment are implemented. The latter is useful because, among other things, before deploying a multi-agent system programmers will often want to test it with a simulated environment. The move from simulation to real-world deployment is then done by providing the Java code that interfaces the agent's practical reasoning with the real-world environment.
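      For instance, a customized agent class overriding the plan (option) selection function might be sketched as below; the class and method names follow Jason's agent API as we understand it and should be treated as indicative only:

        // Sketch of an agent class overriding the option selection function.
        // Agent, Option and the selectOption signature are assumed from Jason's
        // jason.asSemantics package and may differ between versions.
        import java.util.List;
        import jason.asSemantics.Agent;
        import jason.asSemantics.Option;

        public class CautiousAgent extends Agent {

            @Override
            public Option selectOption(List<Option> options) {
                // Prefer an applicable plan whose label carries a (hypothetical)
                // "preferred" annotation; otherwise fall back to the default choice.
                for (Option o : options) {
                    if (o.getPlan().getLabel().toString().contains("preferred")) {
                        return o;
                    }
                }
                return super.selectOption(options);
            }
        }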

     Language extensibility and legacy code: the AgentSpeak extension available with Jason has a construct called ``internal actions''. Wherever a literal can appear in a plan, an internal action can appear as well. These are implemented in Java (or indeed any other language, using JNI) as a Boolean method, and support is given, e.g., for the binding of logical variables. This provides straightforward language extensibility through user-defined internal actions, which is also an elegant way of invoking legacy code from within the high-level agent reasoning. Besides user-defined internal actions, Jason comes with a library of essential standard internal actions. These implement a variety of useful operations for practical programming; most importantly, they provide the means for programmers to do important things for BDI-inspired programming that were not possible in the original AgentSpeak language, such as checking and dropping the agent's own desires/intentions.
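     As a sketch, a user-defined internal action that squares a number and binds the result to a logical variable (so that a plan could call something like mylib.square(3,X)) might look as follows; the DefaultInternalAction base class and the term/unifier calls are assumed from Jason's API and may differ in detail:

        // Sketch of a user-defined internal action: squares its first argument
        // and unifies the result with its second argument.
        // Class and method names are assumed from Jason's API.
        import jason.asSemantics.DefaultInternalAction;
        import jason.asSemantics.TransitionSystem;
        import jason.asSemantics.Unifier;
        import jason.asSyntax.NumberTerm;
        import jason.asSyntax.NumberTermImpl;
        import jason.asSyntax.Term;

        public class square extends DefaultInternalAction {

            @Override
            public Object execute(TransitionSystem ts, Unifier un, Term[] args) throws Exception {
                double n = ((NumberTerm) args[0]).solve();   // first argument: a number
                // The Boolean result of the unification tells the interpreter
                // whether the internal action succeeded.
                return un.unifies(args[1], new NumberTermImpl(n * n));
            }
        }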

      Integrated Development Environment: Jason is distributed with an IDE which provides a GUI for managing the system's project (the multi-agent system), editing the source code of individual agents, and running the system. Another tool provided as part of the IDE allows the user to inspect agents' internal (i.e., ``mental'') states when the system is running in debugging mode. The IDE is a plug-in to jEdit (http://www.jedit.org/), and an Eclipse plug-in is likely to be available in the future.

Ongoing Research Related to Jason

      There is much research related to what has been done or is being done in  the development of Jason. Below, we mention some of this research.

      Plan patterns for declarative goals: in recent work, we have devised patterns of AgentSpeak plans that can be used to define various types of declarative goals with sophisticated temporal structures. Such types of goals are quite important in the agents literature and an essential feature of agent-oriented programming. They allow us to express, for example, that an agent should persist with a goal until there is evidence that achieving it has become impossible, or that there is no longer any need to achieve the goal at all. The use of patterns for this (rather than specific language constructs) provides the same flexibility as the idea of patterns in object orientation. We are in the process of extending Jason with pre-processing to help automate the generation of those plan patterns from higher-level specifications.

      Organisations: an important part of agent-oriented software engineering is related to agent organisations, which has received much research attention in the last few years. We are currently working on allowing specifications of agent organisations (with the related notions of roles, groups, relationships between groups, social norms, etc.) to be used in combination with Jason for programming the individual agents. The particular organisational model we use is Moise+.

      Plan exchange: work has been done to allow plan exchange between AgentSpeak agents, which can be very useful, in particular for systems of cooperating agents, but also for applications in which a large number of plans cannot be kept in the agent's plan library simultaneously (e.g., for use in PDAs with limited computational resources). The motivation for this work is a simple intuition: if you do not know how to do something, ask someone who does. However, various issues need to be considered in engineering systems where such plan exchanges can happen (e.g., which plans can be exchanged, what to do with a plan retrieved from another agent, who and when to ask for plans). This work is based on the Coo-BDI plan exchange mechanism.


          Ontological reasoning: although this is not available in Jason yet, it has been argued that the belief base of an AgentSpeak agent could be formulated as a (populated) ontology, which would open the way for ontological reasoning over the agent's beliefs.


    CDN (Content Delivery / Distribution Network)

    What is a CDN...?

          Content delivery networks (CDNs) are the transparent backbone of the Internet, in charge of content delivery. Whether we know it or not, every one of us interacts with CDNs on a daily basis: when reading articles on news sites, shopping online, watching YouTube videos or perusing social media feeds.
          No matter what you do, or what type of content you consume, chances are that you’ll find CDNs behind every character of text, every image pixel and every movie frame that gets delivered to your PC and mobile browser.
          To understand why CDNs are so widely used, you first need to recognize the issue they’re designed to solve. Known as latency, it’s the annoying delay that occurs from the moment you request to load a web page to the moment its content actually appears onscreen.
          That delay interval is affected by a number of factors, many being specific to a given web page. In all cases however, the delay duration is impacted by the physical distance between you and that website’s hosting server. A CDN’s mission is to virtually shorten that physical distance, the goal being to improve site rendering speed and performance.

    How a CDN Works...?

          To minimize the distance between the visitors and your website’s server, a CDN stores a cached version of its content in multiple geographical locations (a.k.a., points of presence, or PoPs). Each PoP contains a number of caching servers responsible for content delivery to visitors within its proximity.

          In essence, a CDN puts your content in many places at once, providing superior coverage to your users. For example, when someone in London accesses your US-hosted website, the request is served through a local UK PoP. This is much quicker than having the visitor’s requests, and your responses, travel the full width of the Atlantic and back.
          This is how a CDN works in a nutshell. Of course, the rabbit hole goes deeper, and the inner workings of content delivery networks deserve a closer look.
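          The request flow described above boils down to a simple decision at the edge: serve from the local cache when possible, otherwise fetch from the origin and keep a copy. A minimal, illustrative sketch (all names here are hypothetical, not any particular CDN's software):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.function.Function;

        // Illustrative edge-server logic: serve cached content when available,
        // otherwise fetch it from the origin server and cache the copy locally.
        // originFetch stands in for a real HTTP call back to the origin.
        public class EdgeCache {
            private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

            public byte[] handleRequest(String url, Function<String, byte[]> originFetch) {
                byte[] cached = cache.get(url);
                if (cached != null) {
                    return cached;                     // cache hit: served from the nearby PoP
                }
                byte[] fresh = originFetch.apply(url); // cache miss: go back to the origin
                cache.put(url, fresh);                 // keep a copy for the next visitor
                return fresh;
            }
        }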





    Who Uses A CDN...?

          Pretty much everyone. Today, over half of all Internet traffic is already being served by CDNs, and those numbers are trending upward with every passing year. The reality is that if any part of your business is online, there are few reasons not to use a CDN, especially when so many offer their services free of charge.
          Yet even as a free service, CDNs aren't for everyone. Specifically, if you are running a strictly localized website, with the vast majority of your users located in the same region as your hosting, having a CDN yields little benefit. In this scenario, using a CDN can actually worsen your website’s performance by introducing another unessential connection point between the visitor and an already nearby server.
          Still, most websites tend to operate on a larger scale, making CDN usage a popular choice in the following sectors:

    1. Advertising
    2. Mobile
    3. Media and Entertainment
    4. Health Care
    5. Online Gaming
    6. Higher Education
    7. Government

    CDN Building Blocks

            

                PoPs (Points of Presence)

          CDN PoPs (Points of Presence) are strategically located data centers responsible for communicating with users in their geographic vicinity. Their main function is to reduce round trip time by bringing the content closer to the website’s visitor. Each CDN PoP typically contains numerous caching servers.
         
                Caching Servers

          Caching servers are responsible for the storage and delivery of cached files. Their main function is to accelerate website load times and reduce bandwidth consumption. Each CDN caching server typically holds multiple storage drives and high amounts of RAM resources.
            SSD/HDD + RAM
          Inside CDN caching servers, cached files are stored on solid-state and hard-disk drives (SSD and HDD) or in random-access memory (RAM), with the more commonly used files hosted on the faster media. Being the fastest of the three, RAM is typically used to store the most frequently accessed items.
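          Keeping the hottest items in RAM amounts to a least-recently-used (LRU) policy. As an illustration of the idea (not any particular CDN's implementation), a bounded in-memory tier can be sketched with a LinkedHashMap in access order; in a real tiered design, evicted entries would be demoted to SSD/HDD rather than discarded:

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Illustrative in-memory (RAM) tier: keeps the most recently used items
        // and evicts the least recently used one once capacity is exceeded.
        public class RamCache<K, V> extends LinkedHashMap<K, V> {
            private final int capacity;

            public RamCache(int capacity) {
                super(16, 0.75f, true);   // access-order mode: get() refreshes an entry
                this.capacity = capacity;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity; // evict the least recently used entry
            }
        }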

    Start Using A CDN

          For a CDN to work, it needs to be the default inbound gateway for all incoming traffic. To make this happen, you’ll need to modify the DNS configuration of your root domain (e.g., domain.com) and of your subdomains (e.g., www.domain.com, img.domain.com).
          For your root domain, you’ll change its A record to point to one of the CDN’s IP ranges. For each subdomain, modify its CNAME record to point to a CDN-provided subdomain address (e.g., ns1.cdn.com). In both cases, the DNS then routes all visitors to your CDN instead of to your origin server.
          If any of this sounds confusing, don’t worry. Today’s CDN vendors offer step-by-step instructions to get you through the activation phase. Additionally, they provide assistance via their support team. The entire process comes down to a few copy and pastes, and usually takes around five minutes.
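          Once the records are updated, a quick lookup can confirm that the change has propagated. A small, illustrative check (the domain names are the placeholder ones used above):

        import java.net.InetAddress;

        // Resolve the root domain and a subdomain and print the addresses returned,
        // so they can be compared against the CDN provider's published IP ranges.
        public class DnsCheck {
            public static void main(String[] args) throws Exception {
                for (String host : new String[] { "domain.com", "www.domain.com" }) {
                    for (InetAddress addr : InetAddress.getAllByName(host)) {
                        System.out.println(host + " -> " + addr.getHostAddress());
                    }
                }
            }
        }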



    The Evolution of CDNs

          Commercial CDNs have been around since the ’90s. Like any other decades-old technology, they went through several evolutionary stages before becoming the robust application delivery platform they are today.
          The path of CDN development was shaped by market forces, including new trends in content consumption and vast connectivity advancements. The latter has been enabled by fiber optics and other new communication technologies.
          Overall, CDN evolution can be segmented into three generations, each one introducing new capabilities, technologies and concepts to its network architecture. Working in parallel, each generation saw the pricing of CDN services trend down, marking its transformation into a mass-market technology.


    • 1st Gen — Static CDN
    • 2nd Gen — Dynamic CDN
    • 3rd Gen — Multi-Purpose CDN


    Reverse Proxy Living on the Edge

          Content delivery networks employ reverse proxy technology. Topology wise, this means CDNs are deployed in front of your backend server(s). This position, on the edge of your network perimeter, offers several key advantages beyond a CDN’s innate ability to accelerate content delivery.
          Today, the reverse proxy topology is being leveraged by multi-purpose CDNs to provide the following types of solutions:

    Website Security

          Cyber Security is all about managing outside access to your protected perimeter, ideally blocking all threats before they can even set foot on your doorstep.
          Deployed on the edge of your network, a CDN is perfectly situated to act as a virtual high-security fence and prevent attacks on your website and web application. The on-edge position also makes a CDN ideal for blocking DDoS floods, which need to be mitigated outside of your core network infrastructure.

    Load Balancing

          Load balancing is all about having a “traffic guard” positioned in front of your servers, directing the flow of incoming requests in such a way that traffic jams are avoided.
          Clearly, a CDN’s reverse proxy topology is ideal for this, as it is the default recipient of all incoming traffic. In addition, the reverse proxy topology provides a CDN with enhanced visibility into traffic flow. This lets it accurately gauge the number of pending requests on each of the backend servers, thereby enabling more effective load distribution.
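          The decision itself can be as simple as picking the backend with the fewest requests in flight. A rough sketch of that “least pending requests” rule (Backend is a hypothetical stand-in for an upstream server handle):

        import java.util.List;
        import java.util.concurrent.atomic.AtomicInteger;

        // Illustrative "least pending requests" selection, the kind of choice a
        // reverse proxy can make because every incoming request passes through it.
        public class LeastConnectionsBalancer {

            public static class Backend {
                final String address;
                final AtomicInteger pending = new AtomicInteger(0);
                Backend(String address) { this.address = address; }
            }

            public Backend choose(List<Backend> backends) {
                Backend best = backends.get(0);
                for (Backend b : backends) {
                    if (b.pending.get() < best.pending.get()) {
                        best = b;                 // fewer requests currently in flight
                    }
                }
                best.pending.incrementAndGet();   // caller decrements when the response completes
                return best;
            }
        }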

    CDN Infrastructure

           The choice of infrastructure architecture is critical to shaping a CDN’s product identity while also defining the value of its offering. The basic building blocks of CDN infrastructures are PoPs (points of presence)—regional data centers responsible for communicating with users in their proximity.
          Using regional content distribution centers cuts down on round-trip time (RTT), making your website faster and more responsive for all visitors, regardless of their geolocation.
          Typically, each PoP holds multiple servers and routers responsible for caching, connection optimization and other content delivery features. For CDNs providing security solutions, PoPs also hold DDoS scrubbing servers and machines responsible for other security-related functions.
          Remember, a CDN’s job is to enhance your regular hosting by reducing bandwidth consumption, minimizing latency and providing the scalability needed to handle abnormal traffic loads. These tasks can only be achieved by a robust network architecture—one that turns your CDN into a dedicated fast lane on the information superhighway.
    (Figure: CDN infrastructure architecture)

    Round-Trip Time

          Round-trip time (RTT) is the number of milliseconds (ms) it takes for a browser to send a request and receive a response back from a server. RTT is not influenced by file size or the speed of your Internet connection. Instead, it’s affected by:

    • Physical distance
    • Number of intermediate nodes (hops)
    • Amount of traffic
    • Transmission mediums


          RTT is where the battle for speed is typically won and lost, since no rendering in the user’s browser can begin before the initial outgoing request for the HTML file is returned.
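          A rough feel for RTT can be had by timing how long a plain TCP connection takes to open; like RTT itself, this is independent of file size and dominated by distance and hops (the target host below is just a placeholder, and DNS/TLS overhead is ignored):

        import java.net.InetSocketAddress;
        import java.net.Socket;

        // Rough, illustrative RTT estimate: time the TCP handshake to a host.
        public class RttProbe {
            public static void main(String[] args) throws Exception {
                String host = args.length > 0 ? args[0] : "example.com"; // placeholder target
                long start = System.nanoTime();
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(host, 80), 5000);
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("Approximate RTT to " + host + ": " + elapsedMs + " ms");
            }
        }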


    The Four Pillars of CDN Design

    Performance

          One of a CDN’s main missions is to minimize latency. From an architectural standpoint, this means having to build for optimal connectivity, where PoPs are located at major networking hub intersections where data travels.
          Physical facilities are another important consideration. As a rule, you always want your PoP to be in a premium data center where backbone providers peer with each other and your CDN provider has established peering agreements with other CDNs and major carriers. Such agreements enable CDNs to significantly reduce round-trip times and improve bandwidth utilization.

    Reliability

          CDN infrastructure scale makes a glitch-free system a statistical improbability. However, this same scale can help ensure record resilience and high-availability, enabling CDN providers to commit to 99.9% and 99.999% service level agreements (SLAs).
          As a rule, commercial CDNs adopt a “no single point of failure” approach, both by carefully phasing maintenance cycles and by integrating additional hardware and software redundancy. Many also manage internal failover and disaster recovery systems that auto-route traffic around downed servers. For additional redundancy, CDN providers also deal with multiple carriers and rely on dedicated out-of-band management channels that allow them to interact with servers in case of disaster.

    Scalability

          Built for high-speed and high-volume routing, CDNs are expected to handle any amount of traffic. CDN architecture should address these expectations by providing ample networking and processing resources on all levels—down to computing and caching resources available on each of the caching servers.
          As one would expect, CDNs offering DDoS protection services have much higher scalability requirements. To address these needs, they deploy dedicated servers built for DDoS mitigation (scrubbers). These can individually handle network-sized amounts of traffic, processing tens of gigabytes each second.

    Responsiveness

          With a global-sized network, CDNs continually strive to improve responsiveness—measured in the amount of time it takes for network-wide configuration changes to take effect.
           Keep in mind that even small configuration changes, like an order to purge a specific image from cache or the addition of an address to a blacklisted IP list, need to be communicated across all PoPs. The larger and more geographically spread out the network, the longer it takes to accomplish this.
        To ensure good quality of service to customers, a CDN should be designed with quick configuration propagation in mind. This is commonly achieved by combining efficient propagation mechanisms with a consolidated network topology, which keeps the number of PoPs that must be synchronized manageable.

    Monday, April 15, 2019

    REST (Representational State Transfer)

    REPRESENTATIONAL STATE TRANSFER

          REST, or REpresentational State Transfer, is an architectural style for providing standards between computer systems on the web, making it easier for systems to communicate with each other. REST-compliant systems, often called RESTful systems, are characterized by how they are stateless and separate the concerns of client and server. We will go into what these terms mean and why they are beneficial characteristics for services on the Web.

    SEPARATION OF CLIENT AND SERVER

          In the REST architectural style, the implementation of the client and the implementation of the server can be done independently without each knowing about the other. This means that the code on the client side can be changed at any time without affecting the operation of the server, and the code on the server side can be changed without affecting the operation of the client.
          As long as each side knows what format of messages to send to the other, they can be kept modular and separate. By separating the user interface concerns from the data storage concerns, we improve the flexibility of the interface across platforms and improve scalability by simplifying the server components. Additionally, the separation allows each component to evolve independently.
          By using a REST interface, different clients hit the same REST endpoints, perform the same actions, and receive the same responses.

    STATELESSNESS

          Systems that follow the REST paradigm are stateless, meaning that the server does not need to know anything about what state the client is in and vice versa. In this way, both the server and the client can understand any message received, even without seeing previous messages. This constraint of statelessness is enforced through the use of resources, rather than commands. Resources are the nouns of the Web - they describe any object, document, or thing that you may need to store or send to other services.
          Because REST systems interact through standard operations on resources, they do not rely on the implementation of interfaces.
          These constraints help RESTful applications achieve reliability, quick performance, and scalability, since components can be managed, updated, and reused without affecting the system as a whole, even while the system is running.
          Now, we’ll explore how the communication between the client and server actually happens when we are implementing a RESTful interface.

    COMMUNICATION BETWEEN CLIENT AND SERVER

          In the REST architecture, clients send requests to retrieve or modify resources, and servers send responses to these requests. Let’s take a look at the standard ways to make requests and send responses.

    MAKING REQUESTS

          REST requires that a client make a request to the server in order to retrieve or modify data on the server. A request generally consists of:
    • an HTTP verb, which defines what kind of operation to perform
    • a header, which allows the client to pass along information about the request
    • a path to a resource
    • an optional message body containing data

    HTTP VERBS

    There are 4 basic HTTP verbs we use in requests to interact with resources in a REST system:
    • GET — retrieve a specific resource (by id) or a collection of resources
    • POST — create a new resource
    • PUT — update a specific resource (by id)
    • DELETE — remove a specific resource by id

      HEADERS AND ACCEPT PARAMETERS

            In the header of the request, the client sends the type of content that it is able to receive from the server. This is called the Accept field, and it ensures that the server does not send data that cannot be understood or processed by the client. The options for types of content are MIME types (Multipurpose Internet Mail Extensions).
            MIME Types, used to specify the content types in the Accept field, consist of a type and a subtype. They are separated by a slash (/).
            For example, a text file containing HTML would be specified with the type text/html. If this text file contained CSS instead, it would be specified as text/css. A generic text file would be denoted as text/plain. This default value, text/plain, is not a catch-all, however. If a client is expecting text/css and receives text/plain, it will not be able to recognize the content.
      Other types and commonly used subtypes:
      • image — image/png, image/jpeg, image/gif
      • audio — audio/wav, audio/mpeg
      • video — video/mp4, video/ogg
      • application — application/json, application/pdf, application/xml, application/octet-stream
            For example, a client accessing a resource with id 23 in an articles resource on a server might send a GET request like this:
      GET /articles/23 Accept: text/html, application/xhtml
      The Accept header field in this case is saying that the client will accept the content in text/html or application/xhtml.
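      For illustration, the same request could be issued from Java 11's java.net.http client roughly as follows (the host name is a placeholder):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        // Client-side version of the GET request above, with an explicit Accept header.
        public class GetArticle {
            public static void main(String[] args) throws Exception {
                HttpClient client = HttpClient.newHttpClient();
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://example.com/articles/23"))
                        .header("Accept", "text/html, application/xhtml")
                        .GET()
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.statusCode());   // e.g., 200
                System.out.println(response.headers().firstValue("Content-Type").orElse(""));
            }
        }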

      PATHS

            Requests must contain a path to a resource that the operation should be performed on. In RESTful APIs, paths should be designed to help the client know what is going on.
            Conventionally, the first part of the path should be the plural form of the resource. This keeps nested paths simple to read and easy to understand.
            A path like fashionboutique.com/customers/223/orders/12 is clear in what it points to, even if you’ve never seen this specific path before, because it is hierarchical and descriptive. We can see that we are accessing the order with id 12 for the customer with id 223.
            Paths should contain the information necessary to locate a resource with the degree of specificity needed. When referring to a list or collection of resources, it is unnecessary to add an id: a POST request to the fashionboutique.com/customers path, for example, does not need an extra identifier, as the server will generate an id for the new object.
            If we are trying to access a single resource, we would need to append an id to the path. For example: GET fashionboutique.com/customers/:id — retrieves the item in the customers resource with the id specified. DELETE fashionboutique.com/customers/:id — deletes the item in the customers resource with the id specified.

      SENDING RESPONSES

      CONTENT TYPES

            In cases where the server is sending a data payload to the client, the server must include a content-type in the header of the response. This content-type header field alerts the client to the type of data it is sending in the response body. These content types are MIME Types, just as they are in the accept field of the request header. The content-type that the server sends back in the response should be one of the options that the client specified in the accept field of the request.
            For example, when a client is accessing a resource with id 23 in an articles resource with this GET Request:
      GET /articles/23 HTTP/1.1 Accept: text/html, application/xhtml
      The server might send back the content with the response header:
      HTTP/1.1 200 (OK) Content-Type: text/html
      This would signify that the content requested is being returned in the response body with a content-type of text/html, which the client said it would be able to accept.


      RESPONSE CODES

            Responses from the server contain status codes to alert the client to information about the success of the operation. As a developer, you do not need to know every status code (there are many of them), but you should know the most common ones and how they are used:
      Status code — Meaning
      • 200 (OK) — The standard response for successful HTTP requests.
      • 201 (CREATED) — The standard response for an HTTP request that resulted in an item being successfully created.
      • 204 (NO CONTENT) — The standard response for successful HTTP requests where nothing is being returned in the response body.
      • 400 (BAD REQUEST) — The request cannot be processed because of bad request syntax, excessive size, or another client error.
      • 403 (FORBIDDEN) — The client does not have permission to access this resource.
      • 404 (NOT FOUND) — The resource could not be found at this time. It is possible it was deleted, or does not exist yet.
      • 500 (INTERNAL SERVER ERROR) — The generic answer for an unexpected failure if there is no more specific information available.
      For each HTTP verb, there are expected status codes a server should return upon success:
      • GET — return 200 (OK)
      • POST — return 201 (CREATED)
      • PUT — return 200 (OK)
      • DELETE — return 204 (NO CONTENT)
      If the operation fails, return the most specific status code possible corresponding to the problem that was encountered.

      EXAMPLES OF REQUESTS AND RESPONSES

            Let’s say we have an application that allows you to view, create, edit, and delete customers and orders for a small clothing store hosted at fashionboutique.com. We could create an HTTP API that allows a client to perform these functions:
      If we wanted to view all customers, the request would look like this:
      GET http://fashionboutique.com/customers Accept: application/json
      A possible response header would look like:
      Status Code: 200 (OK) Content-type: application/json
      followed by the customers data requested in application/json format.
      Create a new customer by posting the data:
      POST http://fashionboutique.com/customers Body: { "customer": { "name": "Scylla Buss", "email": "scylla.buss@codecademy.org" } }
      The server then generates an id for that object and returns it back to the client, with a header like:
      201 (CREATED) Content-type: application/json
      To view a single customer we GET it by specifying that customer’s id:
      GET http://fashionboutique.com/customers/123 Accept: application/json
      A possible response header would look like:
      Status Code: 200 (OK) Content-type: application/json
      followed by the data for the customer resource with id 123 in application/json format.
      We can update that customer by _PUT_ting the new data:
      PUT http://fashionboutique.com/customers/123 Body: { "customer": { "name": "Scylla Buss", "email": "scyllabuss1@codecademy.com" } }
            A possible response header would have Status Code: 200 (OK), to notify the client that the item with id 123 has been modified.
      We can also DELETE that customer by specifying its id:
      DELETE http://fashionboutique.com/customers/123
            The response would have a header containing Status Code: 204 (NO CONTENT), notifying the client that the item with id 123 has been deleted, and nothing in the body.
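      As a sketch, the POST request from this example could be sent with the same java.net.http client; the JSON body is built by hand here just to keep the example short:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        // Client-side version of the "create a new customer" POST request above.
        public class CreateCustomer {
            public static void main(String[] args) throws Exception {
                String json = "{\"customer\": {\"name\": \"Scylla Buss\", "
                            + "\"email\": \"scylla.buss@codecademy.org\"}}";
                HttpClient client = HttpClient.newHttpClient();
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("http://fashionboutique.com/customers"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(json))
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.statusCode());   // 201 (CREATED) on success
            }
        }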

      PRACTICE WITH REST

            Let’s imagine we are building a photo-collection site. We want to make an API to keep track of users, venues, and photos of those venues. This site has an index.html and a style.css. Each user has a username and a password. Each photo has a venue and an owner (i.e., the user who took the picture). Each venue has a name and street address. Can you design a REST system that would accommodate:
      • storing users, photos, and venues
      • accessing venues and accessing certain photos of a certain venue
      Start by writing out:
      • what kinds of requests we would want to make
      • what responses the server should return
      • what the content-type of each response should be

      POSSIBLE SOLUTION - MODELS

      { “user”: { "id": <Integer>, “username”: <String>, “password”: <String> } }
      { “photo”: { "id": <Integer>, “venue_id”: <Integer>, “author_id”: <Integer> } }
      { “venue”: { "id": <Integer>, “name”: <String>, “address”: <String> } }
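      As a sketch, these models map naturally onto plain Java records (Java 16+); the field names mirror the JSON above, and validation and persistence are omitted:

        // Plain data carriers mirroring the JSON models above.
        // Each record would normally live in its own file.
        record User(int id, String username, String password) {}
        record Photo(int id, int venueId, int authorId) {}
        record Venue(int id, String name, String address) {}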

      POSSIBLE SOLUTION - REQUESTS/RESPONSES

      GET REQUESTS
      Request- GET /index.html Accept: text/html Response- 200 (OK) Content-type: text/html
      Request- GET /style.css Accept: text/css Response- 200 (OK) Content-type: text/css
      Request- GET /venues Accept:application/json Response- 200 (OK) Content-type: application/json
      Request- GET /venues/:id Accept: application/json Response- 200 (OK) Content-type: application/json
      Request- GET /venues/:id/photos/:id Accept: image/png Response- 200 (OK) Content-type: image/png

      POST REQUESTS
      Request- POST /users Response- 201 (CREATED) Content-type: application/json
      Request- POST /venues Response- 201 (CREATED) Content-type: application/json
      Request- POST /venues/:id/photos Response- 201 (CREATED) Content-type: application/json


      PUT REQUESTS
      Request- PUT /users/:id Response- 200 (OK)
      Request- PUT /venues/:id Response- 200 (OK)
      Request- PUT /venues/:id/photos/:id Response- 200 (OK)


      DELETE REQUESTS
      Request- DELETE /venues/:id Response- 204 (NO CONTENT)
      Request- DELETE /venues/:id/photos/:id Response- 204 (NO CONTENT)