

Hibernate Filesystem Corruption in openSUSE 13.2

UPDATE: This bug is fixed with the dracut update to version dracut-037-17.9.1

I was never very fond of dracut, but I did not think it would be so totally untested: openSUSE Bug #906592. Executive summary: hibernate will most likely silently corrupt (at least) your root filesystem during resume from disk.
If you are lucky, a later writeback from buffers/cache will "fix" it, but the way dracut resumes the system is definitely broken. While investigating the issue I already had the filesystem corrupted on my test VM, so this is not just a theoretical problem.

Until this bug is fixed: Do not hibernate on openSUSE 13.2.

Good luck!


Service Design Patterns in Rails: Web Service Evolution

This is my fourth post on Service Design Patterns in Rails. These posts have been inspired by the Service Design Patterns book by Robert Daigneau.

The previous posts were:


The Web Service Evolution patterns describe how to let an API evolve with a minimum of breaking changes. What does that mean? It means that changing your API should not break the clients that consume it; in other words, that your API stays backward compatible.

From this chapter, three patterns apply to the Rails framework:

  • Tolerant Reader
  • Consumer-Driven Contract
  • Versioning

Tolerant Reader refers to the client: if you write a client, you should be "tolerant" about what you get as a response. Rails provides ActiveResource, which takes the response and builds an object from it, and that object is per se quite tolerant.

However, using that object alone does not make you a Tolerant Reader. ActiveResource creates a model object much like ActiveRecord does. For example, you can have a Product class that inherits from ActiveResource::Base, the same way you would inherit from ActiveRecord::Base:

  class Product < ActiveResource::Base
  end

ActiveResource will create an attribute for each attribute found in the response. Thus if you expect a "name" attribute, you would try:

  Product.find(1).name

However, what if name is not in the response? Then, if you really want to be tolerant, you should do:

  p = Product.find(1)
  if p.respond_to?(:name)
    p.name
  end

Thus, with little effort you can have a Tolerant Reader, even though Rails per se does not fully implement it.
 
Consumer-Driven Contract means testing. More precisely, it means having a test suite, provided by your consumer, describing what it expects from the service. Testing is common good practice in Rails, and Rails gives you plenty of tools for it (RSpec, FactoryGirl, stubs, mocks). You only need to use them to specify what you expect from the API, or what a consumer should expect, depending on whether you are writing the producer or the consumer.
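Just to illustrate what such a contract might look like, here is a rough sketch of an RSpec request spec a consumer could hand over; the endpoint and the fields checked are my own example, not taken from the book or from any real consumer:

require "rails_helper"

# Consumer-supplied contract: the consumer states exactly which fields
# of the products API it relies on.
RSpec.describe "Products API consumer contract", type: :request do
  it "returns the id and name the consumer depends on" do
    product = Product.create!(name: "Widget")

    get "/products/#{product.id}.json"

    expect(response.status).to eq(200)
    body = JSON.parse(response.body)
    expect(body).to include("id" => product.id, "name" => "Widget")
  end
end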

And finally, Versioning. Versioning is a very good practice for an API. How to implement it is explained very well in this RailsCast, better than I could do myself, so I'd recommend watching it to see how to use namespaces in the routes file and in the controller implementation.
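The gist of it, as a rough sketch (the api path prefix, module names and controller shown here are my own illustration, not necessarily exactly what the screencast uses): namespaced routes plus controllers living in matching modules.

# config/routes.rb
namespace :api do
  namespace :v1 do
    resources :products
  end
  namespace :v2 do
    resources :products
  end
end

# app/controllers/api/v1/products_controller.rb
module Api
  module V1
    class ProductsController < ApplicationController
      def index
        render json: Product.all
      end
    end
  end
end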

 
 
Web Service Evolution patterns are only partly implemented in Rails. You need to do some work yourself in order to follow them, because the typical example (the one you would generate with "rails generate") does not implement them; it only gives you the tools and the base so that you can do it.
 
For example, versioning is not set up when you generate a new Rails application, but you can easily add it with namespacing; tests are not written for you, but Rails gives you the tools to write them; a Tolerant Reader can be implemented by using ActiveResource and the respond_to? method.

Thus, my conclusion is that Rails, even though it is very cool for simple applications since "rails generate" gives you what you need right away, requires you to do some work yourself when it comes to more serious things.
 
This means that, although you don't need to know about patterns when you start using Rails for simple things, you do need to know them if you want to do more.




Unbound RGB with littleCMS slow

Over the last days I played with lcms' unbound mode. In unbound mode the CMM can convert colours with negative numbers. That allows using, for instance, the LMS colour space, a colour space very basic to the human visual system. Unbound RGB (linear gamma with sRGB primaries) has also circulated for a long time as the new "covers all" colour space, a kind of replacement for ICC- or WCS-style colour management. There are some reservations about that claim, as linear RGB is most often understood as "no additional info needed", which is not easy to build a flexible CMS upon.

During the last days I hacked lcms to write the mpet tag in its device link profiles in order to work inside the Oyranos CMS. The multi processing elements tag type (mpet) contains the internal state of lcms' transform as a rendering pipeline. This pipeline is able to do unbound colour transforms, as long as no table-based elements are included. The tested device link contained single gamma values and matrices in its D2B0 mpet tag. The Oyranos image-display application rendered my LMS test pictures correctly, in contrast to the 16-bit integer version. However, the speed dropped by a factor of ~3 with lcms compared to the usual integer math transforms. The most time-consuming part might be the pow() call in the equation. It is possible that GPU conversions are much faster, but I am not aware of an implementation of mpet transforms on the GPU.


pam_systemd on a server? WTF?

I noticed lots of spam in my system logs:

20141120-05:15:01.9 systemd[1]: Starting user-30.slice.
20141120-05:15:01.9 systemd[1]: Created slice user-30.slice.
20141120-05:15:01.9 systemd[1]: Starting User Manager for UID 30...
20141120-05:15:01.9 systemd[1]: Starting Session 1817 of user root.
20141120-05:15:01.9 systemd[1]: Started Session 1817 of user root.
20141120-05:15:01.9 systemd[1]: Starting Session 1816 of user wwwrun.
20141120-05:15:01.9 systemd[1]: Started Session 1816 of user wwwrun.
20141120-05:15:01.9 systemd[22292]: Starting Paths.
20141120-05:15:02.2 systemd[22292]: Reached target Paths.
20141120-05:15:02.2 systemd[22292]: Starting Timers.
20141120-05:15:02.2 systemd[22292]: Reached target Timers.
20141120-05:15:02.2 systemd[22292]: Starting Sockets.
20141120-05:15:02.2 systemd[22292]: Reached target Sockets.
20141120-05:15:02.2 systemd[22292]: Starting Basic System.
20141120-05:15:02.2 systemd[22292]: Reached target Basic System.
20141120-05:15:02.2 systemd[22292]: Starting Default.
20141120-05:15:02.2 systemd[22292]: Reached target Default.
20141120-05:15:02.2 systemd[22292]: Startup finished in 21ms.
20141120-05:15:02.2 systemd[1]: Started User Manager for UID 30.
20141120-05:15:02.2 CRON[22305]: (wwwrun) CMD (/usr/bin/php -f /srv/www/htdocs/owncloud/cron.php)
20141120-05:15:02.4 systemd[1]: Stopping User Manager for UID 30...
20141120-05:15:02.4 systemd[22292]: Stopping Default.
20141120-05:15:02.4 systemd[22292]: Stopped target Default.
20141120-05:15:02.4 systemd[22292]: Stopping Basic System.
20141120-05:15:02.4 systemd[22292]: Stopped target Basic System.
20141120-05:15:02.4 systemd[22292]: Stopping Paths.
20141120-05:15:02.4 systemd[22292]: Stopped target Paths.
20141120-05:15:02.4 systemd[22292]: Stopping Timers.
20141120-05:15:02.4 systemd[22292]: Stopped target Timers.
20141120-05:15:02.4 systemd[22292]: Stopping Sockets.
20141120-05:15:02.4 systemd[22292]: Stopped target Sockets.
20141120-05:15:02.4 systemd[22292]: Starting Shutdown.
20141120-05:15:02.4 systemd[22292]: Reached target Shutdown.
20141120-05:15:02.4 systemd[22292]: Starting Exit the Session...
20141120-05:15:02.4 systemd[22292]: Received SIGRTMIN+24 from PID 22347 (kill).
20141120-05:15:02.4 systemd[1]: Stopped User Manager for UID 30.
20141120-05:15:02.4 systemd[1]: Stopping user-30.slice.
20141120-05:15:02.4 systemd[1]: Removed slice user-30.slice.

This is a server-only system, so I investigated what was starting and tearing down a systemd user instance for every cron job, every user login, etc.
After some searching, I found that pam_systemd is to blame: it seems to be enabled by default. Looking into the pam_systemd man page, I could not find anything that would be useful on a server system, so I disabled it, and pam_gnome_keyring as well while I was at it:
pam-config --delete --gnome_keyring --systemd
...and silence returned to my logfiles again.


Aurora goes Firefox Developer Edition

A few days ago Mozilla announced a new flavour of the Firefox browser called Firefox Developer Edition. This new edition in fact replaces the previous Aurora channel. To add some background on how things were structured until a few days ago:

  • Nightly – nightly builds of mozilla-central which is basically Mozilla’s HEAD
  • Aurora – regular builds published for openSUSE under mozilla:alpha
  • Beta – weekly builds for openSUSE under mozilla:beta
  • Release – full stable public releases as shipped as end user product for openSUSE under mozilla and in Factory/Tumbleweed

There is a six-week cycle in which the codebase moves from Nightly to Aurora to Beta to Release while it stabilizes.

Now that Aurora is being replaced by Firefox Developer Edition, I am also changing how these builds are delivered to openSUSE users:

  • Nightly – there are no RPMs provided. People who want to run it can/should grab an upstream tarball
  • Firefox Developer Edition – now available as package firefox-dev-edition from the mozilla repository
  • Beta – no changes, available (as long time permits) from mozilla:beta
  • Release – no changes, available in mozilla and submitted to openSUSE Factory / Tumbleweed

A few more notes on the Firefox Developer Edition RPMs for openSUSE:

  • it’s brand new so please send me feedback about packaging issues
  • it can be installed in parallel to Release or Beta, is started as firefox-dev, and uses a different profile unless you change that default; therefore it can even run in parallel with regular Firefox
  • it carries most of the openSUSE specific patches (including KDE integration)
  • it currently has NO branding packages and therefore does not use exactly the same preferences as the openSUSE-provided Firefox; it behaves like Firefox installed with MozillaFirefox-branding-upstream


Service Design Patterns in Rails: Web Service Implementation Styles

This is the third post on Service Design Patterns in Rails. The first two were:



In this series of posts, I am trying to map the patterns described in the Service Design Patterns book by Robert Daigneau to a typical Rails application.

So far, I've shown that Rails already implements some of the patterns described in the book, and thus, just by using the Rails framework, you are already following those patterns. This means you follow certain good practices for implementing services without actually knowing about them.

However, we've also seen that some patterns are not implemented by default. What they have in common is that they are related to the response: the Response Mapper, the Data Transfer Object on the response side, and the Linked Service.

The chapter about Web Service Implementation Styles covers five patterns:

  • Transaction script
  • Datasource adapter
  • Operation script
  • Command invoker
  • Workflow connector

The Transaction script pattern means writing a service that accesses the database directly, without any helper or model. This is not the Rails way, since in Rails we access the database through a domain model object. Personally, I think it is good that this pattern is not in the Rails framework; I was surprised to find this pattern in the book at all.

Datasource adapter means having a class that gets the data from the database, keeping the controller unaware of, for example, whether the data is stored in PostgreSQL or SQLite, and managing metadata such as the relationships between entities (one-to-many, belongs-to, many-to-many). Does it sound familiar? Yes! This looks like ActiveRecord, doesn't it?
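As a small sketch of that correspondence (the models and the association are invented for illustration): the calling code stays the same whether config/database.yml points at PostgreSQL or SQLite, and the relationship metadata lives in the model.

class Category < ActiveRecord::Base
  has_many :products          # one-to-many metadata declared on the model
end

class Product < ActiveRecord::Base
  belongs_to :category
end

# The caller never touches SQL or the concrete database adapter:
cheap_products = Product.where("price < ?", 10).includes(:category)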

Operation script means delegating work to a class that can be used by different web services. Translated to Rails, that means implementing features in the model so that different controllers can reuse them.
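A minimal sketch of that idea (the model, the method and the controllers are made up for the example): the operation lives in the domain model, and two different controllers reuse it.

class Product < ActiveRecord::Base
  # The reusable operation lives in the domain model.
  def apply_discount!(percent)
    update!(price: price * (1 - percent / 100.0))
  end
end

class ProductsController < ApplicationController
  def discount
    Product.find(params[:id]).apply_discount!(10)
    head :no_content
  end
end

class PromotionsController < ApplicationController
  def create
    Product.where(category: params[:category]).find_each do |product|
      product.apply_discount!(5)
    end
    head :created
  end
end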

Command invoker means having a class that can either be called from a web service or put on a queue to be run later. For example, a class with a perform method can be run by calling perform directly, or queued onto a Delayed::Job queue. This may not be the preferred way of using Delayed::Job, but it certainly can be done (see Custom Jobs). This class will then call domain model methods.
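A small sketch of that Delayed::Job variant (the command class and its arguments are hypothetical): the same object can be invoked synchronously from a controller or enqueued to run later, following the Custom Jobs approach.

# Any object with a #perform method can act as the command.
class ExportOrdersCommand < Struct.new(:customer_id)
  def perform
    customer = Customer.find(customer_id)
    # ...call further domain model methods here...
  end
end

# Called directly from a web service action:
ExportOrdersCommand.new(42).perform

# ...or put on the queue to be run later (Delayed::Job custom job):
Delayed::Job.enqueue ExportOrdersCommand.new(42)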

Workflow connector is something more complicated. It means designing a workflow where each step can be a web service. However, you need to maintain information about the state and be able to roll back (typically done by compensating, i.e. by calling a web service that performs the inverse operation).

I don't think you can implement a workflow in Rails with what the framework provides, nor with the typical gems. From what I know, you will need to add additional gems/tools.


So far, I've seen that Rails implements the patterns for a basic REST API, which is cool. However, when it comes to more complex scenarios where the services you implement are part of a bigger services infrastructure, Rails does not implement the corresponding patterns, like the Workflow Connector, the Response Mapper or the Linked Service.

Salt and Pepper Squid with Fresh Greens

A few days ago I told Andrew Wafaa I’d write up some notes for him and publish them here. I became hungry contemplating this work, so decided cooking was the first order of business:

Salt and Pepper Squid with Fresh Greens

It turned out reasonably well for a first attempt. Could’ve been crispier, and it was quite salty, but the pepper and chilli definitely worked (I’m pretty sure the chilli was dried bhut jolokia I harvested last summer). But this isn’t a post about food, it’s about some software I’ve packaged for managing Ceph clusters on openSUSE and SUSE Linux Enterprise Server.

Specifically, this post is about Calamari, which was originally delivered as a proprietary dashboard as part of Inktank Ceph Enterprise, but has since been open sourced. It’s a Django app, split into a backend REST API and a frontend GUI implemented in terms of that backend. The upstream build process uses Vagrant, and is fine for development environments, but (TL;DR) doesn’t work for building more generic distro packages inside OBS. So I’ve got a separate branch that unpicks the build a little bit, makes sure Calamari is installed to FHS paths instead of /opt/calamari, and relies on regular packages for all its dependencies rather than packing everything into a Python virtualenv. I posted some more details about this to the Calamari mailing list.

Getting Calamari running on openSUSE is pretty straightforward, assuming you’ve already got a Ceph cluster configured. In addition to your Ceph nodes you will need one more host (which can be a VM, if you like), on which Calamari will be installed. Let’s call that the admin node.

First, on every node (i.e. all Ceph nodes and your admin node), add the systemsmanagement:calamari repo (replace openSUSE_13.2 to match your actual distro):

# zypper ar -f http://download.opensuse.org/repositories/systemsmanagement:/calamari/openSUSE_13.2/systemsmanagement:calamari.repo

Next, on your admin node, install and initialize Calamari. The calamari-ctl command will prompt you to create an administrative user, which you will use later to log in to Calamari.

# zypper in calamari-clients
# calamari-ctl initialize

Third, on each of your Ceph nodes, install, configure and start salt-minion (replace CALAMARI-SERVER with the hostname/FQDN of your admin node):

# zypper in salt-minion
# echo "master: CALAMARI-SERVER" > /etc/salt/minion.d/calamari.conf
# systemctl enable salt-minion
# systemctl start salt-minion

Now log in to Calamari in your web browser (go to http://CALAMARI-SERVER/). Calamari will tell you your Ceph hosts are requesting they be managed by Calamari. Click the “Add” button to allow this.

(Screenshots: calamari-authorize-hosts, calamari-authorize-hosts-wait)

Once that’s complete, click the “Dashboard” link at the top to view the cluster status. You should see something like this:

(Screenshot: calamari-status)

And you’re done. Go explore. You might like to put some load on your cluster and see what the performance graphs do.

Concerning ceph-deploy

The instructions above have you manually installing and configuring salt-minion on each node. This isn't too much of a pain, but it's even easier with ceph-deploy, which lets you do the whole lot with one command:

ceph-deploy calamari connect --master <calamari-fqdn> <node1> [<node2> ...]

Unfortunately, at the time of writing, we don’t have a version of ceph-deploy on OBS which supports the calamari connect command on openSUSE or SLES. I do have a SUSE-specific patch for ceph-deploy to fix this (feel free to use this if you like), but rather than tacking that onto our build of ceph-deploy I’d rather push something more sensible upstream, given the patch as written would break support for other distros.

Distros systemsmanagement:calamari Builds Against

The systemsmanagement:calamari project presently builds everything for openSUSE 13.1, 13.2, Tumbleweed and Factory. You should be able to use the packages supplied to run a Calamari server on any of these distros.

Additionally, I’m building salt (which is how the Ceph nodes talk to Calamari) and diamond (the metrics collector) for SLE 11 SP3 and SLE 12. This means you should be able to use these packages to connect Calamari running on openSUSE to a Ceph cluster running on SLES, should you so choose. If you try that and hit any missing Python dependencies, you’ll need to get these from devel:languages:python.

Disconnecting a Ceph Cluster from Calamari

To completely disconnect a Ceph cluster from Calamari, first, on each Ceph node, stop salt and diamond:

# systemctl disable salt-minion
# systemctl stop salt-minion
# systemctl disable diamond
# systemctl stop diamond

Then, make the Calamari server forget the salt keys, ceph nodes and ceph cluster. You need to use the backend REST API for this. Visit each of /api/v2/key, /api/v2/server and /api/v2/cluster in your browser. Look at the list of resources, and for each item to be deleted, construct the URL for that and click “Delete”. John Spray also mentioned this on the mailing list, and helpfully included a couple of screenshots.

Multiple Cluster Kinks

When doing development or testing, you might find yourself destroying and recreating clusters on the same set of Ceph nodes. If you keep your existing Calamari instance running through this, it’ll still remember the old cluster, but will also be aware of the new cluster. You may then see errors about the cluster state being stale. This is because the Calamari backend supports multiple clusters, but the frontend doesn’t (this is planned for version 1.3), and the old cluster obviously isn’t providing updates any more, as it no longer exists. To cope with this, on the Calamari server, run:

# calamari-ctl clear --yes-i-am-sure
# calamari-ctl initialize

This will make Calamari forget all the old clusters and hosts it knows about, but will not clear out the salt minion keys from the salt master. This is fine if you’re reusing the same nodes for your new cluster.

Sessions to Attend at SUSECon

SUSECon starts tomorrow (or the day after, depending on what timezone you’re in). It would be the height of negligence for me to not mention the Ceph related sessions several of my esteemed colleagues are running there:

  • FUT7537 – SUSE Storage – Software Defined Storage Introduction and Roadmap: Getting your tentacles around data growth
  • HO8025 – SUSE Storage / Ceph hands-on session
  • TUT8103 – SUSE Storage: Sizing and Performance
  • TUT6117 – Quick-and-Easy Deployment of a Ceph Storage Cluster with SLES – With a look at SUSE Studio, Manager and Build Service
  • OFOR7540 – Software Defined Storage / Ceph Round Table
  • FUT8701 – The Big Picture: How the SUSE Management, Cloud and Storage Products Empower Your Linux Infrastructure
  • CAS7994 – Ceph distributed storage for the cloud, an update of enterprise use-cases at BMW

Update: for those who were hoping for an actual food recipe, please see this discussion.


Is anyone still using PPP at all?

After talking to colleagues about how easy it is to contribute to the Linux Kernel simply by reporting a bug, I was actually wondering why I was the first and apparently the only one to hit this bug.
So there are two possible reasons:

  • nobody is testing -rc kernels
  • nobody is using PPP anymore

To be honest, I'm also only using PPP for some obscure VPN, but I would have expected it to be in wider usage due to UMTS/3G cards and such. So is nobody testing -rc kernels? This would indeed be bad...

Switching from syslog-ng to rsyslog - it's easier than you might think

I had looked into rsyslog years ago, when it became the default in openSUSE, and for some reason I do not remember anymore, I did not really like it. So I stayed with syslog-ng.
Many people will not actually care what is taking care of their syslog messages, but since I had made a few customizations to my syslog-ng configuration, I needed to adapt those to rsyslog.

Now, with Bug 899653 ("syslog-ng does not get all messages from journald, journal and syslog-ng not playing together nicely"), which made it into openSUSE 13.2, I had to reconsider my choice of syslog daemon.

Basically, my customizations to syslog-ng config are pretty small:

  • log everything from VDR in a separate file "/var/log/vdr"
  • log everything from dnsmasq-dhcp in a separate file "/var/log/dnsmasq-dhcp"
  • log stuff from machines on my network (actually usually only a VOIP telephone, but sometimes some embedded boxes will send messages via syslog to my server) in "/var/log/netlog"

So I installed rsyslog -- which due to package conflicts removes syslog-ng -- and started configuring it to do the same as my old syslog-ng config did. Important note: after changing the syslog service on your box, reboot it before doing anything else. Otherwise you might be chasing strange problems, and just rebooting is faster.

Now to the config: I did not really like the default time format of rsyslog:
2014-11-10T13:30:15.425354+01:00 susi rsyslogd: ...
Yes, I know that this is a "good" format: easy to parse, unambiguous, clear. But it is usually me reading the logs, and I still hate it, because I do not need microsecond precision, I know which timezone I am in, and it uses half of a standard terminal width if I don't scroll to the right.
So the first thing I changed was to create /etc/rsyslog.d/myformat.conf with the following content:
$template myFormat,"%timegenerated:1:4:date-rfc3339%%timegenerated:6:7:date-rfc3339%%timegenerated:9:10:date-rfc3339%-%timegenerated:12:21:date-rfc3339% %syslogtag%%msg%\n"
$ActionFileDefaultTemplate myFormat
This changes the log format to:
20141110-13:54:23.0 rsyslogd: ...
Which means the time is shorter, can still be parsed, and keeps sub-second precision; the hostname is gone (which might be bad for the netlog file, but I don't care) and it's 12 characters shorter.
It might well be possible to do this in an easier fashion; I'm not an rsyslog wizard at all (yet) ;)

For /var/log/vdr and /var/log/dnsmasq-dhcp, I created the config file /etc/rsyslog.d/myprogs.conf, containing:
if $programname == 'dnsmasq-dhcp' then {
    -/var/log/dnsmasq-dhcp
    stop
}
if $programname == 'vdr' then {
    -/var/log/vdr
    stop
}
That's it! It's really straightforward, I really can't understand why I hated rsyslog years ago :)

The last thing missing was the netlog file, handled by /etc/rsyslog.d/mynet.conf:
$ModLoad imudp.so # provides UDP syslog reception
$UDPServerRun 514 # start syslog server, port 514
if $fromhost-ip startswith '192.168.' then {
    -/var/log/netlog
    stop
}
Again, pretty straightforward.
And that's it! Maybe I'll add an extra logformat for netlog to specify the hostname in there, but that would just be the icing on the cake.

What I especially liked about the rsyslog setup in openSUSE (it might be the upstream default, I don't know) is that the "$IncludeConfig /etc/rsyslog.d/*.conf" directive is placed so that you really can do useful things without touching the distribution's default config. With syslog-ng, the include of the conf.d directory came too late (for me), so you could not "split off" messages from the default definitions; e.g. the VDR messages would appear in both /var/log/messages and /var/log/vdr. To change this you had to edit syslog-ng.conf itself, which then needed to be checked after every package update, with new distro configs re-merged into my changed configuration.
Now it is totally possible that after an update of the distribution, I will need to fix my rsyslog configs because of changes in syntax or such, but at least it is possible that it might just work without that.


Service Design Patterns in Rails: Request and Response Management

This is the second post on the Service Design Patterns in Rails. The first one was about Client-Service Interaction styles patterns. This one is about Request and Response Management patterns (from the book Service Design Patterns by Robert Daigneau).

The Request and Response Management patterns are:
  • Service Controller
  • Data Transfer Object
  • Request Mapper
  • Response Mapper

Service Controller pattern means that there is a class that decides which controller should be called. The decision is made based on the Request (for example "GET /customer/123") and a set of rules.

Does it sound familiar? Yes! This is the routes.rb file. Here is the simplest rule of all:

resources :products

which means, any request to /products will be routed to the products controller.

The Data Transfer Object is an intermediate object used in the request or the response instead of the domain objects. That is, instead of serializing the Product object as in the typical Rails example, you would use another object populated from the values in the Product.

On the request side, this is achieved through the params object. See this code, which typically appears in the Products controller example:


    def set_product
      @product = Product.find(params[:id])
    end


This is how we find in the database the product whose id is params[:id];
params[:id] is the 123 from the "GET /products/123" request.

Another example: when we want to update a product, this is how we do it with the params object:

def update
  respond_to do |format|
    # product_params is the usual strong-parameters helper that whitelists
    # the attributes arriving in the params object
    if @product.update(product_params)
      ....
    else
      ...
    end
  end
end


However, on the response side I have bad news. A typical Rails application does not use a Data Transfer Object. See the typical example:

class ProductsController < ApplicationController
  def index
    @products = Product.all
    respond_to do |format|
      format.json { render json: @products }
    end
  end
end

This code means that the object used in the response is the Product object serialized as JSON. Under the hood, ActiveRecord's to_json is called for this purpose. This means that the response is tightly coupled to the domain object.

What we should do instead is render the response in the view, using a template.

Rails does come with Builder for XML templating. If we were using XML for the response, the following code would work as long as we had an index.xml.builder file.

class ProductsController < ApplicationController
  def index
    @products = Product.all
    respond_to do |format|
      format.xml
    end
  end
end

Thus, for json to work, we would expect something like:


class ProductsController < ApplicationController
  def index
    @products = Product.all
    respond_to do |format|
      format.json
    end
  end
end


However, Rails does not come with JSON templating by default, and thus the previous code won't work unless we add some JSON templating gem.
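One such gem is jbuilder (just my pick for the example). With it, the format.json call above would pick up a view template such as:

# app/views/products/index.json.jbuilder
json.array! @products do |product|
  json.id    product.id
  json.name  product.name
  json.price product.price
end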

Researching a bit on Google, I found this post, which explains the case very well (much better than I could) and gives some options for JSON templating:



Finally, Request Mapper and Response Mapper are two patterns that can be used to interact with different web services that differ in syntax but are semantically equivalent. In that situation, you need an object that maps the different requests/responses to the same controller.


In the case of the Request Mapper, we come back to the routes.rb file, where you can declare matching patterns. For example, if you were authenticating against GitHub, you might have this line in your routes.rb file:

  get "/auth/:provider/callback" => "sessions#create"

The Response Mapper is used to construct a response that is shared by different web services. It is useful when you need to create a response matching some kind of agreement between different parties. Sadly, I don't know of a typical Rails example that does that. If you know one, please tell me.
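Just to make the idea concrete, here is a hand-rolled sketch (entirely hypothetical, not a built-in Rails facility) of what such a mapper could look like:

class ProductResponseMapper
  # Maps a domain object to the structure agreed upon with the consumers,
  # independently of which web service produces the response.
  def self.map(product)
    {
      "id"         => product.id,
      "label"      => product.name,
      "unit_price" => product.price.to_s
    }
  end
end

# In any controller that has to honour the agreement:
# render json: ProductResponseMapper.map(@product)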