November 22, 2014

Mike Jones - Microsoft: A JSON-Based Identity Protocol Suite [Technorati links]

November 22, 2014 01:02 AM

My article A JSON-Based Identity Protocol Suite has been published in the Fall 2014 issue of Information Standards Quarterly, with this citation page. This issue on Identity Management was guest-edited by Andy Dale. The article’s abstract is:

Achieving interoperable digital identity systems requires agreement on data representations and protocols among the participants. While there are several suites of successful interoperable identity data representations and protocols, including Kerberos, X.509, SAML 2.0, WS-*, and OpenID 2.0, they have used data representations that have limited or no support in browsers, mobile devices, and modern Web development environments, such as ASN.1, XML, or custom data representations. A new set of open digital identity standards have emerged that utilize JSON data representations and simple REST-based communication patterns. These protocols and data formats are intentionally designed to be easy to use in browsers, mobile devices, and modern Web development environments, which typically include native JSON support. This paper surveys a number of these open JSON-based digital identity protocols and discusses how they are being used to provide practical interoperable digital identity solutions.

This article is actually a follow-on progress report to my April 2011 position paper The Emerging JSON-Based Identity Protocol Suite. While standards can seem to progress slowly at times, comparing the two makes clear just how much has been accomplished in this time and shows that what was a prediction in 2011 is now a reality in widespread use.

November 21, 2014

Vittorio Bertocci - Microsoft: Getting Rid of Residual Cookies in Windows Store Apps [Technorati links]

November 21, 2014 06:30 PM

This is a classic Q I get pretty often – it’s time to get a post out and start replying by reference instead of by value :)

The issue at hand is how to fully “sign out” (whatever that means for a native app) a user from a Windows Store client.

The actual user session is determined by two different components: the token cache (under ADAL’s control, see this) and any session tracking cookies that might be present in the system (not under ADAL’s control). As shown in the aforelinked post, you can easily take care of the token cache part. Clearing cookies is harder, though: Windows Store authentication takes place within the WebAuthenticationBroker, which has its own cookie jar that is separate and unreachable from your application code. The most robust approach is not to create any persistent cookie in the first place (e.g. NOT clicking “remember me” during authentication; in fact, we should stop even showing it soon). However, if you do end up with such a cookie, the main way of getting rid of it is triggering a sign out from the same WebAuthenticationBroker – the server will take care of cleaning things up.

    // Silently hit the AAD logout endpoint through the broker; this lets the
    // server clean up the broker's own cookie jar.
    string requestUrl = "https://login.windows.net/common/oauth2/logout";
    Task.Run(async () =>
    {
        try
        {
            await WebAuthenticationBroker.AuthenticateAsync(WebAuthenticationOptions.SilentMode, new Uri(requestUrl));
        }
        catch (Exception)
        {
            // The silent request has no callback to return to, so a timeout here is expected.
        }
    });
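
For the token cache half of the equation, here is a minimal sketch (assuming ADAL .NET v2 and its default cache; the common authority URL below is just an example):

// Microsoft.IdentityModel.Clients.ActiveDirectory
// Clear ADAL's token cache so that cached access/refresh tokens
// are no longer silently replayed for this user.
AuthenticationContext ac =
    new AuthenticationContext("https://login.windows.net/common");
ac.TokenCache.Clear();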

Julian Bond: Fun infographic. Note it's already 15 years old but I believe most of the graphs are still going up... [Technorati links]

November 21, 2014 05:55 PM
Fun infographic. Note it's already 15 years old but I believe most of the graphs are still going up.

http://globaia.org/wp-content/uploads/2013/09/the_anthropocene_igbp_globaia1.jpg

Found via
http://dismagazine.com/disillusioned/discussion-disillusioned/70983/mckenzie-wark-digital-labor-and-the-anthropocene/

http://www.anthropocene.info/en/anthropocene/the-great-acceleration/the-great-acceleration

http://globaia.org/

[from: Google+ Posts]

Katasoft: Easy Single Sign-On [Technorati links]

November 21, 2014 03:00 PM

Since the beginning of time, developers have been writing code to store and handle user accounts.

Then Stormpath came out, and made that process a lot simpler. Instead of writing all that code yourself, you just make a few API calls to our service, and we take care of the heavy lifting: storing user info, handling authentication and authorization, ensuring data security, etc.

This brings us to the present.

We recently released our new ID Site product, a Single Sign On (SSO) feature that makes it easy to completely remove user authentication logic from your web application. Now, you can handle it on a completely separate subdomain.

This authentication subdomain is hosted by us, so all you need to do is point your DNS records at us, add a few lines of code to your webapp, and BAM, you’ve got authentication ready to go!

ID Site offers a basic Single Sign On experience, allowing your users to access multiple applications seamlessly, with one set of credentials — all within the same session.

This post will take you through what it is and how it works.

ID Site: Single Sign On with Stormpath

ID Site is a hosted web app (built in Angular.js) that provides pre-built screens for login, registration, password reset — all the common end user functions of your application. It is fully hosted by Stormpath, which makes it really easy for your application to access these features, as well as add SSO across your apps, with very little code.

When you sign up for a Stormpath account, we’ll give you a configurable authentication subdomain that is ready to use – just add some basic information into our console: the domains of apps authorized to use your ID Site, and callback URLs your ID Site is allowed to communicate with — AND JUST LIKE THAT — you are on your way!

How It Works

ID Site is easy — really easy — to integrate with your application. The functionality is already built into our client libraries.

At a high level, it’s very simple: when you want to authenticate a user, you redirect them (using our libraries) to your new authentication subdomain (like login.mysite.com, for instance); we’ll handle the authentication and authorization checks, or any workflows like password reset or account verification, and then we’ll redirect the user back to your application transparently.

Here’s how it works:

This seems complex and full of moving parts, but it really isn’t. To get your user to the ID Site to authenticate, this is what the code actually looks like (here’s a Node.js example):

// Creating a simple HTTP server in Node.
// `application` is a Stormpath Application object obtained from the Stormpath SDK.
http.createServer(function (req, res) {

  // If the user requested to log in, we redirect them to the ID Site using the
  // Stormpath SDK.
  if (req.url === '/login') {
    res.writeHead(302, {
      'Cache-Control': 'no-store',
      'Pragma': 'no-cache',
      'Location': application.createIdSiteUrl({
        callbackUri: "https://myapplication.com/loginCallback"
      })
    });
    res.end();
  }
}).listen(3000);

This code lands the user on your ID Site, which is fully customizable to your brand and hosted on Stormpath infrastructure:

Once the user logs in, they will be redirected to the callbackUri that was specified in the request. From there, you can validate the information and get the account for the login with the following code:

if (req.url.lastIndexOf('/loginCallback', 0) === 0) {
  application.handleIdSiteCallback(req.url, function(err, result) {
    if (err) {
      showErrorPage(req, res, err)
    } else {
      if (result.status === "AUTHENTICATED") {
        req.account = result.account;
        showDashboard(req, res);
      }
    }
  });
}

There are two ways developers can handle the callback from the ID Site. One is to have a callbackUri specific to each action, like login and the /loginCallback in the code above. The other is to have a generic callback, like /idSiteCallback that handles the response for all actions taken on the ID Site. Stormpath exposes a status so you can know what action occurred on the ID site for any given callback.

Although ID Site is built in Angular, you can connect to it from any application. ID Site support has been added to our Node, Java, and Python libraries, and is available through the Stormpath REST API, so you can take advantage of it even if you aren’t using one of those languages.

Why ID Site?

Almost every feature at Stormpath comes out of developer requests and ID Site solves issues and use cases we hear about frequently:

Stormpath Single Sign On Demo

If you want to get a feel for how ID Site looks and feels to end users, I built a demo to show a basic Single Sign On experience. This allows you to log into and share sessions across two different websites:

http://shielded-journey-8142.herokuapp.com/

and

http://limitless-ravine-7654.herokuapp.com/

Both of these web applications use ID Site and share a 5 minute session timeout.

To learn more, check out our Guide to Using ID Site In Your Application.

If you have any questions / comments, we would love to hear them! Let me know how to make ID Site more useful to you: (tom@stormpath.com or @omgitstom).

Kuppinger Cole: SAP Security Made Easy. How to Keep Your SAP Systems Secure [Technorati links]

November 21, 2014 10:37 AM
In KuppingerCole Podcasts

Security in SAP environments is a key requirement of SAP customers. SAP systems are business critical. They must run reliably, they must remain secure – despite a growing number of attacks. There are various levels of security to enforce in SAP environments. It is not only about user management, access controls, or code security. It is about integrated approaches.



Watch online

Nat Sakimura: From IDM to IRM – The Changing Landscape of Identity [Technorati links]

November 21, 2014 06:00 AM

Today (2014/11/21), I delivered the keynote at the 6th OpenAM Consortium Seminar, held at Shinagawa Intercity starting at 14:15.

It was titled:

“From IDM to IRM:
The Changing Landscape of Identity”
From IDM to IRM

November 20, 2014

Ian Glazer: No Person is an Island: How Relationships Make Things Better [Technorati links]

November 20, 2014 05:26 PM

(The basic text to my talk at Defragcon 2014. The slides I used are at the end of this post and if they don’t show up you can get them here.)

What have we done to manage people, their “things,” and how they interact with organizations?

The sad truth is that we tried to treat the outside world of our customers and partners like the inside world of employees. And we’ve done poorly at both. I mean, think about it: “Treat your customers like you treat your employees” is rarely a winning strategy. If it were, just imagine the Successories you’d have to buy for your customers… on second thought, don’t do that.

We started by storing people as rows in a database. Rows and rows of people. But treating people like just a row in a database is, essentially, sociopathic behavior. It ignores the reality that you, your organization, and the other person, group, or organization are connected. We made every row, every person an island – disconnected from ourselves.

What else did we try? In the world of identity and access management we started storing people as nodes in an LDAP tree. We created an artificial hierarchy and stuffed people, our customers, into it. Hierarchies and our love for them are the strange lovechild of Confucius and the military industrial complex. Putting people into these false hierarchies doesn’t help us delight our customers. And it doesn’t really help make management tasks any easier. We made every node, every person, an island – disconnected from ourselves.

We tried other things, realizing that those two left something to be desired. We tried roles. You have this role and we can treat you as such. You have that role and we should treat you like this. But how many people actually do what their job title says? How many people actually have meaningful job titles? And whose customers come with job titles? So, needless to say, roles didn’t work as planned in most cases.

We knew this wasn’t going to work. We’ve known since 1623. John Donne told us as much. And his words then are more relevant now than he could have possibly imagined then. Apologies to every English teacher I have ever had as I rework Donne’s words:

No one is an island, entire of itself; everyone is a piece of the continent, a part of the main. If a clod be washed away by the sea, we are the less. Anyone’s death diminishes us, because we are involved in the connected world.

What should we do?

If treating our customers like employees isn’t a winning strategy, if making an island out of each of our customers won’t work, if we are involved with the connected world, then what should we do?

We have to acknowledge that relationships exist. We have to acknowledge that the connections exist between a customer, their devices and things, and us. No matter what business you are in. No matter if you are a one-woman IT consulting shop, or two guys and a letterpress on Etsy, or even a multi-national corporation – you are connected to your customers; you have a relationship with them.

This isn’t necessarily a new thought and, in fact, there are two disciplines that have sought to map and use those relationships: CRM and VRM. Customer relationship management models one organization to many people. Vendor relationship management models one person to many organizations. Both, unknowingly share an important truth – the connections between people and organizations are key. It’s not “CRM vs VRM;” it’s “CRM and VRM.” What I am proposing is the notion of IRM – identity relationship management. IRM puts the relationships front and center, but more on that in a minute.

I believe that acknowledging relationships re-humanizes our digital relationships with one another. I believe that this is one of the reasons why online forums descend into antisocial behavior. It’s because those systems don’t make you feel like you have a relationship with the other party. “There’s no person there, just a tweet.” And this is a shame – that platforms meant to provide scalable human-to-human interactions and contact and closeness often dehumanize those very interactions.

I believe that we ought to use relationships to manage our interactions. You can’t get delighted customers by just treating them like a row in a database. You cannot manage data from all of your customer’s “things” without fully recognizing there’s a customer there with whom you have a relationship.

What I know about relationships

I believe we must build “relationship-literate” systems and processes. We should stop operating on rows of customers and start using digital representations of relationships. What follows are nine aspects of relationships that can serve as design considerations for relationship-literate systems.

Scalable

If we are going to use relationships as a management tool in this world of ever-increasing connections between people, their things, and organizations, then we have to tackle scalability issues. The three obvious ones are huge numbers of actors, attributes, and relationships. But there’s another that is often left out: administration. If we don’t do something better than we do today, we’ll be stuck with the drop-list from hell in which an admin has to scroll through a few thousand entries to find the “thing” she wants to manage.

Acknowledgeable

I’ve got to know I’m in a relationship before anything else can meaningfully happen. I can’t buy a one-sided birthday card: Happy birthday to a super awesome partner who doesn’t know who I am. All parties have to know. Otherwise there is an asymmetry of power. And that tends to tilt towards the heavier object, e.g. the organization and not the individual. Familiar with the Law of Gross Tonnage? It’s part of the maritime code that says the heavier ship has the right of way. Now growing up outside of Boston, this is basically how I learned to drive. The Law of Gross Tonnage is useful in that situation but absolutely inequitable and unhelpful in terms of delighting a customer.

Provable

There’s got to be a way for us to know if multiple parties are in a relationship. This can take many flavors: single party, multi-party, and 3rd party asserted. Things like Facebook can serve as that 3rd party vouching two people are connected. But should there be alternatives to social networks for this? And who connects people and their “things”?

Actionable

We want our relationships to be able to do something. And by looking at the relationship each party can know what they can do. Without having to consult some distant authority. Without waiting for an online connection. The relationship leads to action and does so without consulting some back-end service somewhere.

Constrainable

Just because a relationship can do something doesn’t mean it can do everything. We need to be able to put limits on what things and people can do; we all need constraints. Examples of this are things like granting consent or enforcing digital rights management.

Immutable

Some things are in a relationship forever. This is useful to know when you want to make sure that a “thing” was really made by one of your partners and is authentic.

Transferable

Some relationships can be transferred. We have legal proxies that we transfer a relationship to on a temporary or conditional basis. There are plenty of familial relationships in which we transfer authority on a semi-permanent basis. And some relationships are permanently transferred – like selling a jet engine to someone.

Activatable

Many relationships exist but aren’t very useful until a condition changes. My relationship to my auto insurance provider isn’t a very vibrant relationship. I don’t use the relationship on most days. But when I get into an accident, that inert relationship between my car, the insurer, and me becomes active. There’s something out there, some condition, that can make a relationship active and vital.

Revocable

Some relationships end or have to come to an end. What happens then? What happens to the data now that the relationship is gone? At this point we have to turn to renowned privacy expert, John Mellencamp for his insight. You might not know it but he wrote about the Right to Be Forgotten and other privacy issues in “Jack and Diane”. As he sang, “oh yeah data goes on / long after the thrill of the relationship is gone.” But this problem is at the root of the “Right to Be Forgotten” debate. This will only become a larger problem as our digital footprints get heavier and heavier. And this gets especially messy when relationships that I am not even aware of create data about me and my devices and my things.

In summary, relationships:

If we were to do this, how would things be better?

Relationships add back the fidelity and color that we have drained from the digital identity world. By focusing on relationships, we would behave more like we do in the real world, but with all the efficiencies of the digital world. We’d be able to use familiar language to describe how and what people and things can do.

How should we do this?

I don’t fully know. This is the least satisfying and most accurate thought in this whole talk. I don’t fully know. And I am looking for help.

So I lied to you dear audience. This is a sales pitch. I want you to do something. If you have any interest in this vague notion of relationships and using them to make our world better, then I ask you to join the Kantara Initiative. It’s free to join and free to participate. It’s the home of some amazing identity and IoT thinking. And we need your help. I’d like you to join the Identity Relationship Management working group. I’d love it if you could bring your use cases to us. Share with a group of awesome people from around the world how you, your business, your service, your things connect and relate. Help us stop treating people like islands unto themselves. Help us to use relationships to make our digital interactions rich, meaningful, humanizing, and manageable.

No Person is an Island: How Relationships Make Things Better from iglazer

Radovan Semančík - nLight: Never Use Closed-Source IAM Again [Technorati links]

November 20, 2014 03:45 PM

I will never use any closed-source IAM again. You will have to use force to persuade me to do it. I'm not making this statement lightly. I worked with closed-source IAM systems for the better part of the 2000s, and it made for quite a good living. But I'm not going to do that again. Never ever.

What's so bad about closed-source IAM? It is the very fact that it is closed. A deployment engineer cannot see inside it. Therefore the engineer has inherently limited possibilities. No documentation is ever perfect and no documentation ever describes the system well enough. Therefore the deployment engineer is also likely to have a limited understanding of the system. And an engineer who does not understand what he is doing is unlikely to do a good job.

Closed-source software also leads to vendor lock-in. That makes it unbelievably expensive in the end. The Sun-Oracle acquisition of 2010 clearly demonstrated the impact of vendor lock-in for me. Our company was a very successful Sun partner in the 2000s. But we almost went out of business because of that acquisition and the events that followed. That was the moment when I realized that this must never happen again.

Open source is the obvious alternative. But how good is it really? Can it actually replace closed-source software? The short answer is a clear and loud "Yes!". The situation might have been quite bad in the 2000s, but now there are plenty of viable open source alternatives for every IAM component: directory servers, simple SSO, comprehensive SSO, social login and federation, identity management, RBAC and privileges, and so on. There is plenty to choose from. Most of these projects are in a very good and stable state. They are at least as good as closed-source software.

But what is so great about open source software? It makes no sense to switch to open source just because of some philosophically-metaphysical differences, does it? So where are the tangible benefits? Simply speaking there are huge advantages to open source software all around you. But they might not be exactly what you expect.

Contrary to popular belief, the ability to meddle with the source code does not bring any significant direct advantage to the end customer. The customers are unlikely to even see the source code, let alone modify it. But this ability brings a huge advantage to the system integrator who deploys the software. The deployment engineers do not need vendor assistance with every deployment step. The source code is the ultimate documentation, therefore the deployment engineers can work almost independently. This eliminates the need for hugely overpriced vendor professional services - which also reduces the cost of the entire solution. The deployment engineers can fix product bugs themselves and submit the fixes back to the vendor, which significantly speeds up the project. Any competent engineer can fix a simple bug in a couple of days if he has the source code. He or she does not need to raise each and every trivial issue, fight their way through all the levels of a bloated support organization, and then wait for weeks or months to get an answer from the vendor's development team. The open source way is so much more efficient. This dramatically reduces the deployment time and also the overall deployment cost.

The source code also allows ultimate customization. Software architects know very well how difficult it is to design and implement a good extensible system. As with many other things, it is actually very easy to do it badly but extremely difficult to do it well. A system which has all the extensibility that IAM needs would inevitably become extremely complicated. Therefore the best way to customize a system is sometimes simple modification of the source code. And this is only possible in open source projects. Oh yes, there is this tricky upgradeability problem. Customizations are difficult to upgrade, right? Right. Customized closed-source software is usually very difficult to upgrade. But that does not necessarily apply to well-managed open source projects. Distributed source code control software such as Git makes this kind of customization feasible. We have been using this method for years and it has survived many upgrades already.

But perhaps the most important advantage is the lack of vendor lock-in. The source code of an open source project does not "belong" to any single individual or company. If the product is good, there will be many open source companies that can offer the services that, in the closed-source world, only a single vendor can provide. This creates healthy competition. In the extreme case the partner can always take over the product maintenance if the vendor misbehaves. Therefore it is unlikely that the cost of the open source solution will spin out of control. Open source also provides much better protection against vendor failure. Yes, I'm aware that many companies behind open source projects are small and that they can easily fail. But in the open source world a company failure does not necessarily mean project failure. If the project is any good then it will continue even if the original maintainer fails. Other companies will take over, most likely by employing at least a part of the original engineers. And the project goes on. This is the ultimate business continuity guarantee. And it has happened several times already. On the other hand, the failure (or acquisition) of a closed source vendor is often fatal for the project. This has also happened several times. And we still feel the consequences today.

The difference between the open-source and closed-source worlds is enormous. Any engineer who ever goes there and understands open source is very unlikely to go back. Open source is much easier to work with. The engineers have the power to change what they do not like. Open source is much more cost efficient and the business model is sustainable. And it actually works!

Therefore I would never ever use closed-source IAM again.

(Reposted from https://www.evolveum.com/never-use-closed-source-iam/)

Kaliya Hamlin - Identity Woman: Protected: Dear IDESG, I’m sorry. I didn’t call you Nazi’s. [Technorati links]

November 20, 2014 02:18 PM

This content is password protected. To view it please enter your password below:

IS4U: FIM 2010: Event driven scheduling [Technorati links]

November 20, 2014 12:25 PM
In a previous post I described how I implemented a Windows service for scheduling Forefront Identity Manager.

Since then, my colleagues and I have used it in every FIM project. For one project I was asked if it was possible to trigger the synchronization "on demand". A specific trigger for a synchronization cycle, for example, was the creation of a user in the FIM portal. After some brainstorming and Googling, we came up with a solution.

We asked ourselves the following question: "Is it possible to send a signal to our existing Windows service to start a synchronization cycle?". All the functionality for scheduling was already there, so it seemed reasonable to investigate and explore this option. As it turns out, it is possible to send a signal to a Windows service, and the implementation turned out to be very simple (and simple is good, right?).

In addition to the scheduling at predefined moments defined in the job configuration file, which is implemented with the Quartz framework, we started an extra thread:

while (true)
{
 if (scheduler.GetCurrentlyExecutingJobs().Count == 0 
  && !paused)
 {
  scheduler.PauseAll();
  if (DateTime.Compare(StartSignal, LastEndTime) > 0)
  {
   running = true;
   StartSignal = DateTime.Now;
   LastEndTime = StartSignal;
   SchedulerConfig schedulerConfig = 
      new SchedulerConfig(runConfigurationFile);
   if (schedulerConfig != null)
   {
     schedulerConfig.RunOnDemand();
   }
   else
   {
    logger.Error("Scheduler configuration not found.");
    throw new JobExecutionException
        ("Scheduler configuration not found.");
   }
   running = false;
  }
  scheduler.ResumeAll();
 }
 // 5 second delay
 Thread.Sleep(5000);
}

The first thing it does is check that none of the time-triggered schedules are running and that the service is not paused. Then it checks whether an on-demand trigger was received by inspecting the StartSignal timestamp. So as you can see, the StartSignal timestamp is the one controlling the action. If the service receives a signal to start a synchronization schedule, it simply sets the StartSignal parameter:

protected override void OnCustomCommand(int command)
{
 if (command == ONDEMAND)
 {
  StartSignal = DateTime.Now;
 }
}

The first thing it does next, if a signal was received, is pause the time-triggered mechanism. When the synchronization cycle finishes, the time-triggered scheduling is resumed. The beautiful thing about this way of working is that the two separate mechanisms work alongside each other. The time-triggered schedule is not fired if an on-demand schedule is running and vice versa. If a signal was sent during a period of time the service was paused, the on-demand schedule will fire as soon as the service is resumed. The StartSignal timestamp will take care of that.

So, how do you send a signal to this service, you ask? This is also fairly straightforward. I implemented the FIM portal scenario I described above by implementing a custom C# workflow with a single code activity:

using System.ServiceProcess;
 
private const int OnDemand = 234;
 
private void startSync(){
 ServiceController is4uScheduler = 
  new ServiceController("IS4UFIMScheduler");
 is4uScheduler.ExecuteCommand(OnDemand);
}

If you want to know more about developing custom activities, this article is a good starting point.

The integer value is arbitrary. You only need to make sure you send the same value as is defined in the service source code. The ServiceController takes the system name of the Windows service. The same is possible in PowerShell:

[System.Reflection.Assembly]::Load("System.ServiceProcess, 
  Version=2.0.0.0, Culture=neutral, 
  PublicKeyToken=b03f5f7f11d50a3a")
$is4uScheduler = New-Object System.ServiceProcess.ServiceController
$is4uScheduler.Name = "IS4UFIMScheduler"
$is4uScheduler.ExecuteCommand(234)

Another extension I implemented (inspired by Dave Nesbitt's question on my previous post) was the delay step. This kind of step allows you to insert a window of time between two management agent runs. This is in addition to the default delay, which is inserted between every step. So now there are four kinds of steps possible in the run configuration file: LinearSequence, ParallelSequence, ManagementAgent and Delay. I saw the same idea being implemented in PowerShell here.

A very useful function I didn't mention in my previous post, but which was already there, is the cleanup of the run history (which can become very big in a fast-synchronizing FIM deployment). This function can be enabled by setting the option "ClearRunHistory" to true and setting the number of days in the "KeepHistory" option. If you enable this option, you need to make sure the service account running the service is a member of the FIM Sync Admins security group. If you do not use this option, membership of the FIM Sync Operators group is sufficient.

To end I would like to give you pointers to some other existing schedulers for FIM:
FIM 2010: How to Automate Sync Engine Run Profile Execution

Gluu: OAuth2 for IoT? [Technorati links]

November 20, 2014 03:07 AM


Today, consumers have no way to centrally manage access to all their Web stuff, and IOT devices are threatening to create a whole new silo of security problems. This is one of the reasons I’ve been participating in the Open Interconnect Consortium Security Task Group.

People can’t individually manage every IOT device in their house. So it seems likely that some kind of centralized management tools will be necessary. Last week, I proposed the use of OAuth2 profiles OpenID Connect and UMA as the “two legs” of IOT security. Since then, a discussion has been active on the feasibility of OAuth2 for IOT?

One challenge for this design is that OAuth2 relies on HTTPS for transport security. While many devices will be powerful enough to handle an HTTPS connection, some devices are too small. Says Justin Richer from Mitre, “Basically replacing HTTP with CoAP and TLS with DTLS, you get a lot of functional equivalence.” In fact this effort is already in progress at the IETF, and research projects are in progress to build this out in simulation. For more info see the following three articles:

Assuming the transport layer security gets solved, another sticking point is the idea of central control. Here is the case against central control paraphrased by one of my comrades:

If you buy a light switch and a light bulb, they need to magically work together. When we state this as almost impossible, they will accept that the user needs a smartphone for the initial setup but not that he needs some extra dedicated authorization server. (Nor do I think that running this in the cloud will be acceptable either.)

Let’s consider the use case: How could an IOT light bulb connect to an IOT light switch.

Let’s say the light bulb publishes three APIs:
/shutOffLight
/turnOnLight
/dimLight

For central control, using the UMA profile of OAuth2, a client must present a valid RPT token from an Authorization Server to the light bulb. All the light bulb has to do is validate this token. This should be the default configuration for most IOT devices – they should quickly hook into the existing home security infrastructure with very little effort from IOT developers. There is no need for the light bulb to store or evaluate policies with this solution. I disagree that the cloud won’t be a likely place to manage your digital resources (what don’t we use Google for these days?). The home router might also be a handy place to have your home policy decision point.
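
To make that concrete, here is a rough sketch of what the light bulb’s token check could look like, assuming the authorization server exposes an OAuth2-style token introspection endpoint and the bulb holds a protection API token (PAT) for it. The endpoint URL, field names, and JSON library are illustrative assumptions, not a reference to any particular product API:

using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public class RptValidator
{
    // Hypothetical introspection endpoint on the home authorization server.
    private const string IntrospectionEndpoint = "https://as.example.home/uma/rpt/status";

    // Returns true if the RPT presented by the client (e.g. the light switch)
    // is reported as active by the authorization server.
    public static async Task<bool> IsRptActiveAsync(string rpt, string pat)
    {
        using (var http = new HttpClient())
        {
            // The resource server (the bulb) authenticates to the AS with its PAT.
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", pat);

            var body = new FormUrlEncodedContent(new[]
            {
                new KeyValuePair<string, string>("token", rpt)
            });

            HttpResponseMessage response = await http.PostAsync(IntrospectionEndpoint, body);
            if (!response.IsSuccessStatusCode)
                return false;

            // Introspection-style responses carry an "active" flag.
            JObject status = JObject.Parse(await response.Content.ReadAsStringAsync());
            return (bool?)status["active"] == true;
        }
    }
}

The point is that the bulb’s job stays tiny: one round trip and a boolean check, while all policy evaluation stays at the authorization server.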

But what if there is no central UMA authorization server? Is there a need for an alternate method of local authorization? Yes! The light bulb is the resource server, and it can always have some backup policies; for example, a USB connection or button could bypass UMA authorization.

For the light switch to make this call to the APIs, it would need a client credential. The light bulb itself could have a tiny OAuth2 chip that would provide the bare minimum server APIs for client discovery, client authentication, and dynamic client registration.

The light bulb can offer a few different ways for the light switch to “authenticate” depending on how fancy it is:
1) None (sometimes you’re on a trusted network)
2) API key / secret
3) JSON Web Key

In cases where the light bulb was not configured to use central authentication, it could check the access token against its cache of tokens issued to local clients.

OpenID Connect offers lots of features for client registration. For example, you could correlate client registrations with “request_uris” (think entityID if you are familiar with SAML). See the registration request section of the OpenID Connect Dynamic Client Registration spec.
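
For illustration, a dynamic registration request from the light switch might look roughly like this; the host, URIs, and client name are made up, while the parameter names come from the Connect registration spec:

POST /connect/register HTTP/1.1
Host: as.example.home
Content-Type: application/json

{
  "client_name": "kitchen light switch",
  "redirect_uris": ["https://lightswitch.example.home/cb"],
  "request_uris": ["https://lightswitch.example.home/request.jwt"],
  "token_endpoint_auth_method": "private_key_jwt"
}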

Why write a new OAuth2 based client authentication protocol when we already have OpenID Connect? Connect has been shown to be usable by developers, was designed to make simple things simple, and scales to complex requirements. Wouldn’t it make sense to just create a mapping for a new transport layer? Won’t there be even more transport layers in the future? What about secure-Bluetooth, secure-NFC, or secure-ESP? Will we have to re-invent client registration every time there is a new secure transport layer?

If the Open Interconnect Consortium Core Framework TG decides to mandate support for CoAP, then it may not be possible to use OpenID Connect, UMA or any other existing security protocol developed for HTTP.

Says Eve Maler, VP Innovation & Emerging Technology at ForgeRock, “My suspicion has been that a CoAP binding of UMA would be an interesting and worthwhile project… it could be done through the UMA extensibility profiles now–basically replacing the HTTP parts of UMA with CoAP parts”

Nat Sakimura, Chairman of the OpenID Foundation, commented “binding to other transport protocols, definitely yes. That was our intention from the beginning. That’s why we abstracted it. Defining a binding to CoAP etc. would be a good starting point. In the ACE Working Group at the IETF, Hannes Tschofenig from ARM has already started the work.”

Mike Jones - Microsoft: JOSE -37 and JWT -31 drafts addressing remaining IESG review comments [Technorati links]

November 20, 2014 01:19 AM

These JOSE and JWT drafts contain updates intended to address the remaining outstanding IESG review comments by Pete Resnick, Stephen Farrell, and Richard Barnes, other than one that Pete may still provide text for. Algorithm names are now restricted to using only ASCII characters, the TLS requirements language has been refined, the language about integrity protecting header parameters used in trust decisions has been augmented, we now say what to do when an RSA private key with “oth” is encountered but not supported, and we now talk about JWSs with invalid signatures being considered invalid, rather than them being rejected. Also, added the CRT parameter values to example JWK RSA private key representations.

The specifications are available at:

HTML formatted versions are available at:

November 19, 2014

Matt Pollicove - CTI: Some thoughts on database locking in Oracle and Microsoft SQL Server [Technorati links]

November 19, 2014 06:29 PM

Deadlocks are the bane of those of us responsible for designing and maintaining any type of database system. I’ve written about these before on the dispatcher level. However, this time around, I’d like to discuss them a little further “down,” so to speak, at the database level. Also, in talking to various people about this topic I've found that it’s potentially the most divisive question since “Tastes good vs. Less filling.”

Database deadlocks are much like application ones: they typically come up when two processes are trying to access the same database row at the same time, most often when the system is trying to read and write to the row at the same time. A nice explanation can be found here. What we essentially wind up with is the database equivalent of a traffic jam where no one can move. It’s interesting to note that Oracle and Microsoft SQL Server handle these locking scenarios differently. I’m not going to go into DB2 at the moment but will address it if there is sufficient demand.

When dealing with SQL Server, management of locks is handled through the use of the “Hint” called No Lock. According to MSDN:

Hints are options or strategies specified for enforcement by the SQL Server query processor on SELECT, INSERT, UPDATE, or DELETE statements. The hints override any execution plan the query optimizer might select for a query. (Source)
When NOLOCK is used this is the same as using READUNCOMMITTED, which some of you might be familiar with if you did the NetWeaver portion of the IDM install when setting up the data source. Using this option keeps the SQL Server database engine from issuing locks. The big issue here is that one runs the risk of reading dirty (old) data during database operations. Be careful when using NOLOCK for this reason. Even though the SAP Provisioning Framework makes extensive use of the NOLOCK functionality, they regression test the heck out of the configuration. Make sure you do, too; misuse of NOLOCK can lead to bad things happening in the Identity Store database.
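
As a quick illustration of what the hint looks like in practice, here is a minimal ADO.NET sketch; the connection string, table, and query are made up, and the same effect can be achieved session-wide with SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED:

using System;
using System.Data.SqlClient;

class NoLockExample
{
    static void Main()
    {
        // Hypothetical connection string and table; adjust for your own Identity Store.
        using (var conn = new SqlConnection("Server=.;Database=IdentityStore;Integrated Security=true"))
        {
            conn.Open();

            // WITH (NOLOCK) = READUNCOMMITTED: the read takes no shared locks,
            // so it will not block (or be blocked by) writers, at the price of
            // possibly returning dirty, uncommitted data.
            var cmd = new SqlCommand(
                "SELECT COUNT(*) FROM SomeEntryTable WITH (NOLOCK)", conn);
            Console.WriteLine(cmd.ExecuteScalar());
        }
    }
}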

There is also a piece of SQL Server functionality referred to as Snapshot Isolation, which appears to work as NOLOCK writ large: database snapshots are held in TEMPDB for processing (source). This functionality was recommended by a DBA I worked with on a project some time ago. The functionality was tested in DEV and then rolled to the customer’s PRODUCTION instance.

Oracle is a little different in the way it approaches locking, in that the system has more internal management of conflicts through its use of rollback logs, which force data to be committed before writes can occur, and thus deadlocks occur much less often (Source). This means there is no similar NOLOCK functionality in the Oracle Database System.

One final thing to consider with database deadlocks is how the database is being accessed, regardless of the database being used. It is considered a best practice in SAP IDM to use To Identity Store passes as opposed to uIS_SetValue whenever possible (Source).

At the end of the day, I don’t know that I can really tell you whether to employ these mechanisms or not. In general, we do know that it’s better not to have deadlocks than to have them, and that you should do what you can to achieve this goal. If you are going to use these techniques, make sure you are doing so in concert with your DBA team and after careful testing. I have seen Microsoft SQL Server’s Snapshot Isolation work well in a busy productive environment, but I will not recommend its universal adoption, as I can’t tell you how well it will work in your environment. I will, however, recommend that you look into it with your DBA team if you are experiencing deadlocks in SQL Server.


Kuppinger Cole: Database Security On and Off the Cloud [Technorati links]

November 19, 2014 11:05 AM
In KuppingerCole Podcasts

Continued proliferation of cloud technologies offering on-demand scalability, flexibility and substantial cost savings means that more and more organizations are considering moving their applications and databases to IaaS or PaaS environments. However, migrating sensitive corporate data to a 3rd party infrastructure brings with it a number of new security and compliance challenges that enterprise IT has to address. Developing a comprehensive security strategy and avoiding point solutions for ...



Watch online

Vittorio Bertocci - Microsoft: From Domain to TenantID [Technorati links]

November 19, 2014 06:03 AM

Ha, I discovered that I kind of like to write short posts :) so here’s another one.

Azure AD endpoints can be constructed with both domain and tenantID interchangeably: “https://login.windows.net/developertenant.onmicrosoft.com/oauth2/authorize” and “https://login.windows.net/6c3d51dd-f0e5-4959-b4ea-a80c4e36fe5e/oauth2/authorize” are functionally equivalent. However, the tenantID has some clear advantages. For example: it is immutable, globally unique and non-reassignable, while domains do indeed change hands on occasion. Moreover, you can have many domains associated with a tenant but only one tenantID. Really, the only thing the domain has going for it is that it is human readable and there’s a reasonable chance a user can remember and type it.

Per the above, there are times in which it can come in useful to find out the TenantID for a given domain. The trick is reeeeeally simple. You can use the domain to construct one of the AAD endpoints which return tenant metadata, for example the OpenId Connect one; such metadata will contain the tenantID. In practice: say that you know that the target domain is developertenant.onmicrosoft.com. How do I find out the corresponding tenantID, without even being authenticated?

Easy. I do a GET of https://login.windows.net/developertenant.onmicrosoft.com/.well-known/openid-configuration.

The result is a JSON file that has the tenantID all over it:

{
   "authorization_endpoint" : "https://login.windows.net/6c3d51dd-f0e5-4959-b4ea-a80c4e36fe5e/oauth2/authorize",
   "check_session_iframe" : "https://login.windows.net/6c3d51dd-f0e5-4959-b4ea-a80c4e36fe5e/oauth2/checksession",
   "end_session_endpoint" : "https://login.windows.net/6c3d51dd-f0e5-4959-b4ea-a80c4e36fe5e/oauth2/logout",
   "id_token_signing_alg_values_supported" : [ "RS256" ],
   "issuer" : "https://sts.windows.net/6c3d51dd-f0e5-4959-b4ea-a80c4e36fe5e/",
   "jwks_uri" : "https://login.windows.net/common/discovery/keys",
   "microsoft_multi_refresh_token" : true,
   "response_modes_supported" : [ "query", "fragment", "form_post" ],
   "response_types_supported" : [ "code", "id_token", "code id_token", "token" ],
   "scopes_supported" : [ "openid" ],
   "subject_types_supported" : [ "pairwise" ],
   "token_endpoint" : "https://login.windows.net/6c3d51dd-f0e5-4959-b4ea-a80c4e36fe5e/oauth2/token",
   "token_endpoint_auth_methods_supported" : [ "client_secret_post", "private_key_jwt" ],
   "userinfo_endpoint" : "https://login.windows.net/6c3d51dd-f0e5-4959-b4ea-a80c4e36fe5e/openid/userinfo"
}

Whip out your favorite JSON parsing class, and you’re done. Ta—dahh ♫
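
If you want to see it in code, here’s a minimal sketch using HttpClient and JSON.NET (any HTTP stack and JSON parser will do); it pulls the tenantID out of the issuer value:

using System;
using System.Net.Http;
using Newtonsoft.Json.Linq;

class TenantIdLookup
{
    static void Main()
    {
        string domain = "developertenant.onmicrosoft.com";
        string metadataUrl =
            "https://login.windows.net/" + domain + "/.well-known/openid-configuration";

        using (var http = new HttpClient())
        {
            // No authentication needed: the metadata document is public.
            string json = http.GetStringAsync(metadataUrl).Result;
            JObject config = JObject.Parse(json);

            // The issuer looks like https://sts.windows.net/<tenantID>/
            string issuer = (string)config["issuer"];
            string tenantId = new Uri(issuer).AbsolutePath.Trim('/');

            Console.WriteLine(tenantId); // e.g. 6c3d51dd-f0e5-4959-b4ea-a80c4e36fe5e
        }
    }
}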

Kantara Initiative: European Workshop on Trust & Identity [Technorati links]

November 19, 2014 03:02 AM

For those who are in the EU or will be near Vienna, Austria: you may wish to attend the European Workshop on Trust and Identity to discuss “Connecting Identity Management Initiatives.” This is an openspace workshop where attendees will have the opportunity to network and share with others.

Details: https://identityworkshop.eu/tiki-index.php

Openspace workshops have been finding and solving trust and identity issues for years. Starting in 2013 EWTI made this format available in Europe and received excellent feedback from participants. If you are looking for a substantial discussion on this subject it is likely that you will meet the right people here!

Meet at the EU Identity Workshop in Vienna 2014

EWTI is the opportunity to discuss, share knowledge, and learn about everything related to Internet Trust and Identity today. Topics at the EWTI in 2013 included:
  • Gov/Academic/Social ID
  • How to use SAML with REST and SOAP
  • eID in your country: where is it today, where is it heading?
  • SLO Single Logout for SAML & OAuth
  • STORK – existing federations user cases, interoperability
  • Binding LoA attributes to social ids (non-technical – strategy)
  • NSTIC: impressions, feedback, relation to other world-wide projects
  • Banks and Telcos as strong Identity Providers in Finland (Business model)
  • Trust and Market for Personal Data: Privacy – How to re-establish trust?
  • Trust Frameworks beyond Sectors: Release of attributes, LOA
  • Authorization in SAML federations
  • Scaleable & comprehensive attributes design (authN & authZ)
  • E-Mail as global identifier: embrace/defend/fight it?
  • eID and Government stuff
  • Metadata exchange session: Federations at scale
  • SCIM 101
  • Rich-clients for mobile devices
  • Step up AuthN as a Service
  • Is SPML dead – who uses SCIM?
  • SAML2 test tool
  • All identities are self asserted
  • de-/provisioning / federated notification
  • Biobank Cloud Security

November 17, 2014

Vittorio Bertocci - Microsoft: Skipping the Home Realm Discovery Page in Azure AD [Technorati links]

November 17, 2014 04:43 PM

A typical authentication transaction with Azure AD will open with a generic credential gathering page. As the user enters his/her username, Azure AD figures out from the domain portion of the username whether the actual credential gathering should take place elsewhere (for example, if the domain is associated with a federated tenant, the actual cred gathering will happen on the associated ADFS pages) and, if that’s the case, it will redirect accordingly.

Sometimes your app logic is such that you know in advance whether such transfer should happen. In those situations you have the opportunity to let our libraries (ADAL or the OWIN middlewares for OpenId Connect/WS-Federation) know where to go right from the start.

In OAuth2 and OpenId Connect you do so by passing the target domain in the “domain_hint” parameter.
In ADAL you can pass it via the following:

AuthenticationResult ar =
    ac.AcquireToken("https://developertenant.onmicrosoft.com/WebUXplusAPI",
                    "71aefb3b-9218-4dea-91f2-8b23ce93f387",
                    new Uri("http://any"), PromptBehavior.Always, 
                    UserIdentifier.AnyUser, "domain_hint=mydomain.com");


In the OWIN middleware for OpenId Connect you can do the same in the RedirectToIdentityProvider notification:

app.UseOpenIdConnectAuthentication(
    new OpenIdConnectAuthenticationOptions
    {
        ClientId = clientId,
        Authority = authority,
        PostLogoutRedirectUri = postLogoutRedirectUri,
        Notifications = new OpenIdConnectAuthenticationNotifications()
        {
            RedirectToIdentityProvider = (context) => 
            {                                                        
                context.ProtocolMessage.DomainHint = "mydomain.com"; 
                return Task.FromResult(0); 
            }, 
        }
    });


Finally, in WS-Fed you do the following:

app.UseWsFederationAuthentication(
   new WsFederationAuthenticationOptions
   {
      Notifications = new WsFederationAuthenticationNotifications
      {
         RedirectToIdentityProvider = (context) =>
         {
            context.ProtocolMessage.Whr = "mydomain.com";
            return Task.FromResult(0);
         }
      }
   }
);

Party on! :)

Kuppinger Cole: Advisory Note: Security and the Internet of Everything and Everyone - 71152 [Technorati links]

November 17, 2014 03:06 PM
In KuppingerCole

The vision for the Internet of Everything and Everyone is for more than just an Internet of Things; it makes bold promises for the individual as well as for businesses. However the realization of this vision is based on existing systems and infrastructure which contain known weaknesses.


more
November 16, 2014

Anil John: RFI - EMV Enabled Debit Cards as Authentication Tokens? [Technorati links]

November 16, 2014 08:55 PM

The U.S. is finally moving to EMV compliant payment cards. Can these cards be used as multi-factor authentication tokens for electronic transactions outside the payment realm? What are the security and privacy implications? Who needs to buy into and be in the transaction loop to even consider this as a possibility?

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


The opinions expressed here are my own and do not represent my employer’s view in any way.

November 14, 2014

Courion: Financial Services Ready to Embrace Identity and Access Intelligence [Technorati links]

November 14, 2014 02:32 PM


Nick Berents

This week at London’s Hotel Russell, the Identity Management 2014 conference brought together hundreds of technology professionals and security specialists across government and enterprises of all sizes and industries.

It was fascinating to hear from industry leaders discussing the next generation of Identity and Access Management, representing diverse firms and organizations such as ISACA, Visa Europe, Ping Identity, CyberArk, and beverage giant SABMiller.

A highlight for me was a session that included Nick Taylor, Director of IAM at Deloitte, and Andrew Bennett, CTO of global private bank Kleinwort Benson.

Taylor discussed the challenges that IAM professionals face in making access governance reviews business friendly, as often there is not enough context to understand the risks that they face. For example, an equities trader making lots of trades at a certain time of the day may be normal, but maybe not so normal if that trader is doing it from different locations or geographies.

Bennett supported that notion by pointing out that technical jargon can mask risk that exists, so he recommended that the financial services industry look into the concept of identity and access intelligence and start taking it on now. Adopting such a solution is not a case of throwing more tools at the problem; it is a matter of having the right tool to make sense of the mess.

It was also good to hear our partner Ping Identity's session “It’s Not About the Device – It’s All About the Standards” and how modern identity protocols allow the differentiation of business & personal identities.

Overall a good conference that provided attendees with lots of opportunity to learn best practices and hear how their colleagues are approaching identity management. But rather than waiting for next year’s conference, anyone can learn more in the near term by attending Courion’s upcoming webinar Data Breach - Top Tips to Protect, Detect and Deter on Thursday November 20th at 11 a.m. ET, 8 a.m. PT, 4 p.m. GMT.


Ludovic Poitou - ForgeRock: The new ForgeRock Community site [Technorati links]

November 14, 2014 10:55 AM

Earlier this week, a new major version of ForgeRock Community site was pushed to production.

ForgeRock.org

Besides a cleaner look and feel and a long-awaited reorganisation of content, the new version enables better collaboration around the open source projects and initiatives. You will find Forums, for general discussions or project-specific ones, and new Groups around specific topics like UMA or IoT. We’ve also added a calendar with different views, so that you can find or suggest events, conferences, and webinars touching the projects and IRM at large.
Great work, Aron and Marius, on the new ForgeRock.org site! Thank you.

Venn of Authorization with UMA

And we’ve also announced a new project, OpenUMA. If you haven’t paid attention to it yet, I suggest you do now. User-Managed Access (UMA) is an OAuth-based protocol that enables an individual to control the authorization of data sharing and service access made by others. The OpenUMA community shares an interest in informing, improving, and extending the development of UMA-compatible open-source software as part of ForgeRock’s Open Identity Stack.

November 13, 2014

Julian Bond: The lights are going out in Syria. Literally. [Technorati links]

November 13, 2014 06:48 PM
The lights are going out in Syria. Literally.
http://cassandralegacy.blogspot.co.uk/2014/11/the-olduvai-cliff-are-lights-going-out.html
The Olduvai cliff: are the lights going out already? »

[from: Google+ Posts]
November 12, 2014

Kuppinger Cole: 16.12.2014: Secure Mobile Information Sharing: addressing enterprise mobility challenges in an open, connected business [Technorati links]

November 12, 2014 02:44 PM
In KuppingerCole

Fuelled by the exponentially growing number of mobile devices, as well as by increasing adoption of cloud services, demand for various technologies that enable sharing information securely within organizations, as well as across their boundaries, has significantly surged. This demand is no longer driven by IT; on the contrary, organizations are actively looking for solutions for their business needs.
more
November 11, 2014

Nat Sakimura: XACML v3.0 Privacy Policy Profile Version 1.0 Public Review [Technorati links]

November 11, 2014 09:05 PM

The 15-day public review period for the eXtensible Access Control Markup Language (XACML) Committee Specification Draft (CSD) begins on 11/12.

This draft specification defines how to express privacy policies in XACML.

The review period runs from 11/12 0:00 UTC to 11/26 23:59 UTC.

The documents under review are available at the following URLs:

Editable source (Authoritative):
http://docs.oasis-open.org/xacml/3.0/privacy/v1.0/csprd03/xacml-3.0-privacy-v1.0-csprd03.doc

HTML:
http://docs.oasis-open.org/xacml/3.0/privacy/v1.0/csprd03/xacml-3.0-privacy-v1.0-csprd03.html

HTML with inline tags for direct commenting:
http://docs.oasis-open.org/xacml/3.0/privacy/v1.0/csprd03/xacml-3.0-privacy-v1.0-csprd03-COMMENT-TAGS.html

PDF:
http://docs.oasis-open.org/xacml/3.0/privacy/v1.0/csprd03/xacml-3.0-privacy-v1.0-csprd03.pdf

Comments can be submitted using the OASIS comment facility.

Submitted comments can be viewed at:

http://lists.oasis-open.org/archives/xacml-comment/

All submitted comments are considered to have been contributed under the OASIS Feedback License. For details, please see [3] and [4] below.

========== Additional references:

[1] OASIS eXtensible Access Control Markup Language (XACML) TC
http://www.oasis-open.org/committees/xacml/

[2] Previous public reviews:

* 15-day public review, 23 May 2014: https://lists.oasis-open.org/archives/members/201405/msg00019.html

* 60-day public review, 21 May 2009: https://lists.oasis-open.org/archives/members/200905/msg00006.html

[3]http://www.oasis-open.org/policies-guidelines/ipr

[4] http://www.oasis-open.org/committees/xacml/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr#s10.2.3
RF on Limited Terms Mode

Kaliya Hamlin - Identity WomanQuotes from Amelia on Systems relevant to Identity. [Technorati links]

November 11, 2014 08:14 PM

This is coverage of a WSJ interview with Amelia Andersdotter, the former European Parliament member from the Pirate Party of Sweden. Some quotes stuck out for me as being relevant.

If we also believe that freedom and individualism, empowerment and democratic rights, are valuable, then we should not be constructing and exploiting systems of control where individual disempowerment are prerequisites for the system to be legal.

We can say that most of the legislation around Internet users protect systems from individuals. I believe that individuals should be protected from the system. Individual empowerment means the individual is able to deal with a system, use a system, work with a system, innovate on a system—for whatever purpose, social or economic. Right now we have a lot of legislation that hinders such [empowerment]. And that doesn’t necessarily mean that you have anarchy in the sense that you have no laws or that anyone can do whatever they want at anytime. It’s more a question of ensuring that the capabilities you are deterring are actually the capabilities that are most useful to deter. [emphasis mine].

This statement is key: "individuals should be protected from the system." How do we create accountability from systems to people, and not just the other way around? I continue to raise this issue about so-called trust frameworks that are proposed as the solution to interoperable digital identity – there are many concerning aspects to these solutions, including what seem to be very low levels of accountability of systems to people.

The quotes from Amelia continued…

I think the Internet and Internet policy are very good tools for bringing power closer to people, decentralizing and ensuring that we have distributive power and distributive solutions. This needs to be built into the technical, as well as the political framework. It is a real challenge for the European Union to win back the confidence of European voters because I think a lot of people are increasingly concerned that they don’t have power or influence over tools and situations that arise in their day-to-day lives.

The European Union needs to be more user-centric. It must provide more control [directly] to users. If the European Union decides that intermediaries could not develop technologies specifically to disempower end users, we could have a major shift in global political and technical culture, not only in Europe but worldwide, that would benefit everyone.

Mike Jones - MicrosoftJWK Thumbprint spec adopted by JOSE working group [Technorati links]

November 11, 2014 08:01 PM

The JSON Web Key (JWK) Thumbprint specification was adopted by the JOSE working group during IETF 91. The initial working group version is identical to the individual submission version incorporating feedback from IETF 90, other than the dates and document identifier.

JWK Thumbprints are used by the recently approved OpenID Connect Core 1.0 incorporating errata set 1 spec. JOSE working group co-chair Jim Schaad said during the working group meeting that he would move the document along fast.

The specification is available at:

An HTML formatted version is also available at:

Kuppinger ColeHow to Protect Your Data in the Cloud [Technorati links]

November 11, 2014 06:07 PM
In KuppingerCole Podcasts

More and more organizations and individuals are using the Cloud and, as a consequence, the information security challenges are growing. Information sprawl and the lack of knowledge about where data is stored are in stark contrast to the internal and external requirements for its protection. To meet these requirements it is necessary to protect data not only but especially in the Cloud. With employees using services such as iCloud or Dropbox, the risk of information being out of control and l...



Watch online

Kuppinger ColeA Haven of Trust in the Cloud? [Technorati links]

November 11, 2014 08:59 AM
In Mike Small

In September a survey was published in Dynamic CISO showing that "72% of Businesses Don't Trust Cloud Vendors to Obey Data Protection Laws and Regulations". Given this lack of trust by their customers, what can cloud service vendors do?

When an organization stores data on its own computers, it believes that it can control who can access that data. This belief may be misplaced, given the number of reports of data breaches from on-premise systems, but most organizations trust themselves more than they trust others. When the organization stores data in the cloud, it has to trust the cloud provider, the cloud provider's operations staff, and the legal authorities with jurisdiction over the cloud provider's computers. This creates many serious concerns about moving applications and data to the cloud, especially in Europe and in particular in geographies like Germany, where there are very strong data protection laws.

One approach is to build your own cloud, where you have physical control over the technology but can exploit some of the flexibility that a cloud service provides. This is the approach being promoted by Microsoft. In October, Microsoft in conjunction with Dell announced their "Cloud Platform System". This is effectively a way for an organization to deploy Dell servers running the Microsoft Azure software stack on premise. Using this platform, an organization can build and deploy on-premise applications that are Azure cloud ready. At the same time it can see for itself what goes on "under the hood". Then, when the organization has built enough trust, or when it needs more capacity, it can easily extend the existing workload into the cloud. This approach is not unique to Microsoft – other cloud vendors also offer products that can be deployed on premise where there are specific needs.

In the longer term Microsoft researchers are working to create what is being described as a “Haven in the Cloud”.  This was described in a paper at the 11th USENIX Symposium on Operating Systems Design and Implementation.  In this paper, Baumann and his colleagues offer a concept they call “shielded execution,” which protects the confidentiality and the integrity of a program, as well as the associated data from the platform on which it runs—the cloud operator’s operating system, administrative software, and firmware. They claim to have shown for the first time that it is possible to store data and perform computation in the cloud with equivalent trust to local computing.

The Haven prototype uses the hardware protection proposed in Intel's Software Guard Extensions (SGX)—a set of CPU instructions that can be used by applications to isolate code and data securely, enabling protected memory and execution. It addresses the challenges of executing unmodified legacy binaries and protecting them from a malicious host. It is based on "Drawbridge", another piece of Microsoft research, which is a new kind of virtual-machine container.

The question of trust in cloud services remains an important inhibitor to their adoption. It is good to see that vendors are taking these concerns seriously and working to provide solutions. Technology is an important component of the solution, but it is not, in itself, sufficient. In general, computers do not breach data by themselves; human interactions play an important part. The need for cloud services to support better information stewardship, as well as for cloud service providers to create an information stewardship culture, is also critical to creating trust in their services. From the perspective of the cloud service customer, my advice is always: trust, but verify.

November 10, 2014

Ian GlazerThe Only Two Skills That Matter: Clarity of Communications and Empathy [Technorati links]

November 10, 2014 04:49 PM

I meant to write a post describing how I build presentations, but I realized that I can’t do that without writing this one first.

I had the honor of working with Drue Reeves when I was at Burton and Gartner. Drue was my chief of research, and as an agenda manager I worked closely with him to shape what and how our teams would research. More importantly, we got to define the kind of analysts we hired. We talked about all the kinds of skills an analyst should have. We'd list out all sorts of technical certifications, evidence of experience, and the like. But in the end, that list always reduced down to two things. If you have them, you can be successful in all your endeavors. The two most important skills someone needs to be successful in what they do are:

Radical clarity

To make oneself understood and understandable regardless of the situation. Clarity that transcends generations, languages, sets of belief, and knowledge. That is what is required. And that is a far cry from the typical “strong communication skills” b.s. you see on a lot of resumes.

The trick to communicating clearly is realizing that it’s not about the prettiness or exactness of what you say. It’s all in understanding what will be absorbed by and resonate with the other: the person across from you, the audience, the reader, etc. Strip all of the superfluous bits and layers away and get down to that genuine message that you want the other to keep with them.

To do that requires empathy.

Genuinely giving a shit

There is no way to communicate with an audience (or even just another person) unless you actually care about them. You have to care about their wellbeing. You have to be invested in their success. Even when they don’t want to hear your heretical opinion. Even when they have competing ideas. Especially then.

If you start phoning it in, if you just give a stock answer or deliver the same old deck in the same old format, the audience knows, and they know that you've checked out and are no longer interested in their success. Even if you hold a universal truth and wondrous innovation, the audience will not care because you don't either.

Clarity and empathy. These aren't skills you take classes in. Sure, you can refine techniques through training. But you actually get better at these things by simply trying to do them. Just like giving presentations. I'll tackle that one next…

 

Ludovic Poitou - ForgeRockHighlights of IRMSummit Europe 2014… [Technorati links]

November 10, 2014 03:10 PM

Powerscourt hotel
Last week, at the nice Powerscourt Estate outside Dublin, Ireland, ForgeRock hosted the European Identity Relationship Management Summit, attended by over 200 partners, customers, prospects, and users of ForgeRock technologies. What a great European IRMSummit it was!

If you haven’t been able to attend, here’s some highlights:

I heard many talks and discussions about Identity being the cornerstone in the digital transformation of enterprises and organizations, shifting identity projects from cost centers to revenue generators.

There was lots of focus on consumer identity and access management, with some perspectives on current identity standards and what is going to be needed from the IRM solutions. We’ve also heard from security and analytics vendors, demonstrating how ForgeRock’s Open Identity Stack can be combined with the network security layer or with analytics tools to increase security and context awareness when controlling access.

User-Managed Access is getting more and more real, as the specifications are getting close to being finalised, and ForgeRock announced the OpenUMA initiative to foster ideas and code around it. See forgerock.org/openuma.

Chris and Allan around an Internet connected coffee machine, powered by ARM
There were many talks about the Internet of Things, and especially demonstrations around defining the relationship between a Thing and a User and securing access to the data produced by the Thing. We saw a door lock being unlocked with an NFC-enabled mobile phone by provisioning the appropriate credentials over the air, and a smart coffee machine able to identify the coffee type and the user, pushing the data to a web service and asking the user for consent to share it. There's a common understanding that all the things will have identities and relations with other identities.

There were several interesting discussions and presentations about Digital Citizens, illustrated by reports from deployments in Norway, Switzerland, and Nigeria, and by the European Commission cross-border authentication initiatives STORK and eIDAS.

Half a day was dedicated to ForgeRock products, with introductory trainings and demonstrations of coming features in OpenAM, OpenDJ, OpenIDM and OpenIG. On Wednesday afternoon, I gave two presentations: one on OpenIG, demonstrating the ease of integrating OAuth 2.0 and OpenID Connect to protect applications and APIs, and one on OpenDJ, demonstrating the flexibility and power of the REST to LDAP interface.

All presentations and materials are available online as PDFs (I will update this article when the videos are also available). Meanwhile, you can find here a short summary of the Summit in a video produced by Markus.

Powerscourt Estate House / Powerscourt Estate gardens
The summit wouldn't be such a great conference without a plan for social interactions and fun. This year we had a nice dinner in the Powerscourt house (aka the Castle) followed by live music in the pub. The band was great, but became even better when Joni and Eve joined them for a few songs, to the great pleasure of all the guests.

The band
Sláinte
Of course, I have to admit that the best part of the IRM Summit in Ireland was the pints of Guinness!

To all attendees, thank you for your participation, the interesting discussions and the input to our products. I'm looking forward to seeing you again next year for the 2015 edition. Sláinte!

As usual, you can find the photos that I've taken at the Powerscourt Estate on Flickr. Feel free to copy them for non-commercial use, and if you do republish them, I would appreciate getting credit.

[Updated on Nov 11] Added link to the highlight video produced by Markus
[Updated on Nov 13] Added link to the slideshare folder where all presentations have been published


Filed under: Identity Tagged: conference, ForgeRock, identity, IRM, IRMSummit2014, IRMSummitEurope, openam, opendj, openidm, openig, summit

KatasoftBootstrapping an Express.js App with Yeoman [Technorati links]

November 10, 2014 03:00 PM

So, you want to build an Express.js web application, eh? Well, you’re in the right place!

In this short article I’ll hold your hand, sing you a song (not literally), and walk you through creating a bare-bones Express.js web application and deploying it on Heroku with Stormpath and Yeoman.

In the next few minutes you’ll have a live website, ready to go, with user registration, login, and a simple layout.

Step 1: Get Ready!

Before we dive into the code and stuff, you’ve got to get a few things setup on your computer!

First off, you need to go and create an account with Heroku if you haven’t already. Heroku is an application hosting platform that’s really awesome! So awesome that I even wrote a book about them (true story)! But what makes them really great for our example here today is that they’re free, and easy to use.

Once you've created your Heroku account, you then need to install their toolbelt app on your computer. This is what lets you build and deploy Heroku apps from the command line.

Next off, you need to have Node installed and working on your computer. If you don’t already have it installed, go visit the Node website and get it setup.

Lastly, you need to install a few Node packages. You can install them all by running the commands below in your terminal:

$ sudo npm install -g yo generator-stormpath

The yo package is yeoman — this is a tool we’ll be using to create an application for us.

The generator-stormpath package is the actual project that yo will install — it is what holds the actual project code and information we need to get started.

Got through all that? Whew! Good work!

Step 2: Bootstrap a Project

OK! Now that the boring stuff is over, let’s create a project!

The first thing we need to do is create a directory to hold our new project. You can do this by running the following command in your terminal:

$ mkdir myproject
$ cd myproject

You should now be inside your new project directory.

At this point, you can now safely bootstrap your new project by running:

$ yo stormpath

This will kick off a script that creates your project files, and asks if you’d like to deploy your new app to Heroku. When prompted, enter ‘y’ for yes. If you don’t do this, your app won’t be live :(

NOTE: If you don’t get asked a question about Heroku, then you didn’t follow my instructions and install Heroku like I said to earlier! Go back to Step #1!

Assuming everything worked, you should see something like this:

Tip: For a full-resolution version so you can actually see what I'm typing, view this image directly. yo-stormpath-bootstrap

Now, if you take a look at your directory, you’ll notice there are a few new files in there for you to play around with:

We’ll get into the code in the next section, but for now, go ahead and run:

$ heroku open

In your terminal. This will open your browser automatically, and open up your brand new LIVE web app! Cool, right?

As I’m sure you’ve noticed by now, your app is running live, and lets you sign up, log in, log out, etc. Pretty good for a few seconds of work!

And of course, here are some obligatory screenshots:

Screenshot: Yo Stormpath Index Page yo-stormpath-index

Screenshot: Yo Stormpath Registration Page yo-stormpath-registration

Screenshot: Yo Stormpath Logged in Page yo-stormpath-logged-in

Screenshot: Yo Stormpath Login Page yo-stormpath-login

Step 3: CODE THINGS! GO!

So, as of this very moment in time, we’ve:

So now that we’ve got those things out of the way, we’re free to build a real web app! This is where the real fun begins!

For thoroughness, let’s go ahead and implement a simple dashboard page on our shiny new web app.

Go ahead and open up the routes/index.js file, and add a new function call:

router.get('/dashboard', stormpath.loginRequired, function(req, res) {
  res.send('Hi, ' + req.user.givenName + '. Welcome to your dashboard!');
});

Be sure to place this code above the last line in the file that says module.exports = router.

This will render a nice little dashboard page for us. See the stormpath.loginRequired middleware we’re using there? That’s going to force the user to log in before allowing them to access that page — cool, huh?

You’ve also probably noticed that we’re saying req.user.givenName in our route code — that’s because Stormpath’s library automatically creates a user object called req.user once a user has been logged in — so you can easily retrieve any user params you want!

NOTE: More information on working with user objects can be found in our official docs.
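
For illustration, here's one more protected route you could drop into routes/index.js (above the module.exports = router line, just like before). It's a minimal sketch that assumes the standard Stormpath account fields givenName, surname, and email are exposed on req.user, as givenName is in the route above:

router.get('/profile', stormpath.loginRequired, function(req, res) {
  // req.user is populated by the Stormpath middleware once the user is logged in.
  res.json({
    name: req.user.givenName + ' ' + req.user.surname,
    email: req.user.email
  });
});

This is purely illustrative; the dashboard route above is the one we'll keep using below.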

Anyway — now that we’ve got that little route written, let’s also tweak our Stormpath setup so that once a user logs in, they’ll be automatically redirected to the new dashboard page we just wrote.

To do this, open up your index.js file in the root of your project and add the following line to the stormpath.init middleware:

app.use(stormpath.init(app, {
  apiKeyId:     process.env.STORMPATH_API_KEY_ID,
  apiKeySecret: process.env.STORMPATH_API_KEY_SECRET,
  application:  process.env.STORMPATH_URL || process.env.STORMPATH_APPLICATION,
  secretKey:    process.env.STORMPATH_SECRET_KEY,
  redirectUrl: '/dashboard',
}));

The redirectUrl setting (explained in more detail here) tells Stormpath that once a user has logged in, they should be redirected to the given URL — PERFECT!

Now, let’s see if everything is working as expected!

$ git add --all
$ git commit -m "Adding a new dashboard page!"
$ git push heroku master

The last line there, git push heroku master, will deploy your updates to Heroku. Once that’s finished, just run:

$ heroku open

To open your web browser to your app page again — now take a look around! If you log into your account, you’ll see that you’ll end up on the new dashboard page! It should look something like this:

Screenshot: Yo Stormpath Dashboard Page yo-stormpath-dashboard

BONUS: What happens if you log out of your account, then try visiting the dashboard page directly? Does it let you in? HINT: NOPE!

Step 4: READ MORE THINGS

So if you’ve gotten this far — congrats! You are awesome, amazing, and super cool. You’re probably wondering “What next?” And, that’s a great question!

If you’re hungry for more, you’ll want to check out the following links:

They’re all awesome tools, and I hope you enjoy them.

Lastly — if you’ve got any feedback, questions, or concerns — leave me a comment below. I’ll do my best to respond in a timely fashion.

Now GO FORTH and build some stuff!

November 09, 2014

OpenID.netErrata to OpenID Connect Specifications Approved [Technorati links]

November 09, 2014 07:28 PM

Errata to the following specifications have been approved by a vote of the OpenID Foundation members:

An Errata version of a specification incorporates corrections identified after the Final Specification was published.

The voting results were:

Total votes: 46 (out of 194 members = 24% > 20% quorum requirement)

The original final specification versions remain available at these locations:

The specifications incorporating the errata are available at the standard locations and at these locations:

— Michael B. Jones – OpenID Foundation Board Secretary

OpenID.netImplementer’s Draft of OpenID 2.0 to OpenID Connect Migration Specification Approved [Technorati links]

November 09, 2014 07:26 PM

The following specification has been approved as an OpenID Implementer’s Draft by a vote of the OpenID Foundation members:

An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification.

This Implementer’s Draft is available at these locations:

The voting results were:

Total votes: 46 (out of 194 members = 24% > 20% quorum requirement)

— Michael B. Jones – OpenID Foundation Board Secretary

November 08, 2014

Anil JohnWhy Multi-Factor and Two-Factor Authentication May Not Be the Same [Technorati links]

November 08, 2014 06:20 PM

Two Factor Authentication is currently the bright and shining star that everyone, from those who offer 'free' services to those who offer high value services, wants to know and emulate. When designing such implementations, it is important to understand the implications for identity assurance if the two-factor implementation does not correctly incorporate the principles of multi-factor authentication.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


The opinions expressed here are my own and do not represent my employer’s view in any way.

November 07, 2014

Julian BondSaccades and LED lights. [Technorati links]

November 07, 2014 06:37 PM

Paul MadsenApplication unbundling & Native SSO [Technorati links]

November 07, 2014 04:33 PM
You used to have a single application on your phone from a given social provider; you likely now have multiple.

Where there was Google Drive, there are now Sheets, Docs, and Slides - each an individual application optimized for a particular document format.

Where the chat function used to be a tab within the larger Facebook application, there is now Facebook Messenger - a dedicated chat app.

LinkedIn has 4 individual applications.

The dynamic is not unique to social applications.



 According to this article
Mobile app unbundling occurs when a feature or concept that was previously a small piece of a larger app is spun off on it’s own with the intention of creating a better product experience for both the original app and the new stand-alone app.
The unbundling trend seems mostly driven by the constraints of mobile devices - multiple functions hidden behind tabs may work on a desktop browser, but on a small screen, they may be hidden and only accessible through scrolling or clicking.

That was the stated justification for Facebook's unbundling of Messenger
We wanted to do this because we believe that this is a better experience. Messaging is becoming increasingly important. On mobile, each app can only focus on doing one thing well, we think. The primary purpose of the Facebook app is News Feed. Messaging was this behavior people were doing more and more. 10 billion messages are sent per day, but in order to get to it you had to wait for the app to load and go to a separate tab. We saw that the top messaging apps people were using were their own app. These apps that are fast and just focused on messaging. You're probably messaging people 15 times per day. Having to go into an app and take a bunch of steps to get to messaging is a lot of friction.
Of course, unbundling clearly isn't for everybody ....



I can't help but think about unbundling from an identity angle. Do the math - if you break a single application up into multiple applications, then what was a single authentication & authorization step becomes multiple such steps. And, barring some sort of integration between the unbundled applications (where one application could leverage a 'session' established for another), this would mean the user having to explicitly log in to each and every one of those applications.

The premise of 'one application could leverage a session established for another' is exactly what the Native Applications (NAPPS) WG in the OpenID Foundation is enabling in a standardized manner. NAPPS is defining both 1) an extension and profile of OpenID Connect by which one native application (or the mobile OS) can request a security token for some other native application, and 2) mechanisms by which the individual native applications can request and return such tokens.

Consequently, NAPPS can mitigate (at least one of) the negative implications of unbundling.

The logical end-state of the trend towards making applications 'smaller' would appear to be applications that are fully invisible, ie those that the user doesn't typically launch by clicking on an icon, but rather receives interactive notifications & prompts only when relevant (as determined by the application's algorithm). What might the implications of such invisible applications be for identity UX?







Rakesh RadhakrishnanESA embedded in EA [Technorati links]

November 07, 2014 12:57 AM
Similar to "Secure by Design" or "Privacy Baked In", to me no Enterprise Architecture (EA) initiative can succeed without a solid Enterprise Security Architecture (ESA) in place. An ESA is also driven by business direction and strategy, and takes business risks as the driving force to identify an "as-is" state and an "aspired" state. ESA focuses on security, data privacy, incident response modernization/optimization, compliance and more, while EA focuses more on business process modernization, business applications and the relevant IT infrastructure (private and public cloud). All the systems modernization programs, NG SDLC, data center optimization and more that are driven by an EA effort rely heavily on the success of, and the foundation set up by, an ESA. An ESA in turn relies on the EA program, especially the Enterprise Data Architecture (driven by enterprise-wide MDM and Big Data initiatives), to identify and classify high-risk and medium-risk data and their respective data flows. Therefore a successful EA team will comprise specialist EAs focused on EA for cloud/infrastructure, ESA, enterprise data architecture, enterprise application architecture, enterprise integration architecture, and more, who work as a team and collaborate extensively (collaboration leading to innovative ways of integration). Here is an excellent white paper describing the synergies of EA (TOGAF 9) and ESA (SABSA). Adopting an integrated methodology, such as TOGAF with SABSA or TOGAF ADM with SEI ADDM (for secure SDLC), is critical, as each methodology is focused within one domain (SEI ADDM for secure SDLC, TOGAF for EA, SABSA for ESA, ITIL for enterprise service management, OMG MDA for enterprise data and metadata architecture, or Oracle's EA Framework for enterprise information architecture, and more). This paper is one that I authored in 2006 that speaks to these integrated views, as I had just earned my Executive Masters in IT Management from the University of Virginia, along with my TOGAF certification as an EA and SEI certification as a Software Architect. Sun Microsystems also heavily invested in training their battalion of employees on Six Sigma (what was then referenced as Sun Six Sigma), along with ITIL and Prince 2. I wanted to align these tools and techniques so that they made sense when utilized together. This paper also aligns all these methodologies for EA, ESA, enterprise software architecture, and more.
November 06, 2014

Rakesh RadhakrishnanInvesting in Systemic Security - An enabler or an impediment [Technorati links]

November 06, 2014 03:39 PM
An enterprise in any industry today can function as a business if and only if:
a) it can protect the intellectual property that acts as its core competitive differentiator
b) it can survive a disaster, like an earthquake, not just via collected insurance money but also through continued operations
c) it can safely and compliantly extend to cloud computing models to derive the economies of scale promised by clouds
d) it can maintain the confidentiality and privacy of data - the reputational damage caused by one data breach can kill a business completely
e) it can ensure the uptime and availability of its transactional site (its e-commerce internet presence) and of its communication and collaboration tools (over the internet again).
 
Therefore, if a business entity needs to survive and thrive in today's world, it is an oxymoron to see "Security (and Security Investments) as an Impediment to Business". To me, investing in security is investing in the "quality" aspects of a business and hence has always been perceived as a true enabler. Investing in my health and immune system is an enabler for me to be more productive physically and mentally, which in turn helps me personally, physically and professionally. The same is true with IT security investments: anywhere between 0.5% and 1% of a business entity's revenue is expected to be its annual IT security budget (for example, $100m to $200m for a 20 billion dollar business entity), when a typical enterprise is spending 5% on IT as a whole.
 
In addition to investing prudently with a Systemic Security Architecture (a topic for another blog post), it is equally important to make an organization's culture (every single employee) security conscious (also a topic for another blog post).
 
 
 
 

KatasoftHosted Login and API Authentication for Python Apps [Technorati links]

November 06, 2014 03:00 PM

If you’re building Python web apps — you might have heard of our awesome Python libraries which make adding users and authentication into your web apps way easier:

What you probably didn’t know, however, is that our Python library just got a whole lot more interesting. Last week we made a huge release which added several new features.

The Basics

Since the beginning of time, our Python library has made creating user accounts, managing groups and permissions, and even storing profile information incredibly easy.

If you’re not familiar with how this works, take a look at the code below:

from stormpath.client import Client

client = Client(id='xxx', secret='xxx')

# Create an app.
app = client.applications.create({
    'name': 'myapp',
}, create_directory=True)

# Create a user.
account = app.accounts.create({
    'given_name': 'Randall',
    'surname': 'Degges',
    'email': 'r@rdegges.com',
    'password': 'iDONTthinkso!222',
    'custom_data': {
        'secret_keys': [
            'blah',
            'woot',
            'bankstuff',
        ],
    },
})

# Create a group.
admins_group = app.groups.create({ 'name': 'admins' })

# Add the user to the group.
account.add_group(admins_group)

The code above creates a new user account, stores some account information, creates a group, and puts that user into the group — all in a few lines of code.

With Stormpath, all users are stored on Stormpath’s servers, where we encrypt the user information and provide abstractions and libraries to make handling authentication as simple as possible.

NOTE: You can install our library via PyPI, the Python package manager: pip install stormpath.

ID Site

A while back, some of us over here were chatting about ways to make authentication better, and the idea of ID Site was born.

What if, as a developer, you didn’t have to render views and templates to perform common authentication tasks?

What if, all you had to do was redirect the user to some sub-domain (like login.yoursite.com), and all of the authentication and registration stuff would be completely taken care of for you?

Furthermore — what if you could fully customize the way your login pages look using all the latest-and-greatest tools?

It would be totally awesome, right?

Well — that’s what ID Site does!

ID Site is a hosted product we run that allows you to easily handle complex authentication rules (including SSO, social login, and a bunch of other stuff), while providing users a really nice, clean experience.

And as of our latest Python release — you can now use it really easily!

To redirect a user to your ID Site to handle authentication stuff, all you need to do is generate a secure URL using our helper functions:

url = app.build_id_site_redirect_url('http://login.mysite.com/redirect')
# Then you'd want to redirect the user to url.

By default, the normal login page looks something like this (depending on whether or not you have social login and other features enabled):

id-site-python

After the user signs in, they'll be redirected back to whatever URL you specify as a parameter above — then you can create a user session and persist the user's information — this way you know they've been logged in.

Again — this is super easy.

Assuming you’re writing code to handle the redirect, you’d do something like this:

result = app.handle_id_site_callback(request)
# result.account is the user's account.

Bam! And just like that, you can register, login, and logout users.
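
To make that concrete, here's a rough sketch of wiring the two calls above into a Flask app. Flask is an assumption (any framework works), flask_app is a hypothetical name chosen to avoid clashing with the Stormpath app object from earlier, and whether handle_id_site_callback accepts a framework request object or the request URL can vary by SDK version:

from flask import Flask, redirect, request

flask_app = Flask(__name__)

@flask_app.route('/login')
def login():
    # Send the user off to your hosted ID Site login page.
    url = app.build_id_site_redirect_url('http://login.mysite.com/redirect')
    return redirect(url)

@flask_app.route('/redirect')
def id_site_callback():
    # Stormpath redirects back here once the user has signed in.
    result = app.handle_id_site_callback(request)
    return 'Hello, ' + result.account.given_name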

API Keys and Authentication!

Let’s say you’re building a REST API, and need to ensure only certain users have access to the API. This means you’ve got to generate API keys for each user, and authenticate incoming API requests.

Depending on the tools and libraries you’re using, this could be either a very simple or very painful task.

With our latest Python release, you can now generate as many API keys as you want for each of your users. This means building API services just got a wholeeeeee lot easier:

# Generate an API key for a user.
key = account.api_keys.create()
print key.id, key.secret

Each API key has two parts:

Once you’ve generated an API key for a user, and given that key TO the user, they can then use their API key to authenticate against your API service using either:

To authenticate a user via HTTP Basic Authentication, you write code that looks like this:

# Assuming the user sent you their API credentials properly, passing in
# the `headers` option lets our library handle authentication for you.
result = app.authenticate_api(
    allowed_scopes=None,
    http_method=None,
    uri=None,
    body=None,
    headers=request.headers
)
# result.account is now the user's account object!

The above code will work properly when a developer sends an authenticated API request of the form:

GET /troopers/tk421/equipment 
Accept: application/json
Authorization: Basic MzRVU1BWVUFURThLWDE4MElDTFVUMDNDTzpQSHozZitnMzNiNFpHc1R3dEtOQ2h0NzhBejNpSjdwWTIwREo5N0R2L1g4
Host: api.trooperapp.com
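
For completeness, here's roughly what the consumer's side of that Basic request could look like in Python, using the requests library (an assumption, purely for illustration; the endpoint is the hypothetical one from the example above and key is the API key created earlier):

import requests

resp = requests.get(
    'https://api.trooperapp.com/troopers/tk421/equipment',
    auth=(key.id, key.secret),  # requests builds the Basic Authorization header
)
print resp.status_code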

For OAuth flows, things are equally simple — firstly, you need to request an OAuth token by exchanging your API keys for an OAuth token:

POST /oauth/token
Accept: application/json
Authorization: Basic MzRVU1BWVUFURThLWDE4MElDTFVUMDNDTzpQSHozZitnMzNiNFpHc1
Content-Type: application/x-www-form-urlencoded
Host: api.trooperapp.com

    grant_type=client_credentials

When this request is made, on the server-side you can generate a token by calling the authenticate_api method:

result = app.authenticate_api(
    allowed_scopes=None,
    http_method='POST',
    uri='/blah',
    body=request.body,
    headers=request.headers
)
# result.token is now the user's OAuth token object!

From this point on, the developer can now pass that token as their credentials:

GET /troopers/tk421/equipment 
Accept: application/json
Authorization: Bearer 7FRhtCNRapj9zs.YI8MqPiS8hzx3wJH4.qT29JUOpU64T
Host: api.trooperapp.com
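
Putting the consumer's side of the OAuth flow together, a client could exchange its key pair for a token and then present it as a bearer credential. This is a sketch using the requests library and assumes a standard OAuth2 JSON response containing an access_token field:

import requests

# 1. Exchange the API key pair for a token (mirrors the POST /oauth/token above).
token_resp = requests.post(
    'https://api.trooperapp.com/oauth/token',
    auth=(key.id, key.secret),
    data={'grant_type': 'client_credentials'},
)
access_token = token_resp.json()['access_token']

# 2. Present the bearer token on subsequent API calls.
resp = requests.get(
    'https://api.trooperapp.com/troopers/tk421/equipment',
    headers={'Authorization': 'Bearer ' + access_token},
)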

And if you want to secure an API endpoint with OAuth, you just use the same authenticate_api method as before:

result = app.authenticate_api(
    allowed_scopes=None,
    http_method='POST',
    uri='/blah',
    body=request.body,
    headers=request.headers
)
# result.token is now the user's OAuth token object!

Cool, right?!

Using our new API stuff, you can easily build out a public (or private) facing API service complete with both HTTP Basic Authentication and OAuth2.
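
As an end-to-end sketch, a protected endpoint could look like this. Flask is again an assumption, the route is the hypothetical one from the examples above, and app is the Stormpath application object created at the start:

from flask import Flask, request, jsonify

flask_app = Flask(__name__)

@flask_app.route('/troopers/<trooper_id>/equipment')
def equipment(trooper_id):
    # The same authenticate_api call as above handles both Basic and Bearer requests.
    result = app.authenticate_api(
        allowed_scopes=None,
        http_method=request.method,
        uri=request.url,
        body=request.data,
        headers=request.headers,
    )
    return jsonify(owner=result.account.email, trooper=trooper_id)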

Github and LinkedIn

Lastly, we've also added two brand new social providers to the platform: Github and LinkedIn.

This means that if you want to allow your web users to log into your app via:

You can easily do so with just a few lines of code!

Future Stuff

We’re still working really hard to improve our Python library — we’re cleaning up our docs, simplifying our internal APIs, and doubling down on our efforts to make it the most awesome, simple, and powerful tool out there.

If you have any feedback (good or bad), please send us an email! We'd love to hear from you: support@stormpath.com

Kuppinger ColeLeadership Compass: IAM/IAG Suites - 71105 [Technorati links]

November 06, 2014 01:09 PM
In KuppingerCole

Leaders in innovation, product features, and market reach for IAM/IAG Suites. Integrated, comprehensive solutions for Identity and Access Management and Governance, covering all of the major aspects of this discipline such as Identity Provisioning, Federation, and Privilege Management. Your compass for finding the right path in the market.


more

Julian BondIf Harry Potter is so clever, why isn't he dealing with climate change, pollution and the energy crisis... [Technorati links]

November 06, 2014 08:35 AM
If Harry Potter is so clever, why isn't he dealing with climate change, pollution and the energy crisis? And peace in the Middle East.

http://io9.com/seriously-why-isnt-hogwarts-using-all-that-magic-to-ex-1655211405 
 Seriously, Why Isn't Hogwarts Using All That Magic To Explore Space? »
We've all asked it at some point or another: if the wizards of the Harry Potter universe can conjure up such amazing miracles, why don't they use it to solve the energy crisis or explore the wonders of the universe? Boulet's latest webcomic dreams up all magical possibilities for the wizarding world.

[from: Google+ Posts]

Rakesh RadhakrishnanBAN BYOD and Big Data [Technorati links]

November 06, 2014 07:00 AM
Here is a link to the paper and presentation I am working on, called "Strategic Technology Architecture Roadmap", based on the digital disruption that is taking shape, especially for digital health. There are hundreds of papers on Body Area Networks, and several conferences that address the same and several sub-topics within BAN as well (such as nano-bots traversing a nano-net for blood flow, etc.). To me, these sensor devices (for drug diagnostics, drug discovery personalization and drug delivery) connected together as a BAN can create a comprehensive (diagnostic telemetry), continuous (real-time collection) and customized (personalized) loop of diagnostics and delivery (very useful for patients' quality of life and for efficacy).

Now these BANs talk to clouds via a mobile or BYOD device (like the iPhone 6 with its tons of mobile medical apps), over 4G LTE today and, in the near future, 5G networks (gigabit upload/download speeds per second). These powerful endpoints become the conduit and gateway between these small personal networks (such as BANs, VANs for vehicle area networks, HANs for home area networks, and more). Hence a BYOD device securely bootstrapping into the access networks (4G and 5G) becomes very critical for secure paths to the clouds. The third dimension in these developments is Big Data technologies (like MapReduce, tensors, tuples, DDS, metadata and more) that allow for exceptionally high-speed analytics on these high-volume, multi-variety (multimedia-like data types) and high-velocity data sets moving bidirectionally at gigabytes per second.

This is expected to strain the back-end systems and hardware that host these service clouds and to create bottlenecks in the cloud systems. And then I see that Oracle has released systems based on T5 CPUs (see this video), truly amazing technology: something like 128 or 256 cores (the cores themselves communicate over gigabytes-per-second internal shared memory) with 4 terabytes of system memory, so you can store the entire database in memory if needed, per session/per user/per patient.

To me, as we move forward from 2015 to 2020, the health care industry (think Fitbit), the telecom industry (think iPhone 6 and 5G) and the IT industry's cloud computing (think SOA on steroids) are going to discover new usage models for these disruptive digital technologies (IoT, Big Data, social, mobile, etc.), which will act as a catalyst for the next economic boom, led especially by these industries, as we are now laying the infrastructural foundation for an innovation-driven economic future (akin to the national highway system in the 1920s revitalizing the US economy and spurring new industries).

 
November 05, 2014

Paul MadsenSticky Fingers [Technorati links]

November 05, 2014 06:51 PM
Digits is a new phone-number based login system from Twitter.
Digits is a simple, safe way of using your phone number to sign in to your favorite apps.
Note that Digits is not just using your phone to sign in (there are a number of existing mobile-based systems), but your phone number. 

Digits is an SMS-based login system (unlike mobile OTP systems like Google Authenticator). When trying to log in to some service, the user supplies their phone number, at which they soon receive an SMS carrying a one-time code to be entered into the login screen. After Twitter's service validates the code, the application can be (somewhat) confident that the user is the authorized owner of that phone number.

Now, the above makes it clear that Digits relies on only a single factor, ie a 'what you have' of the phone associated with the given phone number. This post even brags that you need not worry about any additional account names or passwords. But that same post claims that Digits is actually more than a single factor
Digits.com, an easy way for your users to manage their Digits accounts and enable two-factor authentication
As much as I squint, I can see no other factor in the mix. (And it sure isn't the phone number.)

Digits apparently also has privacy advantages.
Digits won't post on your behalf, so what you say and where you say it is completely up to you
Well, to be precise, Digits can't post on your behalf ... And is it not somewhat ironic that Twitter touts as an advantage of Digits the fact that it is not hooked into your Twitter account??

Presumably this is presented in contrast to the existing 'Sign-in with Twitter' system, use of which can allow a user to authorize applications to post to Twitter on their behalf (as the system is based on OAuth 1.0).

But of course, 'Sign-in with Twitter' allows applications to post on behalf of users only because Twitter made the business decision to make this permission part of the default set of authorizations. Twitter could have chosen to make their consent more granular and tightened up the default.

Dick Hardt analyzed Digits and highlighted two fundamental issues with using phone numbers as identifiers


  1. the privacy risk associated with a user presenting the same identifier to all applications (as it enables subsequent correlation amongst those applications without the user's consent). It's pretty trivial to spin up new email addresses (even disposable ones) to segment your online interactions and prevent correlation. Is that viable for phone numbers?
  2. that applications generally aren't satisfied with only knowing who a particular user is, but almost always want to know the 'what' as well, i.e. their other identity attributes, social streams, etc.

Dick, having made the second point, perversely then conjectures that it may not be an issue
as mobile apps replace desktop web sites, the profile data may not be as relevant as it was a decade ago
I can't imagine why the native vs browser model would impact something as fundamental as wanting to understand your customer?  

Twitter actually tries to position this limitation as a strength of Digits
Each developer is in control with Digits. It lets you build your own profiles and apps, giving you the security of knowing your users are SMS-verified. 
The motivation for Digits.com becomes a bit clearer when you read more
We built Digits after doing extensive research around the world about how people use their smartphones. What we found was that first-time Internet users in places like Jakarta, Mumbai and São Paulo were primarily using a phone number to identify themselves to their friends.
Twitter must have looked at their share in these markets and determined they needed a different way to mediate users' application interactions.

Source - http://stats.areppim.com/stats/stats_socmediaxtime_afr.htm











KatasoftBuild an API Service with Oauth2 Authentication, using Restify and Stormpath [Technorati links]

November 05, 2014 06:00 PM

Building APIs is a craft; you have to balance the integrity of your data model with the convenience needs of your API consumers. As you build an API, you will come across these questions:

In this article I’ll focus on the concerns of authentication and access control, specifically within the context of Restify – a Node.js Framework for building APIs. I will walk you through the process of building an API with the Restify framework and how you can secure it with Stormpath’s API Authentication features.

We’ll be using the Oauth2 Client Credentials workflow as an authentication strategy and JWTs for the format of the tokens.

I’ll touch on client libraries and the resource design. That section is heavily influenced by how we have designed our own API and I encourage you to read our principles on Designing REST JSON APIs and Node API Clients

Why Restify?

Restify is an HTTP framework for Node.js that is focused on building API applications. It differs from Express (the other Node.js web framework) in its focus on APIs. Express gives you a lot of things you need for web applications, like a templating engine and a component-like "middleware" design. Because Restify is focused on APIs it does not provide those things. Instead it provides things like DTrace support and request throttling – very important tools for API services.
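
For example, the request throttling mentioned above ships as a bundled Restify plugin in the Restify versions current at the time of writing. A rough sketch of enabling it (the numbers are placeholders, not recommendations):

var restify = require('restify');

var server = restify.createServer({ name: 'Things API Server' });

// Allow bursts of up to 100 requests, then sustain 50 requests/second per client IP.
server.use(restify.throttle({
  burst: 100,
  rate: 50,
  ip: true
}));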

What is Stormpath?

Stormpath is an API service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications. Our API enables you to:

In short: we make user account management a lot easier, more secure, and more scalable than what you’re probably used to. Our sample application will use Stormpath to provision API keys for the users of our API.

Ready to get started? Register for a free developer account at https://api.stormpath.com/register

Why Oauth2 and JWT?

The current de-facto practice for API Authentication is to provide an API Key/Secret combination to the consumer of your API and have them submit this as the Authorization header on every request, which looks like this:

Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==

The value after Basic is a base64-encoded version of the key and secret. This will be sent on every request. Assuming you use HTTPS, this is a secure way to authenticate users.

In our demo application we will take this a step further and use Oauth2, specifically the client-credentials workflow. In this workflow the user supplies the Basic Auth once and then receives a token that contains “claims” which can be used for authentication (and access control!) on subsequent requests. The token is always validated by your server, and because it already contains the claims, it is stateless.

Here is an overview of what the flow looks like:

Oauth2 Client Credentials Workflow Basic Auth

The stateless, portable nature of the token makes this strategy superior to Basic Auth. It also helps to future-proof your application for when your customers ask you for it.

At Stormpath, we use JWT as the token format because we believe it’s a great way to structure the internal data of the token. If you’re looking to build a Single-Sign-On (SSO) architecture you will find JWT very friendly to that use case. Also: it’s basically taken off as the standard for Oauth tokens.

For more see Claims Based Identity and JSON Web Token (JWT)
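
If you've never looked inside a JWT, here's a tiny sketch using the jsonwebtoken npm package (an assumption; the Stormpath filters used later create and validate tokens for you, so this is purely illustrative and the claim names are hypothetical):

var jwt = require('jsonwebtoken');

// Sign a token carrying some claims; exp is a Unix timestamp one hour from now.
var token = jwt.sign({
  sub: 'some-account-id',
  scope: 'things:read',
  exp: Math.floor(Date.now() / 1000) + 3600
}, 'a-shared-secret');

// Anyone holding the secret can verify the signature and read the claims back,
// with no server-side session state required.
var claims = jwt.verify(token, 'a-shared-secret');
console.log(claims.sub, claims.scope);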

Our Sample Application – The Things API

For our demo application we’re going to build the Things API.

Things API

We have a collection of things, that collection will be available at /things. We want to return a collection of all things when someone makes a GET request of that URL. If someone posts to it we will create a new Thing in the Things collection and we will assign it an ID.

All thing resources will be available as /thing/:id and we want to allow deletion of things.

All users (including anonymous users) must be able to read the things collection. Only authenticated users are allowed to post new things. Only trusted users are allowed to delete things. Trusted users will be in a special group (we will use Stormpath to manage the user group state).

We'll be creating three separate node modules: a server, a client library, and an example app that uses the client library. Our code structure will look like this:

|--things-api-server/   <-- the API server
|   |--server.js
|   |--things-db.js
|   |--package.json
|--things-api/          <-- the API client library
|   |--index.js
|   |--register.js
|   |--package.json
|--developer-app/       <-- the client demo app
|   |--app.js
|   |--package.json

As we work through this demo, we will be context switching between these different folders and files. If you get lost or aren't sure where to paste something, please see the example files in the git repo to get a preview of what the final code will look like.

HTTPS – Make Sure You Use It

You MUST use HTTPS in production!

In this demo we will work on our local machine and will not be using HTTPS – but you MUST use HTTPS in production. Without it, all API authentication mechanisms are compromised.

You have been warned.
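
If you do terminate TLS in the Node process itself (rather than at a load balancer or reverse proxy), Restify can create an HTTPS server directly. A sketch, assuming you have a key and certificate at the hypothetical paths shown:

var fs = require('fs');
var restify = require('restify');

// Passing key/certificate makes Restify create an HTTPS server instead of HTTP.
var server = restify.createServer({
  name: 'Things API Server',
  key: fs.readFileSync('./ssl/server.key'),
  certificate: fs.readFileSync('./ssl/server.crt')
});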

Server Prep – Create the Server Module

If you don't already have Node.js on your system, head over and install it on your computer. In our examples I will be using a Mac; all commands you see should be entered in your Terminal (without the $ in front – that's a symbol to let you know that these are terminal commands).

First, create a folder for this module and change into that directory:

$ mkdir things-api-server
$ cd things-api-server

Now that we are in the folder we want to create a package.json file for this module. This file is used by Node.js to keep track of the libraries (aka modules) your module depends on. To create the file:

$ npm init

You will be asked a series of questions; for most of them you can just press enter to accept the default value. I decided to call my main file server.js, I set my own description and set the license to MIT – everything else I left as default.

Now install the required packages:

$ npm install --save restify stormpath-restify uuid underscore

The --save option will add these modules to your dependencies in package.json. Here is what each module does:

Note: Restify parlance uses filter in lieu of middleware. Both are valid, but for consistency we will use filter here.

Gather Your Stormpath API Credentials and Application Href

We will be using Stormpath to manage our users and their API keys, and our server will need to communicate with the Stormpath API in order to do this. If you haven’t already signed up for a free Stormpath developer account you can get one at api.stormpath.com/register

Like all APIs the communication between your app and Stormpath is secured with an “API Key Pair”. You can download your API key pair as a file from your dashboard in the Stormpath Admin Console. Retain this file – we will use this in a moment.

While you are in the Admin Console you want to get the href for your default Stormpath Application. In Stormpath, an Application object is used to link your server app to your user stores inside Stormpath. All new developer accounts have an app called “My Application”. Click on “Applications” in the Admin Console, then click on “My Application”. On that page you will see the Href for the Application. Copy this — we will need it shortly.

Coding Time – Build the Server Code (server.js)

It’s time to create the actual server – the Node.js process that serves API requests. You can do that from Sublime Text or you can do this in the terminal:

$ touch server.js

Now open that file and paste in this boilerplate to get Restify up and running:

var restify = require('restify');
var host = process.env.HOST || '127.0.0.1';
var port = process.env.PORT || '8080';

var server = restify.createServer({
  name: 'Things API Server'
});

server.use(restify.queryParser());
server.use(restify.bodyParser());

server.use(function logger(req,res,next) {
  console.log(new Date(),req.method,req.url);
  next();
});

server.on('uncaughtException',function(request, response, route, error){
  console.error(error.stack);
  response.send(error);
});

server.listen(port,host, function() {
  console.log('%s listening at %s', server.name, server.url);
});

That’s the bare-bones you need to get the server running. What that code does:

You can take a sneak peak at your server by running it like so:

$ node server.js

If all is well you will see this message in the terminal:

Things API Server listening at http://127.0.0.1:8080

At this point you can try it out by requesting a URL from the server. We’ll use Curl for the example:

$ curl http://127.0.0.1:8080/

Because we haven’t created any routes in the server yet, you will get a “Resource Not Found” message:

{"code":"ResourceNotFound","message":"/ does not exist"}

If you inspect the details of this message by specifying verbosity with Curl, you’ll see that the status code is set to 404:

$ curl -v http://127.0.0.1:8080

* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1...
* Adding handle: conn: 0x7fb32c004000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fb32c004000) send_pipe: 1, recv_pipe: 0
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.30.0
> Host: 127.0.0.1:8080
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Content-Type: application/json
< Content-Length: 56
< Date: Mon, 03 Nov 2014 04:58:27 GMT
< Connection: keep-alive
<
* Connection #0 to host 127.0.0.1 left intact

{"code":"ResourceNotFound","message":"/ does not exist"}

Now let’s move on and register some route handlers!

Set Up Your Things Database

In a real world situation you would use a proper database engine, such as MongoDB or PostgreSQL. For the simplicity of this demo we will create a simple in-memory database that only lives for the duration of the server. Create a file called things-db.js and place the following into it:

var uuid = require('uuid');
var _ = require('underscore');

module.exports = function createDatabase (options) {

  var baseHref = options.baseHref;

  var things = {};

  function thingAsResource(thing){
    var resource = _.extend({
      href: baseHref + thing.id
    },thing);
    delete resource.id;
    return resource;
  }

  function thingsAsCollection(){
    return Object.keys(things).map(function(id){
      return thingAsResource(things[id]);
    });
  }

  return {
    all: function(){
      return thingsAsCollection();
    },
    getThingById: function(id){
      var thing = things[id];
      return thing ? thingAsResource(thing) : thing;
    },
    deleteThingById: function(id){
      delete things[id];
    },
    createThing: function(thing){
      var newThing = _.extend({
        id: uuid()
      },thing);
      var newRef = things[newThing.id] = newThing;
      return thingAsResource(newRef);
    }
  };
};

Now we need to require this database in server.js and create an instance of it. Place this just below the host and port declarations:

var thingDatabase = require('./things-db');

var db = thingDatabase({
  baseHref: 'http://' + host + ( port ? (':'+ port): '' ) + '/things/'
});

That creates a database instance and tells it the base URL of the server so that it can assign the appropriate href to resources.
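To get a feel for what the database returns, here is a quick illustration (you don’t need to add this to server.js; it just shows the shape of the resources, based on the things-db.js code above):

var created = db.createThing({ myThing: 'isAnAwesomeThing' });
console.log(created);
// -> { href: 'http://127.0.0.1:8080/things/<uuid>', myThing: 'isAnAwesomeThing' }

console.log(db.all());
// -> an array of all thing resources, each with an href

console.log(db.getThingById('no-such-id'));
// -> undefined, which our route handlers will turn into a 404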

Set Up the GET Routes

Now that we have our DB instance set up, we can wire up a route handler to it. We’ll do the collection and single-resource URLs first, as they do not require any authentication. Insert these route handlers above your server.listen statement but after your server.use statements:

server.get('/things',function(req,res){
  res.json(db.all());
});

server.get('/things/:id',function(req,res,next){
  var id = req.params.id;
  var thing = db.getThingById(id);
  if(!thing){
    next(new restify.errors.ResourceNotFoundError());
  }else{
    res.json(thing);
  }
});

Restart your server (Ctrl + C to kill the process in your terminal) and try it again with Curl. If you request the things collection you will get an empty collection (seen as empty array brackets):

$ curl http://127.0.0.1:8080/things
[]

Trying to get a resource that does not yet exist will result in a 404 message:

$ curl http://127.0.0.1:8080/things/1

{"code":"ResourceNotFound","message":""}

Great! Now let’s actually create some things by setting up a POST handler for the collection. To do that, we first need to set up authentication for the routes.

Pro tip: use a file watcher like nodemon to automatically restart your server as you edit it.
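If you want to try that, a typical nodemon setup looks like this (installed globally here; a local dev dependency works just as well):

$ npm install -g nodemon
$ nodemon server.js

nodemon will then restart the server every time you save server.js.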

Set Up Authentication

As mentioned above, we will use the OAuth2 client credentials workflow. This means we need a POST handler for /oauth/token and some code to exchange the Basic Auth credentials for a JWT. We will also need a filter for any route that requires the JWT, so we can assert its existence and validity before allowing the rest of the route handlers to be processed.
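For reference, here is a minimal sketch of what that exchange looks like on the wire. This is plain OAuth2 client credentials, nothing Stormpath-specific, and the key id/secret values are placeholders:

// The client authenticates the token request with HTTP Basic auth,
// which is just a base64-encoded "id:secret" pair:
var apiKeyId = 'YOUR_ACCOUNT_API_KEY_ID';
var apiKeySecret = 'YOUR_ACCOUNT_API_KEY_SECRET';
var authorizationHeader = 'Basic ' +
  new Buffer(apiKeyId + ':' + apiKeySecret).toString('base64');

// The request itself is a POST to /oauth/token with
// grant_type=client_credentials, and a successful response
// is a JSON body like:
// { "access_token": "<JWT>", "token_type": "bearer", "expires_in": 10, "scope": "" }

This is exactly what Curl’s -u flag will do for us later, so you won’t have to build the header by hand.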

To meet these requirements we will leverage Stormpath and its API Key authentication features. The Stormpath Node SDK contains a method on application instances, authenticateApiRequest, which does everything we just mentioned! To make it even easier, we’ve wrapped that method in the stormpath-restify module as a ready-to-use filter.

In order to use these filters you will need to configure a “filter set”, which is a set of filters bound to your Stormpath Application. To create the filter set, add the following near the top of your server.js, after the restify require:

var stormpathRestify = require('stormpath-restify');

var stormpathConfig = {
  apiKeyId: 'YOUR_STORMPATH_API_KEY',
  apiKeySecret: 'YOUR_STORMPATH_API_SECRET',
  appHref: 'YOUR_STORMPATH_APP_HREF'
};

var stormpathFilters = stormpathRestify.createFilterSet(stormpathConfig);

The stormpathFilters variable is now an object with factory functions that you can use to create the necessary authentication filters.

To use the OAuth filter, create one and assign it to a variable. You can paste this below the code we just added:

var oauthFilter = stormpathFilters.createOauthFilter();

Now you can register a POST route that uses this filter as its only handler. Paste this after your server.use statements:

server.post('/oauth/token', oauthFilter);

That’s it! If your API user posts a valid API Key pair to that URL, they will receive a token in exchange. If it’s not valid they will get a descriptive error.

Once your user has obtained a token they will use it to post a new thing. We’ll create a POST handler for this, and apply the Stormpath filter to it as well. This will check that the token is valid and, if so, allow the POST to continue into our handler. If the token is not valid, an error will be sent and our handler will not be reached. Here is the handler to paste in below your other routes:

server.post('/things', [oauthFilter, function(req,res){
  res.json(db.createThing(req.body));
}]);

Try It Out – Token Exchange

In order to try our new POST endpoint we need to do the token exchange and obtain a JWT.

At this point let’s pretend we are a consumer of our API and need to provision an account. Later we’ll discuss how to automate this, but for this first user you can head over to the Stormpath Admin Console, open “My Application”, and create a dummy account in that application’s Directory. After creating the account, create an API Key pair (available on the Account details view).

Once you have the key pair, you can exchange those credentials for a JWT by using the new /oauth/token route on your server:

$ curl -u ID:SECRET -X POST http://127.0.0.1:8080/oauth/token?grant_type=client_credentials

You’ll get the following JSON response in your terminal with the “access_token” value:

{"access_token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiI0R05XMTNRUE5aMjlaS0JKQk02VE40RkM2IiwiaXNzIjoiaHR0cHM6Ly9hcGkuc3Rvcm1wYXRoLmNvbS92MS9hcHBsaWNhdGlvbnMvMWg3MlBGV29HeEhLaHlzS2pZSWtpciIsImlhdCI6MTQxNDk5NDAxNiwiZXhwIjoxNDE0OTk0MDI2fQ.SaOJ6R8iX2fbNlr8eWTzydglZzFV14FtagrBjScBRdE","token_type":"bearer","expires_in":10,"scope":""}
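That access_token is a JWT. If you are curious what is inside it, you can decode the payload segment without verifying it (verification stays the server’s job). A small Node sketch, just for inspection:

// Paste your own token in here; this does NOT verify the signature.
var token = 'PASTE_YOUR_ACCESS_TOKEN_HERE';
var payloadSegment = token.split('.')[1]
  .replace(/-/g, '+')   // convert base64url to plain base64
  .replace(/_/g, '/');
var payload = JSON.parse(
  new Buffer(payloadSegment, 'base64').toString('utf8')
);
console.log(payload); // { sub: ..., iss: ..., iat: ..., exp: ... }

In the token above, exp minus iat is 10 seconds, which lines up with the expires_in value in the response.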

Copy this “access_token” value and use it in your next request when you create a thing:

$ curl -X POST -H "Authorization: Bearer YOUR_TOKEN_HERE" -H "Content-Type: application/json;charset=UTF-8" -d '{"myThing":"isAnAwesomeThing"}' http://127.0.0.1:8080/things

The API should respond with the thing you’ve created, and its href identifier:

{"href":"http://127.0.0.1:8080/things/b70fd4d9-4a4f-43a7-a4ea-fb9e18e78b2c","myThing":"isAnAwesomeThing"}

If we ask for the entire collection again, we will see it in the set:

$ curl http://127.0.0.1:8080/things
[{"href":"http://127.0.0.1:8080/things/b70fd4d9-4a4f-43a7-a4ea-fb9e18e78b2c","myThing":"isAnAwesomeThing"}]

Pretty sweet, right? You’ve now got an API with authorization, resources and collections. But… working with Curl gets pretty clunky once you start dealing with tokens. We still need to build some other features into our API, but I want to switch over to the client library for a little while, so it’s quicker to use our API and ensure things are working as we expect.

Build the Client Library

As mentioned above, you should check out Les Hazlewood’s post on Designing Node API Clients. Our client library will look very similar: it abstracts how we interact with resources and collections and exports an API that is developer-friendly with well-named methods.

For our Things API we’re going to use Restify in the client as well. In addition to the server framework we’ve been using, Restify includes client libraries you can use to build your own client. These great little clients do a lot of the underlying HTTP and content-type work for you.
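If you have never used one, here is what a bare Restify JSON client looks like on its own. This is just a minimal sketch for orientation; we won’t use it directly:

var restify = require('restify');

// A plain JSON client: it sets the JSON headers, serializes request
// bodies, and parses JSON responses for you.
var jsonClient = restify.createJsonClient({
  url: 'http://127.0.0.1:8080'
});

jsonClient.get('/things', function(err, req, res, obj) {
  // obj is the already-parsed JSON body
  console.log(err || obj);
});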

The stormpath-restify library includes an OAuth2 client that extends the JSON client with credential exchange and token handling – all that stuff we just did with Curl.

The client library for your API will be provided to your end-users as a Node module, published on NPM, so we should create a new project for it. Create a new folder and run through the npm init process, as we did for the server. I’ll call mine “things-api” – a predictable name end-users will recognize when they look for a client for my service:

$ cd ..
$ mkdir things-api
$ cd things-api
$ npm init
$ npm install --save restify stormpath-restify underscore prompt
$ touch index.js

We will use index.js as the entry point for this module, as it’s very straightforward. You may want something more elaborate as your client module evolves.

Paste this into your index.js as a starting point:

var oauthClient = require('stormpath-restify/oauth-client');

module.exports = {
  createClient: function(opts){
    opts.url = opts.url || 'http://127.0.0.1:8080';

    // This creates an instance of the oauth client,
    // which will handle all HTTP communication with your API

    var myOauthClient = oauthClient.createClient(opts);

    // Here we directly bind to the underlying GET method,
    // as this is a simple request

    myOauthClient.getThings =
      myOauthClient.get.bind(myOauthClient,'/things');

    return myOauthClient;
  }
};

With that you can now export a client that has a method, getThings, which gets all the things in the collection and returns them to the developer. Super simple. What does it look like for them to use this client library? We’ll cover that in the next section.

While the collection get is simple and can be directly bound to the underlying get method, the addThing method will have a bit more logic, because we want to do some client-side validation to assert that the data is correct before we even try posting it to the server.

Here is what that looks like. Paste this into index.js after the getThings method:

myOauthClient.addThing = function addThing(thing,cb){
  if(typeof thing!=='object'){
    process.nextTick(function(){
      cb(new Error('Things must be an object'));
    });
  }else{
    myOauthClient.post('/things',thing,cb);
  }
};

Build the Developer App

Before switching back to the server, let’s also build our developer demo app. This shows you how a developer would use your client library to consume your API. Create a new folder for this module and initialize it with dependencies and an app.js file:

$ cd ..
$ mkdir developer-app
$ cd developer-app
$ npm init            # use app.js as the main entry
$ npm install --save prettyjson
$ touch app.js

Now paste the following into your app.js:

// Here we use a local, relative require path to load your
// client library. When you publish it on NPM you should change
// this to the published module name

var thingsApi = require('../things-api');

var prettyjson = require('prettyjson');

var client = thingsApi.createClient({
  key:'ACCOUNT_API_KEY',
  secret:'ACCOUNT_API_KEY_SECRET'
});

// Read all the things in the collection

client.getThings(function(err,things) {
  if(err){
    console.error(err);
  }else{
    console.log('Things collection has these items:');
    console.log(prettyjson.render(things));
  }
});

// Create a new thing in the collection

client.addThing(
  {
    myNameIs: 'what?'
  },
  function(err,thing) {
    if(err){
      console.error(err);
    }else{
      console.log('New thing created:');
      console.log(prettyjson.render(thing));
    }
  }
);

Look familiar? If you’ve used API clients before, it should – but this time YOU created it, and it’s for your API :)

You can demo your app by invoking it in the Terminal (make sure that you have the server running in another terminal):

$ node app.js

Round Out the Server – Delete for Trusted Users

We have one last handler to implement in the server, and that is the DELETE method for trusted users. We want users in the ‘trusted’ Group to be able to delete resources from the things collection.

We’re going to set up another filter, using Stormpath to help us out. stormpath-restify provides a group filter that allows us to assert that a user is in a given group, in this case a group called trusted (you can create this group in the Stormpath Admin Console). If the user is in the group, we pass control to your handler; otherwise we issue a 403 error response. If you wish to customize the error response, you can pass an errorHandler property to createGroupFilter: a function that receives the arguments (err, req, res, next). We’ll sketch that option after the next code block.

To create the trusted group filter, paste this below your other filter invocations:

var trustedFilter = stormpathFilters.createGroupFilter({
  inGroup: 'trusted'
});

This filter will assert that the authenticated user is in the trusted group.
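If you do want a custom rejection message, a sketch of the errorHandler option described above could look like this. Treat it as illustrative: the exact shape of err, and what stormpath-restify expects you to do with next, are defined by that library.

var trustedFilter = stormpathFilters.createGroupFilter({
  inGroup: 'trusted',
  errorHandler: function(err, req, res, next) {
    // Replace the default 403 body with a friendlier message.
    res.send(403, { message: 'You need to be in the trusted group to do that.' });
  }
});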

Let’s use this new filter to set up our DELETE handler:

server.del('/things/:id',[oauthFilter,trustedFilter,function(req,res,next){
  var id = req.params.id;
  var thing = db.getThingById(id);
  if(!thing){
    next(new restify.errors.ResourceNotFoundError());
  }else{
    db.deleteThingById(id);
    res.send(204);
  }
}]);

Now that the server can accept DELETE requests, we want to add a corresponding convenience method to our client. Paste this method into your client library, below the other methods:

myOauthClient.deleteThing = function deleteThing(thing,cb){
  if(typeof thing!=='object'){
    return process.nextTick(function(){
      cb(new Error('Things must be an object'));
    });
  }
  if(typeof thing.href!=='string'){
    return process.nextTick(function(){
      cb(new Error('Missing property: href'));
    });
  }
  myOauthClient.del(thing.href,function(err){
    if(err){
      cb(err); // If the API errors, just pass that along
    }else{
      // Here you could do something custom before
      // calling back to the original callback
      cb();
    }
  });
};

This method ensures the developer is passing an actual thing object, with an href, before making the request to the server.

At this point your developer can use the client to delete things:

client.deleteThing(thing,function(err){
  if(err){
    console.error(err);
  }else{
    console.log('Thing was deleted');
  }
});

If you haven’t created the trusted group yet, or haven’t added the account to it, you will get a 403 error when you try to delete the item. To create the group and add the user to it, you can use the Stormpath Admin Console, or use the Node SDK to talk directly to the Stormpath API and create the group and the account membership (a rough sketch follows).
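If you prefer to script it, a rough sketch with the Stormpath Node SDK might look like the following. The method names here (createGroup, addToGroup) are written from memory of the SDK, so double-check them against the Stormpath Node.js documentation; the account href is a placeholder.

var stormpath = require('stormpath');

// NOTE: verify these calls against the Stormpath Node SDK docs.
var client = new stormpath.Client({
  apiKey: new stormpath.ApiKey('YOUR_STORMPATH_API_KEY', 'YOUR_STORMPATH_API_SECRET')
});

client.getApplication('YOUR_STORMPATH_APP_HREF', function(err, application) {
  if (err) throw err;
  // Create the 'trusted' group inside the application...
  application.createGroup({ name: 'trusted' }, function(err, group) {
    if (err) throw err;
    // ...then add an existing account to it.
    client.getAccount('HREF_OF_THE_ACCOUNT_TO_TRUST', function(err, account) {
      if (err) throw err;
      account.addToGroup(group, function(err, membership) {
        if (err) throw err;
        console.log('Account added to the trusted group');
      });
    });
  });
});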

How To Provision Your API Keys

The last thing to discuss is how to provision API Keys for your end developers. Clearly you wouldn’t want to use the Stormpath Admin Console to create every API Key pair. Instead, you’ll want to automate this process.

From a product perspective, I suggest you offer a web-based landing page where someone can create an account and then view a dashboard where they can provision their own API keys.

Stormpath can help with this process as well. We have great workflows around account creation and email verification. For building the web-based component of your registration workflow, I suggest trying out our stormpath-express library. Yes, I am suggesting that you use Express for this – and that’s because Express is designed for exactly that! It’s totally normal to have one server for your API and one for your web app(s). In fact, it’s encouraged: for a good read, check out the Twelve-Factor App.

However! I don’t want to leave you hanging, so I’ll show you a very simple way to allow developers to obtain an API key, but only after they have verified their email address.

In order to enable email verification, please log into the Stormpath Admin Console and enable the verification email workflow in the Workflows section of the Directory for your default “My Application”.

After the workflow is enabled, we will implement two more route handlers that leverage two more Stormpath filters. Create the filters below your other filter invocations:

var newAccountFilter = stormpathFilters.newAccountFilter();
var accountVerificationFilter = stormpathFilters.accountVerificationFilter();

Then we’ll use those filters with two new routes:

server.post('/accounts',newAccountFilter);
server.get('/verifyAccount',accountVerificationFilter);

These routes will allow users to post their email, password, and other required user information to create an account (see the example request below). To make that easier, let’s create a quick command-line tool developers can use to register for our API service.
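If you want to see the raw mechanics first, here is roughly what a direct registration request looks like with Curl. The field names assume the filter accepts the standard Stormpath account fields (givenName, surname, email, password); check the stormpath-restify docs if the request is rejected:

$ curl -X POST -H "Content-Type: application/json" -d '{"givenName":"Jane","surname":"Doe","email":"jane@example.com","password":"SomeStr0ngPassword"}' http://127.0.0.1:8080/accounts

The command-line tool below wraps exactly this kind of request in a friendlier, prompt-driven flow.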

Command-line Registration Tool

Since we’re not building a full-blown web app for handling account creation, we’ll create a small command-line utility instead. I’ve created an example of how you might do this, but it’s a pretty large file so I won’t inline it here. You can get the source here: register.js source

This register.js file leverages the prompt library to collect the new user’s account details (such as name, email, and password) from the terminal and POST them to the /accounts route we just created.

Switch back to the things-api directory and copy the source of that file into a register.js file in your client module. Then modify the package.json for your client library to have this configuration:

{
  "bin" : { "register" : "./register.js" }
}

With that configuration, you can tell your developers to execute this command after they’ve installed your client module in their application:

$ ./node_modules/.bin/register

If you haven’t published your module to NPM you can still try this CLI tool by running it from inside the things-api directory:

$ node register.js

Doing this will bring up the registration CLI:

Registration CLI tool

Stormpath will send an email to the given email address, with a link that will retrieve an API Key Pair. You want to customize the email message to point to the /verifyAccount URL that we created in the API server. You can configure the email in the Stormpath console: “My Application” —> Directory —> Workflows. Then configure the message like this, making sure you set the Base URL to your local development app:

Email Template

With that email template, your API users will receive a confirmation email:

Email Message

Because we configured the email template to point to our server, when the user clicks on the email link they will land on our Restify server, where the Stormpath filter will kick in. It will verify that this link was actually generated by Stormpath, and if valid it will create an API key pair for the user and show it to them:

Api Key Pair

Your developer can then take those keys and start using them with your client library. Success!

The Proverbial “Me”

One last piece of awesome: the /me route. This very common route in APIs lets the consumer know who they are currently authenticated as.

Setting this up in the server is incredibly simple. The stormpath-restify library will attach the Stormpath account to req.account if the user is successfully authenticated. Thus, we just need a simple route handler:

server.get('/me',[oauthFilter,function(req,res){
  res.json(req.account);
}]);

Adding a convenience method to the client library is equally easy, because it’s just a simple GET request:

myOauthClient.getCurrentUser = myOauthClient.get.bind(myOauthClient,'/me');

That allows our developers to do this in their application:

client.getCurrentUser(function(err,user) {
  if(err){
      console.error(err);
  }else{
    console.log('Who am I?');
    console.log(user.fullName + ' (' + user.email + ')');
  }
});

Wrap It Up

And with that… we have built a fully-functional API, complete with registration and API Key distribution – go forth and build more APIs!

I hope you have learned a bit about the OAuth2 client credentials flow. I also hope I’ve shown you how easy it is to use Stormpath to implement that flow in your API, so you can get on with what you really want to do – writing your API endpoints.

If you’d like to learn more about our Restify integration please head over to stormpath-restify on Github.

If you want to dig even deeper into Stormpath you should check out the Stormpath Documentation as well as the Stormpath Node.js SDK.

For help with all Stormpath libraries and integrations, just hit us up on support@stormpath.com. We’re happy to help!

Julian BondOn this day, it's especially important to follow the directions on boxes of matches. "Keep Dry and Away... [Technorati links]

November 05, 2014 09:40 AM
On this day, it's especially important to follow the directions on boxes of matches. "Keep Dry and Away From Children". However if you are taking your little darlings to bonfire night, you should also heed the advice from Scarfolk Council. "Always Light Children At Arms Length"

http://scarfolk.blogspot.co.uk/2014/11/arms-length-safety-poster-bonfire-night.html
 "Arms Length" Safety Poster (Bonfire Night Part 1) »
When Scarfolk Council issued the poster below in 1972, it was met with complaints from parents, teachers and arsonists. While the poster does offer the safety guideline of an 'arms length', it does not specify how long that a...

[from: Google+ Posts]
November 04, 2014

Kuppinger ColeOne Identity for All: Successfully Converging Digital and Physical Access [Technorati links]

November 04, 2014 11:09 PM
In KuppingerCole Podcasts

Imagine you could use just one card to access your company building and to authenticate to your computer. Imagine you had only one process for all access, instead of having to queue at the gate waiting for new cards to be issued and having to call the helpdesk because the system access you requested still isn’t granted. A system that integrates digital and physical access can make your authentication stronger and provide you with new options, by reusing the same card for all access infrastruc...



Watch online

Kuppinger ColeKuppingerCole Analysts' View on Connected Enterprise [Technorati links]

November 04, 2014 10:59 PM
In KuppingerCole

The digitalization of businesses has created an imperative for change that cannot be resisted. IT has to support fundamental organizational change. IT must become a business enabler, rather than obstructing change.

However, enabling new forms of digital business requires that IT take a fundamentally different role. In fact, IT is not about technology anymore, it must focus on understanding and fostering the digital business. It must enable the shift to new business models and...
more

Kantara InitiativeWhat is IoT without Identity? [Technorati links]

November 04, 2014 02:45 PM

What is IoT without Identity? IoT without identity is just oT.

IoT offers a world of promise that is (partially) built upon leveraging the human-to-device connection for new opportunities. Without Identity, IoT is still the enabler of M2M communications, but perhaps with less impact toward transforming our connected lives. IoT+Identity represents a powerful equation that brings identity, security, software, hardware, policy and privacy experts to the same table.

Identity services are the key that unlocks the world of IoT for human interaction in all walks of life. We see many opportunities for IoT to improve lives, ranging from devices that monitor our health and quality of sleep, to those that help us manage our homes or cars. To fully leverage the beneficial powers of IoT, vendors need to know that IoT+Identity enabled products and services won’t fail and severely damage their brand reputation. Users need to know these new tools respect their preferences. See “I’m Terrified of My New TV: Why I’m Scared to Turn This Thing On — And You’d Be, Too.” Collaboration is needed to address the privacy and security requirements of consumers, enterprises, and governments to develop scalable programs for verified assurance of technologies and policies that will transform our connected life.

Serious access management challenges are approaching. The number of relationships between people, entities, and things will be larger by orders of magnitude. How will users manage their connected lives? How will they set preferences for data sharing permissions? The sheer number of devices, connections, and relationships presents unique opportunities and challenges. At the low end of the scale, the number of devices and connections will be in the billions. User Managed Access provides an open standard approach to help empower and engage users for the management of resource access and sharing.

The multitude of sensors and apps that are gathering and communicating personal information magnifies security and privacy risks. Personal data can fall into the wrong hands, be sold without consent, or be leveraged in ways the user did not imagine, such as having one’s car insurance rates rise due to recorded driving habits. Users will need to know their personal data can be managed and properly protected for privacy. Smart physical spaces will become more and more prevalent. Legislation is developing around proper notice and consent practices, both on-line and in physical spaces. The Kantara Consent and Information Sharing WG is developing a number of solutions to address these issues and to develop a more useable form of consent.

Interoperability of IoT+Identity will also have challenges. When device identifiers are not standardized, discovery mechanisms are but one of the challenges to solve. At Kantara, the IDentities of Things WG is hard at work delivering an industry analysis of the current landscape: opportunities, challenges, and gaps to address.

Identity Relationship Management (IRM) focuses on building relationships using identity technologies, practices, and techniques. IRM is especially powerful to leverage the IoT+Identity connection. Kantara is the home of IRM development working to connect the components that are necessary to unleash the power of IoT+Identity. This week we’re at the Europe IRMSummit produced by ForgeRock. The venue is referenced by National Geographic as the 3rd top garden park in the world (See Powerscourt #3). We are surrounded by a picturesque thick fog and forest which works wonders for keeping wandering identity and hardware experts in one place!

The IRM Summit has 3 tracks.

  1. Identity Relationship Management (IRM) – building relationships using identity service technologies, techniques, and practices.
    See the Laws of Relationships (a work in progress) to get a flavour.
  2. Digital Citizen – identity technology as an enabler of innovative and dynamic Government and civil services.
  3. Digital Transformation – identity technologies that transform the way we do business and our lives.

Kantara Initiative members are hard at work innovating IRM solutions and practices for businesses, governments, and our connected lives. Building on the concepts of IRM, Kantara Initiative focuses on the idea of a “connected life.” Developing open standards, innovations, pilots, and programs is the key to accelerating the transformation of our digital-to-human world in a way that respects users.

To power the IoT+Identity connection we’ll need:

Kantara Initiative is home of IRM where you can connect your priorities to a broader global expertise. Join Kantara now to network among leaders to shape identity today and toward the IoT enabled future. Join. Innovate. Trust.

From the desk of Joni Brennan
Executive Director, Kantara Initiative

Julian BondSome fallout from the IPCC report on climate change [Technorati links]

November 04, 2014 12:22 PM
Some fallout from the IPCC report on climate change

There's a bit of received wisdom I've been seeing stated as fact as people comment on the IPCC report. "You can have growth in global GDP without growth in the consumption of energy and resources". It's a nice fantasy and has an element of truthiness about it, because obviously increased efficiency and productivity means producing more for less. Except that rises in GDP have ALWAYS resulted in increasing consumption of energy and resources. So where's the counter example? And there's an underlying assumption that continued 3% compound growth is desirable and necessary. Is that true?

There's some interesting lines of macro-economic research here. In each case it needs to provide not just answers, ideas and proofs by example but routes to get there. And solutions need to be appropriate for global macro-economics, not just a tiny self sufficient community in the middle of Wales.

1) Can you have improving quality of life with zero or negative GDP growth?

2) Can you have increasing GDP without corresponding increases in energy and resource consumption?

3) Can we reduce our dependence on borrowing from the future via debt to fund growth in GDP?

4) Can we control the pollution side effects of growth in GDP?

It's not enough to do like Paul Krugman and just use homilies and parables about making shipping more efficient by sailing slower or such like. If it's even true, that's a local solution when answers need to be global models.

All the Limits to Growth models show hockey-stick style exponential growth leading to a brief peak followed by a catastrophic correction. More technical fixes and productivity improvements seem to lead to making the same graph more extreme; faster growth, a higher peak, a more dramatic correction. So I think this leads to the most important question.

What can we do now to create a soft landing as we transition from a growth state to a sustainable state? And that's both personally and as a global society.

If that's not hard enough. Then bear in mind what might be required to force global society to follow the optimum path when most of the actors are ill-informed and are treating the game as an iterated prisoner's dilemma where their own personal short-term gain is all that matters. And there's a lot of them spread all over the world.
[from: Google+ Posts]

Radovan Semančík - nLightWhat can we really do about the insider threat? [Technorati links]

November 04, 2014 10:46 AM

The "insider" has been indicated as the most severe security threat for decades. Almost every security study states that insiders are among the highest risks in almost any organization. Employees, contractors, support engineers - they have straightforward access to the assets, they know the environment and they are in the best position to work around any security controls that are in place. Therefore it is understandable that the insider threat is consistently placed among the highest risks.

But what has the security industry really done to mitigate this threat? Firewalls, VPNs, IDS and cryptography are of no help here. Two-factor authentication does not help either. The insiders already have the access they need, therefore securing that access is not going to help. There is not much that traditional information security can do about the insider threat. So, we have a threat that is consistently rated among the top risks and nothing we can do about it?

The heart of the problem is in the assets that we are trying to protect. The data are stored inside applications. Typically, data of all sensitivity levels are stored in the same application. Therefore network-based security techniques are almost powerless. Network security can usually control only whether a user has access to an application or not. But it is almost impossible to discriminate between the individual parts of the application that the user is allowed to access - let alone individual assets. The network perimeter is long gone, therefore there is no longer even a place to put network security devices as the data move between cloud applications and mobile devices. This is further complicated by the defense-in-depth approach: a significant part of the internal network communication is encrypted, so there is very little an Intrusion Detection System (IDS) can do because it simply does not see inside the encrypted stream. Network security is just not going to do it.

Can application security help? Enterprises usually have quite strict requirements for application security. Each application has to have proper authentication, authorization, policies, RBAC, ... you name it. If we secure the application then we also secure the assets, right? No. Not really. This approach might have worked in the early 1990s when applications were isolated. But now the applications are integrated. Approaches such as Service-Oriented Architecture (SOA) bring in industrial-scale integration. The assets travel almost freely from application to application. There are even composite applications that are just automated processes living somewhere "between applications" in the integration layer. Therefore it is no longer enough to secure a couple of sensitive applications. All the applications, the application infrastructure and the integration layers need to be secured as well.

As every security officer knows, there is an aspect which is much more important than high security: consistent security. It makes no sense to have high security in one application while another application that works with the same data is left unsecured. Security policies must be applied consistently across all applications. And this cannot be done in each application individually, as that would be a daunting and error-prone task. It has to be automated. As applications are integrated, the security needs to be integrated as well. If it is not, the security effectively disappears.

Identity Management (IDM) systems are designed to integrate security policies across applications and infrastructure. The IDM systems are the only components that can see inside all the applications. The IDM system can make sure that the RBAC and SoD policies are applied consistently in all the applications. It can make sure that accounts are deleted or disabled on time. Because the IDM system can correlate data across many applications, it can check for illegal accounts (e.g. accounts without a legitimate owner or sponsor).

IDM systems are essential. It is perhaps not possible to implement a reasonable information security policy without them. However, IDM technology has a very bad reputation. It is considered to be a very expensive and never-ending project. And rightfully so: the combination of inadequate products, vendor hype and naive deployment methods contributed to a huge number of IDM project failures in the 2000s. Identity and Access Management (IAM) projects ruined many security budgets. Luckily this first-generation IDM craze is drawing to a close. The second-generation products of the 2010s are much more practical: they are lighter, open and much less expensive. Iterative and lean IDM deployments are finally possible.

Identity management must be an integral part of the security program. There is no question about that. Any security program is shamefully incomplete without the IDM part. The financial reasons to exclude IDM from the security program are gone now. The second generation of IDM systems finally delivers what the first generation promised.

(Reposted from https://www.evolveum.com/security-insider-threat/)
November 03, 2014

Axel NennkerX-Auto-Login at Google [Technorati links]

November 03, 2014 12:57 PM
Below you can find evidence that Google is using the X-Auto-Login header in production.
Please see my other post for context: http://ignisvulpis.blogspot.de/2014/09/deviceautologin.html
I am using "wget" to fetch the Gmail web page, and the HTTP response contains the X-Auto-Login header.

I think that Google should standardize this.
Currently Google is using OpenID2 here, but it would probably be easy to standardize this with OpenID Connect.

ignisvulpis@namenlos:~/mozilla-central$ wget -S https://mail.google.com/mail --user-agent="Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36"
--2014-11-03 12:23:50-- https://mail.google.com/mail
Connecting to 212.201.109.5:8080... connected.
Proxy request sent, awaiting response...
HTTP/1.1 302 Moved Temporarily
Content-Type: text/html; charset=UTF-8
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Date: Mon, 03 Nov 2014 11:23:51 GMT
Location: https://accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=googlemail&emr=1
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Server: GSE
Alternate-Protocol: 443:quic,p=0.01
Connection: close
Location: https://accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=googlemail&emr=1 [following]
--2014-11-03 12:23:51-- https://accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=googlemail&emr=1
Connecting to 212.201.109.5:8080... connected.
Proxy request sent, awaiting response...
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Strict-Transport-Security: max-age=10893354; includeSubDomains
Set-Cookie: GAPS=1:lAGQAL021CeF4UofSLjbzRnvJw_Eqw:256mW0v3ZoeLVjLo;Path=/;Expires=Wed, 02-Nov-2016 11:23:51 GMT;Secure;HttpOnly;Priority=HIGH
Set-Cookie: GALX=xATUIfBPIN4;Path=/;Secure
X-Frame-Options: DENY
Cache-control: no-cache, no-store
Pragma: no-cache
Expires: Mon, 01-Jan-1990 00:00:00 GMT
X-Auto-Login: realm=com.google&args=service%3Dmail%26continue%3Dhttps%253A%252F%252Fmail.google.com%252Fmail%252F
  Transfer-Encoding: chunked
Date: Mon, 03 Nov 2014 11:23:51 GMT
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Server: GSE
Alternate-Protocol: 443:quic,p=0.01
Connection: close
Length: unspecified [text/html]

2014-11-03 12:23:51 (1,44 MB/s) - ‘mail’ saved [70172]

ignisvulpis@namenlos:~/mozilla-central$
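For reference, the X-Auto-Login value shown above is just two layers of form encoding: an outer realm=...&args=... pair, where args is itself a URL-encoded query string. A small Node sketch (illustrative only) to pull it apart:

var querystring = require('querystring');

// The header value as returned in the response above
var header = 'realm=com.google&args=service%3Dmail%26continue%3Dhttps%253A%252F%252Fmail.google.com%252Fmail%252F';

var outer = querystring.parse(header);
console.log(outer.realm);                   // 'com.google'
console.log(querystring.parse(outer.args)); // { service: 'mail', continue: 'https://mail.google.com/mail/' }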
November 02, 2014

Julian BondThe Copenhagen IPCC report is released today. [Technorati links]

November 02, 2014 10:50 AM
The Copenhagen IPCC report is released today.
http://www.theguardian.com/environment/2014/nov/02/rapid-carbon-emission-cuts-severe-impact-climate-change-ipcc-report

The article contains these two conflicting comments. 

"The lowest cost route to stopping dangerous warming would be for emissions to peak by 2020 – an extremely challenging goal – and then fall to zero later this century."

but

"The report also makes clear that carbon emissions, mainly from burning coal, oil and gas, are currently rising to record levels, not falling." 

I'm afraid that looks to this bear of little brain like we're all doomed. Mankind will continue business as usual, with accelerating carbon emissions until either resource limits or pollution (in the form of global warming, smog or whatever) put a hard stop to it. The question is when, not if.

I've no doubt people will latch onto the uncertainties, or to phrases like this. "Tackling climate change need only trim economic growth rates by a tiny fraction, the IPCC states, and may actually improve growth by providing other benefits, such as cutting health-damaging air pollution. And they'll try to say that it's not that bad really and can be dealt with. I'm afraid though that I simply don't see how China, India, USA and others will ever want to slow down until nature forces them to. 
 IPCC: rapid carbon emission cuts vital to stop 'severe' impact of climate change »
Most important assessment of global warming yet warns carbon emissions must be cut sharply and soon, but UN’s IPCC says solutions are available and affordable

[from: Google+ Posts]
November 01, 2014

Anil JohnIdentity Establishment, Management and Services [Technorati links]

November 01, 2014 08:05 PM

Delivering high value digital services to a particular individual requires knowing who that individual is with a high degree of assurance. That identity assurance in turn has dependencies on the sources used to validate the information and the techniques used to verify that the validated information belongs to the person claiming it. All too often, we focus on verification techniques while neglecting the whole chain of trust that goes into validation.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


The opinions expressed here are my own and do not represent my employer’s view in any way.

October 31, 2014

WAYF NewsRoyal Society of Chemistry now a WAYF service [Technorati links]

October 31, 2014 12:29 PM

Online resources from the chemistry publisher Royal Society of Chemistry (RSC) can now be accessed through WAYF. Institutions connected to WAYF and subscribing to the RSC services must write to ejournals@rsc.org to have their WAYF access enabled.

Kuppinger Cole11.03.2015: Identity Management Crash Course [Technorati links]

October 31, 2014 10:43 AM
In KuppingerCole

An overall view on IAM/IAG and the various subtopics - define your own "big picture" for your future IAM infrastructure.
more