November 29, 2014

Julian Bond: If OPEC just declared war on the US fracking industry by forcing oil prices lower, you have to wonder... [Technorati links]

November 29, 2014 07:34 AM
If OPEC just declared war on the US fracking industry by forcing oil prices lower, you have to wonder where that leaves the UK Tory party and their friends at Cuadrilla. The companies heavily invested in fracking are largely funded by junk-bond debt. How long can they ride out unprofitability? What happens when they fail?

There are some nasty ramifications (and some upside) to a sustained low oil price until OPEC regains control and oil prices are allowed to rise.

Peak oil doesn't mean a gentle rise in price as predicted by classic supply-demand economics. It means increasing volatility and chaotic price movements. It will be dominated by the major players jockeying for position and control.
 The Fracking Boom Just Went Bust »
OPEC chose not to decrease oil production today. Oil (WTI) closed at $69.05/bbl.

To quote a Russian oligarch, as reported by Bloomberg:
“In 2016, when OPEC completes this objective of cleaning ...

[from: Google+ Posts]
November 28, 2014

Bill Nelson - Easy Identity: OpenIDM 3.1: A Wake Up Call for Other Identity Vendors [Technorati links]

November 28, 2014 02:58 PM

Having implemented Sun, Novell, and Oracle provisioning solutions in the past, the one thing that I found to be lacking in ForgeRock’s OpenIDM solution was an easy to use administrative interface for connecting to and configuring target resources.

Sure, you could configure JSON objects at the file level, but who wants to do that when “point and click” and “drag and drop” are the way to go?

With OpenIDM 3.1 my main objection has been eliminated, as the new resource configuration interfaces have arrived – and boy, have they arrived!


OpenIDM Admin Interface

See the OpenIDM Integrator’s Guide for more information.

The latest release now places OpenIDM directly in line as a viable alternative to the big boys and will make our deployments much quicker and less prone to error.  Way to go ForgeRock, thanks for listening (and responding).

Kuppinger Cole: Executive View: CyberArk Privileged Threat Analytics - 70859 [Technorati links]

November 28, 2014 09:19 AM
In KuppingerCole

In a sense, Privilege Management (PxM) existed as far back as the first mainframe environments: those early multi-user systems already offered certain capabilities for auditing and monitoring administrator accounts and shared accounts. Nevertheless, until recently these technologies were virtually unknown outside of IT departments. However, due to current developments in the IT industry, the...

WAYF News: WAYF now has 10 million logins a year [Technorati links]

November 28, 2014 08:17 AM
WAYF now handles 10 million logins per running year. The statistics can be seen here (in the combo box on the page, select “Ever by Running Year”).
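For context, a “running year” is a trailing twelve-month window rather than a calendar year, so the figure can be recomputed from monthly counts with a rolling sum. A small sketch (the monthly numbers below are invented for illustration, not WAYF’s data):

```python
def running_year_totals(monthly_logins):
    """Trailing 12-month login totals: entry i covers months i-11 .. i."""
    totals = []
    for i in range(11, len(monthly_logins)):
        totals.append(sum(monthly_logins[i - 11:i + 1]))
    return totals

# Hypothetical monthly counts showing steady growth toward ~10M/year.
months = [700_000 + 20_000 * i for i in range(24)]
print(running_year_totals(months)[-1])  # most recent running-year total
```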
November 27, 2014

Kuppinger Cole: European Identity & Cloud Conference 2015 Teaser [Technorati links]

November 27, 2014 12:37 PM
In KuppingerCole Podcasts

European Identity & Cloud Conference 2015, taking place May 5 – 8, 2015 at the Dolce Ballhaus Forum Unterschleissheim, Munich/Germany, is the place where identity management, cloud, and information security thought leaders and experts get together to discuss and shape the future of secure, privacy-aware, agile, business- and innovation-driven IT.

Watch online

Bill Nelson - Easy Identity: Taking Time to Give Thanks [Technorati links]

November 27, 2014 11:35 AM

With Halloween in the rear view mirror and Christmas right around the corner, it is easy for Thanksgiving to get lost in the shuffle. Bordered by two holidays where much of our society is focused on gifts of candy and presents, Thanksgiving is sort of an “odd man out” and like many of the other holidays, much of its meaning is oftentimes overlooked.


While not the official start of the Thanksgiving holiday that we celebrate today, it was George Washington who in 1789 declared Thursday, Nov. 26, a day of “thanksgiving.” This was a one-time occurrence, and its intent was to devote a day to “public thanksgiving and prayer” in gratitude to “the service of that great and glorious Being who is the beneficent Author of all the good that was, that is, or that will be.”


Washington’s Thanksgiving Proclamation


(Read the full proclamation here)

It wasn’t until 1863 that Abraham Lincoln set aside the fourth Thursday in November as our official Thanksgiving holiday, but it is the day that George Washington set aside that gives this holiday special meaning to me. The meaning of the word “thanks” is associated with an “expression of gratitude”; and to give thanks is to express that gratitude to others.

(Read the full proclamation here)

In both cases, George Washington and Abraham Lincoln were expressing gratitude to Almighty God for the wonderful gifts He had bestowed on a fledgling nation. While we can join in these expressions, each of us has something unique to be grateful for. Maybe it’s your health, or your family or friends. Maybe it’s your finances, or the fact that you have achieved long-sought-after goals in your life, or simply that you have a roof over your head – each of us has something to be thankful for on this Thanksgiving Day.

So, on one of the most important holidays of the year, one that focuses on giving thanks for the blessings we have received in the past year, let’s stop and take the time to thank God Almighty, our respective spouses, family members, friends, or whomever deserves that expression of gratitude.

After all, isn’t giving thanks what Thanksgiving is all about?


November 26, 2014

Kantara Initiative: Spotlight on Kantara Initiative Member “GlobalSign” [Technorati links]

November 26, 2014 03:59 PM

In this edition of Spotlight, we are pleased to tell readers more about GlobalSign, their unique role in IdM, and why they became Members of Kantara Initiative.

1) Why was your service/product created, and how is it providing real-world value today?

GlobalSign, founded in 1996, is a provider of identity services. As a leading Certificate Authority, we have always provided trusted identities – initially issuing SSL certificates for ecommerce transactions. Today, we provide a range of identity services for the Internet of Everything (IoE), where the ability to make secure networked connections among people, processes, data and things will require that every “thing” have a trusted identity that can be managed. We like to call it the IoE, not IoT, because people, organizations, systems, processes and applications all need identities, not just “things.”

Our broad identity portfolio, in addition to SSL and other PKI-based services, includes identity verification, authentication, access control, single sign-on (SSO), federation and delegation services that make it easy to build and scale self-service portals and also enable tiered delegated administration of external identities. They help businesses reap the benefits of enhanced customer and partner interactions with easily deployed, yet robust e-Service solutions.

GlobalSign’s identity relationship management solutions are already in use by an impressive number of government agencies, financial institutions and enterprises for large scale applications, including telecom service providers that allow their customers to manage their own account features; by the Finnish tax bureau to provide services to millions of citizens and companies; by utility companies to provide legally required usage information to customers; and by insurance organizations, using federated identities to cross-sell services to banking customers. These organizations are saving millions of dollars in account management costs and providing higher levels of service to their customers by using GlobalSign’s technology to integrate with existing CRM solutions and enable self-registration, dynamic authentication based on transaction value, and delegated multi-tiered identity and access administration features.
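One capability mentioned above, dynamic authentication based on transaction value, is essentially step-up authentication: the higher the value at stake, the stronger the credential required. A hedged sketch of the idea follows; the thresholds and level names are invented for illustration and are not GlobalSign’s actual policy:

```python
def required_auth_level(transaction_value):
    """Map a transaction's value to a minimum authentication strength.
    Thresholds are illustrative only."""
    if transaction_value < 100:
        return "password"            # low value: single factor is enough
    if transaction_value < 10_000:
        return "otp"                 # medium value: one-time passcode
    return "pki_certificate"         # high value: certificate-based auth

def is_authorized(session_level, transaction_value):
    """Allow the transaction only if the session's auth level is strong enough."""
    strength = {"password": 1, "otp": 2, "pki_certificate": 3}
    needed = required_auth_level(transaction_value)
    return strength[session_level] >= strength[needed]
```

An “otp” session can thus approve small and medium transactions, but a high-value transaction forces the user to step up to a stronger credential first.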

2) Where is your organization envisioned to be strategically in the next 5-10 years?

GlobalSign aspires to be a leading provider of identity services for the Internet of Everything, mediating trust to enable safe commerce, communications, content delivery and community interactions for the billions of online transactions occurring around the world at every moment. It’s a lofty mission, but as we all recognize, identity is key to all trusted online transactions. We are building the solutions that will allow organizations to issue, manage and provide appropriate access rights to their networks of identities, as well as federate trusted identities. With our experience in mediating billions of trusted transactions online, we understand and can address the massive scalability that will allow us, and our customers, to extend IAM beyond individuals to all of the components that make up the IoE.

3) Why did you join Kantara Initiative?

We believe that a collaborative effort is needed to create the framework and standards for successfully managing the billions of identities and the federation capabilities that will be needed as the IoE explodes, and that the results of this effort should be made publicly available. GlobalSign heartily endorses the Kantara Initiative’s focus on developing a better trusted identity framework. In fact, GlobalSign’s team has been active in the Kantara Initiative since before it began – via Ubisecure of Helsinki (acquired by GlobalSign in September 2014), which was part of Kantara’s predecessor, the Liberty Alliance. Since the Kantara Initiative’s formation in 2009, Ubisecure has held leadership roles in the Kantara Initiative’s telecommunications identity and eGovernment working groups, and participated in testing programs now coordinated by Kantara Initiative to ensure interoperability and standards. Team members currently participate in the Identity of Things (IDoT) discussion group and will be joined by others in our organization, as we expand our focus on identity relationship and access management services.

4) What else should we know about your organization, the service/product, or even your own experiences?

As one of the leading certificate authorities in the world, GlobalSign has been providing trusted identities and related identity solutions since 1996. Our expertise in digital certificates at scale, our large portfolio of solutions and our vast global reach give us a comprehensive suite of solutions for today’s identity needs as well as tomorrow’s, and a perspective that few others have. With our newly acquired Ubisecure solutions and expertise, we’re eager to ramp up our industry involvement as we help customers and partners manage identities within and beyond their organizations, and into the burgeoning Internet of Everything (IoE) opportunity.

Kuppinger Cole: Regin Malware: Stuxnet’s Spiritual Heir? [Technorati links]

November 26, 2014 08:42 AM
In Alexei Balaganski

As if the IT security community hadn’t had enough bad news recently, this week has begun with a big one: according to a report from Symantec, a new, highly sophisticated piece of malware has been discovered, which the company dubbed “Regin”. Apparently, the level of complexity and customizability of the malware rivals, if not trumps, its famous relatives, such as Flamer, Duqu and Stuxnet. Obviously, the investigation is still ongoing, and Symantec, together with other researchers like Kaspersky Lab and F-Secure, is still analyzing its findings, but even those scarce details allow us to draw a few far-reaching conclusions.

Let’s begin with a short summary of currently known facts (although I do recommend reading the full reports from Symantec and Kaspersky Lab linked above, they are really fascinating if a bit too long):

  1. Regin isn’t really new. Researchers have been studying its samples since 2012, and the initial version seems to have been in use since at least 2008. Several components have timestamps from 2003. That makes you appreciate even more how it managed to stay under the radar for so long. And did it really? According to F-Secure, at least one company affected by this malware two years ago explicitly decided to keep quiet about it. What fertile ground for conspiracy theorists!
  2. Regin’s level of complexity trumps practically any other known piece of malware. Five stages of deployment, built-in drivers for encryption, compression, networking and virtual file systems, utilization of different stealth techniques, different deployment vectors, but most importantly a large number of various payload modules – everything indicates a level of technical competence and financial investment of a state-sponsored project.
  3. Nearly half of the affected targets have been private individuals and small businesses, and the primary vertical the malware appears to be targeting is the telecommunications industry. According to Kaspersky Lab’s report, code for spying on GSM networks has been discovered in it. Geographically, the primary targets appear to be Russia and Saudi Arabia, as well as Mexico, Ireland and several other European and Middle Eastern countries.

So, is Regin really the new Stuxnet? Well, no. Surely, its incredible level of sophistication and flexibility indicates that it most certainly is the result of state-sponsored development. However, Regin’s mode of operation is completely opposite to that of its predecessor. Stuxnet was a highly targeted attack on Iranian nuclear enrichment facilities with the ultimate goal of sabotaging their work. Regin, on the other hand, is an intelligence-gathering spyware tool, and it doesn’t seem to be targeted at a specific company or government organization. On the contrary, it’s a universal and highly flexible tool designed for long-term covert operations.

Symantec has carefully avoided naming a concrete nation-state or agency that may have been behind this development, but the fact that no infections have been observed in the US or UK is already giving people ideas. And, looking at the Regin discovery as a part of a bigger picture, this makes me feel uneasy.

After Snowden’s revelations, there was a lot of hope that public outcry and pressure on governments would somehow lead to major changes limiting intelligence agencies’ powers for cyber spying. Unfortunately, nothing of that kind has happened yet. In fact, looking at the FUD campaign the FBI and DoJ are currently waging against mobile vendors (“because of your encryption, children will die!”), or the fact that the same German BND intelligence service that’s promoting mandatory encryption is quietly seeking to install backdoors into email providers and spending millions on zero-day exploits, there isn’t much hope for change left. Apparently, they are oblivious to the fact that they are not just undermining trust in the organizations that supposedly exist to protect us from foreign attackers, but also opening new attack surfaces by setting up backdoors and financing the development of new exploits. Do they honestly believe that such a backdoor or exploit won’t be discovered and abused by hackers? That could probably be a topic for a separate blog post…

Isn’t it ironic that among all the talks about Chinese and Russian hackers, the biggest threat to our cybersecurity might come from the West?

Ludovic Poitou - ForgeRock: Another great resource to get started with OpenIG [Technorati links]

November 26, 2014 08:39 AM

I forgot to mention it earlier, but Guillaume, the lead developer for OpenIG, has also started a blog to discuss middleware and share his experience and thoughts about OpenIG.

He has started a great series of posts introducing OpenIG, its use cases, some terminology…

I encourage you to take a look at it here: In Between – a Blog by Guillaume Sauthier

Filed under: Identity Gateway Tagged: blog, ForgeRock, gateway, identity, identity gateway, openig, opensource
November 25, 2014

Ian Glazer: My 9 Step Process for Building a Presentation [Technorati links]

November 25, 2014 04:15 PM

“How do you build a presentation?” I’ve had the question asked of me a few times recently. And I’ve had enough flights lately to spend some time thinking about the answer. As I mentioned, before I could actually answer the question I had to write this other post about clarity and empathy. Go read that and then come back. With that as context, here is my stripped-down process – my 9 essential steps to building a presentation.

1 – Find the nucleus

I start with a few pithy quotes or a few – very few – key points. In the case of my “Killing IAM” talk, all I had was the phrase “Behold the comma.” For my more recent “No One Is An Island,” what I had was “Hierarchies and our love for them is the strange love child of Confucius and the military-industrial complex” and “Treating people like just nodes, just rows in a database, is, essentially, sociopathic behavior. It ignores the reality that you, your organization, and the other person, group, or organization are connected.” What you need is just enough to grow a talk upon.

2 – Build an outline

Next up – I build an outline. The top-level items will become the sections of the talk. Under each top-level item I add just a few bullets, the essential points for that section. I’ll also add cues for visuals where I can. Sometimes I have a strong image in my mind of how to illustrate a certain point, or there’ll be a joke I want to tell that requires a visual. Don’t stress over not having visual cues; they’re nice to have but by no means required to proceed to step 3.

3 – Write the speech

Yup. I write out my full talk. All of it. Write out the story that you want to tell. Hit each top-level item from the outline; make them headers. Weave the associated bullets into full sentences. Paragraphs grow from there. I will also put in parenthetical notes to myself for visuals, staging, and other things I want to remember when I present. I’ll also put in the quotes and ideas that served as my nucleus.

You might be surprised to learn that I don’t spend a ton of time on the actual writing. It takes me about an afternoon or so to write a speech. Your mileage may vary.

The text should flow. If it doesn’t, then you aren’t ready to write it. Go back to your nucleus and ask if it inspires you. Go back to your outline and look for weaknesses and holes.

As for length, I find that each page of single-spaced text is about 3 to 4 minutes of talk. To get a sense of how long your text is, fire up text-to-speech and time how long it takes the computer to read the text. This will give you a sense of whether you are in the right neighborhood lengthwise.
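If you want to cross-check the pages-to-minutes rule with a quick script, word count divided by a typical speaking pace gets you close; the ~130 words-per-minute rate below is my assumption, not a figure from this post:

```python
def estimated_minutes(speech_text, words_per_minute=130):
    """Estimate talk length from word count at a typical speaking pace."""
    word_count = len(speech_text.split())
    return word_count / words_per_minute

# A single-spaced page is roughly 450-500 words, so:
page = "word " * 480
print(round(estimated_minutes(page), 1))  # ~3.7 minutes, inside the 3-4 minute range
```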

4 – Make a skeleton deck

Time to build the slide deck. Each major header from the outline becomes a section of the deck. Paste each paragraph from the speech into the deck – one paragraph per slide. Put the nucleus quotes on separate slides. You should also rough out the visuals where you can, but do not spend a lot of time doing so. Once the text of the speech is transposed into the skeleton deck, you are done with this step.

5 – Get to v1

Here’s the hard, labor-intensive part. Take each paragraph and split it across a sufficient number of slides. How’s that for a sufficiently vague instruction? “How many is a sufficient number?” you ask. I don’t know.

Here’s what I do. I am fairly militant about one thought per slide. This keeps me from getting lost and droning on about any one particular slide. More importantly, one-thought-per-slide keeps the audience focused. Even if they can only focus on the slide briefly, they stay focused.

It’s important to note that the mission is not to make a pretty deck at this point. You can refine images and visuals if you want, but do not get too hung up on it. Definitely do not spend too much time on animations, builds, and transitions. And be warned – you’ll want to because it is more fun working on animations than it is getting to a v1 deck. You have to fight that urge. To be clear, working on animations and builds is fine if it helps directly convey your point but again don’t spend too much time.

The mission of this step is to build a complete deck from a message and story perspective. Not a pretty deck. Not a polished deck. You want a complete story in slide form; a deck that tells the story of your speech.

6 – Perform v1

If step 5 was the hard step, then this step is the painful step. Time yourself presenting the deck out loud. There are few things worse than presenting your deck out loud for the first time. But it has to be done.

Why do this? You’ll get a sense of the flow (or lack thereof). You’ll get a sense of the places where the talk bogs down and which slides need to be split up. You’ll feel where you are being cheeky or too cute. The things you think are funny will likely not be. The bits you didn’t think mattered as much will tend to shine.

Again this step isn’t pleasant or enjoyable but it has to be done.

7 – Revise to v2 or so

With what you’ve learned from step 6, it’s time to make a pretty deck. Find awesome images where you just have a cue for a visual. Refine your animations; I like to optimize them for flow. One big thing here is to reduce the number of clicker clicks needed to progress an animation.

Reduce the text on each slide to its most crucial bit. The slide is meant to trigger you to say something. That something is not exactly what you wrote in the speech, and that’s okay. That something you say will be right no matter what you say. How do I know that? Because you know the spirit of what you want to say. Don’t fixate on the letter of what you want to say.

Move the slides with the paragraphs of text into the speaker notes of the section headers. Move individual sentences into the speaker note for the appropriate slides.

This step takes me one or two revisions. So at this point I’m usually at a version 3 of the deck.

8 – Rehearse

No way around it – building a deck requires you to rehearse the deck. You should rehearse until you are able to see each slide in your head. Maybe not clearly and maybe not completely, but you should “see” the shape of each slide.

Focus on delivering your speech. Each phrase or thought from your speech is prompted by a slide.

Lastly, FOR THE LOVE OF ALL THAT IS GOOD, DO NOT READ THE SPEAKER NOTES. Nothing will screw you up more than trying to read the speaker notes as you rehearse. Nothing. Do not do it. You’ve been warned.

To be clear, you will revise the deck as you go. Overly complex animations will get replaced with simpler ones. Dense slides will be simplified. You’ll probably break something along the way so don’t be afraid to save often and create new versions as you go. I usually will have 5 to 10 versions of a deck before I am done.

9 – Ship it!

How do I know when the presentation is done? Typically, when I get sick of looking at it. At that point I know it is time to ship the deck.

If you’ve followed these steps, your presentation is in a good place. You will be in a good place too. You’ll have the spirit of your speech in your mind and a bunch of rehearsals under your belt; that’s all you need.

When you deliver your presentation, you won’t be conscious of the mechanics. You will enter a state in which the deck and the speech flow through you to your audience.

You will stick the landing on the slides that are the nucleus of the talk.

You will be awesome.

Ludovic Poitou - ForgeRock: Simplifying OpenIG configuration… [Technorati links]

November 25, 2014 08:58 AM

In the article that I posted yesterday, I outlined portions of configuration files for OpenIG. The configuration actually only works with the latest OpenIG nightly builds, as it leverages some of the newest updates to the code.

One piece of feedback we got after the release was that configuring OpenIG was still too complex and verbose. So we’ve made changes to the model, simplifying it and removing intermediate objects… The result is much smaller and easier-to-understand configuration files that are, more importantly, easier to read back to understand the flow they represent.

My colleague Mark has done a great job of describing and illustrating those changes in a few articles:

OpenIG’s improved configuration files (Part 1)

OpenIG: A quick look at decorators

OpenIG’s improved configuration files (Part 2)


Filed under: Identity Gateway Tagged: configuration, ease of use, engineering, ForgeRock, gateway, identity, openig, opensource
November 24, 2014

Ludovic Poitou - ForgeRock: Missed the IRM Summit Europe? We’ve got it recorded! [Technorati links]

November 24, 2014 04:59 PM

All the sessions from the European IRMSummit that took place early this month in Dublin were recorded, and the videos are now available.

To make it even easier for everyone, our Marketing team has produced playlists according to the agenda:

Enjoy, and I hope this will make you want to join us next year!

Filed under: General Tagged: conference, Dublin, ForgeRock, identity, Identity Relationship Management, IRM, IRMSummit2014, opensource, presentations, summit, videos

Ian Glazer: Round Wheel talk from the IRM Summit [Technorati links]

November 24, 2014 03:26 PM

Here’s my “Do we have a round wheel yet?” talk from the recent IRM Summit in Dublin.

Ludovic Poitou - ForgeRock: API Protection with OpenIG: Controlling access by methods [Technorati links]

November 24, 2014 09:46 AM

Usually, one of the first things you want to do when securing APIs is to allow only specific calls to them. For example, you want to make sure that clients can only read from specific URLs, or can call PUT but not POST on other ones.
OpenIG, the Open Identity Gateway, has everything you need to do this out of the box using a DispatchHandler, in which you express the methods that you want to allow as a condition.
The configuration for the upcoming OpenIG 3.1 version would look like this:

    {
        "name": "MethodFilterHandler",
        "type": "DispatchHandler",
        "config": {
            "bindings": [
                {
                    "condition": "${exchange.request.method == 'GET' or exchange.request.method == 'HEAD'}",
                    "handler": "ClientHandler",
                    "baseURI": ""
                },
                {
                    "handler": {
                        "type": "StaticResponseHandler",
                        "config": {
                            "status": 405,
                            "reason": "Method is not allowed",
                            "headers": {
                                "Allow": [ "GET", "HEAD" ]
                            }
                        }
                    }
                }
            ]
        }
    }

This is pretty straightforward, but if you want to allow another method, you need to update both the condition and the rejection headers. And when you have multiple APIs, each with different methods to allow or deny, you need to repeat this block of configuration or build a much more complex condition expression.

But there is a simpler way, leveraging the scripting capabilities of OpenIG.
Create a file named MethodFilter.groovy under your .openig/scripts/groovy directory with the following content:

    /*
     * The contents of this file are subject to the terms of the Common Development and
     * Distribution License 1.0 (the License). You may not use this file except in compliance with the
     * License.
     *
     * Copyright 2014 ForgeRock AS.
     * Author: Ludovic Poitou
     */
    import org.forgerock.openig.http.Response

    /*
     * Filters requests that have the allowed methods supplied using a
     * configuration like the following:
     *
     * {
     *     "name": "MethodFilter",
     *     "type": "ScriptableFilter",
     *     "config": {
     *         "type": "application/x-groovy",
     *         "file": "MethodFilter.groovy",
     *         "args": {
     *             "allowedmethods": [ "GET", "HEAD" ]
     *         }
     *     }
     * }
     */

    if (allowedmethods.contains(exchange.request.method)) {
        // Call the next handler. This returns when the request has been handled.
        next.handle(exchange)
    } else {
        exchange.response = new Response()
        exchange.response.status = 405
        exchange.response.reason = "Method not allowed: (" + exchange.request.method + ")"
        exchange.response.headers.addAll("Allow", allowedmethods)
    }

And now in all the places where you need to filter specific methods for an API, just add a filter to the Chain as below:

    {
        "heap": [
            {
                "name": "MethodFilterHandler",
                "type": "Chain",
                "config": {
                    "filters": [
                        {
                            "type": "ScriptableFilter",
                            "config": {
                                "type": "application/x-groovy",
                                "file": "MethodFilter.groovy",
                                "args": {
                                    "allowedmethods": [ "GET", "HEAD" ]
                                }
                            }
                        }
                    ],
                    "handler": "ClientHandler"
                }
            }
        ],
        "handler": "MethodFilterHandler",
        "baseURI": ""
    }

This solution lets you filter different methods for different APIs with a single configuration element, the “allowedmethods” field, for greater reusability.
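For readers outside OpenIG, the same allow-list idea can be sketched as a generic handler wrapper in Python; the function names and the dict-based request/response shapes here are my own illustration, not any OpenIG API:

```python
def method_filter(allowed_methods, next_handler):
    """Return a handler that passes requests with allowed methods to
    next_handler and rejects everything else with 405, mirroring the
    allow-list filter described above."""
    def handler(request):
        if request["method"] in allowed_methods:
            return next_handler(request)
        return {
            "status": 405,
            "reason": "Method not allowed: (%s)" % request["method"],
            "headers": {"Allow": list(allowed_methods)},
        }
    return handler

backend = lambda request: {"status": 200, "reason": "OK", "headers": {}}
api = method_filter(["GET", "HEAD"], backend)
print(api({"method": "POST"})["status"])  # 405
```

Just as with the Groovy script, the allowed methods are a parameter, so the same wrapper can protect several APIs with different allow lists.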

Filed under: Identity Gateway Tagged: access control, API, ForgeRock, methods, openig, opensource, Tips

Vittorio Bertocci - Microsoft: Identity Libraries: Status as of 11/23/2014 [Technorati links]

November 24, 2014 08:11 AM

We’ve been innovating in the area of development libraries for authentication for a few years now. Thanks to new technologies (e.g. NuGet) and our new approach to development (OSS), in the last couple of years we really picked up our pace, pushing out libraries at unprecedented speed. That is all the more power for you, my dear readers, but it also makes selecting the correct library/version harder than it used to be.

For that reason, I decided to add to the blog a permapage keeping track of the latest status of our releases. You can find it here, but given that this is the very first update, I decided to paste its content in a regular post as well, so that you guys are aware of that new page’s existence.


To help you navigate the vast array of choices we offer, I whipped together a quick diagram. There are a few key dimensions you want to keep in mind:

The diagram above captures all those dimensions, giving you a snapshot of the situation as of today (November 23rd, 2014). I’ll try to keep it up to date. Also, it is actually not 100% complete – we have Java (server-side) and Node.js versions of ADAL, but I need to consult with my colleagues before placing them in the diagram.

Some links:

Will add more details as we go.

November 23, 2014

Anil John: How Identity Resolution Can Help Attribute Providers Overcome Blindness [Technorati links]

November 23, 2014 08:30 PM

Stand-alone attribute providers typically use a lookup-key-based scheme to return the attributes associated with a key. But as we move to a more attribute-centric world, they will need to incorporate identity resolution as the first step in the attribute query flow in order to meet the needs of a wide range of customers.
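That two-step flow, resolve the subject from a set of claimed attributes first, then release attributes keyed by the resolved record, can be sketched as follows; the record layout and exact-match rule are illustrative assumptions, not any particular provider’s design:

```python
# A toy attribute provider: records keyed by an internal identifier.
RECORDS = {
    "rec-1": {"name": "Alice Smith", "dob": "1980-01-01", "clearance": "secret"},
    "rec-2": {"name": "Bob Jones", "dob": "1975-06-15", "clearance": "none"},
}

def resolve_identity(claimed):
    """Step 1: find the single record matching the claimed attributes.
    Returns its key, or None when there is no unique match."""
    matches = [key for key, rec in RECORDS.items()
               if all(rec.get(attr) == value for attr, value in claimed.items())]
    return matches[0] if len(matches) == 1 else None

def attribute_query(claimed, requested):
    """Step 2: release the requested attributes only after resolution succeeds."""
    key = resolve_identity(claimed)
    if key is None:
        return None  # unresolved or ambiguous: release nothing
    return {attr: RECORDS[key][attr] for attr in requested}

print(attribute_query({"name": "Alice Smith", "dob": "1980-01-01"}, ["clearance"]))
# -> {'clearance': 'secret'}
```

The point of the resolution step is that the caller never needs to know the provider’s internal lookup key; an ambiguous or failed match releases nothing.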

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

The opinions expressed here are my own and do not represent my employer’s view in any way.

November 22, 2014

Mike Jones - Microsoft: A JSON-Based Identity Protocol Suite [Technorati links]

November 22, 2014 01:02 AM

My article A JSON-Based Identity Protocol Suite has been published in the Fall 2014 issue of Information Standards Quarterly, with this citation page. This issue on Identity Management was guest-edited by Andy Dale. The article’s abstract is:

Achieving interoperable digital identity systems requires agreement on data representations and protocols among the participants. While there are several suites of successful interoperable identity data representations and protocols, including Kerberos, X.509, SAML 2.0, WS-*, and OpenID 2.0, they have used data representations that have limited or no support in browsers, mobile devices, and modern Web development environments, such as ASN.1, XML, or custom data representations. A new set of open digital identity standards have emerged that utilize JSON data representations and simple REST-based communication patterns. These protocols and data formats are intentionally designed to be easy to use in browsers, mobile devices, and modern Web development environments, which typically include native JSON support. This paper surveys a number of these open JSON-based digital identity protocols and discusses how they are being used to provide practical interoperable digital identity solutions.
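The “native JSON support” point from the abstract is easy to demonstrate: a JSON Web Token, one of the formats the paper surveys, can be built and read back with nothing but base64url and JSON, no ASN.1 or XML tooling required. A minimal Node.js sketch (the token here is fabricated locally for illustration, not issued by any real identity provider):

```javascript
// Build an unsigned JWT: base64url(header) . base64url(payload) . signature
// (signature left empty because alg is "none" – illustration only).
const header = Buffer.from(JSON.stringify({ alg: 'none' })).toString('base64url');
const payload = Buffer.from(
  JSON.stringify({ iss: 'https://issuer.example', sub: 'alice' })
).toString('base64url');
const jwt = header + '.' + payload + '.';

// Reading the claims back needs only a base64url decode and JSON.parse,
// both built into modern Web development environments.
const claims = JSON.parse(
  Buffer.from(jwt.split('.')[1], 'base64url').toString('utf8')
);
console.log(claims.iss, claims.sub); // → https://issuer.example alice
```

Real deployments of course use signed tokens (JWS), but the parsing story is the same.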

This article is actually a follow-on progress report to my April 2011 position paper The Emerging JSON-Based Identity Protocol Suite. While standards can seem to progress slowly at times, comparing the two makes clear just how much has been accomplished in this time and shows that what was a prediction in 2011 is now a reality in widespread use.

November 21, 2014

Vittorio Bertocci - MicrosoftGetting Rid of Residual Cookies in Windows Store Apps [Technorati links]

November 21, 2014 06:30 PM

This is a classic Q I get pretty often – it’s time to get a post out and start replying by reference instead of by value.

The issue at hand is how to fully “sign out” (whatever that means for a native app) a user from a Windows Store client.

The actual user session is determined by two different components: the token cache (under ADAL’s control, see this) and any session tracking cookies that might be present in the system (not under ADAL’s control). As shown in the aforelinked post, you can easily take care of the token cache part. Clearing cookies is harder, though: Windows Store authentication takes place within the WebAuthenticationBroker, which has its own cookie jar that is separate and unreachable from your application code. The most robust approach is not to create any persistent cookie in the first place (e.g. NOT clicking “remember me” during authentication; in fact, we should stop even showing it soon). However, if you do end up with such a cookie, the main way of getting rid of it is triggering a sign out from the same WebAuthenticationBroker – the server will take care of cleaning things up.

    string requestUrl = "";
    Task.Run(async () => {
        try {
            await WebAuthenticationBroker.AuthenticateAsync(
                WebAuthenticationOptions.SilentMode, new Uri(requestUrl));
        } catch (Exception) {
            // timeout. That's expected
        }
    }).Wait();

Julian BondFun infographic. Note it's already 15 years old but I believe most of the graphs are still going up... [Technorati links]

November 21, 2014 05:55 PM
Fun infographic. Note it's already 15 years old but I believe most of the graphs are still going up.

Found via »

[from: Google+ Posts]

KatasoftEasy Single Sign-On [Technorati links]

November 21, 2014 03:00 PM

Since the beginning of time, developers have been writing code to store and handle user accounts.

Then Stormpath came out, and made that process a lot simpler. Instead of writing all that code yourself, you just make a few API calls to our service, and we take care of the heavy lifting: storing user info, handling authentication and authorization, ensuring data security, etc.

This brings us to the present.

We recently released our new ID Site product, a Single Sign On (SSO) feature that makes it easy to completely remove user authentication logic from your web application. Now, you can handle it on a completely separate subdomain.

This authentication subdomain is hosted by us, so all you need to do is point your DNS records at us, add a few lines of code to your webapp, and BAM, you’ve got authentication ready to go!

ID Site offers a basic Single Sign On experience, allowing your users to access multiple applications seamlessly, with one set of credentials — all within the same session.

This post will take you through what it is and how it works.

ID Site: Single Sign On with Stormpath

ID Site is a hosted web app (built in Angular.js) that provides pre-built screens for login, registration, password reset — all the common end user functions of your application. It is fully hosted by Stormpath, which makes it really easy for your application to access these features, as well as add SSO across your apps, with very little code.

When you sign up for a Stormpath account, we’ll give you a configurable authentication subdomain that is ready to use – just add some basic information into our console: the domains of apps authorized to use your ID Site, and callback URLs your ID Site is allowed to communicate with — AND JUST LIKE THAT — you are on your way!

How It Works

ID Site is easy — really easy — to integrate with your application. The functionality is already built into our client libraries.

At a high level, it’s very simple: when you want to authenticate a user, you redirect them (using our libraries) to your new authentication subdomain, we’ll handle the authentication and authorization checks and any workflows like password reset or account verification, and then we’ll redirect the user back to your application transparently.

Here’s how it works:

This seems complex and full of moving parts, but it really isn’t. To get your user to the ID Site to authenticate, this is what the code actually looks like (here’s a Node.js example):

// Creating a simple http server in Node.
http.createServer(function (req, res) {

  // If the user requested to log in, we redirect them to the ID Site using
  // the Stormpath SDK.
  if (req.url === '/login') {
    res.writeHead(302, {
      'Cache-Control': 'no-store',
      'Pragma': 'no-cache',
      'Location': application.createIdSiteUrl({
        callbackUri: ""
      })
    });
    res.end();
  }
});

This code lands the user on your ID Site, which is fully customizable to your brand and hosted on Stormpath infrastructure:

Once the user logs in, they will be redirected to the callbackUri that was specified in the request. From there, you can validate the information and get the account for the login with the following code:

if (req.url.lastIndexOf('/loginCallback', 0) === 0) {
  application.handleIdSiteCallback(req.url, function (err, result) {
    if (err) {
      showErrorPage(req, res, err);
    } else {
      if (result.status === "AUTHENTICATED") {
        req.account = result.account;
        showDashboard(req, res);
      }
    }
  });
}
There are two ways developers can handle the callback from the ID Site. One is to have a callbackUri specific to each action, like login and the /loginCallback in the code above. The other is to have a generic callback, like /idSiteCallback that handles the response for all actions taken on the ID Site. Stormpath exposes a status so you can know what action occurred on the ID site for any given callback.
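The generic-callback strategy can be sketched in a few lines. Here the `application` object is stubbed so the sketch runs stand-alone; in a real app it comes from the Stormpath SDK, and the route name `/idSiteCallback` plus the non-AUTHENTICATED status names are illustrative:

```javascript
// Stub of the SDK's application object: pretend the ID Site reported a login.
const application = {
  handleIdSiteCallback: function (url, cb) {
    cb(null, { status: 'AUTHENTICATED', account: { email: 'user@example.com' } });
  }
};

// One generic callback route, branching on the status exposed by Stormpath.
function handleCallback(url, done) {
  application.handleIdSiteCallback(url, function (err, result) {
    if (err) return done('error');
    switch (result.status) {
      case 'AUTHENTICATED': return done('dashboard'); // user logged in
      case 'LOGOUT':        return done('home');      // user logged out
      case 'REGISTERED':    return done('welcome');   // new account created
      default:              return done('error');
    }
  });
}

handleCallback('/idSiteCallback?jwtResponse=stub', function (page) {
  console.log(page); // → dashboard
});
```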

Although ID Site is built in Angular, you can connect to it from any application. ID Site support has been added to our Node, Java, and Python libraries, and is available through the Stormpath REST API, so you can take advantage of it even if you aren’t using one of those languages.

Why ID Site?

Almost every feature at Stormpath comes out of developer requests and ID Site solves issues and use cases we hear about frequently:

Stormpath Single Sign On Demo

If you want to get a feel for how ID Site looks and feels to end users, I built a demo to show a basic Single Sign On experience. This allows you to log into and share sessions across two different websites:


Both of these web applications use ID Site and share a 5 minute session timeout.

To learn more, check out our Guide to Using ID Site In Your Application.

If you have any questions / comments, we would love to hear them! Let me know how to make ID Site more useful to you: ( or @omgitstom).

Kuppinger ColeSAP Security Made Easy. How to Keep Your SAP Systems Secure [Technorati links]

November 21, 2014 10:37 AM
In KuppingerCole Podcasts

Security in SAP environments is a key requirement of SAP customers. SAP systems are business critical. They must run reliably, they must remain secure – despite a growing number of attacks. There are various levels of security to enforce in SAP environments. It is not only about user management, access controls, or code security. It is about integrated approaches.

Watch online

Nat SakimuraFrom IDM to IRM: The Changing Horizon of Identity [Technorati links]

November 21, 2014 06:00 AM




November 20, 2014

Ian GlazerNo Person is an Island: How Relationships Make Things Better [Technorati links]

November 20, 2014 05:26 PM

(The basic text to my talk at Defragcon 2014. The slides I used are at the end of this post and if they don’t show up you can get them here.)

What have we done to manage people, their “things,” and how they interact with organizations?

The sad truth is that we tried to treat the outside world of our customers and partners like the inside world of employees. And we’ve done poorly at both. I mean, think about it: “Treat your customers like you treat your employees” is rarely a winning strategy. If it were, just imagine the Successories you’d have to buy for your customers… on second thought, don’t do that.

We started by storing people as rows in a database. Rows and rows of people. But treating people like just a row in a database is, essentially, sociopathic behavior. It ignores the reality that you, your organization, and the other person, group, or organization are connected. We made every row, every person an island – disconnected from ourselves.

What else did we try? In the world of identity and access management we started storing people as nodes in an LDAP tree. We created an artificial hierarchy and stuffed people, our customers, into it. Hierarchies, and our love for them, are the strange lovechild of Confucius and the military-industrial complex. Putting people into these false hierarchies doesn’t help us delight our customers. And it doesn’t really make management tasks any easier. We made every node, every person, an island – disconnected from ourselves.

We tried other things, realizing that those two left something to be desired. We tried roles. You have this role and we can treat you as such. You have that role and we should treat you like this. But how many people actually do what their job title says? How many people actually have meaningful job titles? And whose customers come with job titles? So, needless to say, roles didn’t work as planned in most cases.

We knew this wasn’t going to work. We’ve known since 1623. John Donne told us as much. And his words then are more relevant now than he could have possibly imagined then. Apologies to every English teacher I have ever had as I rework Donne’s words:

No one is an island, entire of itself; everyone is a piece of the continent, a part of the main. If a clod be washed away by the sea, we are the less. Anyone’s death diminishes us, because we are involved in the connected world.

What should we do?

If treating our customers like employees isn’t a winning strategy, if making an island out of each of our customers won’t work, if we are involved with the connected world, then what should we do?

We have to acknowledge that relationships exist. We have to acknowledge the connections that exist between a customer, their devices and things, and us. No matter what business you are in. No matter if you are a one-woman IT consulting shop, or two guys and a letterpress on Etsy, or even a multi-national corporation – you are connected to your customers; you have a relationship with them.

This isn’t necessarily a new thought and, in fact, there are two disciplines that have sought to map and use those relationships: CRM and VRM. Customer relationship management models one organization to many people. Vendor relationship management models one person to many organizations. Both, perhaps unknowingly, share an important truth – the connections between people and organizations are key. It’s not “CRM vs VRM;” it’s “CRM and VRM.” What I am proposing is the notion of IRM – identity relationship management. IRM puts the relationships front and center, but more on that in a minute.

I believe that acknowledging relationships re-humanizes our digital relationships with one another. I believe that this is one of the reasons why online forums descend into antisocial behavior. It’s because those systems don’t make you feel like you have a relationship with the other party. “There’s no person there, just a tweet.” And this is a shame – that platforms meant to provide scalable human-to-human interactions and contact and closeness often dehumanize those very interactions.

I believe that we ought to use relationships to manage our interactions. You can’t get delighted customers by just treating them like a row in a database. You cannot manage data from all of your customer’s “things” without fully recognizing there’s a customer there with whom you have a relationship.

What I know about relationships

I believe we must build “relationship-literate” systems and processes. We should stop operating on rows of customers and start using digital representations of relationships. What follows are nine aspects of relationships that can serve as design considerations for relationship-literate systems.


If we are going to use relationships as a management tool in this world of ever-increasing connections between people, their things, and organizations, then we have to tackle scalability issues. The three obvious ones are huge numbers of actors, attributes, and relationships. But there’s another that is often left out: administration. If we don’t do something better than we do today, we’ll be stuck with the drop-list from hell, in which an admin has to scroll through a few thousand entries to find the “thing” she wants to manage.


I’ve got to know I’m in a relationship before anything else can meaningfully happen. I can’t buy a one-sided birthday card: Happy birthday to a super awesome partner who doesn’t know who I am. All parties have to know. Otherwise there is an asymmetry of power. And that tends to tilt towards the heavier object, e.g. the organization and not the individual. Familiar with the Law of Gross Tonnage? It’s part of the maritime code that says the heavier ship has the right of way. Now growing up outside of Boston, this is basically how I learned to drive. The Law of Gross Tonnage is useful in that situation but absolutely inequitable and unhelpful in terms of delighting a customer.


There’s got to be a way for us to know if multiple parties are in a relationship. This can take many flavors: single party, multi-party, and 3rd party asserted. Things like Facebook can serve as that 3rd party vouching two people are connected. But should there be alternatives to social networks for this? And who connects people and their “things”?


We want our relationships to be able to do something. And by looking at the relationship each party can know what they can do. Without having to consult some distant authority. Without waiting for an online connection. The relationship leads to action and does so without consulting some back-end service somewhere.


Just because a relationship can do something doesn’t mean it can do everything. We need to be able to put limits on what things and people can do; we all need constraints. Examples of this are things like granting consent or enforcing digital rights management.


Some things are in a relationship forever. This is useful to know when you want to make sure that a “thing” was really made by one of your partners and is authentic.


Some relationships can be transferred. We have legal proxies that we transfer a relationship to on a temporary or conditional basis. There are plenty of familial relationships in which we transfer authority on a semi-permanent basis. And some relationships are permanently transferred – like selling a jet engine to someone.


Many relationships exist but aren’t very useful until a condition changes. My relationship to my auto insurance provider isn’t a very vibrant relationship. I don’t use it on most days. But when I get into an accident, that inert relationship between my car, the insurer, and me becomes active. There’s something out there, some condition, that can make a relationship active and vital.


Some relationships end or have to come to an end. What happens then? What happens to the data now that the relationship is gone? At this point we have to turn to renowned privacy expert, John Mellencamp for his insight. You might not know it but he wrote about the Right to Be Forgotten and other privacy issues in “Jack and Diane”. As he sang, “oh yeah data goes on / long after the thrill of the relationship is gone.” But this problem is at the root of the “Right to Be Forgotten” debate. This will only become a larger problem as our digital footprints get heavier and heavier. And this gets especially messy when relationships that I am not even aware of create data about me and my devices and my things.
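A relationship-literate system could encode several of these aspects directly in its data model. As a thought experiment, here is a minimal sketch in JavaScript – all field and function names are invented for illustration – covering mutual acknowledgment, actionability, and the idea that a one-sided relationship isn’t actionable at all:

```javascript
// A relationship record: parties, who has acknowledged it, what it permits,
// and whether it is currently active.
function createRelationship(partyA, partyB) {
  return {
    parties: [partyA, partyB],
    acknowledgedBy: [],   // mutuality: every party has to know
    capabilities: [],     // actionability: what the relationship lets you do
    constraints: [],      // limits: consent, DRM, etc.
    active: false         // activation: inert until conditions are met
  };
}

function acknowledge(rel, party) {
  if (rel.parties.includes(party) && !rel.acknowledgedBy.includes(party)) {
    rel.acknowledgedBy.push(party);
  }
  // Only actionable once every party knows about it – no one-sided
  // birthday cards.
  rel.active = rel.acknowledgedBy.length === rel.parties.length;
  return rel;
}

const rel = createRelationship('alice', 'insurer');
acknowledge(rel, 'alice');
console.log(rel.active);   // → false: only one side knows
acknowledge(rel, 'insurer');
console.log(rel.active);   // → true: mutually acknowledged, now actionable
```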

In summary, relationships:

If we were to do this, how would things be better?

Relationships add back the fidelity and color that we have drained from the digital identity world. By focusing on relationships, we would behave more like we do in the real world, but with all the efficiencies of the digital world. We’d be able to use familiar language to describe how and what people and things can do.

How should we do this?

I don’t fully know. This is the least satisfying and most accurate thought in this whole talk. I don’t fully know. And I am looking for help.

So I lied to you dear audience. This is a sales pitch. I want you to do something. If you have any interest in this vague notion of relationships and using them to make our world better, then I ask you to join the Kantara Initiative. It’s free to join and free to participate. It’s the home of some amazing identity and IoT thinking. And we need your help. I’d like you to join the Identity Relationship Management working group. I’d love it if you could bring your use cases to us. Share with a group of awesome people from around the world how you, your business, your service, your things connect and relate. Help us stop treating people like islands unto themselves. Help us to use relationships to make our digital interactions rich, meaningful, humanizing, and manageable.

No Person is an Island: How Relationships Make Things Better from iglazer

Radovan Semančík - nLightNever Use Closed-Source IAM Again [Technorati links]

November 20, 2014 03:45 PM

I will never use any closed-source IAM again. You will have to use force to persuade me to do it. I’m not making this statement lightly. I worked with closed-source IAM systems for the better part of the 2000s and made quite a good living from it. But I’m not going to do that again. Never ever.

What's so bad about closed-source IAM? It is the very fact that it is closed. A deployment engineer cannot see inside it. Therefore the engineer has inherently limited possibilities. No documentation is ever perfect and no documentation ever describes the system well enough. Therefore the deployment engineer is also likely to have limited understanding of the system. And engineer that does not understands what he is doing is unlikely to do a good job.

Closed-source software also leads to vendor lock-in, which makes it unbelievably expensive in the end. The Sun–Oracle acquisition of 2010 clearly demonstrated the impact of vendor lock-in for me. Our company was a very successful Sun partner in the 2000s, but we almost went out of business because of that acquisition and the events that followed. That was the moment I realized this must never happen again.

Open source is the obvious alternative. But how good is it really? Can it actually replace closed-source software? The short answer is a clear and loud “Yes!” The situation may have been quite bad in the 2000s, but now there are plenty of viable open source alternatives for every IAM component: directory servers, simple SSO, comprehensive SSO, social login and federation, identity management, RBAC and privilege management, and so on. There is plenty to choose from. Most of these projects are in a very good and stable state. They are at least as good as their closed-source counterparts.

But what is so great about open source software? It makes no sense to switch to open source just because of some philosophically-metaphysical differences, does it? So where are the tangible benefits? Simply speaking, there are huge advantages to open source software all around. But they might not be exactly what you expect.

Contrary to popular belief, the ability to meddle with the source code does not bring any significant direct advantage to the end customer. The customers are unlikely to even see the source code, let alone modify it. But this ability brings a huge advantage to the system integrator who deploys the software. The deployment engineers do not need vendor assistance with every deployment step. The source code is the ultimate documentation, so the deployment engineers can work almost independently. This eliminates the need for hugely overpriced vendor professional services – which also reduces the cost of the entire solution. The deployment engineers can fix product bugs themselves and submit the fixes back to the vendor, which significantly speeds up the project. Any competent engineer can fix a simple bug in a couple of days if he has the source code. He or she does not need to raise each and every trivial issue, fight through all the levels of a bloated support organization, and then wait weeks or months for an answer from the vendor’s development team. The open source way is so much more efficient. This dramatically reduces the deployment time and also the overall deployment cost.

The source code also allows ultimate customization. Software architects know very well how difficult it is to design and implement a good extensible system. As with many other things, it is very easy to do badly and extremely difficult to do well. A system with all the extensibility that IAM needs would inevitably become extremely complicated. Therefore the best way to customize a system is sometimes a simple modification of the source code. And this is only possible in open source projects. Oh yes, there is the tricky upgradeability problem: customizations are difficult to upgrade, right? Right. Customized closed-source software is usually very difficult to upgrade. But that does not necessarily apply to well-managed open source projects. Distributed version control software such as Git makes this kind of customization feasible. We have been using this method for years and it has survived many upgrades already.

But perhaps the most important advantage is the lack of vendor lock-in. The source code of an open source project does not “belong” to any single individual or company. If the product is good, there will be many open source companies that can offer the services that, with closed source, only the single vendor could provide. This creates healthy competition. In the extreme case a partner can always take over the product maintenance if the vendor misbehaves. Therefore it is unlikely that the cost of an open source solution will spin out of control. Open source also provides much better protection against vendor failure. Yes, I’m aware that many companies behind open source projects are small and that they can easily fail. But in the open source world a company failure does not necessarily mean project failure. If the project is any good it will continue even if the original maintainer fails. Other companies will take over, most likely by employing at least part of the original engineers. And the project goes on. This is the ultimate business continuity guarantee. And it has happened several times already. On the other hand, the failure (or acquisition) of a closed-source vendor is often fatal for the project. This has also happened several times. And we still feel the consequences today.

The difference between the open-source and closed-source worlds is enormous. Any engineer who goes there and understands open source is very unlikely to go back. Open source is much easier to work with. The engineers have the power to change what they do not like. Open source is much more cost efficient and its business model is sustainable. And it actually works!

Therefore I would never ever use closed-source IAM again.

(Reposted from

Kaliya Hamlin - Identity WomanProtected: Dear IDESG, I’m sorry. I didn’t call you Nazi’s. [Technorati links]

November 20, 2014 02:18 PM

This content is password protected. To view it please enter your password below:

IS4UFIM 2010: Event driven scheduling [Technorati links]

November 20, 2014 12:25 PM
In a previous post I described how I implemented a windows service for scheduling Forefront Identity Manager.

Since then, my colleagues and I have used it in every FIM project. For one project I was asked if it was possible to trigger the synchronization “on demand”. A specific trigger for a synchronization cycle, for example, was the creation of a user in the FIM portal. After some brainstorming and Googling, we came up with a solution.

We asked ourselves the following question: “Is it possible to send a signal to our existing Windows service to start a synchronization cycle?” All the functionality for scheduling was already there, so it seemed reasonable to investigate and explore this option. As it turns out, it is possible to send a signal to a Windows service, and the implementation turned out to be very simple (and simple is good, right?).

In addition to the scheduling on predefined moments defined in the job configuration file, which is implemented through the Quartz framework, we started an extra thread:

while (true)
{
 if (scheduler.GetCurrentlyExecutingJobs().Count == 0
  && !paused)
 {
  if (DateTime.Compare(StartSignal, LastEndTime) > 0)
  {
   running = true;
   StartSignal = DateTime.Now;
   LastEndTime = StartSignal;
   SchedulerConfig schedulerConfig =
      new SchedulerConfig(runConfigurationFile);
   if (schedulerConfig == null)
   {
    logger.Error("Scheduler configuration not found.");
    throw new JobExecutionException
        ("Scheduler configuration not found.");
   }
   // ... run the on-demand synchronization cycle here ...
   running = false;
  }
 }
 Thread.Sleep(5000); // 5 second delay
}
The first thing it does is check that none of the time-triggered schedules are running and that the service is not paused. Then it checks whether an on-demand trigger was received by inspecting the StartSignal timestamp. So as you can see, the StartSignal timestamp is the one controlling the action. If the service receives a signal to start a synchronization schedule, it simply sets the StartSignal parameter:

protected override void OnCustomCommand(int command)
{
 if (command == ONDEMAND)
 {
  StartSignal = DateTime.Now;
 }
}

The first thing it does next, if a signal was received, is pause the time-triggered mechanism. When the synchronization cycle finishes, the time-triggered scheduling is resumed. The beautiful thing about this way of working is that the two separate mechanisms work alongside each other. The time-triggered schedule is not fired if an on-demand schedule is running and vice versa. If a signal was sent during a period of time the service was paused, the on-demand schedule will fire as soon as the service is resumed. The StartSignal timestamp will take care of that.
So, how do you send a signal to this service, you ask? This is also fairly straightforward. I implemented the FIM portal scenario described above with a custom C# workflow containing a single code activity:

using System.ServiceProcess;

private const int OnDemand = 234;

private void startSync()
{
 ServiceController is4uScheduler =
  new ServiceController("IS4UFIMScheduler");
 is4uScheduler.ExecuteCommand(OnDemand);
}

If you want to know more about developing custom activities, this article is a good starting point.
The integer value is arbitrary; you only need to make sure you send the same value as is defined in the service source code. The ServiceController takes the system name of the Windows service. The same is possible in PowerShell:

Add-Type -AssemblyName System.ServiceProcess
$is4uScheduler = New-Object System.ServiceProcess.ServiceController
$is4uScheduler.Name = "IS4UFIMScheduler"
$is4uScheduler.ExecuteCommand(234)

Another extension I implemented (inspired by Dave Nesbitt’s question on my previous post) was the delay step. This kind of step allows you to insert a window of time between two management agent runs, in addition to the default delay, which is inserted between every step. So now there are four kinds of steps possible in the run configuration file: LinearSequence, ParallelSequence, ManagementAgent and Delay. I saw the same idea being implemented in PowerShell here.

A very useful function I didn’t mention in my previous post, but which was already there, is the cleanup of the run history (which can grow very big in a fast-synchronizing FIM deployment). This function can be enabled by setting the option “ClearRunHistory” to true and setting the number of days in the “KeepHistory” option. If you enable this option, you need to make sure the service account running the service is a member of the FIM Sync Admins security group. If you do not use this option, membership of the FIM Sync Operators group is sufficient.

To end I would like to give you pointers to some other existing schedulers for FIM:
FIM 2010: How to Automate Sync Engine Run Profile Execution

GluuOAuth2 for IoT? [Technorati links]

November 20, 2014 03:07 AM


Today, consumers have no way to centrally manage access to all their Web stuff, and IoT devices are threatening to create a whole new silo of security problems. This is one of the reasons I’ve been participating in the Open Interconnect Consortium Security Task Group.

People can’t individually manage every IoT device in their house, so it seems likely that some kind of centralized management tooling will be necessary. Last week, I proposed the use of the OAuth2 profiles OpenID Connect and UMA as the “two legs” of IoT security. Since then, there has been an active discussion on the feasibility of OAuth2 for IoT.

One challenge for this design is that OAuth2 relies on HTTPS for transport security. While many devices will be powerful enough to handle an HTTPS connection, some devices are too small. Says Justin Richer from MITRE: “Basically, replacing HTTP with CoAP and TLS with DTLS, you get a lot of functional equivalence.” In fact this effort is already in progress at the IETF, and research projects are underway to build this out in simulation. For more info see the following links:

Assuming the transport layer security gets solved, another sticking point is the idea of central control. Here is the case against central control, paraphrased from one of my comrades:

If you buy a light switch and a light bulb, they need to magically work together. When we state this as almost impossible, they will accept that the user needs a smartphone for the initial setup but not that he needs some extra dedicated authorization server. (Nor do I think that running this in the cloud will be acceptable either.)

The idea of central control has already been embraced by Apple and ARM. Homekit is Apple’s “smart home hub, providing an overview of all your connected smart devices.” The mBed platform describes a central “mbed Server”, a Java application that includes “a Device Server [that] handles the connections from IoT devices.”

Let’s test the case against OAuth2 with a concrete scenario: how could an IoT light bulb connect to an IoT light switch?

Let’s say the light bulb publishes three APIs:

For central control, using the UMA profile of OAuth2, a client must present a valid RPT token from an Authorization Server to the light bulb. All the light bulb has to do is validate this token. This should be the default configuration for most IOT devices: they should hook into the existing home security infrastructure quickly, with very little effort from IOT developers. There is no need for the light bulb to store or evaluate policies with this solution. I disagree that the cloud won’t be a likely place to manage your digital resources (what don’t we use Google for these days?). The home router might also be a handy place to host your home policy decision point.
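To make the “all the light bulb has to do is validate this token” claim concrete, here is a minimal Python sketch. The introspection endpoint URL and the "toggle" scope are assumptions made up for this example, not part of the UMA spec or any product; the pattern is simply OAuth2-style token introspection followed by a local active/scope check.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical introspection endpoint on the home authorization server;
# URL is an assumption for illustration only.
INTROSPECTION_URL = "https://as.example.home/introspect"

def introspect_rpt(rpt):
    """Ask the authorization server whether an RPT is still valid."""
    data = urllib.parse.urlencode({"token": rpt}).encode()
    req = urllib.request.Request(INTROSPECTION_URL, data=data)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def bulb_allows(introspection, required_scope="toggle"):
    """The bulb's whole job: check 'active' and the scope it cares about."""
    if not introspection.get("active", False):
        return False
    return required_scope in introspection.get("scope", "").split()
```

With this split, all policy lives on the authorization server; the bulb only ever runs `bulb_allows` on the introspection response.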

But what if there is no central UMA authorization server? Is there a need for an alternate method of local authorization? Yes! The light bulb is the resource server, and it can always have some backup policies; for example, a USB connection or button could bypass UMA authorization.

For the light switch to make this call to the APIs, it would need a client credential. The light bulb itself could have a tiny OAuth2 chip that would provide the bare minimum server APIs for client discovery, client authentication, and dynamic client registration.

The light bulb can offer a few different ways for the light switch to “authenticate” depending on how fancy it is:
1) None (sometimes you’re on a trusted network)
2) API key / secret
3) JSON Web Key

In cases where the light bulb was not configured to use central authentication, it could check the access token against its cache of tokens issued to local clients.

OpenID Connect offers lots of features for client registration. For example, you could correlate client registrations with “request_uris.” (Think entityID if you are familiar with SAML.) See the registration request section of the OpenID Connect Dynamic Client Registration spec.
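As a sketch of what the light switch might send, here is a hypothetical dynamic client registration body. The field names follow the OpenID Connect Dynamic Client Registration spec; the client name and URIs are invented for this example:

```python
import json

def switch_registration_request(switch_uri):
    """Build an illustrative OIDC dynamic client registration request
    body for a light switch.  Values are placeholders, not a real API."""
    body = {
        "client_name": "kitchen-light-switch",
        "redirect_uris": [switch_uri + "/callback"],
        # request_uris can double as a stable correlation handle,
        # much like a SAML entityID.
        "request_uris": [switch_uri + "/request.jwt"],
        "token_endpoint_auth_method": "private_key_jwt",
    }
    return json.dumps(body)
```

The switch would POST this JSON to the bulb’s (or authorization server’s) registration endpoint and receive a client_id in return.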

Why write a new OAuth2 based client authentication protocol when we already have OpenID Connect? Connect has been shown to be usable by developers, was designed to make simple things simple, and scales to complex requirements. Wouldn’t it make sense to just create a mapping for a new transport layer? Won’t there be even more transport layers in the future? What about secure-Bluetooth, secure-NFC, or secure-ESP? Will we have to re-invent client registration every time there is a new secure transport layer?

If the Open Interconnect Consortium Core Framework TG decides to mandate support for CoAP, then it may not be possible to use OpenID Connect, UMA or any other existing security protocol developed for HTTP.

Says Eve Maler, VP Innovation & Emerging Technology at ForgeRock, “My suspicion has been that a CoAP binding of UMA would be an interesting and worthwhile project… it could be done through the UMA extensibility profiles now–basically replacing the HTTP parts of UMA with CoAP parts”

Nat Sakimura, Chairman of the OpenID Foundation, commented “binding to other transport protocols, definitely yes. That was our intention from the beginning. That’s why we abstracted it. Defining a binding to CoAP etc. would be a good starting point. In the ACE Working Group at the IETF, Hannes Tschofenig from ARM has already started the work.”

Feedback from John Bradley of Ping Identity, also one of the authors of OpenID Connect, was interesting. He referenced the IETF GSSAPI work and suggested that an OAuth2 binding there might address the CoAP requirement.

@GluuFederation work has happened on bindings for GSSAPI to use connect\OAuth with non http resources. A OAuth binding that Connect can use

— John Bradley (@ve7jtb) November 24, 2014

Obviously we SHOULD NOT design IOT security for the lowest common denominator, but rather connect IOT into our current Web infrastructure, as shown in this diagram.


Mike Jones - MicrosoftJOSE -37 and JWT -31 drafts addressing remaining IESG review comments [Technorati links]

November 20, 2014 01:19 AM

These JOSE and JWT drafts contain updates intended to address the remaining outstanding IESG review comments by Pete Resnick, Stephen Farrell, and Richard Barnes, other than one that Pete may still provide text for. Algorithm names are now restricted to using only ASCII characters, the TLS requirements language has been refined, the language about integrity protecting header parameters used in trust decisions has been augmented, we now say what to do when an RSA private key with “oth” is encountered but not supported, and we now talk about JWSs with invalid signatures being considered invalid, rather than them being rejected. We also added the CRT parameter values to the example JWK RSA private key representations.

The specifications are available at:

HTML formatted versions are available at:

November 19, 2014

Matt Pollicove - CTISome thoughts on database locking in Oracle and Microsoft SQL Server [Technorati links]

November 19, 2014 06:29 PM

Deadlocks are the bane of those of us responsible for designing and maintaining any type of database system. I’ve written about these before at the dispatcher level. This time around, however, I’d like to discuss them a little further “down,” so to speak, at the database level. Also, in talking to various people about this topic, I’ve found that it’s potentially the most divisive question since “Tastes good vs. Less filling.”

Database deadlocks are much like application-level ones: they typically occur when two processes try to access the same database row at the same time, most often when the system is trying to read and write the row simultaneously. A nice explanation can be found here. What we essentially wind up with is the database equivalent of a traffic jam where no one can move. It’s interesting to note that Oracle and Microsoft SQL Server handle these locking scenarios differently. I’m not going to go into DB2 at the moment, but will address it if there is sufficient demand.

When dealing with SQL Server, management of locks is handled through the use of the table hint called NOLOCK. According to MSDN:

Hints are options or strategies specified for enforcement by the SQL Server query processor on SELECT, INSERT, UPDATE, or DELETE statements. The hints override any execution plan the query optimizer might select for a query. (Source)
When NOLOCK is used, this is the same as using READUNCOMMITTED, which some of you might be familiar with if you did the NetWeaver portion of the IDM install when setting up the data source. Using this option keeps the SQL Server database engine from issuing shared locks. The big issue here is that one runs the risk of reading dirty (uncommitted) data during database operations. Be careful when using NOLOCK for this reason. Even though the SAP Provisioning Framework makes extensive use of the NOLOCK functionality, its developers regression test the heck out of the configuration. Make sure you do, too; misuse of NOLOCK can lead to bad things happening in the Identity Store database.
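For illustration, here is a minimal T-SQL sketch of the hint in use. The view and key names are made up for this example and don’t refer to any specific schema:

```sql
-- Read without requesting shared locks; this may return uncommitted
-- ("dirty") rows, so treat the results accordingly.
SELECT MSKEYVALUE, AttrName, aValue
FROM some_idstore_view WITH (NOLOCK)
WHERE MSKEYVALUE = 'JDOE';

-- The equivalent session-level setting:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
```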

There is also a piece of SQL Server functionality referred to as Snapshot Isolation, which appears to work like NOLOCK writ large, with database snapshots held in TEMPDB for processing (source). This functionality was recommended by a DBA I worked with on a project some time ago; it was tested in DEV and then rolled to the customer’s PRODUCTION instance.
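If you want to experiment with Snapshot Isolation, it is switched on per database. A hedged sketch, with the database name as a placeholder:

```sql
-- Enable snapshot isolation; SQL Server keeps row versions in tempdb.
ALTER DATABASE MyIdentityStore SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optionally make versioned reads the default behaviour of READ COMMITTED:
ALTER DATABASE MyIdentityStore SET READ_COMMITTED_SNAPSHOT ON;
```

Individual sessions can then opt in with `SET TRANSACTION ISOLATION LEVEL SNAPSHOT;`.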

Oracle is a little different in the way it approaches locking: the system manages conflicts internally through its rollback (undo) mechanism, which gives readers a consistent view of committed data, so deadlocks occur much less often (Source). This also means there is no NOLOCK equivalent in the Oracle database system.

One final thing to consider with database deadlocks is how the database is being accessed, regardless of which database is used. It is considered a best practice in SAP IDM to use To Identity Store passes as opposed to uIS_SetValue whenever possible (Source).

At the end of the day, I can’t really tell you whether or not to employ these mechanisms. In general we know that it’s better not to have deadlocks than to have them, and to do what you can to achieve this goal. If you are going to use these techniques, make sure you do so in concert with your DBA team and after careful testing. I have seen Microsoft SQL Server’s Snapshot Isolation work well in a busy productive environment, but I will not recommend its universal adoption, as I can’t tell you how well it will work in your environment. I will, however, recommend that you look into it with your DBA team if you are experiencing deadlocks in SQL Server.

Kuppinger ColeDatabase Security On and Off the Cloud [Technorati links]

November 19, 2014 11:05 AM
In KuppingerCole Podcasts

Continued proliferation of cloud technologies offering on-demand scalability, flexibility and substantial cost savings means that more and more organizations are considering moving their applications and databases to IaaS or PaaS environments. However, migrating sensitive corporate data to a 3rd party infrastructure brings with it a number of new security and compliance challenges that enterprise IT has to address. Developing a comprehensive security strategy and avoiding point solutions for ...

Watch online

Vittorio Bertocci - MicrosoftFrom Domain to TenantID [Technorati links]

November 19, 2014 06:03 AM

Ha, I discovered that I kind of like to write short posts, so here’s another one.

Azure AD endpoints can be constructed with the domain and the tenantID interchangeably, “” and “” being functionally equivalent – however the tenantID has some clear advantages. For example: it is immutable, globally unique and non-reassignable, while domains do change hands on occasion. Moreover, you can have many domains associated with a tenant, but only one tenantID. Really, the only thing the domain has going for it is that it is human readable, and there’s a reasonable chance a user can remember and type it.

Per the above, there are times when it is useful to find out the tenantID for a given domain. The trick is reeeeeally simple. You can use the domain to construct one of the AAD endpoints that return tenant metadata, for example the OpenId Connect one; that metadata will contain the tenantID. In practice: say that you know the target domain. How do you find the corresponding tenantID, without even being authenticated?

Easy. I do a GET of

The result is a JSON file that has the tenantID all over it:

{
   "authorization_endpoint" : "",
   "check_session_iframe" : "",
   "end_session_endpoint" : "",
   "id_token_signing_alg_values_supported" : [ "RS256" ],
   "issuer" : "",
   "jwks_uri" : "",
   "microsoft_multi_refresh_token" : true,
   "response_modes_supported" : [ "query", "fragment", "form_post" ],
   "response_types_supported" : [ "code", "id_token", "code id_token", "token" ],
   "scopes_supported" : [ "openid" ],
   "subject_types_supported" : [ "pairwise" ],
   "token_endpoint" : "",
   "token_endpoint_auth_methods_supported" : [ "client_secret_post", "private_key_jwt" ],
   "userinfo_endpoint" : ""
}

Whip out your favorite JSON parsing class, and you’re done. Ta—dahh ♫
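To make the trick concrete, here is a minimal Python sketch. The login host and discovery path are assumptions based on the well-known OpenId Connect metadata convention (they may differ in your environment); the offline helper simply pulls the GUID out of the issuer value:

```python
import json
import re
import urllib.request

def fetch_openid_configuration(domain):
    """Fetch the tenant's OpenId Connect discovery document.
    The URL pattern is an assumption for illustration."""
    url = ("https://login.windows.net/%s/.well-known/openid-configuration"
           % domain)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

GUID = re.compile(r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-"
                  r"[0-9a-f]{4}-[0-9a-f]{12}", re.IGNORECASE)

def tenant_id(metadata):
    """The tenantID is the GUID embedded in the issuer value."""
    m = GUID.search(metadata["issuer"])
    if m is None:
        raise ValueError("no tenantID found in issuer")
    return m.group(0)
```

The same GUID shows up in the authorization_endpoint, token_endpoint, and the rest of the metadata, so any of those fields would do.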

Kantara InitiativeEuropean Workshop on Trust & Identity [Technorati links]

November 19, 2014 03:02 AM

For those who are in the EU or will be near Vienna, Austria: you may wish to attend the European Workshop on Trust and Identity to discuss “Connecting Identity Management Initiatives.” This is an openspace workshop where attendees will have the opportunity to network and share with others.


Openspace workshops have been finding and solving trust and identity issues for years. Starting in 2013, EWTI made this format available in Europe and received excellent feedback from participants. If you are looking for substantial discussion on this subject, it is likely that you will meet the right people here!

Meet at the EU Identity Workshop in Vienna 2014

EWTI is the opportunity to discuss, share knowledge, and learn about everything related to Internet Trust and Identity today. Topics at the EWTI in 2013 included:
  • Gov/Academic/Social ID
  • How to use SAML with REST and SOAP
  • eID in your country: where is it today, where is it heading?
  • SLO Single Logout for SAML & OAuth
  • STORK – existing federations user cases, interoperability
  • Binding LoA attributes to social ids (non-technical – strategy)
  • NSTIC: impressions, feedback, relation to other world-wide projects
  • Banks and Telcos as strong Identity Providers in Finland (Business model)
  • Trust and Market for Personal Data: Privacy – How to re-establish trust?
  • Trust Frameworks beyond Sectors: Release of attributes, LOA
  • Authorization in SAML federations
  • Scaleable & comprehensive attributes design (authN & authZ)
  • E-Mail as global identifier: embrace/defend/fight it?
  • eID and Government stuff
  • Metadata exchange session: Federations at scale
  • SCIM 101
  • Rich-clients for mobile devices
  • Step up AuthN as a Service
  • Is SPML dead – who uses SCIM?
  • SAML2 test tool
  • All identities are self asserted
  • de-/provisioning / federated notification
  • Biobank Cloud Security
November 17, 2014

Vittorio Bertocci - MicrosoftSkipping the Home Realm Discovery Page in Azure AD [Technorati links]

November 17, 2014 04:43 PM

A typical authentication transaction with Azure AD opens with a generic credential-gathering page. As the user enters his/her username, Azure AD figures out from the domain portion of the username whether the actual credential gathering should take place elsewhere (for example, if the domain is associated with a federated tenant, the actual credential gathering will happen on the associated ADFS pages) and, if so, redirects accordingly.

Sometimes your app logic is such that you know in advance whether such a transfer should happen. In those situations you have the opportunity to let our libraries (ADAL or the OWIN middleware for OpenId Connect/WS-Federation) know where to go right from the start.

In OAuth2 and OpenId Connect you do so by passing the target domain in the “domain_hint” parameter.
In ADAL you can pass it via the following:

// ac is an assumed AuthenticationContext instance; the truncated call is
// reconstructed here as the AcquireToken overload taking extraQueryParameters.
AuthenticationResult ar =
    ac.AcquireToken(resource, clientId,
                    new Uri("http://any"), PromptBehavior.Always,
                    UserIdentifier.AnyUser, "");  // e.g. "domain_hint=<your domain>"


In the OWIN middleware for OpenId Connect you can do the same in the RedirectToIdentityProvider notification:

    new OpenIdConnectAuthenticationOptions
    {
        ClientId = clientId,
        Authority = authority,
        PostLogoutRedirectUri = postLogoutRedirectUri,
        Notifications = new OpenIdConnectAuthenticationNotifications()
        {
            RedirectToIdentityProvider = (context) =>
            {
                context.ProtocolMessage.DomainHint = "";
                return Task.FromResult(0);
            }
        }
    }


Finally, in WS-Fed you do the following:

   new WsFederationAuthenticationOptions
   {
      Notifications = new WsFederationAuthenticationNotifications
      {
         RedirectToIdentityProvider = (context) =>
         {
            context.ProtocolMessage.Whr = "";
            return Task.FromResult(0);
         }
      }
   }
Party on!

Kuppinger ColeAdvisory Note: Security and the Internet of Everything and Everyone - 71152 [Technorati links]

November 17, 2014 03:06 PM
In KuppingerCole

The vision for the Internet of Everything and Everyone is for more than just an Internet of Things; it makes bold promises for the individual as well as for businesses. However, the realization of this vision is based on existing systems and infrastructure which contain known weaknesses.

November 16, 2014

Anil JohnRFI - EMV Enabled Debit Cards as Authentication Tokens? [Technorati links]

November 16, 2014 08:55 PM

The U.S. is finally moving to EMV compliant payment cards. Can these cards be used as multi-factor authentication tokens for electronic transactions outside the payment realm? What are the security and privacy implications? Who needs to buy into and be in the transaction loop to even consider this as a possibility?

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

The opinions expressed here are my own and do not represent my employer’s view in any way.

November 14, 2014

CourionFinancial Services Ready to Embrace Identity and Access intelligence [Technorati links]

November 14, 2014 02:32 PM

Access Risk Management Blog | Courion

Nick Berents

This week at London’s Hotel Russell, the Identity Management 2014 conference brought together hundreds of technology professionals and security specialists across government and enterprises of all sizes and industries.

It was fascinating to hear industry leaders discussing the next generation of Identity and Access Management, representing firms and organizations as diverse as ISACA, Visa Europe, Ping Identity, CyberArk, and beverage giant SABMiller.

A highlight for me was a session that included Nick Taylor, Director of IAM at Deloitte, and Andrew Bennett, CTO of global private bank Kleinwort Benson.

Taylor discussed the challenges that IAM professionals face in making access governance reviews business friendly, as often there is not enough context to understand the risks that they face. For example, an equities trader making lots of trades at a certain time of the day may be normal, but maybe not so normal if that trader is doing it from different locations or geographies.

Bennett supported that notion by pointing out that technical jargon can mask risk that exists, so he recommended that the financial services industry look into the concept of identity and access intelligence and start taking it on now. Adopting such a solution is not a case of throwing more tools at the problem; it is a matter of having the right tool to make sense of the mess.

It was also good to hear our partner Ping Identity’s session, “It’s Not About the Device – It’s All About the Standards”, on how modern identity protocols allow the differentiation of business and personal identities.

Overall a good conference that provided attendees with lots of opportunity to learn best practices and hear how their colleagues are approaching identity management. But rather than waiting for next year’s conference, anyone can learn more in the near term by attending Courion’s upcoming webinar Data Breach - Top Tips to Protect, Detect and Deter on Thursday November 20th at 11 a.m. ET, 8 a.m. PT, 4 p.m. GMT.

Ludovic Poitou - ForgeRockThe new ForgeRock Community site [Technorati links]

November 14, 2014 10:55 AM

Earlier this week, a major new version of the ForgeRock Community site was pushed to production.

Besides a cleaner look and feel and a long-awaited reorganisation of content, the new version enables better collaboration around the open source projects and initiatives. You will find forums for general discussions or project-specific ones, and new groups around specific topics like UMA or IoT. We’ve also added a calendar with different views, so that you can find or suggest events, conferences, and webinars touching the projects and IRM at large.

Great work, Aron and Marius, on the new site! Thank you.

Venn of Authorization with UMA

And we’ve also announced a new project, OpenUMA. If you haven’t paid attention to it yet, I suggest you do so now. User-Managed Access (UMA) is an OAuth-based protocol that enables an individual to control the authorization of data sharing and service access made by others. The OpenUMA community shares an interest in informing, improving, and extending the development of UMA-compatible open-source software as part of ForgeRock’s Open Identity Stack.


Filed under: General Tagged: collaboration, community, ForgeRock, identity, opensource, projects
November 13, 2014

Julian BondThe lights are going out in Syria. Literally. [Technorati links]

November 13, 2014 06:48 PM
The lights are going out in Syria. Literally.
 The Olduvai cliff: are the lights going out already? »
Image from Li and Li, "international journal of remote sensing." h/t Colonel Cassad". The image shows the nighttime light pattern in Syria three years ago (a) and today (b). Those among us who are diehard catastrophists surel...

[from: Google+ Posts]
November 12, 2014

Kuppinger Cole16.12.2014: Secure Mobile Information Sharing: addressing enterprise mobility challenges in an open, connected business [Technorati links]

November 12, 2014 02:44 PM
In KuppingerCole

Fuelled by the exponentially growing number of mobile devices, as well as by increasing adoption of cloud services, demand for various technologies that enable sharing information securely within organizations, as well as across their boundaries, has significantly surged. This demand is no longer driven by IT; on the contrary, organizations are actively looking for solutions for their business needs.
November 11, 2014

Nat SakimuraXACML v3.0 Privacy Policy Profile Version 1.0 パブリック・レビュー [Technorati links]

November 11, 2014 09:05 PM

The 15-day public review period for the Committee Specification Draft (CSD) of the eXtensible Access Control Markup Language (XACML) begins on November 12.


The review period runs from 00:00 UTC on November 12 to 23:59 UTC on November 26.


Editable source (Authoritative):


HTML with inline tags for direct commenting:




All comments submitted will be considered to have been provided under the OASIS Feedback License. For details, please see references [3] and [4] below.

========== Additional references:

[1] OASIS eXtensible Access Control Markup Language (XACML) TC

[2] Previous public reviews:

* 15-day public review, 23 May 2014:

* 60-day public review, 21 May 2009:


RF on Limited Terms Mode

Kaliya Hamlin - Identity WomanQuotes from Amelia on Systems relevant to Identity. [Technorati links]

November 11, 2014 08:14 PM

This is coverage of a WSJ interview with Amelia Andersdotter, the former European Parliament member from the Pirate Party of Sweden. Some quotes stuck out for me as being relevant.

If we also believe that freedom and individualism, empowerment and democratic rights, are valuable, then we should not be constructing and exploiting systems of control where individual disempowerment are prerequisites for the system to be legal.

We can say that most of the legislation around Internet users protect systems from individuals. I believe that individuals should be protected from the system. Individual empowerment means the individual is able to deal with a system, use a system, work with a system, innovate on a system—for whatever purpose, social or economic. Right now we have a lot of legislation that hinders such [empowerment]. And that doesn’t necessarily mean that you have anarchy in the sense that you have no laws or that anyone can do whatever they want at anytime. It’s more a question of ensuring that the capabilities you are deterring are actually the capabilities that are most useful to deter. [emphasis mine].

This statement is key: “individuals should be protected from the system.” How do we create accountability from systems to people, and not just the other way around? I continue to raise this issue about so-called trust frameworks that are proposed as the solution to interoperable digital identity – there are many concerning aspects to the solutions, including what seem to be very low levels of accountability of systems to people.

The quotes from Amelia continued…

I think the Internet and Internet policy are very good tools for bringing power closer to people, decentralizing and ensuring that we have distributive power and distributive solutions. This needs to be built into the technical, as well as the political framework. It is a real challenge for the European Union to win back the confidence of European voters because I think a lot of people are increasingly concerned that they don’t have power or influence over tools and situations that arise in their day-to-day lives.

The European Union needs to be more user-centric. It must provide more control [directly] to users. If the European Union decides that intermediaries could not develop technologies specifically to disempower end users, we could have a major shift in global political and technical culture, not only in Europe but worldwide, that would benefit everyone.

Mike Jones - MicrosoftJWK Thumbprint spec adopted by JOSE working group [Technorati links]

November 11, 2014 08:01 PM

The JSON Web Key (JWK) Thumbprint specification was adopted by the JOSE working group during IETF 91. The initial working group version is identical to the individual submission version incorporating feedback from IETF 90, other than the dates and document identifier.

JWK Thumbprints are used by the recently approved OpenID Connect Core 1.0 incorporating errata set 1 spec. JOSE working group co-chair Jim Schaad said during the working group meeting that he would move the document along fast.

The specification is available at:

An HTML formatted version is also available at:

Kuppinger ColeHow to Protect Your Data in the Cloud [Technorati links]

November 11, 2014 06:07 PM
In KuppingerCole Podcasts

More and more organizations and individuals are using the Cloud and, as a consequence, the information security challenges are growing. Information sprawl and the lack of knowledge about where data is stored are in stark contrast to the internal and external requirements for its protection. To meet these requirements it is necessary to protect data not only but especially in the Cloud. With employees using services such as iCloud or Dropbox, the risk of information being out of control and l...

Watch online

Kuppinger ColeA Haven of Trust in the Cloud? [Technorati links]

November 11, 2014 08:59 AM
In Mike Small

In September a survey was published in Dynamic CISO showing that “72% of Businesses Don’t Trust Cloud Vendors to Obey Data Protection Laws and Regulations”. Given this lack of trust by their customers, what can cloud service vendors do?

When an organization stores data on its own computers, it believes that it can control who can access that data. This belief may be misplaced, given the number of reports of data breaches from on-premise systems, but most organizations trust themselves more than they trust others. When the organization stores data in the cloud, it has to trust the cloud provider, the cloud provider’s operations staff, and the legal authorities with jurisdiction over the cloud provider’s computers. This creates many serious concerns about moving applications and data to the cloud, especially in Europe and in particular in geographies like Germany, where there are very strong data protection laws.

One approach is to build your own cloud, where you have physical control over the technology but can still exploit some of the flexibility that a cloud service provides. This is the approach being promoted by Microsoft. In October, Microsoft, in conjunction with Dell, announced their “Cloud Platform System”. This is effectively a way for an organization to deploy Dell servers running the Microsoft Azure software stack on premise. Using this platform, an organization can build and deploy on-premise applications that are Azure cloud ready. At the same time it can see for itself what goes on “under the hood”. Then, when the organization has built enough trust, or when it needs more capacity, it can easily extend the existing workload into the cloud. This approach is not unique to Microsoft – other cloud vendors also offer products that can be deployed on premise where there are specific needs.

In the longer term Microsoft researchers are working to create what is being described as a “Haven in the Cloud”.  This was described in a paper at the 11th USENIX Symposium on Operating Systems Design and Implementation.  In this paper, Baumann and his colleagues offer a concept they call “shielded execution,” which protects the confidentiality and the integrity of a program, as well as the associated data from the platform on which it runs—the cloud operator’s operating system, administrative software, and firmware. They claim to have shown for the first time that it is possible to store data and perform computation in the cloud with equivalent trust to local computing.

The Haven prototype uses the hardware protection proposed in Intel’s Software Guard Extensions (SGX)—a set of CPU instructions that can be used by applications to isolate code and data securely, enabling protected memory and execution. It addresses the challenges of executing unmodified legacy binaries and protecting them from a malicious host.  It is based on “Drawbridge” another piece of Microsoft research that is a new kind of virtual-machine container.

The question of trust in cloud services remains an important inhibitor to their adoption. It is good to see that vendors are taking these concerns seriously and working to provide solutions. Technology is an important component of the solution, but it is not, in itself, sufficient. In general, computers do not breach data by themselves; human interactions play an important part. The need for cloud services to support better information stewardship, and for cloud service providers to create an information stewardship culture, is also critical to creating trust in their services. From the perspective of the cloud service customer, my advice is always: trust, but verify.

November 10, 2014

Ian GlazerThe Only Two Skills That Matter: Clarity of Communications and Empathy [Technorati links]

November 10, 2014 04:49 PM

I meant to write a post describing how I build presentations, but I realized that I can’t do that without writing this one first.

I had the honor of working with Drue Reeves when I was at Burton and Gartner. Drue was my chief of research and, as an agenda manager, I worked closely with him in shaping what and how our teams would research. More importantly, we got to define the kind of analysts we hired. We talked about all the kinds of skills an analyst should have. We’d list out all sorts of technical certifications, evidence of experience, and the like. But in the end, that list always reduced down to two things. If you have them, you can be successful in all your endeavors. The two most important skills someone needs to be successful in what they do are:

Radical clarity

To make oneself understood and understandable regardless of the situation. Clarity that transcends generations, languages, sets of belief, and knowledge. That is what is required. And that is a far cry from the typical “strong communication skills” b.s. you see on a lot of resumes.

The trick to communicating clearly is realizing that it’s not about the prettiness or exactness of what you say. It’s all in understanding what will be absorbed by and resonate with the other: the person across from you, the audience, the reader, etc. Strip all of the superfluous bits and layers away and get down to that genuine message that you want the other to keep with them.

To do that requires empathy.

Genuinely giving a shit

There is no way to communicate with an audience (or even just another person) unless you actually care about them. You have to care about their wellbeing. You have to be invested in their success. Even when they don’t want to hear your heretical opinion. Even when they have competing ideas. Especially then.

If you start phoning it in, if you just give a stock answer or deliver the same old deck in the same old format, the audience knows, and they know that you’ve checked out and are no longer interested in their success. Even if you hold a universal truth and wondrous innovation, the audience will not care, because you don’t either.

Clarity and empathy. These aren’t skills you take classes in. Sure, you can refine techniques through training. But you actually get better at these things by simply trying to do them. Just like giving presentations. I’ll tackle that one next…


Ludovic Poitou - ForgeRockHighlights of IRMSummit Europe 2014… [Technorati links]

November 10, 2014 03:10 PM

Powerscourt hotel
Last week at the nice Powerscourt Estate, outside Dublin, Ireland, ForgeRock hosted the European Identity Relationship Management Summit, attended by over 200 partners, customers, prospects, and users of ForgeRock technologies. What a great European IRMSummit it was!

If you haven’t been able to attend, here’s some highlights:

I heard many talks and discussions about identity being the cornerstone of the digital transformation of enterprises and organizations, shifting identity projects from cost centers to revenue generators.

There was lots of focus on consumer identity and access management, with some perspectives on current identity standards and what is going to be needed from the IRM solutions. We’ve also heard from security and analytics vendors, demonstrating how ForgeRock’s Open Identity Stack can be combined with the network security layer or with analytics tools to increase security and context awareness when controlling access.

User-Managed Access is getting more and more real, as the specifications are getting close to being finalised, and ForgeRock announced the OpenUMA initiative to foster ideas and code around it.

Chris and Allan around an Internet-connected coffee machine, powered by ARM
There were many talks about the Internet of Things, and especially demonstrations around defining the relationship between a Thing and a User, and securing access to the data produced by the Thing. We saw a door lock being unlocked with an NFC-enabled mobile phone by provisioning the appropriate credentials over the air, and a smart coffee machine able to identify the coffee type and the user, push the data to a web service, and ask the user for consent to share it. There is a common understanding that all the things will have identities and relations with other identities.

There were several interesting discussions and presentations about Digital Citizens, illustrated by reports from deployments in Norway, Switzerland, and Nigeria, and by the European Commission's cross-border authentication initiatives STORK and eIDAS.

Half a day was dedicated to ForgeRock products, with introductory trainings and demonstrations of upcoming features in OpenAM, OpenDJ, OpenIDM, and OpenIG. On the Wednesday afternoon, I gave two presentations: one on OpenIG, demonstrating the ease of integrating OAuth 2.0 and OpenID Connect to protect applications and APIs, and one on OpenDJ, demonstrating the flexibility and power of the REST to LDAP interface.

All presentations and materials are available online as PDFs, and now as videos on ForgeRock's YouTube page. You can also find here a short summary of the Summit in a video produced by Markus.

Powerscourt Estate House
Powerscourt Estate gardens
The summit wouldn't be such a great conference without a plan for social interactions and fun. This year we had a nice dinner in the Powerscourt house (aka the Castle), followed by live music in the pub. The band was great, but became even better when Joni and Eve joined them for a few songs, to the great pleasure of all the guests.


The band

Of course, I have to admit that the best part of the IRM Summit in Ireland was the pints of Guinness!

To all attendees, thank you for your participation, the interesting discussions, and the input to our products. I'm looking forward to seeing you again next year for the 2015 edition. Sláinte!

As usual, you can find the photos that I took at the Powerscourt Estate on Flickr. Feel free to copy them for non-commercial use, and if you do republish them, I would appreciate getting the credit for them.

[Updated on Nov 11] Added link to the highlight video produced by Markus
[Updated on Nov 13] Added link to the slideshare folder where all presentations have been published
[Updated on Nov 24] Added link to all the videos on ForgeRock’s YouTube page

Filed under: Identity Tagged: conference, ForgeRock, identity, IRM, IRMSummit2014, IRMSummitEurope, openam, opendj, openidm, openig, summit

KatasoftBootstrapping an Express.js App with Yeoman [Technorati links]

November 10, 2014 03:00 PM

So, you want to build an Express.js web application, eh? Well, you’re in the right place!

In this short article I’ll hold your hand, sing you a song (not literally), and walk you through creating a bare-bones Express.js web application and deploying it on Heroku with Stormpath and Yeoman.

In the next few minutes you’ll have a live website, ready to go, with user registration, login, and a simple layout.

Step 1: Get Ready!

Before we dive into the code and stuff, you’ve got to get a few things setup on your computer!

First off, you need to go and create an account with Heroku if you haven’t already. Heroku is an application hosting platform that’s really awesome! So awesome that I even wrote a book about it (true story)! But what makes it really great for our example here today is that it’s free and easy to use.

Once you’ve created your Heroku account, you then need to install their toolbelt app on your computer. This is what lets you build and deploy Heroku apps.

Next, you need to have Node installed and working on your computer. If you don’t already have it installed, go visit the Node website and get it set up.

Lastly, you need to install a few Node packages. You can install them all by running the commands below in your terminal:

$ sudo npm install -g yo generator-stormpath

The yo package is Yeoman — this is the tool we’ll be using to scaffold an application for us.

The generator-stormpath package is a Yeoman generator that yo will run — it holds the actual project code and configuration we need to get started.

Got through all that? Whew! Good work!

Step 2: Bootstrap a Project

OK! Now that the boring stuff is over, let’s create a project!

The first thing we need to do is create a directory to hold our new project. You can do this by running the following command in your terminal:

$ mkdir myproject
$ cd myproject

You should now be inside your new project directory.

At this point, you can now safely bootstrap your new project by running:

$ yo stormpath

This will kick off a script that creates your project files, and asks if you’d like to deploy your new app to Heroku. When prompted, enter ‘y’ for yes. If you don’t do this, your app won’t be live :(

NOTE: If you don’t get asked a question about Heroku, then you didn’t follow my instructions and install Heroku like I said to earlier! Go back to Step #1!

Assuming everything worked, you should see something like this:

Tip: For a full-resolution image so you can actually see what I’m typing, view this image directly. yo-stormpath-bootstrap

Now, if you take a look at your directory, you’ll notice there are a few new files in there for you to play around with:

We’ll get into the code in the next section, but for now, go ahead and run:

$ heroku open

in your terminal. This will automatically open your browser to your brand new LIVE web app! Cool, right?

As I’m sure you’ve noticed by now, your app is running live, and lets you sign up, log in, log out, etc. Pretty good for a few seconds of work!

And of course, here are some obligatory screenshots:

Screenshot: Yo Stormpath Index Page yo-stormpath-index

Screenshot: Yo Stormpath Registration Page yo-stormpath-registration

Screenshot: Yo Stormpath Logged in Page yo-stormpath-logged-in

Screenshot: Yo Stormpath Login Page yo-stormpath-login


So, as of this very moment in time, we’ve:

So now that we’ve got those things out of the way, we’re free to build a real web app! This is where the real fun begins!

For thoroughness, let’s go ahead and implement a simple dashboard page on our shiny new web app.

Go ahead and open up the routes/index.js file, and add a new function call:

router.get('/dashboard', stormpath.loginRequired, function(req, res) {
  res.send('Hi, ' + req.user.givenName + '. Welcome to your dashboard!');
});

Be sure to place this code above the last line in the file that says module.exports = router.

This will render a nice little dashboard page for us. See the stormpath.loginRequired middleware we’re using there? That’s going to force the user to log in before allowing them to access that page — cool, huh?

You’ve also probably noticed that we’re saying req.user.givenName in our route code — that’s because Stormpath’s library automatically creates a user object called req.user once a user has been logged in — so you can easily retrieve any user params you want!

NOTE: More information on working with user objects can be found in our official docs.
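
For the curious, here is a rough sketch of how a loginRequired-style guard works in principle. This is not Stormpath's actual implementation, and the mock request/response objects exist only so the example is self-contained:

```javascript
// Sketch of a login-guard middleware (NOT Stormpath's real code):
// if no user is attached to the request, redirect to the login page;
// otherwise pass control on to the route handler.
function loginRequired(req, res, next) {
  if (!req.user) {
    res.redirect('/login');
    return;
  }
  next();
}

// The dashboard handler reads the user object off the request,
// just like the route we added above.
function dashboard(req, res) {
  res.send('Hi, ' + req.user.givenName + '. Welcome to your dashboard!');
}

// Exercise the pair with mock request/response objects so we can
// see both outcomes without running a server.
function handle(req) {
  var out = {};
  var res = {
    redirect: function (url) { out.redirectedTo = url; },
    send: function (body) { out.body = body; }
  };
  loginRequired(req, res, function () { dashboard(req, res); });
  return out;
}

console.log(JSON.stringify(handle({})));
// Anonymous request: {"redirectedTo":"/login"}
console.log(JSON.stringify(handle({ user: { givenName: 'Randall' } })));
// Logged-in request: {"body":"Hi, Randall. Welcome to your dashboard!"}
```

The real middleware does more (session validation, token checks), but the shape — inspect the request, then either short-circuit or call `next()` — is the same.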

Anyway — now that we’ve got that little route written, let’s also tweak our Stormpath setup so that once a user logs in, they’ll be automatically redirected to the new dashboard page we just wrote.

To do this, open up your index.js file in the root of your project and add the following line to the stormpath.init middleware:

app.use(stormpath.init(app, {
  apiKeyId:     process.env.STORMPATH_API_KEY_ID,
  apiKeySecret: process.env.STORMPATH_API_KEY_SECRET,
  application:  process.env.STORMPATH_URL || process.env.STORMPATH_APPLICATION,
  secretKey:    process.env.STORMPATH_SECRET_KEY,
  redirectUrl:  '/dashboard',
}));

The redirectUrl setting (explained in more detail here) tells Stormpath that once a user has logged in, they should be redirected to the given URL — PERFECT!

Now, let’s see if everything is working as expected!

$ git add --all
$ git commit -m "Adding a new dashboard page!"
$ git push heroku master

The last line there, git push heroku master, will deploy your updates to Heroku. Once that’s finished, just run:

$ heroku open

To open your web browser to your app page again — now take a look around! If you log into your account, you’ll see that you’ll end up on the new dashboard page! It should look something like this:

Screenshot: Yo Stormpath Dashboard Page yo-stormpath-dashboard

BONUS: What happens if you log out of your account, then try visiting the dashboard page directly? Does it let you in? HINT: NOPE!


So if you’ve gotten this far — congrats! You are awesome, amazing, and super cool. You’re probably wondering “What next?” And, that’s a great question!

If you’re hungry for more, you’ll want to check out the following links:

They’re all awesome tools, and I hope you enjoy them.

Lastly — if you’ve got any feedback, questions, or concerns — leave me a comment below. I’ll do my best to respond in a timely fashion.

Now GO FORTH and build some stuff!

November 09, 2014

OpenID.netErrata to OpenID Connect Specifications Approved [Technorati links]

November 09, 2014 07:28 PM

Errata to the following specifications have been approved by a vote of the OpenID Foundation members:

An Errata version of a specification incorporates corrections identified after the Final Specification was published.

The voting results were:

Total votes: 46 (out of 194 members = 24% > 20% quorum requirement)

The original final specification versions remain available at these locations:

The specifications incorporating the errata are available at the standard locations and at these locations:

— Michael B. Jones – OpenID Foundation Board Secretary

OpenID.netImplementer’s Draft of OpenID 2.0 to OpenID Connect Migration Specification Approved [Technorati links]

November 09, 2014 07:26 PM

The following specification has been approved as an OpenID Implementer’s Draft by a vote of the OpenID Foundation members:

An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification.

This Implementer’s Draft is available at these locations:

The voting results were:

Total votes: 46 (out of 194 members = 24% > 20% quorum requirement)

— Michael B. Jones – OpenID Foundation Board Secretary

November 08, 2014

Anil JohnWhy Multi-Factor and Two-Factor Authentication May Not Be the Same [Technorati links]

November 08, 2014 06:20 PM

Two Factor Authentication is currently the bright and shining star that everyone, from those who offer ‘free’ services to those who offer high value services, wants to know and emulate. When designing such implementations, it is important to understand the implications to identity assurance if the two-factor implementation does not correctly incorporate the principles of multi-factor authentication.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

The opinions expressed here are my own and do not represent my employer’s view in any way.

November 07, 2014

Julian BondSaccades and LED lights. [Technorati links]

November 07, 2014 06:37 PM

Paul MadsenApplication unbundling & Native SSO [Technorati links]

November 07, 2014 04:33 PM
You used to have a single application on your phone from a given social provider; you likely now have multiple.

Where there was Google Drive, there are now Sheets, Docs, and Slides, each an individual application optimized for a particular document format.

Where the chat function used to be a tab within the larger Facebook application, there is now Facebook Messenger, a dedicated chat app.

LinkedIn has 4 individual applications.

The dynamic is not unique to social applications.

According to this article:
Mobile app unbundling occurs when a feature or concept that was previously a small piece of a larger app is spun off on its own with the intention of creating a better product experience for both the original app and the new stand-alone app.
The unbundling trend seems mostly driven by the constraints of mobile devices: multiple functions hidden behind tabs may work in a desktop browser, but on a small screen they are easily buried, accessible only through scrolling or tapping.

That was the stated justification for Facebook's unbundling of Messenger
We wanted to do this because we believe that this is a better experience. Messaging is becoming increasingly important. On mobile, each app can only focus on doing one thing well, we think. The primary purpose of the Facebook app is News Feed. Messaging was this behavior people were doing more and more. 10 billion messages are sent per day, but in order to get to it you had to wait for the app to load and go to a separate tab. We saw that the top messaging apps people were using were their own app. These apps that are fast and just focused on messaging. You're probably messaging people 15 times per day. Having to go into an app and take a bunch of steps to get to messaging is a lot of friction.
Of course, unbundling clearly isn't for everybody ....

I can't help but think about unbundling from an identity angle. Do the math: if you break a single application up into multiple applications, then what was a single authentication & authorization step becomes multiple such steps. And, barring some sort of integration between the unbundled applications (where one application could leverage a 'session' established for another), this would mean the user having to log in explicitly to each and every one of those applications.

The premise of 'one application leveraging a session established for another' is exactly what the Native Applications (NAPPS) WG in the OpenID Foundation is enabling in a standardized manner. NAPPS is defining 1) an extension and profile of OpenID Connect by which one native application (or the mobile OS) can request a security token for some other native application, and 2) mechanisms by which the individual native applications can request and return such tokens.
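
To make the single sign-on idea concrete, here is a toy sketch in JavaScript. All names here are made up for illustration; the actual NAPPS profile defines this at the OpenID Connect protocol level, not as a local library. The idea is a per-device "token agent" that holds one primary session and hands out per-app tokens, so each unbundled app skips its own login:

```javascript
// Toy model of an on-device token agent (names are illustrative,
// not from the NAPPS spec). The agent holds one primary session
// and mints a token per requesting app, caching each result so
// repeated requests don't trigger repeated "logins".
function makeTokenAgent(primaryToken) {
  var cache = {};
  return {
    tokenFor: function (appId) {
      if (!cache[appId]) {
        // In a real deployment this step would call the OpenID
        // Provider's token endpoint; here we just derive a
        // placeholder value from the primary session.
        cache[appId] = primaryToken + ':' + appId;
      }
      return cache[appId];
    }
  };
}

var agent = makeTokenAgent('session-abc');
console.log(agent.tokenFor('com.example.sheets'));
// session-abc:com.example.sheets
console.log(agent.tokenFor('com.example.docs'));
// session-abc:com.example.docs
// A second request for the same app reuses the cached token,
// which is the "one login, many apps" property NAPPS is after.
console.log(agent.tokenFor('com.example.sheets') === agent.tokenFor('com.example.sheets'));
// true
```

The user authenticates once to establish the primary session; every unbundled app then gets its own token without a fresh login prompt.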

Consequently, NAPPS can mitigate (at least one of) the negative implications of unbundling.

The logical end state of the trend towards making applications 'smaller' would appear to be applications that are fully invisible, i.e., those that the user doesn't typically launch by clicking on an icon, but rather receives interactive notifications & prompts from only when relevant (as determined by the application's algorithm). What might the implications of such invisible applications be for identity UX?

Rakesh RadhakrishnanESA embedded in EA [Technorati links]

November 07, 2014 12:57 AM
Similar to "Secure by Design" or "Privacy Baked In", to me no Enterprise Architecture (EA) initiative can succeed without a solid Enterprise Security Architecture (ESA) in place. An ESA is also driven by business direction and strategy, and takes business risk as the driving force for identifying an "as-is" state and an "aspired" state. ESA focuses on security, data privacy, incident-response modernization and optimization, compliance, and more, while EA focuses more on business-process modernization, business applications, and the relevant IT infrastructure (private and public cloud). All the systems modernization programs, next-generation SDLC efforts, data center optimizations, and other initiatives driven by an EA effort rely heavily on the foundation set up by an ESA. An ESA in turn relies on the EA program, especially the enterprise data architecture (driven by enterprise-wide MDM and Big Data initiatives), to identify and classify high-risk and medium-risk data and their respective data flows. Therefore a successful EA team will comprise specialist architects focused on cloud/infrastructure EA, ESA, enterprise data architecture, enterprise application architecture, enterprise integration architecture, and more, who work as a team and collaborate extensively (collaboration leading to innovative ways of integration). Here is an excellent white paper describing the synergies of EA (TOGAF 9) and ESA (SABSA). Adopting an integrated methodology, such as TOGAF with SABSA or TOGAF ADM with SEI ADDM (for secure SDLC), is critical, as each methodology is focused on one domain (SEI ADDM for secure SDLC, TOGAF for EA, SABSA for ESA, ITIL for enterprise service management, OMG MDA for enterprise data and metadata architecture, Oracle's EA framework for enterprise information architecture, and more).
This paper is one that I authored in 2006 and that speaks to these integrated views; I had just earned my Executive Master's in IT Management from the University of Virginia, along with my TOGAF certification as an EA and SEI certification as a software architect. Sun Microsystems also invested heavily in training its battalion of employees on Six Sigma (what was then referenced as Sun Six Sigma), along with ITIL and PRINCE2. I wanted to align these tools and techniques so that they made sense when utilized together. The paper also aligns these methodologies for EA, ESA, enterprise software architecture, and more.
November 06, 2014

Rakesh RadhakrishnanInvesting in Systemic Security - An enabler or an impediment [Technorati links]

November 06, 2014 03:39 PM
An enterprise in any industry today can function as a business if and only if:
a) it can protect the intellectual property that acts as its core competitive differentiator;
b) it can survive a disaster, such as an earthquake, not just via collected insurance money but through continued operations;
c) it can safely and compliantly extend to cloud computing models to derive the economies of scale promised by clouds;
d) it can maintain the confidentiality and privacy of its data (the reputational damage caused by a single data breach can kill a business completely); and
e) it can ensure the uptime and availability of its transactional site (its internet e-commerce presence) and of its communication and collaboration tools (again, over the internet).
Therefore, if a business entity is to survive and thrive in today's world, it is an oxymoron to see "security (and security investments) as an impediment to business." To me, investing in security is investing in the "quality" aspects of a business, and hence it has always been, to me, a true enabler. Investing in my health and immune system enables me to be more productive physically and mentally, which in turn helps me personally and professionally. The same is true of IT security investments: anywhere between 0.5% and 1% of a business entity's revenue is a reasonable annual IT security budget (for example, $100M to $200M for a $20 billion business entity), when a typical enterprise spends about 5% on IT as a whole.
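
As a quick sanity check on that rule of thumb, here is a back-of-the-envelope calculation (the helper function is hypothetical, just illustrating the arithmetic):

```javascript
// Hypothetical helper: the 0.5%-1% of revenue rule of thumb for an
// annual IT security budget, applied to a given revenue figure.
function securityBudgetRange(revenue) {
  return {
    low: revenue * 0.005,  // 0.5% of revenue
    high: revenue * 0.01   // 1% of revenue
  };
}

// For a $20 billion business entity:
var range = securityBudgetRange(20e9);
console.log(range.low);  // 100000000  ($100M)
console.log(range.high); // 200000000  ($200M)
```

That matches the $100M-$200M range quoted above, and is a fifth to a tenth of the roughly 5%-of-revenue overall IT spend of a typical enterprise.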
In addition to investing prudently with a systemic security architecture (a topic for another blog post), it's equally important to make an organization's culture (every single employee) security conscious (also a topic for another blog post).