July 23, 2014

Mythics - Review of the ZS3 Storage Appliances: Incredible Performance and Efficiencies [Technorati links]

July 23, 2014 08:39 PM

As you know, the Oracle Storage product line has recently undergone some major updates with several significant technology upgrades from the older ZFS Series.


Mike Jones - Microsoft: OAuth Assertions specs describing Privacy Considerations [Technorati links]

July 23, 2014 07:19 PM

Brian Campbell updated the OAuth Assertions specifications to add Privacy Considerations sections, responding to area director feedback. Thanks, Brian!

The specifications are available at:

HTML formatted versions are also available at:

Ian Glazer: Do we have a round wheel yet? Musings on identity standards (Part 1) [Technorati links]

July 23, 2014 06:18 PM

Why do humans continually reinvent what they already have? Why is it that we take a reasonably functional thing and attempt to rebuild it, and in doing so render that reasonably functional thing non-functional for a while? This is a familiar pattern. You have a working thing. You attempt to “fix” it and in doing so break it. You then properly fix it and end up with a slightly more functional thing.

Why is it that we reinvent the wheel? Because eventually, we get a round one. Anyone who has worked on technical standards, especially identity standards, recognizes this pattern. We build reasonably workable standards only to rebuild and recast them a few years later.

We do this not because we develop some horrid allergy to angle brackets – an allergy that can only be calmed by mustache braces. This is not why we reinvent the wheel, why we revisit and rebuild our standards. Furthermore, revisiting and rebuilding standards isn’t simply a “make-work” affair for identity geeks. Nor is it an excuse to rack up frequent flyer miles.

Identity in transition

We reinvent the wheel because the tasks demanded of those wheels change. In IAM, the shift from SOA, SOAP, and XML to little-s services, REST, and JSON was profound, and we had to stay contemporary with the way the web and developers worked. In this case, the technical load that our IAM wheels had to carry changed.

But there is a more profound change to the tasks we must perform and the loads we must transport and it too will require us to examine our standards and see if they are up to the task.

It used to be that enterprise IAM was concerned with answering: did the right people get the right access? But that is increasingly not the relevant question. The question we must answer is: did the right people get the right experience? And not just the right people but also the right “things” – did they get the experience (or data) they needed at the right time?

There is another transition underway, closely related to IAM’s transition from delivering and managing access to delivering and managing experience. We are being asked to haul more and different identities.

We are pretty good as an industry at managing a reasonable number of identities, each with a reasonable number of attributes. Certainly, what is “reasonable” has increased over the years, and it is fairly safe to say that a few million identities in a directory is no longer a big deal.

But how well will we handle things? Things will have relatively few attributes. Things will produce a data stream that is really interesting, even though their own attributes might not be. And, needless to say, there will be a completely unreasonable number of them: 20 billion? 50 billion? A whole lot of billions.

The transition of IAM isn’t just from managing the identities of carbon-based life forms to silicon ones. This transition also includes relationships. Today we are okay at managing a few relationships, each with very few attributes. But what we as an industry must do is manage a completely unreasonable number of relationships between an unreasonable number of things, where each of these relationships has a fair number of attributes of its own.

That, my friends, is a heavy load to haul. And so it is worth spending a little time considering if our identity standards wheels are round. Let’s look at 4 different areas of IAM to see if we have round wheels:

  1. Authentication
  2. Authorization
  3. Attributes
  4. User provisioning


Overall, I’d say the authentication wheel is round. We’ve got multiple protocols, multiple standards, which is a reflection of both the complexity and the maturity of the problem. OpenID Connect needs a few more miles on the road, but that doesn’t mean you shouldn’t use it today. Expect new profiles over time, but you can certainly get going now. And where OpenID Connect cannot take you, trusty SAML still can.

Although authentication is okay, representing assurance isn’t. I wonder if we need to harmonize levels of assurance; I also wonder if this is even possible. Knowing that a person was proofed and how they were authenticated is nice, but as Mark Diodati will be the first to tell you, deployment matters. You can deploy a strong-auth technology poorly and thus transform it into a weak-auth system. So knowing that your LOA 3 is equivalent to my LOA 2.25 might not be useful. More importantly, I wonder how small and medium-sized organizations, those without a resident identity dork, figure out what LOA to require, what trust framework to use, and how to proceed. This, to me, seems like a place for the IDESG and its ilk.

And although the authentication wheel is round, that doesn’t mean it is without its lumps. First, we do see some reinventing of the wheel just for its own sake: OAuth A4C is simply not a fruitful activity and should be put down. Second, the fact that password vaulting still exists at this point in history is an embarrassment. To be clear, I am not saying that password-vaulting solutions and vendors are an embarrassment. The fact that we still need to password vault is IAM’s collective shame.

We have had workable authentication standards for many years and yet we still password vault. It means that identity vendors have not done enough to enable service providers. It means that service providers still exist who do not want to operate in the best interest of their enterprise customers. At a minimum, those service providers must offer a standards-based approach to authentication (and user provisioning would be nice too).

Let me be crystal clear: if your service provider doesn’t support identity standards, that service provider is not acting in your best interest. Period.

The existence of password vaulting also means that organizations haven’t been loud enough in their demands for a better login experience. Interestingly enough, I think the need for a mobile-optimized authentication experience will force service providers’ hands.

I know we are all trying to kill the password but I think a more reasonable, more achievable, and more effective goal is to eliminate the need for password vaulting through the use of authentication and federated SSO standards. By 2017, if I am still saying this, our industry has failed.


Authorization’s wheel is simultaneously over-inflated and flat. You can’t talk about authZ without talking about XACML. XACML can do anything; it really is an amazing standard. But the problem with things that allow you to do anything is that they tend to make it hard to do anything. My recommendation to the industry is to focus on the policy tools and the PAPs, not the core protocol. The XACML TC knows it needs to stay contemporary: the work on the JSON and REST bindings is a great start toward making XACML more relevant for the modern web.
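To make that concrete, here is a sketch of what an authorization request looks like under the XACML JSON binding. The category names follow the profile's shorthand; the abbreviated attribute IDs and values are illustrative, not normative.

```python
import json

# A minimal authorization request in the spirit of the XACML JSON
# profile. The attribute IDs below are shortened for illustration;
# real deployments use full URN-style identifiers.
request = {
    "Request": {
        "AccessSubject": {
            "Attribute": [{"AttributeId": "subject-id", "Value": "alice"}]
        },
        "Resource": {
            "Attribute": [{"AttributeId": "resource-id", "Value": "/accounts/42"}]
        },
        "Action": {
            "Attribute": [{"AttributeId": "action-id", "Value": "read"}]
        },
    }
}

# A PEP would POST this JSON to the PDP's REST endpoint and get back
# a Decision: Permit, Deny, NotApplicable, or Indeterminate.
payload = json.dumps(request)
```

Compare this with the XML encoding of the same request, and the appeal for web developers is obvious.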

What about OAuth? Certainly OAuth can be used to represent the output of authorization decisions. But to do this, in some sense, requires diving into the semantics of scopes. It requires that your partners understand what your scopes mean. Understanding the semantics of scopes isn’t a horrible requirement, but it does mean service providers have to invest time in it.
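A toy sketch of the point about scope semantics: the token only carries strings, so the resource server must hold its own mapping from each scope to a concrete permission, and partners must agree on that mapping. All names here are made up for illustration.

```python
# The scope strings in a token mean nothing by themselves; the
# resource server supplies the semantics. Illustrative names only.
SCOPE_MEANINGS = {
    "accounts:read": "may list and view account records",
    "accounts:write": "may create and modify account records",
}

def is_authorized(granted_scopes, required_scope):
    """Authorization reduces to two questions: did the token carry
    the scope, and do both parties agree on what that scope means?"""
    return required_scope in granted_scopes and required_scope in SCOPE_MEANINGS
```

If a partner interprets `accounts:read` differently than you do, no protocol machinery catches that mismatch, which is exactly the investment of time the paragraph above describes.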

What about UMA? It definitely holds promise, especially when we consider the duties of all the parties involved in managing and enforcing access to resources. I really like the idea of a standard that has a profile describing the duties of the actors separate from the wireline protocol description. UMA definitely needs more miles on the road, and to be perfectly honest I still have a hard time understanding it in an enterprise context. Maybe now that Eve is coming back to the product world, the community will get more UMA awesomeness.

There is another thing to think about as we study the roundness of the authorization wheel. Knowing that the load we will have to carry is a heavy one, and one that includes “things”, I think we need to consider how those “things” can make decisions with more autonomy. How can our authorization systems make authorization decisions closer to the place of use at the time of use? I believe we need actionable relationships. Actionable relationships allow a thing or a human agent to do something on my behalf without consulting a backend service, which is very important in the IoT world. For more on actionable relationships, you can check out my talk on the Laws of Relationships.
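One way to picture an actionable relationship (this is my illustration, not a mechanism from the talk): the backend issues a signed capability once, and the device can then verify it locally at the time of use, with no call home. All names and the key-distribution scheme are assumptions for the sketch.

```python
import hashlib
import hmac
import json
import time

# Illustrative shared secret, provisioned to the device out of band.
SHARED_KEY = b"device-provisioning-secret"

def issue_capability(subject, action, ttl_s):
    """Backend side: mint a capability the device can check offline."""
    claims = {"sub": subject, "act": action, "exp": time.time() + ttl_s}
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def device_allows(cap, action):
    """Device side: verify signature, action, and expiry locally."""
    body = json.dumps(cap["claims"], sort_keys=True).encode()
    good = hmac.compare_digest(
        cap["sig"], hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    )
    return good and cap["claims"]["act"] == action and cap["claims"]["exp"] > time.time()
```

The point is architectural, not cryptographic: the relationship itself carries enough state that the decision can happen at the edge.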

Tomorrow I’ll post the rest of the talk and hopefully by Friday the video of it will be available as well.

Mike Jones - Microsoft: JWK Thumbprint spec incorporating feedback from IETF 90 [Technorati links]

July 23, 2014 03:11 PM

I’ve updated the JSON Web Key (JWK) Thumbprint specification to incorporate the JOSE working group feedback on the -00 draft from IETF 90. The two changes were:

If a canonical JSON representation standard is ever adopted, this specification could be revised to use it, resulting in unambiguous definitions for those values (which are unlikely to ever occur in JWKs) as well. (Defining a complete canonical JSON representation is very much out of scope for this work!)
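For readers who haven't seen the draft, the thumbprint computation itself is simple: hash a canonical JSON object containing only the required members of the key, sorted lexicographically, with no whitespace. A minimal sketch using SHA-256 (the required-member sets below follow the draft's per-key-type rules):

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk):
    """Compute a JWK thumbprint: SHA-256 over the canonical JSON of
    the key's required members, base64url-encoded without padding."""
    required = {
        "RSA": ("e", "kty", "n"),
        "EC": ("crv", "kty", "x", "y"),
        "oct": ("k", "kty"),
    }[jwk["kty"]]
    # Canonical form: required members only, lexicographic order,
    # no whitespace between tokens.
    canonical = json.dumps(
        {k: jwk[k] for k in required},
        sort_keys=True,
        separators=(",", ":"),
    ).encode("utf-8")
    digest = hashlib.sha256(canonical).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

Because only the required members are hashed, optional members like `alg` or `kid` never change the thumbprint, which is what makes it usable as a stable key identifier.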

The specification is available at:

An HTML formatted version is also available at:

Kuppinger Cole: Operation Emmental: another nail in the coffin of SMS-based two-factor authentication [Technorati links]

July 23, 2014 11:17 AM
By Alexei Balaganski

On Tuesday, security company Trend Micro unveiled a long and detailed report on “Operation Emmental”, an ongoing attack on online banking sites in several countries around the world. This attack is able to bypass the popular mTAN two-factor authentication scheme, which uses SMS messages to deliver transaction authorization numbers. Few details about the scale of the operation have been revealed, but apparently the attack was first detected in February and has affected over 30 banking institutions in Germany, Austria, Switzerland, as well as Sweden and Japan. The hackers supposedly got away with millions stolen from both consumer and commercial bank accounts.

Now, this is definitely not the first time hackers have defeated SMS-based two-factor authentication. Trojans designed to steal mTAN codes directly from mobile phones first appeared in 2010. Contrary to popular belief, these Trojans do not target only Android phones: in fact, the most widespread one, ZeuS-in-the-Mobile, has been discovered on various mobile platforms, including Android, Symbian, BlackBerry and Windows Mobile. In 2012, an attack campaign dubbed “Eurograbber” successfully stole over 36 million euros from banks in Italy, Spain and the Netherlands. Numerous smaller-scale attacks have been uncovered by security researchers as well. So, what exactly is new and different about the Emmental attack?

First, it’s necessary to explain in a few words how a typical attack like Eurograbber actually works.

  1. Using traditional methods like phishing emails or compromised web sites, hackers lure a user to click a link and download a Windows-based Trojan onto their computer. This Trojan will run in the background and wait for the user to visit their online banking site.
  2. As soon as the Trojan detects a known banking site, it will inject its own code into the web page. This code can, for example, display a “security advice” instructing the customer to enter their mobile phone number.
  3. As soon as the hackers have a phone number, an SMS message with a link to a mobile Trojan is sent to it and the customer is instructed to install the malicious SMS-grabbing app on their phone.
  4. With both the customer’s online banking PIN and the SMS TAN, hackers can easily initiate a fraudulent transaction, transferring money out of the customer’s account.

It’s quite obvious that such a scheme can only work when both PC and mobile Trojans operate in parallel, coordinating their actions through a C&C server run by hackers. This means that it can also be relatively easily disrupted simply by using an antivirus, which would detect and disable the Trojan. Another method is deploying special software on the banking site, which detects and prevents web page injections.

The hackers behind the Emmental attack are using a different approach. Instead of delivering a Trojan to a customer’s computer, they are using a small agent that masks as a Windows updater. Upon start, this program makes changes to local DNS settings, replacing IP addresses of known online banking sites with the address of a server controlled by hackers. Additionally, it installs a new root SSL certificate, which forces browsers to consider this hacked server a trusted one. After that, the program deletes itself, leaving no traces of malware on the computer.

The rest of the attack is similar to the one described above, but with a twist: the user never connects to the real banking site again; all communications take place with the fraudulent server. This deception can continue for a long time, and only after receiving a monthly statement from the bank would the user find out that their account has been emptied.
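Since the Emmental agent tampers with local name resolution before deleting itself, one cheap defensive check is to look for overrides of known banking domains in local name-resolution configuration. A minimal sketch over a hosts-file-style text (the domain list is illustrative; a real check would also inspect the configured DNS servers and installed root certificates, which this attack also touches):

```python
# Scan hosts-file-style text for locally pinned banking domains.
# Any hit means a known domain is being resolved to a fixed IP,
# which matches the tampering this attack performs.
BANK_DOMAINS = {"onlinebanking.example-bank.com"}  # illustrative list

def find_hosts_overrides(hosts_text, watched):
    """Return (ip, hostname) pairs where a watched domain is pinned."""
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        parts = line.split()
        if len(parts) < 2:
            continue
        ip, names = parts[0], parts[1:]
        for name in names:
            if name.lower() in watched:
                hits.append((ip, name))
    return hits
```

This is detection after the fact, of course; it does nothing against the rogue root certificate, which is the half of the attack that makes the fraudulent server look legitimate.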

In other words, while Emmental is not the first attack on mTAN infrastructure, it’s an important milestone demonstrating that hackers are actively working on new methods of defeating it, and that existing solutions that are supposed to make banks more resilient against this type of attack are much less effective than believed. SMS-based two-factor authentication has been compromised and should no longer be considered a strong authentication method. The market already offers a broad range of solutions from smartcards and OTP tokens to Mobile ID and smartphone apps. It’s really time to move on.

July 21, 2014

Kaliya Hamlin - Identity Woman: Resources for HopeX Talk [Technorati links]

July 21, 2014 03:11 PM

I accepted an invitation from Aestetix to present with him at HopeX (10).

It was a follow-on talk to his Hope 9 presentation that was on #nymwars.

He is on the volunteer staff of the HopeX conference and was on the press team that helped handle all the press that came for the Ellsberg-Snowden conversation that happened mid-day Saturday. It was amazing and ran over an hour, so our talk, already scheduled for 11pm (yes), ended up starting at midnight.

Here are the slides for it. I modified them enough that they make sense if you just read them. My hope is that we explain NSTIC, how it works, and the opportunity to get involved to actively shape the protocols and policies.

Hope x talk from Kaliya
I am going to put the links about joining the IDESG up front, because that was our intention in giving the talk: to encourage folks coming to HopeX to get involved, to ensure that the technologies and policies allow citizens to use verified identity online when it is appropriate, and, most importantly, to make SURE that the freedom to be anonymous and pseudonymous online is preserved.
This image is SOOO important I'm pulling it out and putting it here in the resources list.


There are only about 100 active people within the organization known as the Identity Ecosystem Steering Group, which was called for in the National Strategy for Trusted Identities in Cyberspace, published by the White House and signed by President Obama in April 2011, and which in turn originated from the Cyberspace Policy Review done just after he came into office in 2009. Here is the website for the National Program Office.

The organization's website is here:  ID Ecosystem - we have just become an independent organization.

My step by step instructions How to JOIN.

Information on the committees - the one that has the most potential to shape the future is the Trust Framework and Trust Mark Committee


From the Top of the Talk

Links to us:
Aestetix -  @aestetix Nym Rights
Kaliya - @identitywoman  -  my blog identitywoman.net

Aestetix - background + intro #nymwars from Hope 9

     Aestetix's links will be up here within 24h
We mentioned Terms and Conditions May Apply, which follows Mark Zuckerberg at the end.

Kaliya  background + intro

I have had my identity woman blog for almost 10 years  as an Independent Advocate for the Rights and Dignity of our Digital Selves. Saving the world with User-Centric Identity

In the early 2000’s I was working on developing distributed Social Networks  for Transformation.
I got into technology via Planetwork and its conference in 2000 themed: Global Ecology and Information Technology.  They had a think tank following that event and then published in 2003 the Augmented Social Network: Building Identity and Trust into the Next Generation Internet.
The ASN, and the idea that user-centric identity based on open standards was essential, all made sense to me: the future of identity online, our freedom to connect and organize, would be determined by the protocols. The future is socially constructed and we get to MAKE the protocols . . . and without open protocols for digital identity, our IDs will be owned by commercial entities, which is the situation we are in now.
Protocols are political. This book articulates why: Protocol: How Control Exists after Decentralization by Alexander R. Galloway. I excerpted key concepts of Protocol on my blog in my NSTIC Governance Notice of Inquiry.
I co-founded the Internet Identity Workshop in 2005 with Doc Searls and Phil Windley. We are coming up on number 19 the last week of October in Mountain View, and number 20 the third week of April 2015.
I founded the Personal Data Ecosystem Consortium in 2010 with the goal of connecting start-ups around the world building tools for individuals to collect, manage, and get value from their personal data, along with fostering ethical data markets. The World Economic Forum has done work on this (I have contributed to it) with their Rethinking Personal Data project.
I am shifting out of running PDEC to be Co-CEO, with my partner William Dyson, of a company in the field, The Leola Group.


Aestetix and I met just after his talk at HOPE 9, around the #nymwars (we were both suspended).
So where did NSTIC come from? The Cyberspace Policy Review in 2009 just after Obama came into office.
Near-Term Action Plan:
#10 Build a cybersecurity-based identity management vision and strategy that addresses privacy and civil liberties interests, leveraging privacy-enhancing technologies for the Nation.
Mid-Term Action Plan:
#13 Implement, for high-value activities (e.g., the Smart Grid), an opt-in array of interoperable identity management systems to build trust for online transactions and to enhance privacy.
NSTIC was published in 2011: Main Document - PDF  announcement on White House Blog.
Trust Frameworks are at the heart of what they want to develop to figure out how things will work.
MY POST the Trouble with Trust and the Case for Accountability Frameworks.
What will happen with results of this effort?
The Obama Administration just outlined the Cyber Security Framework (paper). NSTIC is not discussed in the framework itself – but both it and the IDESG figure prominently in the Roadmap that was released as a companion to the Framework. The Roadmap highlights authentication as the first of nine different high-priority “areas of improvement” that need to be addressed through future collaboration with particular sectors and standards-developing organizations.

The inadequacy of passwords for authentication was a key driver behind the 2011 issuance of the National Strategy for Trusted Identities in Cyberspace (NSTIC), which calls upon the private sector to collaborate on development of an Identity Ecosystem that raises the level of trust associated with the identities of individuals, organizations, networks, services, and devices online.

The National Program Office was launched in January 2012, and Jeremy Grant leads it. You can read Commerce Secretary Locke’s comments at the announcement at Stanford.
I wrote this article just afterwards: National! Identity! Cyberspace! Why we shouldn't Freak out about NSTIC   (it looks blank - scroll down).
Aaron Titus wrote a similar post explaining more about NSTIC, relative to the concerns arising online that this is a National ID.
Staff for National Program Office

They put out a Notice of Inquiry to figure out how this ecosystem should be governed.

Many people responded to the NOI - here are all of them.

I wrote a response to the NSTIC Notice of Inquiry about governance. It covers much of the history of the user-centric community and my vision of how to grow consensus. Most important for my NSTIC candidacy are the chapters about citizen engagement in the systems, co-authored with Tom Atlee, author of The Tao of Democracy and the just-published Empowering Public Wisdom.

The NPO hosted a workshop on Governance, and another on Privacy, where they invited me to present on the Personal Data Ecosystem. The technology conference got folded into IIW in the fall of 2011.

O’Reilly Radar called it The Manhattan Project for online identity.

The National Program Office published a proposed:

Charter for the  IDESG Organization

ByLaws  and Rules of Association for the IDESG Organization

Also what committees should exist and how it would all work in this webinar presentation.  The Recommended Structure is on slide 6.  They also proposed a standing committee on privacy as part of the IDESG.

THEN (because they were so serious about private-sector leadership) they published a proposed 2-year work plan, BEFORE the first plenary meeting in Chicago in August 2012.

They put out a bid for a Secretariat to support the forthcoming organization and awarded it to a company called Trusted Federal Systems.
The plenary was and is open to anyone and any organization from anywhere in the world. You can join by following the steps in my blog post about it.
At the first meeting in August 2012, the management council was elected, and the committees they had decided ahead of time should exist held their first meetings.
The committees - You can join them - I have a whole post about the committees so you can adopt one.

Nym Issues!!!

So after the #nymwars it seemed really important to bring the issues around Nym rights into NSTIC and the IDESG. They were confused, even though their bylaws provide for new committees. I supported Aestetix in writing a charter for a new committee; I read it at the plenary in November 2012, and he attended the February 2013 Plenary in Phoenix. I worked with several other Nym folks to attend the meeting too.
They suggested that NymRights was too confrontational a name, so we agreed that Nym Issues would be fine. They also wanted to make sure that it would become just a sub-committee of the Privacy Committee.
It made sense to organize "outside" the organization so we created NymRights.
Basically the committee and its efforts have been stalled in limbo.
        Aestetix's links will be up here within 24h

The Pilot Grants from the NPO

Year 1 - announcement about the FFO, potential applicant webinar, announcement about all the grantees, and an FAQ.

Year 2 - announcement about the FFO, potential applicant webinar, announcement about the grantees.

Year 3 - ? announcement about FFO - grantees still being determined.

Big Issues with IDESG

Diversity and Inclusion

I have been raising these issues since the organization's inception (pre-inception, in fact; I wrote about them in my NOI response).

I was unsure if I would run for the management council again. I wrote a blog post about these concerns that apparently made the NPO very upset. I was subsequently "uninvited" to the International ID Conference they were hosting at the White House Conference Center for other western liberal democracies trying to solve these problems.

Tech President Covered the issues and did REAL REPORTING about what is going on.  In Obama Administration's People Powered Digital Security Initiative, There's Lots of Security, Fewer People.

This is in contrast to a wave of hysterical posts about National Online ID pilots being launched.

The IDESG has issues with how the process happens. It is super TIME INTENSIVE. It is not well designed so that people with limited time can get involved. We have an opportunity to change things by becoming our own organization.

The 9th Plenary schedule can be seen here. There was a panel on the first day with representatives who said that people like them, and others from different communities, needed to be involved AS the policy is made. Representatives from these groups were on the panel, and it was facilitated by Jim Barnett from the AARP.

The Video is available online.


The organization is shifting from being a government initiative to being one that is its own independent organization.

The main work where the TRUST FRAMEWORKS are being developed is in the Trust Framework and Trust Mark Committee. You can see their presentation from the last committee meeting here.


Key Words & Key Concepts from the Identity Battlefield


What is Identity? It's Socially Constructed and Contextual

Identity is Subjective

Aestetix's links will be up here within 24h

What are Identifiers?: Pointers to things within particular contexts.

Abrahamic Cultural Frame for Identity / Identifiers

Relational  Cultural Frame for Identity / Identifiers

What does Industry mean when it says "Trusted Identities"?

What is Verified?

Verified ID in the context of the Identity Spectrum : My post about the spectrum.


In Conclusion: HOPE!

We won the #nymwars!

Links to Google's apology.

Skud's The Apology We Hoped For.

More of Aestetix's links will be up here within 24h

The BC Government's Triple Blind System

Article about the system they have created and the citizen engagement process to get citizen buy-in, with 36 randomly selected citizens developing future policy recommendations for it.

Article about what they have rolled out in Government Technology.

Join the Identity Ecosystem Steering Group

Get engaged in the process to make sure we maintain the freedom to be anonymous and pseudonymous online.

Attend the next  (10th) Plenary in mid-September in Tampa at the Biometrics Conference

Join Nym Rights group.


Come to the Internet Identity Workshop

Number 19 - Last week of October - Registration Open

Number 20 - Third week of April







Katasoft: Hosted Login for Modern Web Apps [Technorati links]

July 21, 2014 03:00 PM

Hosted Login from Stormpath

It’s no big secret: if you’re not using SaaS products to build your next great app, you’re wasting a lot of time.

Seasoned web developers have learned to solve common (i.e. annoying) problems with packaged solutions. If you’re really badass, your latest app is a symphony of amazing services, not a monolithic codebase that suffers from Not Invented Here.

But I’m gonna put money on this: you’re still building your login and registration forms from scratch and maintaining your own user database.

Why do we build login from scratch?

I have a few hypotheses on this, but one always seems to be true: user systems are the first thing we build after we master the Todo demo app. It’s fun, it’s a feature, and we feel like we’ve accomplished something. Eventually we learn that there are a lot of things you can get wrong:

I could go on, but you already know. We commit these sins in the spirit of Ship It!.

Sometimes we use a framework like Rails, Express or Django and avoid most of these pitfalls by using their configurable user components. But we’re trying to get to App Nirvana: we want fewer concrete dependencies, less configuration, and fewer resources to provision.

Login as a Service

What if you could send your user to a magical place, where they prove their identity and return to you authenticated?

Announcing Hosted Login – our latest offering from Stormpath!

With Hosted Login you simply redirect the user to a Stormpath-hosted login page, powered by our ID Site service. We handle all the authentication and send users back to your application with an Identity Assertion. This assertion contains all the information you need to get on with your business logic.

And the best part? Very minimal contact with your backend application. In fact, just two lines of code (using our SDKs):

And with that.. your entire user system is now completely service-oriented. No more framework mashing, no more resource provisioning. Oh, did we mention it’s beautiful as well? That’s right: if you don’t want to do any frontend work either, you can just use our default screens:


What problems does it solve?

Hosted Login solves a lot of the problems that are sacrificed in the name of Ship It, plus a few you may not have thought of:


While we provide default screens for hosted login, you can fully customize your user experience. Just create a Github repository for your ID Site assets and give us the Github URL! We’ll import the files into our CDN for fast access and serve your custom login pages instead of our default.

To customize your hosted login pages, you’ll want to use Stormpath.js, a small library that I’ve written just for this purpose. It gives you easy access to the API for ID Site and at ~5k minified it won’t break the bank.

For more information on this feature please refer to our in-depth document: Using Stormpath’s ID Site to Host your User Management UI

We’d love to know how you find Hosted Login! Feel free to tweet us at @gostormpath or contact me directly via robert@stormpath.com

Courion: Extending IAM into the Cloud [Technorati links]

July 21, 2014 02:12 PM

Access Risk Management Blog | Courion

Your data is everywhere. And so are your applications. In the past, everything resided in the data center; today, data and applications are stored in the cloud, hosted by a partner (MSP), and even running on mobile devices.

Your customers, partners and employees are also everywhere. As a security professional, you need to ensure that the right people have access to the right data and are doing the right things with it. That's where Intelligent Identity Access Management comes in. But in the era of cloud-computing, who knows where the data physically resides? And with users and accounts spread around the globe, how can you ensure the data is being accessed by the right people, according to your policies? Again, that's where Intelligent Identity Access Management is crucial.

If your data were centrally located and accessed only by individuals and devices that you manage, traditional IAM solutions would work well. But that's probably not the case. You have data in internal and outsourced systems. Some of the outsourced systems may be wholly controlled by your contracts, while others may be shared among thousands of other organizations. And that data is being accessed by employees, partners and customers from their homes, phones and tablets, on planes, trains and automobiles.

From a security perspective, it's imperative to provision, govern and monitor information access wherever that information resides and however it's being accessed, whether those are physically in your IT environment or in the cloud. So what are your options?

Options for Provisioning, Governance and Monitoring in the Cloud

Two obvious questions are "where's my IAM solution?" and "where's my data?" After all, both must reside somewhere and be secured. If we constrain the answers to those questions to "on premise" or "in the cloud", we have four options.

1. Host internally, manage internal applications

Traditional IAM solutions reside on IT managed hardware within an enterprise. They're typically located in a server room where they can be physically controlled by IT. They are configured to manage applications that also reside on servers physically controlled by IT. This is a largely closed system, with the administrative control and the application resources both co-located within IT. It makes security simpler, but in the era of cloud computing, is becoming increasingly rare.

2. Host internally, manage internal and cloud-based applications

As enterprise applications have migrated outside of the data center, the need to manage those applications has fallen to traditional IAM solutions. IAM vendors like Courion have evolved their suites to natively connect to cloud-based systems from an on premise administration point. Existing "connector libraries" have been extended to include connectors to cloud-based systems. These new connectors sit side-by-side with existing on premise connectors and reach out to cloud applications.

This evolution has been largely seamless, as the same architecture used for managing internal resources has been applied to external, cloud-based resources. The protocols change, such as using SOAP over HTTP rather than files over SMB, or RESTful web services rather than SOAP, but the architecture and techniques survive.
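
As an illustration of the RESTful style such connectors use, consider a SCIM-style user-provisioning payload (a minimal sketch; the endpoint, schema URN, and attribute values are placeholders following the SCIM 1.1 draft of the era, not Courion's actual connector API):

```python
import json

# Minimal sketch of a SCIM-style user-provisioning payload, of the kind a
# cloud connector might POST to a RESTful endpoint. All values here are
# illustrative placeholders.
def build_provision_payload(user_name, given_name, family_name, email):
    """Return the JSON body for creating a user via a REST connector."""
    return {
        "schemas": ["urn:scim:schemas:core:1.0"],
        "userName": user_name,
        "name": {"givenName": given_name, "familyName": family_name},
        "emails": [{"value": email, "primary": True}],
        "active": True,
    }

payload = build_provision_payload("jdoe", "Jane", "Doe", "jdoe@example.com")
body = json.dumps(payload)
# A connector would POST `body` to something like
# https://cloud-app.example.com/scim/v1/Users with an auth token;
# the legacy path instead wrote files to an SMB share.
```

The point is that only the transport and format change; the provisioning model (create, update, disable accounts per policy) stays the same.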

3. Host in the cloud, manage internal and cloud-based applications

Just as enterprise applications are now hosted in the cloud, there is increasing interest in hosting security systems in the cloud. This enables enterprises to focus on their core competencies rather than security and identity management, while at the same time trading capital expenditures (CapEx) for operating expenditures (OpEx).

Early experiments are promising, with IAM solutions providing tunneling capabilities from cloud-based infrastructure. Tunneling can be through VPNs, reverse proxies or dedicated appliances. Over time, this will likely become the preferred deployment option.

4. Host in the cloud; manage cloud-based applications

If an enterprise has no data in house, then a pure cloud-based solution is ideal. Operating on Office 365 + SalesForce + ADP, a cloud-based IAM solution can effectively provision and govern cloud-based applications. This scenario eliminates the complexity and cost of network tunneling solutions since everything is natively in the cloud. Here, the protocols are rapidly standardizing on RESTful web services, with common token-based security and federation. However, like the all-internal scenario, all-cloud environments are rare.
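
Token-based security in this all-cloud scenario can be sketched simply: instead of a session bound to the corporate network, every API call carries a bearer token (the URL and token below are hypothetical, and the request is only constructed, never sent):

```python
import urllib.request

# Sketch of token-based access to a cloud API: every request carries a
# short-lived bearer token in the Authorization header. The URL and
# token are placeholders; the request is built but not sent.
def build_api_request(url, access_token):
    req = urllib.request.Request(url, method="GET")
    req.add_header("Authorization", "Bearer " + access_token)
    req.add_header("Accept", "application/json")
    return req

req = build_api_request("https://api.example.com/v1/users", "example-token")
```

Because the token, not network location, proves who is calling, the same mechanism works identically for users at headquarters, at home, or on a tablet.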

Hybrid – the viable solution

Of these options, only two are typically feasible, since most organizations have some data on premise and some in the cloud. There are exceptions, such as a cloud-native startup or certain government situations, but in general a hybrid solution is required. Choosing between the 2nd and 3rd options described above, that is, whether you host your IAM solution in the cloud or internally, comes down to a deployment choice.

Courion has customers doing each. Most run our IAM solution on premise, while some deploy it in the cloud. For cloud deployments, most choose private cloud infrastructure, while some opt for public infrastructure. But the predominant approach, even in 2014, is to deploy on premise. This is chiefly because most data still resides locally, so most applications reside locally, tilting the equation toward an internally hosted IAM solution. As more enterprise applications migrate to the cloud, the decision to host the Courion suite in the cloud will likely shift as well.

Unlike enterprise data, however, people have already shifted to the cloud. Mobile devices, from phones to tablets, are the norm. Most organizations provide secure access to critical systems on a 7x24 basis, to individuals located on premise and on the go. So parts of your IAM infrastructure must be either in the cloud or on the edge (DMZ).

Again, Courion solutions are well suited for this shift. The most common security transaction, other than login, is the humble Password Reset. This must be accessible from anywhere and must be very reliable. It's required from the road, at night, on weekends and 2 minutes before the big sales presentation. Courion customers have hosted their password reset infrastructure in the DMZ for exactly this purpose. In addition, the Courion suite is tooled with a clean interface so customers, partners and employees are met with a consumer-grade experience, accessible on their laptop, tablet or phone.

As your data and apps move to the cloud, so do your identity repositories and access control models, as mentioned earlier. Your IAM solution can span both, but it's still advantageous to consolidate identities and provide a more seamless and simple sign on experience for customers, partners and employees. Enter Ping Identity, another cloud app that integrates with Courion solutions. Just as we expanded to cloud apps as they entered the business, a strong partnership allows for seamless integration with Ping to offer federation and SSO capabilities.

Single Sign On (SSO) impacts the decision of where to deploy an IAM solution. While IAM can provision, govern and monitor access to applications in cloud-based and on premise environments, SSO systems provide seamless application login and access to the user community. By coupling the flexibility of Courion's industry-leading IAM solution with the SSO and federation capabilities of Ping, organizations can manage access across all of their applications. Because both products leverage a common structure with Active Directory, the result is a great experience for the end user and a manageable system for IT.


As the computing world shifts to the cloud, with consumer-grade technology leading the enterprise, our customers, partners and employees expect great access to information. As security professionals, our job is to balance "great" access with "secure" access. We make choices every day in choosing the solutions we deploy and the infrastructure on which it resides. Courion is here to help.


Ludovic Poitou - ForgeRockWhat we build at ForgeRock… [Technorati links]

July 21, 2014 10:43 AM

Since I started working at ForgeRock, I've had a hard time explaining to my non-technical relatives and friends what we build. But those days are over.

Thanks to our Marketing department, I can now refer them to our “ForgeRock Story” video:

Filed under: Identity Tagged: ForgeRock, iam, identity, IRM, opensource, security, video

Julian BondApparently the CMax II is for sale. [Technorati links]

July 21, 2014 09:35 AM
Apparently the CMax II is for sale.

Sale details here. http://bikeweb.com/node/2909
Bike details here: http://bikeweb.com/image/tid/114

T-Max III, Volvo seat, occasional 2 seater and large luggage area. Faster (maybe!), safer, warmer, more comfortable than a conventional T-Max.

Not sure I can afford it. It's likely to be priced to reflect the work rather than cheap because it's unusual.

[from: Google+ Posts]
July 19, 2014

Eve MalerA new identity relationship [Technorati links]

July 19, 2014 06:25 PM

I’ve been writing on this blog about identity and relationships for a long time (some samples…). Now I’ve forged (see what I did there?) a new relationship, and have joined ForgeRock’s Office of the CTO. Check out my first post on the ForgeRock blog. I’m really psyched about this company and my new opportunities to make cool Identity Relationship Management progress there. And I’ve found a lot of fellow rock ‘n’ rollers and Scotch drinkers in residence too — apparently that’s something of a job requirement for me, as many of my dear friends and erstwhile colleagues at Forrester have similar habits!

My new blogging goal is to add some pointers here to my ForgeRock posts, and — hopefully — to blog here more often than I had been in recent years. (Maybe some fresh nutrition-blogging?)

See the icons in the About Me section to the right. If you’re an old friend, stay in touch, and if we haven’t met yet, you can use the links to see about forging a new online relationship.

Anil JohnWhat are KBA Metrics? [Technorati links]

July 19, 2014 01:15 PM

There is currently a discussion going on in the Identity Ecosystem Steering Group (IDESG) regarding knowledge based authentication (KBA) metrics. I am a bit unsure about what is being sought by the IDESG from a standards development organization (SDO). This blog post is an attempt at framing the questions, as I understand them, to determine if there is value here, or if it is the application of makeup to porcine livestock.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Julian Bond [Technorati links]

July 19, 2014 08:12 AM
July 18, 2014

ForgeRockThe care and feeding of online relationships [Technorati links]

July 18, 2014 06:32 PM

I’m really excited to join ForgeRock! ForgeRock is doing amazing work around identity relationship management, and relationships — secure, identity-enabled, privacy-respecting, data-sharing, network-connected — are near and dear to my heart. (You didn’t think I was talking about Tinder, did you?)

My new role involves driving innovation for the ForgeRock Open Identity Stack, and in just a few short days I’ve already had mind-blowing conversations with my new colleagues about ways we can enable and enhance lots of types of relationships through the OIS. For one, take Scott McNealy’s invocation of “who’s who, what’s what, and who gets access to what” — we know it applies to organizations needing to control access; of course, it’s critically important to every organization to achieve this goal. We’re working on extending this privilege even to consumers who use systems fueled by ForgeRock, so that these individuals have a say in their own digital-footprint destinies. (Admittedly, a Tinder-like use case did come up in discussions today…) For those of you who have followed my work on User-Managed Access for a while, yes, UMA will play a part in this story.

Having jumped in with both feet, I’m getting some chances to represent ForgeRock at events in the very near term. For starters, I’ll be speaking at SecureCIO in San Francisco this Friday, and will have the pleasure of joining my colleague Allan Foster for a talk at the Cloud Identity Summit next week. And then there’s our July 29 webinar with my alma mater Forrester Research about adding relationship management to identity — hope you’ll register and join us!

The post The care and feeding of online relationships appeared first on ForgeRock.

Kuppinger Cole05.02.2015: Cloud Compliance & Datenschutz [Technorati links]

July 18, 2014 04:50 PM
In KuppingerCole

This seminar covers the fundamental and industry-specific regulations relevant to your cloud strategy and informs you about current and future requirements for data security and data protection.

Are you responsible for planning, introducing and managing cloud services in your company? Then this seminar will answer all your open questions about compliance and data protection.

Julian BondI hate it when good services on the internet go dark and disappear. [Technorati links]

July 18, 2014 04:43 PM
I hate it when good services on the internet go dark and disappear.

There used to be a wonderful tool for exploring music space at http://audiomap.tuneglue.net/ It gathered data from last.fm and Discogs about related artists and presented it in a Java applet spider diagram.

Now it redirects to an EMI Hosting holding page and that sucks.

There's an alternative one here http://www.liveplasma.com/ that's not bad, but it's not the same.

[from: Google+ Posts]

Kuppinger Cole13.11.2014: Cloud Compliance & Datenschutz [Technorati links]

July 18, 2014 04:42 PM
In KuppingerCole

This seminar covers the fundamental and industry-specific regulations relevant to your cloud strategy and informs you about current and future requirements for data security and data protection.

Are you responsible for planning, introducing and managing cloud services in your company? Then this seminar will answer all your open questions about compliance and data protection.

Kuppinger Cole02.02.2015: Big Data für die Informationssicherheit [Technorati links]

July 18, 2014 04:22 PM
In KuppingerCole

Real-time security analytics: what to watch out for when getting started.

Get an overview of real-time monitoring using big data tools and learn how to comply with data protection regulations in the context of network monitoring.

Kuppinger Cole12.11.2014: Big Data für die Informationssicherheit [Technorati links]

July 18, 2014 04:05 PM
In KuppingerCole

Real-time security analytics: what to watch out for when getting started.

Get an overview of real-time monitoring using big data tools and learn how to comply with data protection regulations in the context of network monitoring.

Kuppinger ColeWhat’s the deal with the IBM/Apple deal? [Technorati links]

July 18, 2014 10:58 AM
In Alexei Balaganski

So, unless you’ve been hiding under a rock this week, you’ve definitely heard about the historic global partnership deal forged between IBM and Apple this Tuesday. The whole Internet has been abuzz for the last few days, discussing what long-term benefits the partnership will bring to both parties, as well as guessing which competitors will suffer the most from it.

Different publications have named Microsoft, Google, Oracle, SAP, Salesforce and even BlackBerry as the companies the deal was primarily targeted against. Well, at least for BlackBerry this could indeed be one of the last nails in the coffin, as their shares plummeted after the announcement and the trend seems to be long-term. IBM’s and Apple’s shares rose, unsurprisingly; however, financial analysts don’t seem to be too impressed (in fact, some recommend selling IBM stock). This is, however, not the point of my post.

Apple and IBM have a history of bitter rivalry. 30 years ago, when Apple unveiled its legendary Big Brother commercial, it was a tiny contender against IBM’s domination of the PC market. How times have changed! Apple has since grown into the largest player in the mobile device market, with a market capitalization several times larger than IBM’s. IBM sold its PC hardware business to Lenovo years ago and currently concentrates on enterprise software, cloud infrastructure, big data analytics and consulting. So they are no longer competitors, but can we really consider them equal partners? Apple’s cash reserves continue to grow, while IBM’s revenues have been declining over the last two years. After losing a $600M contract with the US government to AWS last year, a partnership with Apple is a welcome change for IBM.

So, what’s in this deal, anyway? In short, it includes the following:

For Apple, this deal marks a renewed attempt to get a better hold of the enterprise market. It’s well known that Apple has never been successful there, and whether that was because of ignoring enterprise needs or simply an inability to develop the necessary services in-house can be debated. This time, however, Apple is bringing in a partner with a lot of experience and a large portfolio of existing enterprise services (notorious, however, for their consistently bad user experience). Could an exclusive combination of a shiny new mobile UI with a proven third-party backend finally change the market situation in Apple’s favor? Personally, I’m somewhat skeptical: although a better user experience does increase productivity and would be a welcome change for many enterprises, we’re still far away from a mobile-only world, and UI consistency across mobile and desktop platforms is a more important factor than a shiny design. In any case, the biggest thing that matters for Apple is the possibility to sell more devices.

For IBM, the deal looks even less transparent. Granted, we do not know the financial details, but judging by how vehemently their announcement stated that they are “not just a channel partner for Apple”, many analysts suspect that reselling Apple devices could be a substantial part of IBM’s profit from the partnership. Another important point is, of course, that IBM cannot afford to maintain a truly exclusive iOS-only platform. Sure, iOS is still a dominant platform on the market, but its share is far from 100%. Actually, it is already decreasing and will probably continue to decrease in the future as other platforms gain market share. Android has been growing steadily during the last year, and it’s definitely too early to dismiss Windows Phone (remember how people were trying to dismiss Xbox years ago?). So IBM must continue to support all other platforms with products such as MaaS360 and can only rely on additional services to support the notion of iOS exclusivity. In any case, the partnership will definitely bring new revenue from consulting, support and cloud services; however, it’s not easy to say how much Apple will actually contribute to that.

So, what about the competitors? One thing that at least several publications seem to ignore is that the companies supposedly threatened by the new partnership operate in several completely different markets, and comparing them to each other is like comparing apples to oranges.

For example, Apple does not need IBM’s assistance to trump BlackBerry as a rival mobile device vendor. But applying the same logic to Microsoft’s Windows Phone platform would be a big mistake. Surely, their current share of the mobile hardware market is quite small (though not in every market: in Germany they have over 10% and growing), but to claim that Apple/IBM will drive Microsoft out of the enterprise service business is simply ridiculous. In fact, Microsoft is a dominant player there with products like Office 365 and Azure Active Directory, and it’s not going anywhere yet.

Apparently, SAP CEO Bill McDermott isn’t too worried about the deal either. SAP already offers 300 enterprise apps for the iOS platform and claims to be years ahead of its competitors in the area of analytics software.

As for Google – well, they do not make money from selling mobile devices. Everything Google does is designed to lure more users into their online ecosystem, and although Android is an important part of their strategy, it’s by no means the only one. Google services are just as readily available on Apple devices, after all.

Anyway, the most important question we should ask isn’t about Apple’s or IBM’s strategy, but about our own. Does the new IBM/Apple partnership have enough impact to make an organization reconsider its current MDM, BYOD or security strategy? The answer is obviously “no”. BYOD is by definition heterogeneous, and any solution deployed by an organization for managing mobile devices (and more importantly, access to corporate information from those devices) that’s locked to a single platform is simply not a viable option. Good design may be good business, but it is not the most important factor when the business is primarily about enterprise information management.

Kuppinger ColeExecutive View: Symantec.cloud Security Services - 70926 [Technorati links]

July 18, 2014 09:23 AM
In KuppingerCole

Symantec was founded in 1982 and has evolved to become one of the world’s largest software companies with more than 18,500 employees in more than 50 countries. Symantec provides a wide range of software and services covering security, storage and systems management for IT systems. Symantec has a very strong reputation in the field of IT security that has been built around its technology and experience. While Symantec has a wide range of security...

July 17, 2014

Kantara InitiativeOpen Standards Drive Innovation – Kantara CIS Workshop [Technorati links]

July 17, 2014 09:36 PM

We have arrived at the revolution of Identity and are ready for the next installment of the Cloud Identity Summit. 

Identity Services are converging more and more with every technology and nearly every part of our lives. Identity is moving fast, and it’s not only a technical or policy discussion anymore. As IRM notes, Identity Services are key to business development, growing revenue, and economies. But our world has also changed over the last 18 months. Privacy and Trust are under hot debate… not to mention how they factor into technology adoption. We believe that Identity tech and policy standards are core to building a platform for innovation, and not just any standards but Open Standards. With transparency, openness, and multi-stakeholderism as core values, Open Standards are key to building trusted platforms along with the more traditional national standards.

Open Standards move faster, provide new proving ground, and, ultimately, drive innovation!

We are thrilled to have industry leaders participate in our Cloud Identity Summit Workshop. Their knowledge and expertise will be shared through the CIS event and Kantara Initiative workshop.

Why should you attend? This workshop includes 3 sessions discussing open standards as a driver of marketplace innovation. Attendees will learn how leading organizations drive innovation through technology and standards development and partnerships.

Who should attend? C-Level, Managers, Directors, Technologists, Journalists, Policy Makers and Influencers

1. Open Standards Driving Innovation

Presented by: the IEEE Standards Association

Participants: Allan Foster, Vice President Technology & Standards, ForgeRock; John Fontana, PingID; Scott Morrison, SVP and Distinguished Engineer, CA Technologies; Dennis Brophy, Mentor Graphics

Abstract: Today, more than ever, open standards are core to unbounded market growth and success through innovation. Those involved in innovation systems (companies, research bodies, knowledge institutions, academia and standards developing communities) influence knowledge generation, diffusion and use, and shape global innovation capacity. As the global community strives to keep pace with technology expansion and to anticipate the technological, societal and cultural implications of this expansion, and as it faces the increasing interference of technology with economic, political and policy drivers, embracing a bottom-up, market-driven and globally open and inclusive standards development paradigm will help ensure strong integration, interoperability and increased synergies along the innovation chain across boundaries.
Globally open standardization processes and standards produced through a collective of standards bodies adhering to such principles are essential for technology advancement to ultimately benefit humanity, as the global expert communities address directly, in an open and collaborative way, such global issues of sustainability, cybersecurity, privacy, education and capacity building. Working within a set of principles that:

2. Federation Integration and Deployment of Trusted Identity Solutions

Presentations by: Lena Kannappan, 8k Miles FuGen Solutions, and Ryan Fox, ID.me

Abstract: Deployment and integration of identity federations can be a challenge, but through standards setting and innovative testing that process can move much faster and bring benefits to all parties, growing their respective markets and budget power. Industry-based development and application of standards helps set the industry levels for operations, while testing and approval marks help support rapid onboarding of partners. When identity federations and partners make use of agreed-upon Open Standards, a platform is created that allows innovative organizations to build new models and compelling services. Innovators can begin to leap into new areas, proving their business value and vitality. In this session leaders discuss:

3. Approaches to Solving Enterprise Cybersecurity Challenges

Presented by: SecureKey

Participants: Andre Boysen, EVP Marketing & Digital Identity Evangelist, SecureKey Technologies Inc.; Christine Desloges, Assistant Deputy Minister Rank, Department of Foreign Affairs, Trade and Development (DFATD); Patricia Wiebe, Director of Identity Architecture at BC Provincial Government

Abstract: There is an identity ecosystem emerging in North America that is unique in the world. It is a multi-enterprise service focused on making more meaningful services available online while at the same time making it easier for users to enroll, access and control information shared by these services. Things like easy access to online government services, opening a bank account on the internet, proving your identity for new services online, registering your child at school or participating in an education portal are becoming possible. The service model for the Internet is moving from app-centric to user-centric. The current password model of authentication needs to evolve. Every web service needs to make a choice between making its credentials stronger by adding multifactor authentication (BYOD) or partnering to get authentication from a trusted provider (BYOC – bring your own credential).

GluuSymplified… So long and thanks for all the fish! [Technorati links]

July 17, 2014 08:49 PM

Symplified Adieu

As many of you have heard, Symplified is exiting the access management market. The company’s founders had a long history in the single sign-on business, having founded Securant in the late nineties. Securant was acquired by RSA in September 2001, and evolved into RSA Cleartrust, which is still in production today at many organizations.

It seemed logical that the experienced team behind such a successful product would have launched an equally successful SaaS offering. I don’t know the whole back story, but many things have to align for a startup to succeed. You need good execution, but you also need a little bit of good luck.

I first ran into Symplified at Digital Identity World in 2008 (thanks for the flying monkey!). At the next Digital Identity World, I had a long conversation with Eric Olden about utility computing. He gave me a copy of the book The Big Switch, which provided valuable input to my thinking about how utility computing could make sense for SSO and access management, and how lowering the price could actually expand the size of the market.

Although Gluu has many competitors, identity and access management is a very large global market, which Gluu cannot serve alone. We’re sad to see the exit of one of the early innovators who helped pave the way for a new delivery model for access management. Here at Gluu we’re grateful for Symplified’s early leadership, dedication to their customers, and management excellence.

As a small thanks and to bid farewell to one of our respected peers, I composed this haiku:

First SaaS SSO
Visionary service
Sadly, fate had other plans

Best of luck to everyone on the Symplified team!

Kuppinger ColeLeadership Compass: Cloud User and Access Management - 70969 [Technorati links]

July 17, 2014 09:47 AM
In KuppingerCole

Leaders in innovation, product features, and market reach for Cloud User and Access Management. Manage access of employees, business partners, and customers to Cloud services and on-premise web applications. Your compass for finding the right path in the market.

July 16, 2014

Julian BondWell, well. So the Myers-Briggs test is totally meaningless, unscientific bullshit. There's a surprise... [Technorati links]

July 16, 2014 05:14 PM
Well, well. So the Myers-Briggs test is totally meaningless, unscientific bullshit. There's a surprise! I wonder how many other cod-psych tests are the same and have about as much 2014 relevance as astrology or palm reading.
via http://boingboing.net/2014/07/16/myers-briggs-personality-test.html

[from: Google+ Posts]

Kuppinger ColeEU-Service Level Agreements for Cloud Computing – a Legal Comment [Technorati links]

July 16, 2014 09:20 AM
In Karsten Kinast

Cloud computing allows individuals, businesses and the public sector to store their data and carry out data processing in remote data centers, saving on average 10-20%. Yet there is scope for improvement when it comes to the trust in these services.

The new EU-guidelines, developed by a Cloud Select Industry Group of the European Commission, were meant to provide reliable means and a good framework to create confidence in cloud computing services. But is it enough to provide a common set of areas that a cloud-SLA should cover and a common set of terms that can be used, as the guidelines do? Can this meet the individuals’ and business’ concerns when – or if – using cloud services?

In my opinion it does not, at least not sufficiently.

Taking a closer look at the Guidelines from a legal perspective, and thus concentrating on chapter 6 („Personal Data Protection Service Level Objectives Overview”), they appear to offer nothing tangibly new. The Service Level Objectives (SLOs) described therein do give a detailed overview of the objectives that must be achieved by the provider of a cloud computing service. However, they lack useful examples and practical application. I would have expected some kind of concrete proposal for the wording of a potential agreement. Any kind of routine concerning the procedure of creating a cloud computing service agreement would, to my mind, be a first step toward increasing trust in cloud computing.

Since the guidelines fall short especially in this pragmatic aspect, their practical benefit will be rather limited.

As a suggestion for improvement, one could follow the example of the ENISA „Procure Secure“ guidelines. They focus on examples from “real life” and show what should be included in a cloud computing contract. And they support cloud customers in setting up a clearly defined and practical monitoring framework, also by giving “worked examples” of common situations and best-practice solutions for each suggested parameter.

July 15, 2014

Kuppinger ColeLeadership Compass: Cloud IAM/IAG - 71121 [Technorati links]

July 15, 2014 09:39 AM
In KuppingerCole

The Cloud IAM market is currently driven by products that focus on providing Single Sign-On to various Cloud services as their major feature and business benefit. This will change, with two distinct evolutions of more advanced services forming the market: Cloud-based IAM/IAG (Identity Access Management/Governance) as an alternative to on-premise IAM suites, and Cloud IAM solutions that bring a combination of directory services, user management, and access management to the Cloud.

July 14, 2014

Kuppinger ColeAmazon Web Services: One cloud to rule them all [Technorati links]

July 14, 2014 01:23 PM
In Alexei Balaganski

Since launching its Web Services in 2006, Amazon has been steadily pushing towards global market leadership by continuously expanding the scope of its services, increasing scalability and maintaining low prices. Last week, Amazon made another big announcement, introducing two major new services with funny names but a heavy impact on future competition in the mobile cloud services market.

Amazon Zocalo (Spanish for “plinth”, “pedestal”) is a “fully managed, secure enterprise storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity”. In other words, it is one of the few user-facing AWS services and none other than a direct competitor to Box, Google Drive for Work and other products for enterprise document storage, sharing, and collaboration. Built on top of AWS S3 storage infrastructure, Zocalo provides a cross-platform solution (for laptops, iPads and Android tablets, including Amazon’s own Kindle Fire) for storing and accessing documents from anywhere, synchronizing files between devices, and sharing documents for review and feedback. Zocalo’s infrastructure provides at-rest and in-transit data encryption, centralized user management with Active Directory integration and, of course, ten AWS geo-regions to choose from in order to be compliant with local regulations.

Now, this does look like “another Box” at first sight, but with the ability to offer cloud resources more cheaply than any other vendor, Amazon has every chance of quickly gaining a leading position in the market, even with Zocalo’s limited feature set. With Google announcing unlimited storage for its enterprise customers and now Amazon driving prices further down, cloud storage itself has very little market value left. Just being “another Box” is simply no longer sustainable; only the biggest players, and those who can offer additional services on top of their storage infrastructure, will survive in the long run.

Amazon Cognito (Italian for “known”) is a “simple user identity and data synchronization service that helps you securely manage and synchronize app data for your users across their mobile devices.” Cognito is part of a newly announced suite of AWS mobile services for mobile application developers, so it may not have caused a splash in the press like Zocalo did, but it’s still worth mentioning here because of its potentially big impact on future mobile apps. First of all, by outsourcing identity management and profile synchronization between devices to Amazon, developers can free up resources to concentrate on the business functionality of their apps and thus bring them to market faster. Second, with the Cognito platform, app developers always work with temporary, limited identities, safeguarding their AWS credentials and enabling uniform access control across different login providers. Thus, developers are implicitly led towards implementing security best practices in their applications.

Currently, Cognito supports several public identity providers, namely Amazon, Facebook and Google; however, the underlying federation mechanism is standards-based (OAuth, OpenID Connect), so I cannot believe it won’t soon be extended to support enterprise identity providers as well.

Still, as much as the ex-developer in me feels excited about Cognito’s capabilities, the analyst in me cannot help thinking that Amazon could have gone a step further. Currently, each app vendor maintains its own identity pool for its users. But why not give users control over their identities? Had Amazon made this additional step, it could eventually have become the world’s largest Life Management Platform vendor! How’s that for an idea for Cognito 2.0?

CourionWhat Makes Intelligent IAM Intelligent [Technorati links]

July 14, 2014 01:00 PM

Access Risk Management Blog | Courion

Bill Glynn: In order to explain what makes Intelligent IAM intelligent, we must first discuss why IAM needs to be intelligent. Fundamentally, IAM is a resource allocation process that operates on the simple principle that people should only have access to the resources they need in order to do their job. So, basically, IAM is used to implement the Marxist philosophy, “to each according to need”. Therein lies one of the problems: without intelligence, IAM operations are inconsistent and can be easily corrupted, resulting in decreased worker efficiency, increased risk to the corporation (more on that later), or both. The folks with the power can give some people (the privileged class, like their friends) more access than they need, while others (the exploited workers) may not have access to the resources they truly need, which leads to civil unrest and the potential collapse of corporate society as we know it.

However, given appropriate guidelines (rules) and sufficient information (knowledge), traditional IAM has evolved into an inherently intelligent process for managing resource allocation, such as Courion’s Intelligent IAM solution. On the front end, access requests are evaluated to see if they violate any business rules, such as, “If you aren’t in the Sales department, then you can’t have access to the company sales commission report.”

Such business rules, combined with knowledge about what access recipients request and should receive, enable the access assignment process to be an intelligent activity, ensuring that people do or don’t get access to corporate resources as determined by their functional role or operational needs. On the back end, the entire corporate environment is continuously monitored for evidence of any business rule violations.
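As a purely hypothetical sketch (the `User`, `Rule` and `evaluate_request` names are mine, not Courion's API), a front-end business-rule check of the kind described above could look like this:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    department: str

@dataclass
class Rule:
    # A rule of the form: access to `resource` requires membership in `required_department`.
    resource: str
    required_department: str

def evaluate_request(user: User, resource: str, rules: list[Rule]) -> bool:
    """Grant the request only if no rule covering the resource is violated."""
    for rule in rules:
        if rule.resource == resource and user.department != rule.required_department:
            return False  # rule violation: deny the request
    return True  # no applicable rule violated: grant

# "If you aren't in the Sales department, then you can't have access to
# the company sales commission report."
rules = [Rule("sales_commission_report", "Sales")]

assert evaluate_request(User("Alice", "Sales"), "sales_commission_report", rules)
assert not evaluate_request(User("Bob", "Engineering"), "sales_commission_report", rules)
```

A real IAM engine would evaluate many rule types against directory and entitlement data; the point here is only that a request is denied as soon as any applicable rule is violated.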

Today’s corporations are challenged by a complex, mobile and open society; problems don’t necessarily enter through the front door. Therefore, it’s critical to have an intelligent IAM system like Courion’s that both prevents problems from being created and maintains a watchful eye, taking immediate action (such as automatic notifications, or even automatically disabling access or accounts) should issues be discovered.

As an example, Courion’s solution can easily distinguish between a company’s finance department server, which is obviously a far more sensitive resource, and a Marketing department’s color printer (unless you consider the price of replacement ink cartridges, in which case it’s not so obvious). Consequently, Courion’s Intelligent IAM solution, based upon a number of criteria, can determine who should and shouldn’t have access to such sensitive resources. This scenario alludes to a fundamental concept that guides the Courion solution: risk as it pertains to the corporation. The system defines risk as a combination of likelihood, as in “OK, so what are the odds that will happen?”, and impact, as in, “So if it happens, how bad can it really be?” In general, a customer can configure the system to behave in accordance with their risk tolerance, which boils down to a basic question: “Just how lucky do you really feel?”
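To make the likelihood/impact combination concrete, here is a minimal illustrative sketch (the scoring function and thresholds are invented for this example, not Courion's actual model):

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Both inputs on a 0..1 scale; a higher score means a riskier grant."""
    return likelihood * impact

def within_tolerance(likelihood: float, impact: float, tolerance: float) -> bool:
    """Approve only if the combined risk stays under the configured tolerance."""
    return risk_score(likelihood, impact) <= tolerance

# A finance department server: even modest odds of misuse carry severe impact.
assert not within_tolerance(likelihood=0.3, impact=0.9, tolerance=0.2)
# A marketing color printer: similar odds, but minor impact.
assert within_tolerance(likelihood=0.4, impact=0.1, tolerance=0.2)
```

On this view, a customer's risk tolerance becomes a single configurable threshold rather than a hard-coded list of sensitive resources.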

But it’s not just a pattern matching exercise based upon a bunch of If / Then conditions.  Courion’s Intelligent IAM solution not only knows which resources are more sensitive than others, but it also automatically adjusts its knowledge and its perspective over time.

As an analogy, a key isn’t necessarily an inherently sensitive resource. The risk associated with giving someone that key depends upon a variety of dynamic variables, such as who is going to get the key, what other keys may be behind the door that this key unlocks, how many other people also have a copy of this key, and exactly who are they?

So, while it may have seemed like a good idea to give Fred a key to the supply room, a week later we now know that all of Fred’s buddies also have a key to the supply room. More specifically, we know that Fred’s good friend Barney just got access to an additional key that unlocks the back door of the supply room. Consequently, the risk that the company’s expensive monogrammed tissue paper goes missing from the supply room has increased dramatically.

It’s this broad contextual view across a dynamically evolving environment, coupled with the knowledge of what is and isn’t an acceptable level of risk, and the ability to adapt its perspective to changing conditions that makes Courion’s Intelligent IAM solution such a valuable tool for ensuring appropriate access to corporate resources, such as prized paper goods.

However, perhaps one of the more subtle benefits provided by Courion’s Intelligent IAM solution is that it takes the burden off of the IT folks who no longer have to justify to angry users why their request was denied.  It now becomes a much easier conversation:

“I’m sorry. I like you, and I feel your pain. I want to give you access to the executive restroom, but I just don’t have that kind of power. You see, we use Courion’s Intelligent IAM solution and it can distinguish between what you want and what you need. So, it knows that you want access to the executive restroom, but it also knows that you don’t really need access to it. It’s not like the old days, when I might have been persuaded to give you what you want. Even if I could give you such access, the Courion solution is always watching, and it’s configured to notify the entire executive team of rule violations; not only that, it will automatically take away your access. It will simply lock the door. Continuing to try to open the door might therefore be embarrassing, even for you. Why don’t you just use that nice restroom down the hall like the rest of us, then go back to your desk and listen to some music? I suggest a tune from The Rolling Stones: “You can't always get what you want, but if you try sometimes, you just might find, you get what you need.”


Kuppinger ColeExecutive View: Ergon Airlock/Medusa - 71047 [Technorati links]

July 14, 2014 07:05 AM
In KuppingerCole

Ergon Informatik AG is a company headquartered in Zurich. In addition to a large business unit for custom software development, Ergon has for many years also been present on the market as a vendor of standard software and has a significant number of customers. The company’s core products are the closely interconnected solutions Airlock and Medusa. Airlock is a Web Application Firewall that provides Web Single...

Kuppinger ColeExecutive View: Centrify Server Suite - 70886 [Technorati links]

July 14, 2014 06:28 AM
In KuppingerCole

Centrify is a US-based identity management software vendor founded in 2004. Centrify has achieved recognition for its identity management and auditing solutions, including a single sign-on service for multiple devices and for cloud-based applications. The company is VC-funded and has raised significant funding from a number of leading investment firms. As of today, the company has more than 5,000 customers. Centrify has licensed key SaaS...

July 12, 2014

Anil JohnIdentity Validation as a Public Sector Digital Service? [Technorati links]

July 12, 2014 03:00 PM

I’ve written before about the role that the public sector currently has in identity establishment, but not in identity validation. This absence has led to an online ecosystem in the U.S. that depends on non-authoritative information for identity validation. These are some initial thoughts on what an attribute validation service, which provides validation of identity attributes using authoritative public sector sources, could look like.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Julian BondJohn Doran (TheQuietus editor) playing DJ in an open air car park just off Old Street. Wed evening, ... [Technorati links]

July 12, 2014 10:23 AM
John Doran (TheQuietus editor) playing DJ in an open air car park just off Old Street. Wed evening, 16 July. I think it's free but not sure.


 The Quietus | News | Red Market: S U B J E C T Wednesday »
The Quietus' John Doran and more announced to play open-air venue

[from: Google+ Posts]

Julian BondUtopia returns on UK Channel 4  on Monday and Tuesday (14/15 July) next week. Don't miss it. [Technorati links]

July 12, 2014 10:15 AM
Utopia returns on UK Channel 4  on Monday and Tuesday (14/15 July) next week. Don't miss it.

 Utopia - Channel 4 »
When a group of strangers find themselves in possession of the manuscript for a legendary graphic novel, their lives brutally implode as they are pursued by a shadowy and murderous organisation...

[from: Google+ Posts]
July 11, 2014

Nat SakimuraThoughts on the Benesse Personal Data Leak Incident [Technorati links]

July 11, 2014 10:40 AM

(1) acquisition of data from a third party without obtaining the consent of the data subject or their representative, and

(2) use of data without obtaining the consent of the data subject or their representative.

In other words, the proposal is to regulate these two things. If (1) can be achieved, the onward circulation of name lists is largely stopped at that stage. Furthermore, even if such a list has already been obtained, (2) makes it impossible to, for example, contact the individuals on it, which improves their privacy situation.


Many data breaches stem from inadequate access management. In this case, too, the data was reportedly "taken out by an insider who was not a group employee", so I suspect there was a problem of that kind. That area must of course be locked down properly, and it goes without saying that identity management is the foundation for doing so. And yet, at many companies, identity and access management is treated extremely carelessly today. Every company would do well to treat this incident as a cautionary tale and take the opportunity to review its own practices: for example, "We don't have any shared accounts, right?" or "Default system accounts have been disabled, right?" Security cannot be bolted on after the fact; it must be built in from the design stage. Secure by Design (SBD) is essential. [5] (And, as the second Japanese Privacy by Design Ambassador, allow me to add that Privacy by Design (PbD) is just as important.)





[1] NHK, "Benesse: up to 20.7 million personal records may have leaked" (2014/7/9), http://www3.nhk.or.jp/news/html/20140709/k10015871611000.html

[2] Kusunoki, "Benesse filled in the list brokers' missing piece" (2014/7/9), 雑種路線でいこう, http://d.hatena.ne.jp/mkusunok/20140709/leak#seemore

[3] Sakimura, "Is the Personal Information Protection Act really that full of holes?" (2009/11/29), .Nat Zone, http://www.sakimura.org/2009/11/656/

[4] Prime Minister's Office of Japan, "Outline of Institutional Reform Concerning the Utilization of Personal Data" (2014/6/24), http://www.kantei.go.jp/jp/singi/it2/kettei/pdf/20140624/siryou5.pdf

[5] As far as this incident goes, access control and per-user authentication do appear to have been in place. Accounts were per-individual, so the trail was picked up quickly (the arrest came on 7/17); access was only possible from a specific room (location authentication) and from a specific PC (device authentication); data could not be extracted over the network; and according to the WSJ report [6], USB ports had been disabled as well. The security failings come down to the data access rights being too broad and, apparently, the absence of log monitoring. In IAM terms, this looks like a failure of policy configuration, and of operational policy definition and operations themselves. Conversely, these failings may be the result of overconfidence bred by physical safeguards that were, if anything, done rather thoroughly. One should call this the limit of traditional perimeter security, and it is precisely why it is said that security must be reorganized around identity. (The portion in blue was added on 7/23.)

[6] Jiji Press, "Involved in developing the customer database: dispatched systems engineer exploited his knowledge to disable protection measures; Metropolitan Police to request arrest warrant" (2014/7/17 05:30 JST), Wall Street Journal Japan edition, http://jp.wsj.com/news/articles/JJ10231533482860533506220261489651978215431?tesla=y&tesla=y

Mark Dixon - Oracle“Wink” at The Home Depot: Emerging #IoT Ecosystem? [Technorati links]

July 11, 2014 01:10 AM

Today, I learned from a USA Today article that The Home Depot and Amazon.com have begun to offer home automation devices that work with the Wink app and Wink Hub:

Boosting your home’s IQ got easier Monday as The Home Depot began selling a collection of nearly 60 gadgets that can be controlled by mobile devices, including light bulbs, lawn sprinklers and water heaters.

I quickly found that homedepot.com offers more Wink devices online than Amazon.com does. Interesting that the orange bastion of brick-and-mortar DIY sales seems to be besting Amazon at its own game!

I jumped in my pickup and drove to the nearest Home Depot store - and there it was – a Wink end cap, stationed right between the aisles offering water heaters and replacement toilets. The display wasn’t pretty, but it was there.  I could have loaded up a cart full of water sprinkler controllers, video cameras, door locks, smoke alarms, LED lights, motion sensors and more – all controllable via Wink. Pretty impressive, actually.


Two things are significant here:

  1. The Wink ecosystem for connecting many devices from multiple vendors seems to be emerging more quickly than systems promised by Apple and Google.
  2. The Home Depot is the epitome of American mainstream – making it available to the common folks, not just techno-geeks.  Heck, I was in the Home Depot store three times last Saturday alone to pick up stuff. That’s mainstream.

It is going to be really interesting to see how this stuff becomes part of “The Fabric of our Lives.”

Mark Dixon - OracleThe Zen of #IoT: The Fabric of our Lives [Technorati links]

July 11, 2014 12:10 AM


When I was a young engineering student at Brigham Young University, I had a physics professor who loved to promote what he called the “Zen of Physics.”  As I recall, he proposed that if we studied the right way and meditated the right way on the virtues of physics, we would reach a state of enlightenment about his beloved area of scientific thought.

As an engineering student more interested in practical application than theoretical science, I never did reach the level of enlightenment he hoped for, although I do remember some exciting concepts related to black holes and liquid nitrogen.

This last week, as I was pondering the merits of the Internet of Things, I had a Zen-like moment, an epiphany or moment of enlightenment of sorts, as I was mowing the lawn, of all things.

My thought at that moment?  The real value of the Internet of Things will become apparent when we find that this technology becomes woven seamlessly and invisibly into “The Fabric of our Lives.”

“The Fabric of our Lives” is actually a trademark of the cotton industry, so I can’t claim originality, but I think the concept is interesting. When we come to realize that technology fits us as naturally and comfortably as a favorite old cotton shirt, we tend to forget about the technology itself and simply enjoy the benefits of what has slowly become an integral part of ordinary living, woven into the fabric of everyday life.

When I had my little epiphany last Saturday, I had forgotten my post from April 1, 2013, entitled, “IoT – Emerging and Receding Invisibly into the Fabric of Life.”  What my Zen moment added is the idea that real value to us as humans is realized not when the first flashy headlines appear, but when the technology recedes quietly into the everyday fabric of our lives.

When I think of technology that has emerged since my childhood and then proceeded to become commonplace, I am amazed: microwave ovens, digital cameras, color television, satellite communications, cable/satellite TV, personal computers, the Internet, social media, smart phones and much more.  Each one of these progressed from being novelties or the stuff of techno-geeks to becoming mainstream threads in the everyday fabric of life.

So it will be with IoT. We talk a lot about it now. We techno-geeks revel in the audacious beauty of it all. Just about every publication in the world has something to say about it. But as first a handful, and then many, of the devices and concepts become commonly accepted, they too will become invisible but highly valuable threads woven ubiquitously into “The Fabric of our Lives.”

July 10, 2014

Kuppinger ColeIs the latest attack on energy companies the next Stuxnet? [Technorati links]

July 10, 2014 05:23 PM
In Alexei Balaganski

It really didn’t take long after my last blog post on SCADA security for an exciting new development to appear in the press. Several security vendors, including Symantec and F-Secure, have revealed new information about a hacker group called “Dragonfly” (alternatively “Energetic Bear”) that has launched a massive cyber-espionage campaign against US and European companies, mainly from the energy sector. Allegedly, the most recent development indicates that the hackers not only managed to compromise those companies for espionage, but also possess the capabilities necessary for sabotage, disruption and damage to the energy grids of several countries.

Previous reports show that the group known as “Energetic Bear” has been operating since at least 2012, with highly qualified specialists based somewhere in Eastern Europe. Some experts go as far as to claim that the group has direct ties to Moscow, operating under the control of the Russian secret services. So it’s quite natural that many publications have already labeled Dragonfly the next Stuxnet.

Now, as much as I love bold statements like this, I personally still find it difficult to believe. I admit that I have not seen all the evidence yet, so let’s summarize what we already know:

However, the most recent development that has brought Dragonfly into the limelight is that the group has begun distributing its malware using the “watering hole” approach. Several ICS software vendors’ websites were compromised, and the software installers available for download from them were infected with the Havex malware. It’s been reported that in one case the compromised software was downloaded at least 250 times.

Since the sites belonged to notable vendors of programmable logic controllers used in managing wind turbines and other critical equipment, there could be no conclusion other than “Russia is attacking our energy infrastructure”, right? Or could there?

Quite frankly, I fail to see any resemblance between Stuxnet and Dragonfly at all.

Stuxnet was a highly targeted attack created specifically for one purpose: to destroy Iran’s nuclear enrichment industry. It contained modules developed specifically for a particular type of SCADA hardware. It was so complex in its structure that experts are still not done analyzing it.

Dragonfly, on the other hand, is based on existing and widely used malware tools. It has been targeting a wide array of different organizations: current reports show that it has managed to compromise over 1,000 companies. Also, the researchers who discovered the operation could not find any traces of PLC-controlling payloads; the only purpose of the tool appears to be intelligence gathering. The claims of ties to the Russian secret services seem to be completely unsubstantiated as well.

So, does all this mean that there is no threat to our energy infrastructures after all? Of course it does not! If anything, the whole Dragonfly story has again demonstrated the abysmal state of information security in industrial control systems around the world. Keep in mind that this time the cause of the attack wasn’t even the weak security of an energy infrastructure. Protecting your website from hacking belongs to the basic “security hygiene” norms and does not require any specialized software; a traditional antivirus and firewall would do just fine. Unfortunately, even SCADA software vendors seem to share the relaxed approach towards security typical for the industry.

The fact that the Dragonfly case has been publicized so much is actually good news, even if not all publications are up to a good journalism standard. If this publicity leads to tighter regulations for ICS vendors and increases awareness of the risks among ICS end users, we all win in the end. Well, maybe except the hackers.

Vittorio Bertocci - MicrosoftWhat’s New in ADAL v2 RC [Technorati links]

July 10, 2014 05:12 PM

ADAL v2 RC is here, and it is packed with new features! These are the last planned changes to the library surface for v2, so you should expect this release to be the harbinger of what you’ll get at GA time.

Here is a list of the main changes. For some of the new features, the news is significant enough that I wrote an entire post just for it; watch for the links in the descriptions.

The list is pretty long! We are in the process of updating the samples on GitHub; hopefully that will help you follow the changes. As mentioned above, we are not planning any additional changes before GA, hence you can expect the current surface to be the one that ends up in the documentation.

If you have issues migrating beta code to RC, feel free to drop me a line.

Thanks and enjoy!

Vittorio Bertocci - MicrosoftADAL v2 and Windows Integrated Authentication [Technorati links]

July 10, 2014 04:41 PM

The release candidate of ADAL v2 introduces a new, more straightforward way of leveraging Windows Integrated Authentication (WIA) for your AAD federated tenants in your Windows Store and .NET applications.

Its use is very simple. You might have read here that we model direct username/password authentication by holding those credentials in an instance of a new class, UserCredential. To take advantage of WIA you use the exact same principle: you create one empty instance of UserCredential and pass it in; that is your way of letting ADAL know that you want to use WIA. In practice:

AuthenticationResult result = 
     authContext.AcquireToken(todoListResourceId, clientId, new UserCredential());

If you are operating from a domain-joined machine, you are connected to the domain network, you are signed in as a domain user, and the tenant used in authContext is a federated tenant, you’ll get back a token right away!
Once you successfully get a token that way, ADAL will create a cache entry for it as usual; from that moment on, it won’t matter how the token was acquired: it will be handled as usual and be available for lookup from any other AcquireToken overload.

Things to note:

Although it was already possible to take advantage of WIA with the older OM (by using PromptBehavior.Never against a federated tenant), we are confident that this simpler and more predictable mechanism is more in line with the convenience it is legitimate to expect when operating within a domain. Enjoy!

Vittorio Bertocci - MicrosoftThe New Token Cache in ADAL v2 [Technorati links]

July 10, 2014 05:21 AM


The release candidate of ADAL v2 introduces a new cache model, which makes it possible to cache tokens in middle-tier apps and dramatically simplifies creating custom caches.
In this post I’ll give you a brief introduction to the new model. You can see the new cache in action in our updated samples.

Limitations of the ADAL v1 Model

The token cache in ADAL plays a key role in keeping the programming model approachable. The fact that ADAL saves tokens (of all kinds: access, refresh, ID) and token metadata (requesting client, target resource, the user who obtained the token, tenant, whether the refresh token is an MRRT…) allows you to simply keep calling AcquireToken, knowing that behind the scenes ADAL will make the absolute best use of the cached information to give you the token you need while minimizing prompts.

Another role fulfilled by the ADAL cache is to offer you a view on the current authentication status of your application: by interrogating the cache you can discover whether you have access to a given resource, if you have tokens for a specific set of user accounts, and so on.

In ADAL v1, the cache implementation reflected the primary target scenarios of that version of the library: native clients. The cache was implemented as an IDictionary, with a specific key type that reflected the domain-specific info necessary for handling tokens. That fulfilled the two functions outlined earlier, keeping track of the data we needed and offering an easy way of querying the collection (via LINQ). It did its job for native clients; however, it was unsuitable for use in mid-tier apps. Think of a web app with a few million users, each of them with a few tokens stored for calling APIs in the context of their sessions: the resulting dictionary would have been pretty hard to scale. For that reason, the AcquireToken* implementations in ADAL v1 meant to run prevalently on the server side did not make use of ADAL’s cache.

Another shortcoming of the v1 model was that providing a custom cache required you to implement IDictionary: not rocket science, but certainly an onerous task. Furthermore, many of the elements in the key type were really meant for your own queries and were never used by AcquireToken. We were aware of the complications, but given the asymmetry between producers of custom cache implementations and consumers of such classes (the latter vastly outnumbering the former) we made the tradeoff. When this was picked up in v2, though, our awesome dev team found a way of avoiding the tradeoff altogether, designing a new model that delivers on both functions AND is a breeze to implement!

The New Cache Model

The idea behind the new cache model is pretty simple: ADAL manages the cache structure as a private implementation detail, but gives you the means to 1) provide a persistence layer for it, so that you can use your favorite store to hold it, and 2) obtain views of the cache content, so that you can gain insight into the capabilities of your application without being exposed to the internals of how the various cache entries are actually maintained. It sounds pretty awesome, right?

You create a custom cache by deriving from the new TokenCache class, and passing an instance of that class at AuthenticationContext construction time. Here is how it looks in VS’s Class View:


There’s quite a lot of stuff there, but in fact you need to touch only three or four things.
Here is how it works.

TokenCache features three notifications, BeforeAccess, BeforeWrite and AfterAccess, which are activated whenever ADAL does work against the cache. Those notifications offer you the opportunity of keeping the storage of your choice in sync with the in-memory cache that ADAL uses.

Say that you start your app for the very first time, and you make a call to AcquireToken(resource1, clientId, redirectUri). From the cache’s point of view, how does that unfold?

  1. ADAL needs to check the cache to see if there is already an access token for resource1 obtained by client1, or a refresh token good for obtaining such an access token, plus whatever other private heuristics you don’t need to worry about. Right before it reads the cache, ADAL calls the BeforeAccess notification. Here you have the opportunity of retrieving your persisted cache blob from wherever you chose to save it and pumping it into ADAL. You do so by passing that blob to Deserialize.
    Note that you can apply all kinds of heuristics to decide whether the existing in-memory copy is still OK, to reduce the number of times you access your persistent store.
  2. As we said, this is the first time the application runs, hence the cache will (typically) be empty. ADAL therefore pops up the authentication UX and guides the user through the authentication experience. Once it obtains a new token, it needs to save it in the cache: but right before that, it invokes the BeforeWrite notification. That gives you the opportunity of applying whatever concurrency strategy you want to enact: for example, in a mid-tier app you might decide to place a lock on your blob, so that other nodes in your farm possibly attempting a write at the same time would avoid producing conflicting updates. If you are optimistic, of course, you can decide to simply do nothing here. ;)
  3. After ADAL adds the new token to its in-memory copy of the cache, it calls the AfterAccess notification. That notification is in fact called every time ADAL accesses the cache, not just when a write takes place: however, you can always tell if the current operation resulted in a cache change, as in that case the property HasStateChanged will be set to true. If that is the case, you will typically call Serialize() to get a binary blob representing the latest cache content and save it in your storage. After that, it is your responsibility to clear whatever lock you might have set.
    Very important: ADAL NEVER automatically resets HasStateChanged to false. You have to do it in your own code once you are satisfied that you handled the event correctly.

Those are the main moving parts you need to handle. Other important aspects concern the lifecycle of the cache instance outside of its use from AcquireToken. For example, you’ll likely want to populate the cache from your store at construction time; you’ll want to override Clear and DeleteItem to ensure that cache state changes are reflected in your store; and so on.

You might wonder why you can’t just wait for the first access and leave the initial load to the notifications. That’s tricky. You could do that, but then if you needed to query the cache before requesting the first token you’d be in trouble. Think of a multi-tenant client: on first access you’ll use “common” as the authority, but for subsequent accesses you want to use the authority corresponding to the user who actually signed in and initialized the app to its own tenant. If you don’t do that, you’ll never hit the cache during AcquireToken, given that using “common” is equivalent to saying “I don’t know which tenant to use”.

If you want to query the cache, you can call ReadItems() to pull out an IEnumerable of TokenCacheItems.

Pretty easy, right?

Here there’s one of my favorite benefits of this model: from ADAL v2 RC on, AcquireTokenByAuthorizationCode does save tokens in the cache! This will result in a tremendous reduction in the amount of code necessary in middle tier applications, as you’ll see in the updated samples.


Here there’s a simple example. Say that I am writing a Windows desktop app, and I want to save tokens so that I don’t have to re-authenticate every single time I launch the application. I decide to save the tokens in a DPAPI-protected file. Here there’s a super simple cache implementation doing that:

// This is a simple persistent cache implementation for a desktop application.
// It uses DPAPI for storing tokens in a local file.
class FileCache : TokenCache
{
    public string CacheFilePath;
    private static readonly object FileLock = new object();

    // Initializes the cache against a local file.
    // If the file is already present, it loads its content in the ADAL cache
    public FileCache(string filePath=@".\TokenCache.dat")
    {
        CacheFilePath = filePath;
        this.AfterAccess = AfterAccessNotification;
        this.BeforeAccess = BeforeAccessNotification;
        lock (FileLock)
        {
            this.Deserialize(File.Exists(CacheFilePath) ?
                ProtectedData.Unprotect(File.ReadAllBytes(CacheFilePath), null, DataProtectionScope.CurrentUser)
                : null);
        }
    }

    // Empties the persistent store.
    public override void Clear()
    {
        base.Clear();
        File.Delete(CacheFilePath);
    }

    // Triggered right before ADAL needs to access the cache.
    // Reload the cache from the persistent store in case it changed since the last access.
    void BeforeAccessNotification(TokenCacheNotificationArgs args)
    {
        lock (FileLock)
        {
            this.Deserialize(File.Exists(CacheFilePath) ?
                ProtectedData.Unprotect(File.ReadAllBytes(CacheFilePath), null, DataProtectionScope.CurrentUser)
                : null);
        }
    }

    // Triggered right after ADAL accessed the cache.
    void AfterAccessNotification(TokenCacheNotificationArgs args)
    {
        // if the access operation resulted in a cache update
        if (this.HasStateChanged)
        {
            lock (FileLock)
            {
                // reflect changes in the persistent store
                File.WriteAllBytes(CacheFilePath,
                    ProtectedData.Protect(this.Serialize(), null, DataProtectionScope.CurrentUser));
                // once the write operation took place, restore the HasStateChanged bit to false
                this.HasStateChanged = false;
            }
        }
    }
}

That is really super-simple code compared to having to implement an entire IDictionary.

Want to see something a bit more challenging?

Say that I have a web application. My web app connects to web APIs on behalf of its users. Every user has a set of tokens that are saved in a SQL DB, so that when they sign in to the web app they can directly perform their web API calls without having to re-authenticate/repeat consent. The new ADAL cache model makes it pretty easy to achieve this: we can have a flat list of blobs, all representing an ADAL cache for a specific web user. When the user signs in, we retrieve the corresponding blob and use it to initialize his/her token cache. That’s exactly how we implemented the cache in the updated multitenant samples. Here there’s the implementation:

public class PerWebUserCache
{
    public int EntryId { get; set; }
    public string webUserUniqueId { get; set; }
    public byte[] cacheBits { get; set; }
    public DateTime LastWrite { get; set; }
}

public class EFADALTokenCache : TokenCache
{
    private TodoListWebAppContext db = new TodoListWebAppContext();
    string User;
    PerWebUserCache Cache;

    // constructor
    public EFADALTokenCache(string user)
    {
        // associate the cache to the current user of the web app
        User = user;
        this.AfterAccess = AfterAccessNotification;
        this.BeforeAccess = BeforeAccessNotification;
        this.BeforeWrite = BeforeWriteNotification;

        // look up the entry in the DB
        Cache = db.PerUserCacheList.FirstOrDefault(c => c.webUserUniqueId == User);
        // place the entry in memory
        this.Deserialize((Cache == null) ? null : Cache.cacheBits);
    }

    // clean up the DB
    public override void Clear()
    {
        base.Clear();
        foreach (var cacheEntry in db.PerUserCacheList)
            db.PerUserCacheList.Remove(cacheEntry);
        db.SaveChanges();
    }

    // Notification raised before ADAL accesses the cache.
    // This is your chance to update the in-memory copy from the DB, if the in-memory version is stale
    void BeforeAccessNotification(TokenCacheNotificationArgs args)
    {
        if (Cache == null)
        {
            // first time access
            Cache = db.PerUserCacheList.FirstOrDefault(c => c.webUserUniqueId == User);
        }
        else
        {
            // retrieve last write from the DB
            var status = from e in db.PerUserCacheList
                         where (e.webUserUniqueId == User)
                         select new
                         {
                             LastWrite = e.LastWrite
                         };
            // if the in-memory copy is older than the persistent copy
            if (status.First().LastWrite > Cache.LastWrite)
            {
                // read from storage, update in-memory copy
                Cache = db.PerUserCacheList.FirstOrDefault(c => c.webUserUniqueId == User);
            }
        }
        this.Deserialize((Cache == null) ? null : Cache.cacheBits);
    }

    // Notification raised after ADAL accessed the cache.
    // If the HasStateChanged flag is set, ADAL changed the content of the cache
    void AfterAccessNotification(TokenCacheNotificationArgs args)
    {
        // if state changed
        if (this.HasStateChanged)
        {
            Cache = new PerWebUserCache
            {
                webUserUniqueId = User,
                cacheBits = this.Serialize(),
                LastWrite = DateTime.Now
            };
            // update the DB and the lastwrite
            db.Entry(Cache).State = Cache.EntryId == 0 ? EntityState.Added : EntityState.Modified;
            db.SaveChanges();
            this.HasStateChanged = false;
        }
    }

    void BeforeWriteNotification(TokenCacheNotificationArgs args)
    {
        // if you want to ensure that no concurrent writes take place, use this notification to place a lock on the entry
    }
}

Also in this case, the implementation is pretty self-explanatory: I disregarded locks and only added a little timestamp check to avoid swapping potentially sizable blobs from the DB when it’s not necessary.


I have a confession to make: although I was always bummed by the lack of a viable token caching solution on the server side, and the consequent need for complex code for confidential clients, I didn’t believe that a cache redesign would be possible so late in the cycle given the time constraints we are up against (we want to release soon!!). However, the dev team was very passionate about solving that problem, and worked very hard to deliver a design that blew me away – it satisfied all requirements without affecting the schedule! So big kudos to the dev team, especially to Afshin Smile

All the new features discussed here apply to both ADAL .NET and ADAL for Windows Store. However, that does NOT change the defaults: the ADAL .NET out-of-the-box cache remains in-memory, and ADAL for Windows Store retains its default persistent cache. All of this applies only to how you’d implement a custom cache, should you choose to write one. If you need a starting point, you can look at the above snippets. If you want to see them in action, you’ll find them (and possibly others) in our GitHub samples.


July 08, 2014

MythicsWebCenter Portal:  Building Your Own. Part 1 [Technorati links]

July 08, 2014 09:04 PM

In this blog, I will share my recent successful experience and the…

GluuAuthentication Speed Versus Flexibility: Benchmarking SSO [Technorati links]

July 08, 2014 02:49 PM

Benchmark SSO

Gluu has been working quite a bit recently on benchmarking, and the question came up whether it’s better to use the Gluu Server’s built-in LDAP authentication with a custom filter, or the Jython based “Custom Authentication Interception Script.”

If you are just considering throughput, the Jython script has more CPU overhead. However, it gives the organization vastly more flexibility. In the future, some organizations may support many authentication workflows. How to identify a person may vary depending on the location of the person being authenticated, and what device is in their hands. Authentication attempts provide valuable data for fraud detection, which may be exposed via API interfaces. For these cases, empowering system administrators to add business logic without having to compile, build, and deploy a war/jar file can improve security and add agility.

Another consideration for benchmarking was whether to use the Gluu Server for session management. The OpenID Connect specification does not require central session management–the session is only in the browser. In the Gluu Server, central session persistence is optional. In large deployments, it’s undesirable. In smaller deployments, it can be quite useful.

In the future, we may see complementary OpenID Connect specifications to add session management alternatives. One idea is for the OpenID Provider (“OP”) to return the logout URLs to the browser, which could then notify the back-end servers that a logout has occurred. The Gluu Server also has a “Custom Logout Interception Script” that enables the OP to insert some tactical code to ensure the cleanup of resources (for example, call the API to make sure the CA Siteminder session is ended).

In the long term, session management needs to be centralized to enable SSO where there are many autonomous websites and mobile applications. Also, extending Web SSO to mobile applications is under discussion for standardization. This is critical for IoT. For example, when I logout of my tablet, can I force a logout of my TV?

As the OP becomes smarter, there is a trade-off of speed and flexibility, hardware and functionality. Depending on your business requirements, and the number of people you are serving, you may have to make a number of hard choices.

Learn more about benchmarking the open source OX OAuth2 platform for large scale deployments.

Julian BondThe Rhesus Chart (Laundry Files) by Charles Stross [Technorati links]

July 08, 2014 08:26 AM
Ace (2014), Hardcover, 368 pages
[from: Librarything]

Vittorio Bertocci - MicrosoftUsing ADAL .NET to Authenticate Users via Username/Password [Technorati links]

July 08, 2014 07:45 AM

This might be the most requested feature for ADAL: the ability to authenticate a user by pumping in username/password, without showing any pop-up. There are perfectly legitimate scenarios that require this feature; unfortunately, there are also many ways in which abusing it might backfire.

With the RC we just released, we added this feature to ADAL .NET (but not to Windows Store or Windows Phone). We have a sample showing how to use it; here I’ll highlight the minimal syntax that lights it up, discuss some considerations about different cases, and above all I’ll spell out limitations and warnings about what you are giving up when you choose to go this route.

When to Use This Feature

There are a number of scenarios where the direct use of username and password is inevitable. The ones below are the ones I encountered most often.

Note that all of these scenarios could potentially be solved by Windows Integrated Auth (WIA); however, not all setups can leverage that (e.g. cloud-only tenants, clients running outside of the domain, etc.), hence in what follows I assume we’re in such unfortunate cases.

Headless clients

Say that you are operating from a console app running on Server Core. There is simply no window manager on the box to render any UX – everything needs to take place in text.

Legacy Solutions

During the Ask the Expert night at this year’s TechEd North America I met with a gentleman who described a setup in which he was using legacy hardware already sending username and password. His investment in such clients was massive and decommissioning them (or the software running on them) was out of the question. He very much wanted to move to AAD and move his backend to the cloud, but could not rely on our current credential gathering experience. The direct use of username and password will allow him to bridge his legacy solution to his new cloud based backend, secured via AAD.

Automated Testing

This is an all-time favorite of our partners within Microsoft. If I had a dollar for every mail/IM/hallway conversation I’ve had about this… Smile
The scenario is simple: you have a solution based on native clients and you want to automate its verification. Existing test harnesses don’t always make it easy to automate a web based credential gathering interface, hence the request for a mechanism to easily obtain tokens in exchange for test credentials.

When NOT to Use This Feature

This is easy: in pretty much any other case! Smile Direct manipulation of credentials is a BIG responsibility that significantly grows your attack surface, is conducive to bad habits (like caching the credentials), denies you pretty much all of the advantages you get by presenting a server-driven experience (multi-factor auth, consent, multi-hop federation, etc. – see below), and makes your client deployments brittle.

The main anti-pattern hidden behind requests for this feature is the desire to customize the authentication experience. I totally understand that desire, but I often get the impression that the tradeoffs one makes when going that route are not always well understood. Falling back to direct credential manipulation is an awfully high price to pay: it cuts you out of a long list of features and puts both your users and your app at risk. I would rather hear your feedback about what parts of the server-provided UX you want to customize – and fiercely fight for you in shiproom to make that change happen – than help you through a security crisis.

How it works

Enough with the doom & gloom, let’s take a look at some code! Winking smile
For the visitors from the future: this feature lights up for the first time in ADAL version 2.7.10707.1513-rc.

I daresay that the way in which this feature has been implemented fits right into the existing, well-proven credential model we introduced in v1 for handling the client credentials flow (see this sample).
We introduced a new type, UserCredential, which represents a user credential. If you want to use username and password, you’d initialize a new instance via the following:

UserCredential uc = new UserCredential(user, password);

Where user is a string containing the UPN of the user you want to authenticate, and password is a string or a SecureString containing… well, you know.

How do you use uc to get a token? Well, we added a couple of overloads to the AcquireToken* family:

public AuthenticationResult AcquireToken(string resource, string clientId, UserCredential userCredential);
public Task<AuthenticationResult> AcquireTokenAsync(string resource, string clientId, UserCredential userCredential);


The relationship between the client app and the resource is precisely the same one you learned about in all the other single-tenant native client->web service ADAL samples: both need to be registered, the API needs to expose at least one permission, the client needs to be configured to request that permission, and so on. Note that here there is no opportunity for AAD to prompt for consent, hence flows which would require it are off limits.

Once you call one of those overloads, as long as you provided the correct credentials (and your tenant is configured correctly) you’ll get back a standard AuthenticationResult, the resulting tokens will be automatically cached, and so on.

You can get a feeling of how this all works by giving a spin to our headless native client sample on GitHub. Here there’s a screenshot of a typical run, to give you a feeling of the experience you can achieve with this flow. Party like it’s ‘95! Smile


To peek a bit behind the scenes, there are two main sub-scenarios:

From your code you won’t notice any difference between the two cases – I am just mentioning that so that you’re aware of what’s required for making this flow work. For example, if instead of ADFS you set up another IP that does not expose WS-Trust endpoints or does it differently from ADFS, this flow will likely fail.

Constraints & Limitations

Here there’s a list of limitations you’ll have to take into account when using this flow.

Only on .NET

Given the intended usage of this feature, we decided to add it only to .NET.

On Windows Store we added the ability to use Windows Integrated Auth, which has many of the same advantages and fewer drawbacks. Details in another post.

No web sites/confidential clients

This is not an ADAL limitation, but an AAD setting. You can only use those flows from a native client. A confidential client, such as a web site, cannot use direct user credentials.


Microsoft accounts that are used in the context of an AAD tenant (classic example: Azure admins) cannot authenticate to AAD via raw credentials – they MUST use the interactive flow (though the PromptBehavior.Never flag remains an option).


Multi-factor authentication requires dynamic UX to be served on the fly – that clearly cannot happen in this flow.

No Consent

Users do not have any opportunity to provide consent if username & password are passed directly.

No multi-hop federation

Any scenario requiring home realm discovery, multiple federation hops and similar won’t work – the protocol steps are rigidly codified in the client library, with no chance for the server to dynamically influence the authentication path.

No server side features, really

In the “traditional” AcquireToken flows you have the opportunity of injecting extra parameters that will influence the behavior of AAD – including parameters that AAD didn’t even support when the library was released. None of that is an option when using username and password directly.


Direct use of username and password is a powerful feature, which enables important scenarios. However, it is also a bit of a Faustian pact – the price you pay for its directness is the many limitations it entails and the reduced flexibility it imposes on any solution relying on it.
If you are in doubt on whether this feature is right for your scenario, feel free to drop us a line!

Vittorio Bertocci - MicrosoftADAL for .NET/Windows Store/Windows Phone Is Now Open Source! [Technorati links]

July 08, 2014 06:06 AM

We’ve been saying it was coming for almost a year. With this RC preview release, it’s finally happening: ADAL for .NET/Windows Store/Windows Phone is now fully open source!

Without getting too dramatic, this truly ushers in a new era of transparency and collaboration between our team and you guys – you’ll be able to:

Note that we will keep releasing new NuGet versions (stable and prerelease) at the usual location, with the usual support policies – the code is an additional way for you to get even more value from ADAL and does not substitute our usual release cycle.

As you might infer from the number of exclamation points, I am pretty excited about this! Smile
All of the above points are pretty self-explanatory, but the MyGet and VS configuration for vacuuming down the library symbols require a bit more guidance: see the instructions below.

Configuring Visual Studio 2013 to Access AAD’s MyGet Feed

Our collaboration with the ASP.NET team on the OWIN middleware components for OpenId Connect made us experience first hand how convenient it is to have a MyGet feed where we can dump nightly builds and use it as a collaboration touch point as we refine our software. Hence, we decided to extend those benefits to ADAL itself.
To configure the AAD MyGet feed in VS 2013:

…and voila’! From now on you can get the absolute freshest (and totally unsupported, BTW Smile) work-in-progress ADAL build.

Remember, for official previews and stable releases keep referring to the NuGet.org feed.

Configuring Visual Studio to Load ADAL Symbols

In my opinion, this is one of the coolest VS + NuGet features for open source projects. How many times have you wished to unpack that mysterious error and get to the bottom of what exactly is failing in that oh-so-handy-but-oh-so-black-box library you’re using? Well, now with ADAL you can! It just requires a bit of configuration.

You basically need to follow the “recommended configuration” section of this page.
Here there’s how my debugger options look after I have done so:


and here there are my symbols settings:


Please be aware that loading up the symbol cache is going to be quite laborious for VS, hence don’t set this thing up right before walking on stage for a demo! Smile But once you’ve got the symbols in place, you’ll be able to dig as deep as you want.


We’ve been waiting for this for a long time. Today we are all very excited to start to develop ADAL for .NET/Windows Store/Windows Phone in that huuuuge open space floor that is the Internet.

As we finally reached the RC milestone, you can expect the next few weeks to be devoted to stabilization – however don’t let that stop you from toying with the source, sharing your ideas, and, if there’s something you want to fix or contribute… hitting that pull request button! Smile

July 07, 2014

ForgeRockIdentity Relationship Management Introduced to Global Leaders [Technorati links]

July 07, 2014 06:23 PM

We’re excited to see Identity Relationship Management getting global exposure. Joni Brennan, Executive Director of the Kantara Initiative, talked up IRM in the latest Organization for Economic Cooperation & Development (OECD) Internet Technical Advisory Committee newsletter, distributed to all 34 member countries.

As members of the Kantara Initiative and creators of the first and only open source IRM platform, we at ForgeRock see this as further evidence of the evolution of identity and access management. In her article, Joni discusses the industry shift from traditional, internal identity and access management to IRM, which focuses on external relationships between people, entities, services, and things. She explains why organizations cannot ignore this foundational shift, and how IRM has the ability to drive revenue growth for organizations.

Check out the article to learn more about why it’s important for organizations to choose a secure and private IRM solution for their identity and access management needs: http://www.internetac.org/?p=2136

The post Identity Relationship Management Introduced to Global Leaders appeared first on ForgeRock.

KatasoftMaking Express.js Authentication Fun Again [Technorati links]

July 07, 2014 03:00 PM

Express and Node

It’s no secret that if you’re building an Express web app, adding in user authentication is quite difficult. If you google “Express Authentication”, you’ll be directed to one of two tools:

While both Passport and everyauth are really great tools, as a relatively new Node developer, I had a difficult time figuring out how to actually use them in a real application.

I honestly don’t want to set up / configure my own session management stuff, or worry about creating my own login / registration views securely.

After a lot of discussion internally at Stormpath, we decided it would be awesome to build a really simple, powerful, and elegant authentication system for Express.

Which brings me to…


For the past week, I’ve been working on building an authentication library that would abstract away all the details, and make adding user authentication to Express apps drop-dead easy.

With that said, I’m really happy to introduce express-stormpath! Visit the official docs here: http://docs.stormpath.com/nodejs/express/

NOTE: If you aren’t a Stormpath user already, Stormpath is an API service that makes managing users simple. It’s completely free for small apps.

express-stormpath allows you to painlessly add complete user authentication (including registration, login, and logout) into your Express apps in just a few lines of code.

First, install the library (and express, too!):

$ npm install express
$ npm install express-stormpath

Next, open up your editor and create an app.js file:

var express = require('express');
var stormpath = require('express-stormpath');

var app = express();

app.use(stormpath.init(app, {
  apiKeyId: 'xxx',
  apiKeySecret: 'xxx',
  application: 'xxx',
  secretKey: 'xxx',
}));


The secretKey must be a long random string (used to secure sessions), while the other fields contain your Stormpath account settings. For more information on setting up a Stormpath account, you can check out the Setup section of our docs.

The above code sample is a fully functional Express app which has three pre-configured routes:

A registration route (/register), which looks like this:

Stormpath Express Registration

A login route (/login), which looks like this:

Stormpath Express Login

And a logout route (/logout) which logs users out of their account.

Out of the box you get user registration, login, and logout!

Just to demonstrate how easy it actually is, here’s a 90 second screencast I made in which I’ll create a brand new Express app from scratch with full user registration and login!


With express-stormpath, you can also customize essentially every part of the library very easily.

Let’s say, for instance, that after a user creates a new account or logs in, you’d like to redirect them to a dashboard page (/dashboard), you can easily do this by specifying the redirectUrl setting when initializing the middleware, like so:

app.use(stormpath.init(app, {
  redirectUrl: '/dashboard',
}));

What if you want to change the login and registration urls? It’s also a single setting:

app.use(stormpath.init(app, {
  loginUrl: '/user/login',
  registrationUrl: '/user/register',
}));

Or, what if you want to change the view code to add in your own styles / etc? It’s incredibly simple! I wrote a guide which explains how to do it in explicit detail.

You can easily change / remove / modify any part of the express-stormpath library by modifying middleware settings — it really is that simple.

Exploring express-stormpath

If you’d like to give express-stormpath a try, here are some code samples which illustrate how to use the library in a bit more depth.

Let’s say you want to write a route which requires a user to be logged in. You can do this easily by using the stormpath.loginRequired middleware:

app.get('/dashboard', stormpath.loginRequired, function(req, res) {
  res.send('If you can see this page, you must be logged into your account!');
});

When a user visits /dashboard, we’ll automatically check to ensure the user is logged in before allowing them to continue. If the user isn’t logged in, they’ll be redirected to /login?next=%2Fdashboard, so once they log into their account, they’ll be immediately sent back to the dashboard page!
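Under the hood, that kind of guard is just ordinary Express middleware. Here's a rough sketch of the redirect-with-next pattern described above — the function names are mine for illustration, not express-stormpath's actual internals:

```javascript
// Build the login URL carrying the originally requested page,
// e.g. '/dashboard' becomes '/login?next=%2Fdashboard'
function buildLoginRedirect(loginUrl, originalUrl) {
  return loginUrl + '?next=' + encodeURIComponent(originalUrl);
}

// Minimal login guard: redirect anonymous users to the login page,
// remembering where they were headed so they can be sent back after login.
function loginRequired(req, res, next) {
  if (!res.locals.user) {
    return res.redirect(buildLoginRedirect('/login', req.originalUrl));
  }
  next();
}
```

The `next` query parameter is what lets the login handler send the user straight back to the page they originally asked for.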

Furthermore, you can also require a user to be a member of one or more groups in order to access a route. For instance, if you’d like to build an admin panel that’s exclusively available to users in the ‘admins’ group, you could do:

app.get('/admin', stormpath.groupsRequired(['admins']), function(req, res) {
  res.send('You are an admin!');
});

To assert that a user is a member of multiple groups, you can simply list multiple groups, ex:

app.get('/admin', stormpath.groupsRequired(['admins', 'developers']), function(req, res) {
  res.send('You are an admin and developer!');
});

You can also assert that a user is a member of one or more groups by passing an optional flag:

app.get('/hmm', stormpath.groupsRequired(['admins', 'developers', 'dudes'], false), function(req, res) {
  res.send('You are either an admin, developer, dude, or some combination of them all!');
});
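The semantics of that optional flag boil down to an "all vs. any" membership check. A hedged sketch of that logic, under my reading of the behavior described above (isAuthorized is an illustrative helper, not the library's real code):

```javascript
// Check a user's groups against the required groups.
// all = true (the default): the user must be in every listed group.
// all = false: membership in any one listed group suffices.
function isAuthorized(userGroups, requiredGroups, all) {
  if (all === undefined) all = true;
  var inGroup = function(g) { return userGroups.indexOf(g) !== -1; };
  return all ? requiredGroups.every(inGroup) : requiredGroups.some(inGroup);
}
```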

In your route code, you can also access the current user object by calling res.locals.user like so:

app.get('/dashboard', stormpath.loginRequired, function(req, res) {
  res.send('Welcome back: ' + res.locals.user.email);
});

If you’re inside of a template, you can access the user object directly, here’s an example in Jade:

    p Hi #{user.email}

Making Authentication Easier

Thanks for reading this far!

If you’d like to get started with express-stormpath, now would be a great time to check out the official documentation!

I’m super excited to launch the very first release of express-stormpath, and plan on adding lots of new features to it in the coming weeks:

If any of you give express-stormpath a try, I’d love to hear from you, please drop me a line: randall@stormpath.com — or tweet us @gostormpath!

CourionWhen the Lines Separating Employees, Contractors and Customers Blur [Technorati links]

July 07, 2014 01:05 PM

Access Risk Management Blog | Courion

Nick Berents

I recently met with a Courion customer, one of the largest accountable care organizations in the US. This customer is based outside of Orlando, Florida, so naturally the topic of Disney came up. Over the past year Disney has figured out a way to use technology to distribute guests more evenly throughout the parks via their "Fastpass+" system. The end result is higher customer satisfaction by reducing wait times and increased revenue because now – you guessed it – vacationers can spend more time in the gift shops and restaurants.

Disney is able to accomplish this by setting up profiles that track your ride preferences in addition to your purchases. Vacationers can go through Disney's website portal, which is personalized based on their preferences, to make ride selections, dining reservations, and plans with others who also have profiles on the portal.

This was a massive investment and IT project for Disney. Naturally, it got me wondering: do they segregate this portal from their corporate networks? Are their employees also customers, and do they co-mingle their profiles? What about contractors they hire? Do they have access to the networks, and are they constantly being monitored? Do they set up profiles on the portal as well? Remember that the Target data breach came about as a result of a third party HVAC vendor’s access being compromised.

I then asked the Courion customer what he looks for in an identity and access intelligence system like Access Insight®. This is when the conversation got serious. He made it clear where Access Insight fits in.

"What if someone has what appears to be a safe access, but they happen to be an expert programmer? Once they're in your system they may start to make some movement that would cause your security people to ask questions like, 'Why has a person who should only have certain access suddenly be asking for access here, here, and here?' Those are the types of movements that really are suspicious and in some of the security breaches we've read about, only after the fact they say, 'Oh wow, if we had seen how somebody started to move along the access chain quickly at two in the morning, we would've been able to call this out.'"

"That's what Access Insight does. It alerts that there is movement that should not be, and we have a team on call 24 x 7 to monitor for alerts like that. It helps us understand if the movement is a natural course of action or a natural workflow. Or is this something that we need to wake some people up right now and stop and then investigate in the morning? Access Insight affords us the opportunity to see that."

He also acknowledged that most companies have very intricate infrastructure systems, and their IT departments are very well-schooled in protecting their environments. They receive penetration attempts every single day and swat them back quickly. But what differentiates Access Insight is that it sees when someone who has been given permission to come in, under the guise of a role that fits the job profile, suddenly starts traversing the network because of an extra skill or access you don't know about. Access Insight keeps monitoring the people with permissions, so any activity outside the normal parameters you would expect to see sends off an alert for your security team to stop, investigate, and take action if necessary.
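The monitoring he describes boils down to comparing each access event against a per-role baseline of expected resources and hours, and alerting on anything outside that baseline. A minimal sketch of that idea in Python (the role names, resources, and hour ranges here are hypothetical illustrations, not Access Insight's actual model or API):

```python
from datetime import datetime

# Hypothetical baseline: the resources each role is expected to touch,
# and the hours during which its access is considered routine.
ROLE_BASELINE = {
    "hvac_contractor": {"resources": {"building_controls"}, "hours": range(7, 19)},
}

def access_alerts(events, baseline=ROLE_BASELINE):
    """Flag events that fall outside a role's expected resources or hours."""
    alerts = []
    for event in events:
        profile = baseline.get(event["role"])
        if profile is None:
            alerts.append((event, "unknown role"))
        elif event["resource"] not in profile["resources"]:
            alerts.append((event, "resource outside role baseline"))
        elif event["time"].hour not in profile["hours"]:
            alerts.append((event, "access outside normal hours"))
    return alerts

events = [
    {"role": "hvac_contractor", "resource": "building_controls",
     "time": datetime(2014, 7, 23, 10, 0)},   # routine daytime access
    {"role": "hvac_contractor", "resource": "pos_network",
     "time": datetime(2014, 7, 23, 2, 0)},    # traversal at two in the morning
]
for event, reason in access_alerts(events):
    print(event["resource"], "->", reason)
```

In a real deployment the baseline would be learned from observed behavior rather than hard-coded, but the principle is the same: the contractor touching the point-of-sale network at 2 a.m. is the event that wakes somebody up.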

This is something all organizations, from our Orlando-based customer to Disney, need to consider as news of insider threats continues to rise. Knowing how sensitive company information is being accessed, at what time, and for what purpose is also key. Having this insight will help ensure that insiders, nefarious or naïve, don't get a data breach fast pass.


Kuppinger ColeExecutive View: EU Guidelines for Cloud Service Level Agreements - 71154 [Technorati links]

July 07, 2014 12:38 PM
In KuppingerCole

In a press release on June 26th, the European Commission announced the publication of new guidelines “to help EU businesses use the Cloud”. These guidelines have been developed by a Cloud Select Industry Group as part of the Commission’s European Cloud Strategy to increase trust in these services. These guidelines cover SLAs (Service Level Agreements) for cloud services. In KuppingerCole’s opinion these guidelines are...

Kuppinger ColeExecutive View: Microsoft ADFS: Active Directory Federation Services - 71126 [Technorati links]

July 07, 2014 11:52 AM
In KuppingerCole

There is a growing demand from organizations for tighter communication and collaboration with external parties and, in some cases, customers. At the same time the rapid growth of cloud services is driving the need for robust and flexible authentication solutions. As the network boundary fades, access management becomes increasingly important for agile organizations and drives the need for more sophisticated solutions. ADFS (Active Directory...

Julian BondToday's neologism is :- [Technorati links]

July 07, 2014 09:32 AM
Today's neologism is :-


Seven good reasons to be an apocaloptimist
Andrew Simms: The climate clock is still ticking – yet there are incredible opportunities to make things better

[from: Google+ Posts]

Anil JohnRelaxing, Recharging and Hiking in Banff National Park, Canada [Technorati links]

July 07, 2014 12:00 AM

I have found it very important to allocate time to rest, relax and recharge in order to deal with the pace and stress of daily life. My family and I find the outdoors to be the place to do just that. We just got back from Banff National Park, in the Canadian Rockies, which we visit often enough that my kids call it their happy place.

Keep close to Nature's heart... and break clear away, once in a while, and climb a mountain or spend a week in the woods. Wash your spirit clean.

John Muir


These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.