March 05, 2015

Gluu: 4 Identity & Access Management Use Case Considerations [Technorati links]

March 05, 2015 05:36 PM


Here at Gluu we hate answering RFIs. It’s a lot of writing for a small audience. So in the interest of salvaging our time, and perhaps even helping other organizations with similar questions, we like to publish our responses to some of the RFIs we receive (without any organization-specific or sensitive information, of course!).

Here are four common identity and access management requirements many organizations face, and how they can be addressed using the Gluu Server as the central access management platform.

Use Case 1: Web SSO towards a proprietary RDBMS based User Repository

Say your organization has all of its web portal user accounts stored in a proprietary user store, and you need to authenticate against that database using a custom-built RDBMS connector in the SSO platform. Using the Gluu Server, there are two common solutions: 1) use the Radiant Logic Virtual Directory Server (VDS) to map data from the RDBMS, or 2) publish an API that enables the Gluu Server to validate the username/password credentials in a Gluu custom authentication interception script and to dynamically enroll the person in the Gluu Server.

The important takeaway here is that an RDBMS always requires some custom integration, because every organization has a different schema. The most productive approach varies based on the complexity of your RDBMS data structures.
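
To make the second option more concrete, here is a minimal sketch of the kind of credential-validation API described above, written in Python with Flask. The endpoint path, table and column names, and hashing scheme are all assumptions for illustration only; the Gluu-side interception script would simply POST the submitted credentials to an endpoint like this and enroll the person if the response reports them as valid.

# Minimal sketch of a credential-validation API the Gluu Server could call
# from a custom authentication interception script. All names (users table,
# columns, endpoint path) are hypothetical.
import hashlib, hmac, sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)

def verify_password(stored_hash, salt, candidate):
    # PBKDF2 comparison; replace with whatever your RDBMS already stores.
    digest = hashlib.pbkdf2_hmac('sha256', candidate.encode(), salt, 100000)
    return hmac.compare_digest(stored_hash, digest)

@app.route('/validate', methods=['POST'])
def validate():
    creds = request.get_json()
    db = sqlite3.connect('users.db')
    row = db.execute(
        'SELECT password_hash, salt, email FROM users WHERE username = ?',
        (creds['username'],)).fetchone()
    if row and verify_password(row[0], row[1], creds['password']):
        # Return the attributes the Gluu Server needs to enroll the person.
        return jsonify(valid=True, email=row[2])
    return jsonify(valid=False), 401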

Use Case 2: Employees need to impersonate customers

If your admin staff need to “impersonate” another user in your domain, there are a few different strategies you can employ.

While it is possible to address this scenario in the Gluu Server, it may be advisable to handle it within your application. For example, the Gluu Server could authenticate the employee who is impersonating the customer, and in your application you would detect that the person has a certain role (e.g., admin) and should have the ability to see and edit a customer’s environment.

If you actually do let a staff member authenticate as another person, it is highly recommended that you use very secure credentials and enforce a multi-step authentication workflow (both supported by the Gluu Server). You would need to use a custom authentication interception script in the Gluu Server to handle validating all the required credentials. For example, you could use the combination: username (of the admin staff), target-username (of the person being impersonated), password (of the admin staff), and YubiKey token (of the admin staff). In this way, if an attacker were to compromise your staff’s credentials, they would still need the second factor (the YubiKey token) to do impersonation. You’d also want some special logs, and to pass along some additional user claims that your application could use to record the fact that the person was being impersonated by an admin.
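
As a rough illustration of that multi-credential check (this is not the actual Gluu interception-script API; the credential store, role table, and OTP check below are hypothetical stand-ins), the core logic might look something like this:

import hmac

# Hypothetical in-memory stand-ins for your identity store and OTP service;
# a real interception script would call out to the actual backends.
ADMIN_CREDENTIALS = {'alice.admin': 'correct horse battery staple'}
ADMIN_ROLES = {'alice.admin': {'admin'}}

def validate_yubikey_otp(username, otp):
    # Placeholder: a real deployment would verify the OTP against a
    # YubiKey validation service.
    return len(otp) == 44

def authenticate_impersonation(admin_username, target_username,
                               admin_password, yubikey_otp):
    stored = ADMIN_CREDENTIALS.get(admin_username, '')
    # Step 1: the admin's own password.
    if not hmac.compare_digest(stored, admin_password):
        return False
    # Step 2: the admin's second factor (YubiKey OTP).
    if not validate_yubikey_otp(admin_username, yubikey_otp):
        return False
    # Only users with the proper role may impersonate anyone.
    if 'admin' not in ADMIN_ROLES.get(admin_username, set()):
        return False
    # Record who is impersonating whom so downstream applications can
    # display and audit it (e.g. via an extra claim in the session).
    print('AUDIT impersonation: %s -> %s' % (admin_username, target_username))
    return True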

Use Case 3: External SSO using AD creds to Office 365, Salesforce, and other third-party applications

If people in your domain–such as employees, partners, or customers–need to be able to leverage their Active Directory credentials to access third-party applications, the Gluu Server has you covered. As long as the target application supports SAML or OpenID Connect, the single sign-on transaction can be configured from within the Gluu Server GUI. If the target application only supports a legacy federation protocol like WS-Federation, we recommend using ADFS as a WS-Federation-to-SAML proxy. Consider each target application on an individual basis and determine the level of integration effort needed. Aligning with standards like SAML and OpenID Connect will save you time, money, and no end of headaches.

Use Case 4: I have an API gateway, do I still need the Gluu Server for central API access management?

API gateways like Layer 7 or Apigee perform some important security functions, like controlling transaction volume, validating parameters, and hardware SSL acceleration. However, these solutions were not designed to provide a central policy decision point, or to support identity federation protocols like OpenID Connect or SAML. Where they do provide such features, they are usually rigid, and limited in their usefulness for managing your partners’ OAuth2 clients and using all the required contextual information to determine whether a partner should be given access to a particular API. Maybe one day these products will offer more features in the area of identity and access management, but right now they are primarily in an adjacent market.

—————

These are some basic use case considerations that many organizations face. More technical information and implementation guides can be found on the Gluu Server docs and our community knowledge base. For a deep dive or to discuss enterprise support, feel free to book a call with us today.

Courion: Join Us at CONVERGE in Vegas! [Technorati links]

March 05, 2015 01:25 PM

Access Risk Management Blog | Courion

CONVERGE, our perennially popular annual customer conference, happens Tuesday, May 19th through Thursday, May 21st at the Cosmopolitan Hotel in Las Vegas. Click here to register and take advantage of a $150.00 discount if you sign up before March 31st.

CONVERGE provides a great opportunity to mix and mingle with your peers and industry thought leaders. We’re bringing together noted authorities in identity governance and administration to share their expertise, and we’ll provide a peek into what’s new at Courion and in the field of security.

Need to earn (ISC)² Continuing Professional Education credits toward your CISSP or other professional certification? On Tuesday, May 19th, we are offering a full day dedicated to technical training and workshops, including a deep dive into the Courion Access Assurance Suite so you can fully exploit this market-leading IGA suite’s capabilities. Tech Tuesday at CONVERGE provides the ideal opportunity to earn those CPE credits, and we’ll be happy to submit the needed paperwork.

Our conference theme, Know the Odds – Win with Risk Aware IAM, is based on the notion that in this age of the Internet of Things, it’s essential to have concrete insight into your IAM infrastructure so you can better protect your company from access risks that may lead to a data breach. Courion’s intelligent IAM provisioning and governance solutions, powered by the award-winning identity analytics solution Access Insight, provide the knowledge you need to see exactly where threats are hiding so you can identify, quantify, and reduce risk.

So come, join us in Vegas and register today!

To learn more, go to http://www.courion.com/CONVERGE.

Venkat Rajaji is Vice President of Product Management & Marketing for Courion.

blog.courion.com

Kuppinger Cole: Facebook profile of the German Federal Government thwarts efforts to improve data protection [Technorati links]

March 05, 2015 09:32 AM
In Martin Kuppinger

There is a certain irony in the fact that the German Federal Government launched a Facebook profile almost simultaneously with the change in the social network’s terms of use. While the Federal Minister of Justice, Heiko Maas, is backing consumer organizations in their warnings about Facebook, the Federal Government has taken the first step in setting up its own Facebook profile.

With the changes in the terms of use, Facebook has massively expanded its ability to analyze the data of users. Data that users leave behind on pages outside of Facebook is also stored, for use in targeted advertising and possibly other purposes. On the other hand, the user now has better options for managing the personal settings for his or her own privacy. The bottom line remains clear: Facebook is collecting even more data in a hard-to-control manner.

As Federal Minister of Justice Maas says, “Users do not know which data is being collected or how it is being used.”

For this reason alone, it is difficult to understand why the Federal Government is taking this step right at this moment. After all, it has been able to do its work so far without Facebook.

With its Facebook profile, the Federal Government is ensuring that Facebook is, for example, indirectly receiving information on the political interests and preferences of the user. Since it is not clear just how this information could be used today or in the future, it is a questionable step.

If one considers the Facebook business model, it can also have a direct negative impact. Facebook’s main source of income is targeted advertising based on the information that the company has collected on its users. With the additional information that will be available via the Federal Government’s Facebook profile, interest groups can, for example, selectively advertise on Facebook in the future to pursue their goals.

Here it is apparent, as with many businesses, that the implications of commercial Facebook profiles are frequently not understood. On the one hand, there is the networking with interested Facebook users. Their value is often overrated – these are not customers, not leads and NOT voters, but at best people with a more or less vague interest. On the other hand, there is information that a company, a government, a party or anyone else with a Facebook profile discloses to Facebook: Who is interested in my products, my political opinions (and which ones) or for my other statements on Facebook?

The Facebook business model is exactly that – to monetize this information – today more than ever before with the new business terms. For a company, this means that the information is also available to the competition. You could also say that Facebook is the best way of informing the competition about a company’s (more or less interested) followers. In marketing, but also in politics, one should understand this correlation and weigh whether the added value is worth the implicit price paid in the form of data that is interesting to competitors.

Facebook may be “in” – but it is in no way worth it for every company, every government, every party or other organization.

End users have to look closely at the new privacy settings and limit them as much as possible if they intend to stay on Facebook. In the meantime, a lot of the communication has moved to other services like WhatsApp, so now is definitely the time to reconsider the added value of Facebook. And sometimes, reducing the amount of communication and information that reaches one is also added value.

The Federal Government should in any case be advised to consider the actual benefits of its Facebook presence. 50,000 followers are not 50,000 voters by any means – the importance of this number is often massively overrated. The Federal Government has to be clear about the contradiction between its claim to strong data protection rules and its actions. To go to Facebook now is not even fashionable any more – it is plainly the wrong step at the wrong time.

According to KuppingerCole, marketing managers in companies should also analyze exactly what price they are paying for the anticipated added value of a Facebook profile – one often pays more while the actual benefits are much less. Or has the number of customers increased accordingly in the last fiscal year because of 100,000 followers? A Facebook profile can definitely have its uses. But you should always check carefully whether there is truly added value.

Kuppinger Cole: BeyondTrust PowerBroker Auditor Suite - 70891 [Technorati links]

March 05, 2015 08:51 AM
In KuppingerCole

BeyondTrust PowerBroker Auditor Suite is a set of auditing tools for Windows environments. Together they provide a unified real-time insight and an audit trail for file system, SQL Server, Exchange and Active Directory access and changes.


more
March 04, 2015

Kaliya Hamlin - Identity Woman: IIW is early!! We are 20!! We have T-Shirts [Technorati links]

March 04, 2015 10:01 PM

Internet Identity Workshop is only a month away. April 7-9, 2015

Regular tickets are only on sale until March 20th. Then prices go up again for late registration.

I’m hoping that we can have a few before we get to IIW #20!!

Yes it’s almost 10 years since we first met (10 years will be in the fall).

I’m working on a presentation about the history of the Identity Gang, Identity Commons and the whole community around IIW over the last 10 years.

Where have we come from? …which leads to the question… where are we going? We plan to host at least one session about these questions during IIW.

It goes along with the potential anthology that I have outlined (but I have a lot more work to do to get it completed).

Kaliya Hamlin - Identity Woman: IIW topics so far [Technorati links]

March 04, 2015 07:32 PM

We keep track of topics folks want to talk about on our Identity Commons wiki.

I figured I would pull the list out from there and share it here… It’s looking good so far.

What topics are you planning to present about or lead a discussion about at this IIW?

What are you hoping to learn about or hear a presentation about at IIW?

What are the critical questions about user-centric identity and data you hope to discuss with peers at IIW?

Vittorio Bertocci - Microsoft: ADAL v3 Preview – March Refresh [Technorati links]

March 04, 2015 04:00 PM


It’s time for a refresh for ADAL .NET v3 preview!
There’s nothing earth-shattering this time around… if you don’t consider adding a brand new platform earth-shattering, of course.

This refresh includes lots of bug fixes and small improvements, as you would expect from a refresh. However, as the feature set is still fluid, we did add a couple of more substantial improvements.

Support for Xamarin Unified API for iOS

In the first preview, ADAL .NET v3 supported iOS development via the classic API Xamarin project type (the one associated with MonoTouch10, to be concrete). Those project types are unable to target x64, which is at odds with the new requirements that Apple started enforcing on its app store in February.
With this refresh, ADAL moves to the new Xamarin Unified API for iOS – which does support 64 bits. If you take a peek at the targets in the NuGet package, shown in the screenshot at the beginning of the post, you’ll notice the corresponding new platform target, Xamarin.iOS10.

Naturally, we (where “we” == Danny) updated the multi-target sample to reflect the new features – you can find it here.

To preempt a question that – I am sure – you would have sent me right after reading this section: this refresh does not work with Xamarin.Forms. We are looking into it – and if you have feedback please send it our way! – but we didn’t do work in this refresh to that end.

ADAL support for .NET Core

My friend Daniel Roth is going to yell at me for not having written this as the first notable news in this refresh.

Last week I guest-posted on the ASP.NET & Web dev tools team blog, mentioning that we have brand new OWIN middleware for OpenID Connect and OAuth2 bearer tokens in ASP.NET 5 and .NET Core. That covers you for doing Web sign-on and token validation in Web API – but it does not help if you want to consume APIs.

I am super happy to tell you that from this refresh on you can experiment with ADAL in your ASP.NET 5 projects and consume APIs from Azure, Office 365, and any other API protected by Azure AD.
Once again, if you unpack the ADAL NuGet you’ll find that we added the new platform target aspnetcore5, which delivers to your Web app a brand new, confidential-client-only library for all your mid-tier needs.

To demonstrate the new functionality we (where “we” == still Danny) put together a new sample, WebApp-WebAPI-OpenIdConnect-AspNet5.
This is the ASP.NET 5.0 counterpart of the classic WebApp-WebAPI-OpenIDConnect-DotNet, which demonstrates how to implement Web sign on AND invoke an AAD-secured API, all via OpenId Connect. It is an incremental refinement of the sample announced last week, WebApp-OpenIdConnect-AspNet5, which only covered Web sign on sans web API call.
There isn’t much else to say about the new sample: the changes are mostly due to the new OWIN pipeline in ASP.NET 5 and the new project templates, so you should not have any surprises. If you do, please let us know!

Final note on this: although we only released this new sample, all the service-side scenarios using ADAL should now work – that means client credentials, on-behalf-of, and the like. Feel free to give it a spin; we are eager to hear from you what works and what doesn’t.

Still an Alpha

At the cost of being pedantic, I have to issue the usual warning here: this is still a preview release. We hope it will be enough to help you experiment with scenarios we believe you’ll find interesting, but as you play with the library you should remember that, for the time being, 1) it is not fit for production and 2) between now and general availability, the programming model can and will change. If you write code against the current model, please be prepared to rev it once a newer release comes along.

That’s it! I know that those two features were highly sought-after. We hope that this refresh will unblock whatever experiments you were conducting on Xamarin iOS and/or ASP.NET 5. Please help us prioritize new work and keep the feedback coming. Happy coding!

Kuppinger Cole: Executive View: Covertix SmartCipher™ - 71267 [Technorati links]

March 04, 2015 07:53 AM
In KuppingerCole

The Covertix SmartCipher™ Product Suite provides an important solution for the protection of unstructured data files on premises, shared with partners, and held in the cloud...


more

Mike Jones - Microsoft: JWK Thumbprint -04 draft incorporating feedback during second WGLC [Technorati links]

March 04, 2015 02:15 AM

The latest JWK Thumbprint draft addresses review comments on the -03 draft by Jim Schaad, which resulted in several clarifications and some corrections to the case of RFC 2119 keywords.

The specification is available at:

An HTML formatted version is also available at:

March 03, 2015

Kuppinger Cole: KuppingerCole Analysts' View on Internet of Things [Technorati links]

March 03, 2015 10:59 PM
In KuppingerCole

For a topic so ubiquitous, so potentially disruptive and so overhyped in the media in the last couple of years, the concept of the Internet of Things (IoT) is surprisingly difficult to describe. Although the term itself first appeared in the media nearly a decade ago, there is still no universally agreed definition of what IoT actually is. This, by the way, is a trait it shares with its older cousin, the Cloud.

On the very basic level, however, it should be possible to define IoT...
more

Kuppinger Cole: 16.04.2015: Make your Enterprise Applications Ready for Customers and Mobile Users [Technorati links]

March 03, 2015 01:52 PM
In KuppingerCole

Rapidly growing demand for exposing and consuming APIs, which enables organizations to create new business models and connect with partners and customers, has tipped the industry towards adopting lightweight RESTful APIs to expose their existing enterprise services and corporate data to external consumers. Unfortunately, many organizations tend to underestimate potential security challenges of opening up their APIs without a proper security strategy and infrastructure in place.
more

Mike Jones - Microsoft: Key Managed JSON Web Signature (KMJWS) specification [Technorati links]

March 03, 2015 10:38 AM

I took a little time today and wrote a short draft specifying a JWS-like object that uses key management for the MAC key used to integrity protect the payload. We had considered doing this in JOSE issue #2 but didn’t do so at the time because of lack of demand. However, I wanted to get this down now to demonstrate that it is easy to do and specify a way to do it, should demand develop in the future – possibly after the JOSE working group has been closed. See http://tools.ietf.org/html/draft-jones-jose-key-managed-json-web-signature-00 or http://self-issued.info/docs/draft-jones-jose-key-managed-json-web-signature-00.html.

This spec reuses key management functionality already present in the JWE spec and MAC functionality already present in the JWS spec. The result is essentially a JWS with an Encrypted Key value added, and a new “mac” Header Parameter value representing the MAC algorithm used. (Like JWE, the key management algorithm is carried in the “alg” Header Parameter value.)

I also wrote this now as possible input into our thinking on options for creating a CBOR JOSE mapping. If there are CBOR use cases needing managed MAC keys, this could help us reason about ways to structure the solution.

Yes, the spec name and abbreviation are far from catchy. Better naming ideas would be great.

Feedback welcomed.

Kuppinger Cole: Executive View: SecureAuth IdP - 70844 [Technorati links]

March 03, 2015 08:51 AM
In KuppingerCole

SecureAuth IdP combines cloud single sign-on capabilities with strong authentication and risk-based access control, while focusing on both internal and external users that want access to both on-premises and cloud services...


more
March 02, 2015

Kuppinger Cole: Howard Mannella's Keynote about Taleb's Black Swan Theory at EIC 2015 [Technorati links]

March 02, 2015 09:43 PM
In European Identity and Cloud Conference

Howard Mannella, a seasoned expert and thought leader for resiliency and big disasters, will talk about how to mitigate against unpredicted, massively game-changing events.

Drummond Reed - Cordance: Brad Feld on How to Deal with Email After a Long Vacation [Technorati links]

March 02, 2015 08:54 PM

My Newsle service spotted this post by Brad Feld about his recommended approach to dealing with missed email: ignore it and re-engage with your email stream afresh upon your return. I completely agree; that was the same conclusion I came to after my summer vacation in 2013.

Brad ends his post by saying:

I’m always looking for other approaches to try on this, so totally game to hear if you have special magic ones.

This resonates with me because my focus right now is on how the XDI semantic data interchange protocol can give us a new form of messaging that we’ve never had before—something that gives us new and better ways of handling messages than either email or texting give us today.

Stay tuned.


Julian Bond: The Future of the Future at Futurefest. As seen from hipster London in Borough Market, Hoxton and Hackney... [Technorati links]

March 02, 2015 06:42 PM
The Future of the Future at Futurefest. As seen from hipster London in Borough Market, Hoxton and Hackney, late Mar 2015. It's also tied in with a series of concerts put on by Convergence.

I've got my tickets for Portico+SnowGhosts. I'd quite like to have seen Tricky+Gazelle Twin but missed the tickets and that's probably 20 years too late.

I maybe should have done the full Futurefest weekend but £80 is a bit rich for me. The speaker list though is curiously hilarious. Edward Snowden, Vivienne Westwood, George Clinton, Maggie Philbin; Together at last!

http://futurefest.org/
http://www.convergence-london.com/

I hope somebody there talks about AD 02100 and the 22nd Century.
 FutureFest »
What might the world be like in decades to come? FutureFest is Nesta's flagship weekend event of immersive experiences, compelling performances and radical speakers to excite and challenge perceptions of the future. Join the conversation #futurefest. Updates. Tickets on sale here.

[from: Google+ Posts]

Katasoft: Where to Store your JWTs - Cookies vs HTML5 Web Storage [Technorati links]

March 02, 2015 03:00 PM

Stormpath has recently worked on token authentication features using JSON Web Tokens (JWT), and we have had many conversations about the security of these tokens and where to store them.

If you are curious about your options, this post is for you. We will cover the basics of JSON Web Tokens (JWT), cookies, HTML5 web storage (localStorage/sessionStorage), and basic information about cross-site scripting (XSS) and cross-site request forgery (CSRF).

Let’s get started…

JSON Web Tokens (JWT): A Crash Course

The most widely implemented solutions for API authentication and authorization are the OAuth 2.0 and JWT specifications, which are fairly dense. Cliff’s Notes time! Here’s what you need to know:

If you encounter a token in the wild, it looks like this:

"dBjftJeZ4CVP.mB92K27uhbUJU1p1r.wW1gFWFOEjXk..."

This is a Base64URL-encoded string. If you break it apart at the dots you’ll actually find three separate sections:

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9
.
eyJpc3MiOiJodHRwOi8vZ2FsYXhpZXMuY29tIiwiZXhwIjoxMzAwODE5MzgwLCJzY29wZXMiOlsiZXhwbG9yZXIiLCJzb2xhci1oYXJ2ZXN0ZXIiXSwic3ViIjoic3RhbmxleUBhbmRyb21lZGEuY29tIn0
.
edK9cpfKKlGtT6BqaVy4bHnk5QUsbnbYCWjBEE7wcuY

The first section is a header that describes the token. The second section is a payload which contains the juicy bits, and the third section is a signature hash that can be used to verify the integrity of the token (if you have the secret key that was used to sign it).

When we magically decode the second section, the payload, we get this nice JSON object:

{
  "iss": "http://galaxies.com",
  "exp": 1300819380,
  "scopes": ["explorer", "solar-harvester", "seller"],
  "sub": "tom@andromeda.com"
}

This is the payload of your token. It lets you know who issued the token (iss), when it expires (exp), which scopes it grants (scopes), and the subject it was issued to (sub).

These declarations are called ‘claims’ because the token creator claims a set of assertions that can be used to ‘know’ things about the subject. Because the token is signed with a secret key, you can verify its signature and implicitly trust what is claimed.
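
To make that concrete, here is how you might split a token apart and check an HS256 signature yourself. This is a minimal sketch using only the Python standard library, with the secret passed in as raw bytes; in practice you would use a vetted JWT library that also validates exp and the other claims.

import base64, hashlib, hmac, json

def b64url_decode(segment):
    # JWTs use unpadded base64url encoding, so restore the padding first.
    return base64.urlsafe_b64decode(segment + '=' * (-len(segment) % 4))

def decode_and_verify(token, secret_key):
    # secret_key is the raw HMAC signing key, as bytes.
    header_b64, payload_b64, signature_b64 = token.split('.')
    header = json.loads(b64url_decode(header_b64))
    if header.get('alg') != 'HS256':
        raise ValueError('unexpected algorithm')
    # Recompute the HMAC-SHA256 signature over "header.payload".
    expected = hmac.new(secret_key,
                        (header_b64 + '.' + payload_b64).encode('ascii'),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(signature_b64)):
        raise ValueError('signature check failed')
    return json.loads(b64url_decode(payload_b64))

If the recomputed signature does not match, the token has been tampered with (or was signed with a different key) and must be rejected.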

Tokens are given to your users after they present some credentials, typically a username and password, but they can also provide API keys, or even tokens from another service. This is important because it is better to pass a token (that can expire, and have limited scope) to your API than a username and password. If the username and password are compromised in a man-in-the-middle attack, it is like giving an attacker keys to the castle.

Stormpath’s API Key Authentication Feature is an example of this. The idea is that you present your hard credentials once, and then get a token to use in place of the hard credentials.

The JSON Web Token (JWT) specification is quickly gaining traction. Recommended highly by Stormpath, it provides structure and security, but with the flexibility to modify it for your application. Here is a longer article on it: Use JWT the Right Way!

Where to Store Your JWTs

So now that you have a good understanding of what a JWT is, the next step is to figure out how to store them. If you are building a web application, you have a couple of options: cookies, or HTML5 Web Storage (localStorage/sessionStorage).

To compare these two, let’s say we have a fictitious AngularJS single-page app (SPA) called galaxies.com with a login route (/token) that authenticates users and returns a JWT. To access the other API endpoints that serve your SPA, the client needs to pass a valid JWT.

The request that the single page app makes would resemble:

HTTP/1.1

POST /token
Host: galaxies.com
Content-Type: application/x-www-form-urlencoded

username=tom@galaxies.com&password=andromedaisheadingstraightforusomg&grant_type=password

Your server’s response will vary based on whether you are using cookies or Web Storage. For comparison, let’s take a look at how you would do both.

Web Storage

Exchanging a username and password for a JWT to store it in browser storage (sessionStorage or localStorage) is rather simple. The response body would contain the JWT as an access token:

HTTP/1.1 200 OK

{
  "access_token": "eyJhbGciOiJIUzI1NiIsI.eyJpc3MiOiJodHRwczotcGxlL.mFrs3Zo8eaSNcxiNfvRh9dqKP4F1cB",
  "expires_in": 3600
}

On the client side, you would store the token in HTML5 Web Storage (assuming that we have a success callback):

function tokenSuccess(err, response) {
    if(err){
        throw err;
    }
    $window.sessionStorage.accessToken = response.body.access_token;
}

To pass the access token back to your protected APIs, you would use the HTTP Authorization Header and the Bearer scheme. The request that your SPA would make would resemble:

HTTP/1.1

GET /stars/pollux
Host: galaxies.com
Authorization: Bearer eyJhbGciOiJIUzI1NiIsI.eyJpc3MiOiJodHRwczotcGxlL.mFrs3Zo8eaSNcxiNfvRh9dqKP4F1cB

Cookies

Exchanging a username and password for a JWT to store it in a cookie is simple as well. The response would use the Set-Cookie HTTP header:

HTTP/1.1 200 OK

Set-Cookie: access_token=eyJhbGciOiJIUzI1NiIsI.eyJpc3MiOiJodHRwczotcGxlL.mFrs3Zo8eaSNcxiNfvRh9dqKP4F1cB; Secure; HttpOnly;

To pass the access token back to your protected APIs on the same domain, the browser would automatically include the cookie value. The request to your protected API would resemble:

GET /stars/pollux
Host: galaxies.com
Cookie: access_token=eyJhbGciOiJIUzI1NiIsI.eyJpc3MiOiJodHRwczotcGxlL.mFrs3Zo8eaSNcxiNfvRh9dqKP4F1cB;

So, What’s the difference?

If you compare these approaches, both receive a JWT down to the browser. Both are stateless because all the information your API needs is in the JWT. Both are simple to pass back up to your protected APIs. The difference is in the medium.

Web Storage

Web Storage (localStorage/sessionStorage) is accessible through JavaScript on the same domain. This means that any JavaScript running on your site will have access to web storage, and because of this can be vulnerable to cross-site scripting (XSS) attacks. XSS in a nutshell is a type of vulnerability where an attacker can inject JavaScript that will run on your page. Basic XSS attacks attempt to inject JavaScript through form inputs, where the attacker puts <script>alert('You are Hacked');</script> into a form to see if it is run by the browser and can be viewed by other users.

To prevent XSS, the common response is to escape and encode all untrusted data. But this is far from the full story. In 2015, modern web apps use JavaScript hosted on CDNs or outside infrastructure. Modern web apps include 3rd party JavaScript libraries for A/B testing, funnel/market analysis, and ads. We use package managers like Bower to import other peoples’ code into our apps.

What if just one of the scripts you use is compromised? Malicious JavaScript can be embedded on the page, and Web Storage is compromised. These types of XSS attacks can harvest the Web Storage of everyone who visits your site, without their knowledge. This is probably why a number of organizations advise not to store anything of value or trust any information in web storage. This includes session identifiers and tokens.

As a storage mechanism, Web Storage does not enforce any secure standards during transfer. Whoever reads Web Storage and uses it must do their due diligence to ensure they always send the JWT over HTTPS and never HTTP.

Cookies

Cookies, when used with the HttpOnly cookie flag, are not accessible through JavaScript, and are immune to XSS. You can also set the Secure cookie flag to guarantee the cookie is only sent over HTTPS. This is one of the main reasons that cookies have been leveraged in the past to store tokens or session data. Modern developers are hesitant to use cookies because they traditionally required state to be stored on the server, thus breaking RESTful best practices. Cookies as a storage mechanism do not require state to be stored on the server if you are storing a JWT in the cookie. This is because the JWT encapsulates everything the server needs to serve the request.
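
For instance, if the /token endpoint were implemented as a Python (Flask) service, setting those flags might look like the sketch below. The view and the JWT-creation helper are placeholders for illustration, not Stormpath's implementation.

from flask import Flask, jsonify, make_response

app = Flask(__name__)

def issue_jwt_for_authenticated_user():
    # Placeholder: a real app would authenticate the user and build a
    # signed JWT here (e.g. with a JWT library).
    return 'eyJ...signed.jwt...'

@app.route('/token', methods=['POST'])
def token():
    jwt = issue_jwt_for_authenticated_user()
    resp = make_response(jsonify(status='ok'))
    # HttpOnly keeps the token away from JavaScript (mitigating XSS);
    # Secure keeps it off plain HTTP connections.
    resp.set_cookie('access_token', jwt, httponly=True, secure=True)
    return resp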

However, cookies are vulnerable to a different type of attack: cross-site request forgery (CSRF). A CSRF attack occurs when a malicious web site, email, or blog causes a user’s web browser to perform an unwanted action on a trusted site on which the user is currently authenticated. This is an exploit of how the browser handles cookies. A cookie can only be sent to the domains for which it is allowed; by default, this is the domain that originally set the cookie. But the cookie will be sent with a request to that domain regardless of whether you are browsing galaxies.com or hahagonnahackyou.com at the time.

CSRF works by attempting to lure you to hahagonnahackyou.com. That site will have either an img tag or JavaScript to emulate a form post to galaxies.com and attempt to hijack your session, if it is still valid, and modify your account.

For example:

<body>

  <!-- CSRF with an img tag -->

  <img src="http://galaxies.com/stars/pollux?transferTo=tom@stealingstars.com" />

  <!-- or with a hidden form post -->

  <script type="text/javascript">
  $(document).ready(function() {
    window.document.forms[0].submit();
  });
  </script>

  <div style="display:none;">
    <form action="http://galaxies.com/stars/pollux" method="POST">
      <input name="transferTo" value="tom@stealingstars.com" />
    </form>
  </div>
</body>

Both would send the cookie for galaxies.com and could potentially cause an unauthorized state change. CSRF can be prevented by using the synchronizer token pattern. This sounds complicated, but all modern web frameworks have support for it.

For example, AngularJS has a solution to validate that the cookie is accessible by only your domain. Straight from AngularJS docs:

When performing XHR requests, the $http service reads a token from a cookie (by default, XSRF-TOKEN) and sets it as an HTTP header (X-XSRF-TOKEN). Since only JavaScript that runs on your domain can read the cookie, your server can be assured that the XHR came from JavaScript running on your domain.

You can make this CSRF protection stateless by including an xsrfToken JWT claim:

{
  "iss": "http://galaxies.com",
  "exp": 1300819380,
  "scopes": ["explorer", "solar-harvester", "seller"],
  "sub": "tom@andromeda.com",
  "xsrfToken": "d9b9714c-7ac0-42e0-8696-2dae95dbc33e"
}

If you are using the Stormpath SDK for AngularJS, you get stateless CSRF protection with no development effort.
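
If you are not using that SDK, the server-side check is still only a few lines. Here is a rough sketch in Python of the stateless comparison, assuming the JWT from the access_token cookie has already had its signature verified and been decoded into a claims dictionary:

import hmac

def is_csrf_token_valid(jwt_claims, request_headers):
    # jwt_claims is the payload of the (already signature-checked) JWT that
    # arrived in the access_token cookie; the XSRF token arrives in a header
    # that only same-origin JavaScript could have set.
    header_xsrf = request_headers.get('X-XSRF-TOKEN', '')
    claim_xsrf = jwt_claims.get('xsrfToken', '')
    # Stateless check: the header value must match the xsrfToken claim baked
    # into the signed JWT, so no server-side session storage is needed.
    return bool(claim_xsrf) and hmac.compare_digest(claim_xsrf, header_xsrf)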

Leveraging your web app framework’s CSRF protection makes cookies rock solid for storing a JWT. CSRF can also be partially prevented by checking the HTTP Referer and Origin header from your API. CSRF attacks will have Referer and Origin headers that are unrelated to your application.

Even though cookies are the more secure place to store your JWT, they can cause some developer headaches, depending on whether your applications require cross-domain access to work. Just be aware that cookies have additional properties (Domain/Path) that can be modified to specify where the cookie is allowed to be sent. Using CORS, your server side can also tell browsers whether credentials (including cookies) should be sent with cross-origin AJAX requests.

Conclusion

JWTs are an awesome authentication mechanism. They give you a structured way to declare users and what they can access. They can be encrypted and signed to prevent tampering on the client side, but the devil is in the details and in where you store them. Stormpath recommends that you store your JWT in cookies for web applications, because of the additional security they provide and the simplicity of protecting against CSRF with modern web frameworks. HTML5 Web Storage is vulnerable to XSS, has a larger attack surface area, and can impact all of an application’s users after a successful attack.

Questions or comments? We would love to hear them! Let me know if you have any questions in the discussion below or at tom@stormpath.com / @omgitstom.

Nat Sakimura: The White House Publishes the Consumer Privacy Bill of Rights Act Draft [Technorati links]

March 02, 2015 05:58 AM

On Saturday the 28th, Japan time, the US White House published the Consumer Privacy Bill of Rights Act draft [1]. This takes the previously announced Consumer Privacy Bill of Rights and turns it into an actual bill.

Its distinguishing feature is that it pushes through with a context-based approach rather than regulating by type of data; I think that is the better way. Structurally, SEC. 4 contains the definitions, SEC. 101-107 make up the Consumer Privacy Bill of Rights itself, SEC. 201-203 cover enforcement, SEC. 301 establishes a safe harbor through enforceable codes of conduct, SEC. 401 states that this act preempts other laws, SEC. 402 states that it does not affect the FTC's authority, and SEC. 403 makes explicit that the act does not create a private right of action.

I hope to talk about these points as well at today's OpenID BizDay #8, time permitting.

Note: I plan to expand this article as I find time, or to write a separate follow-up post.

This weekend I had five ISO/IEC comment deadlines, so I could not find any time at all. Writing anything more will have to wait until after today's BizDay.

See you!

 

[1] http://www.whitehouse.gov/sites/default/files/omb/legislative/letters/cpbr-act-of-2015-discussion-draft.pdf

March 01, 2015

Bill Nelson - Easy Identity: The Next Generation of Identity Management [Technorati links]

March 01, 2015 05:48 PM

The face of identity is changing. Historically, it was the duty of an identity management solution to manage and control an individual’s access to corporate resources. Such solutions worked well as long as the identity was safe behind the corporate firewall – and the resources were owned by the organization.

But in today’s world of social identities (BYOI), mobile devices (BYOD), dynamic alliances (federation), and everything from tractors to refrigerators being connected to the Internet (IoT), companies are finding that legacy identity management solutions are no longer able to keep up with the demand. Rather than working with thousands to hundreds of thousands of identities, today’s solutions are tasked with managing hundreds of thousands to millions of identities and include not only carbon-based life forms (people) but also those that are silicon-based (devices).

In order to meet this demand, today’s identity solutions must shift from the corporation-centric view of a user’s identity to one that is more user-centric. Corporations typically view the identity relationship as one between the user and the organization’s resources. This is essentially a one-to-many relationship and is relatively easy to manage using legacy identity management solutions.

One to Many Relationship

What is becoming evident, however, is the growing need to manage many-to-many relationships, as these same users actually have multiple identities (personas) that must be shared with others who, in turn, have multiple identities themselves.

Many to Many Relationships

The corporation is no longer the authoritative source of a user’s identity; it has been diminished to the role of a persona as users begin to take control of their own identities in other aspects of their lives.

Identity : the state or fact of being the same one as described.

Persona : (in the psychology of C. G. Jung) the mask or façade presented to satisfy the demands of the situation or the environment.

In developing the next generation of identity management solutions, the focus needs to move away from the node (a reference to an entry in a directory server) and more towards the links (or relationships) between the nodes (a reference to social graphs).

Social Graph

In order to achieve this, today’s solutions must take a holistic view of the user’s identity and allow the user to aggregate, manage, and decide with whom to share their identity data.

Benefits to Corporations

While corporations may perceive this as a loss of control, in actuality it is the corporation that stands to benefit the most from a user-centric identity management solution. Large corporations spend hundreds of thousands of dollars each year in an attempt to manage a user’s identity, only to find that much of what they have on file is incorrect. There are indeed many attributes that must be managed by the organization, but many of a user’s attributes extend well beyond a corporation’s reach. In such cases, maintaining accurate data within these attributes is all but impossible without the user’s involvement.

Take for instance a user’s mobile telephone number; in the past, corporations issued, sponsored, and managed these devices. But today’s employees typically purchase their own mobile phones and change carriers (or even phone numbers) on a periodic basis. As such, corporate white pages are filled with inaccurate data; this trend will only increase as users continue to bring more and more of themselves into the workplace.

Legacy identity solutions attempt to address this issue by introducing “end-user self-service” – a series of Web pages that allow a user to maintain their corporate profile. Users are expected to update their profile whenever a change occurs. The problem with this approach is that users selectively update their profiles and in some cases purposely supply incorrect data (in order to avoid after-hours calls). The other problem with this approach is that it still adheres to a corporate-centric, corporate-owned identity mindset. The truth is that users’ identities are not centralized; they are distributed across many different systems both in front of and behind the corporate firewall, and while companies may “own” certain data, it is the information that the user brings from other sources that is elusive to the company.

Identity Relationship Management

A user has relationships that extend well beyond those maintained within a company and as such has core identity data strewn across hundreds, if not thousands of databases. The common component in all of these relationships is the user. It is the user who is in charge of that data and it is the user who elects to share their information within the context of those relationships. The company is just one of those relationships, but it is the one for which legacy identity management solutions have been written.

Note: Relationships are not new, but the number of relationships that a user has and types of relationships they have with other users and other things is rapidly growing.

Today’s identity management solutions must evolve to accept (or at a minimum acknowledge) multiple authoritative sources beyond their own. They must evolve to understand the vast number of relationships that a user has both with other users, but also with the things the user owns (or uses) and they must be able to provide (or deny) services based on those relationships and even the context of those relationships. These are lofty goals for today’s identity management solutions as they require vendors to think in a whole new way, implement a whole new set of controls, and come up with new and inventive interfaces to scale to the order of millions. To borrow a phrase from Ian Glazer, we need to kill our current identity management solutions in order to save them, but such an evolution is necessary for identity to stay relevant in today’s relationship-driven world.

I am not alone in recognizing the need for a change.  Others have come to similar conclusions and this has given rise to the term, Identity Relationship Management (or IRM).  The desire for change is so great in fact that Kantara has sponsored the Identity Relationship Management Working Group of which I am privileged to be a member.  This has given rise to a LinkedIn Group on IRM, a Twitter feed (@irmwg), various conferences either focused on or discussing IRM, and multiple blogs of which this is only one.

LinkedIn IRM Working Group Description:

In today’s internet-connected world, employees, partners, and customers all need anytime access to secure data from millions of laptops, phones, tablets, cars, and any devices with internet connections.

Identity relationship management platforms are built for IoT, scale, and contextual intelligence. No matter the device, the volume, or the circumstance, an IRM platform will adapt to understand who you are and what you can access.

Call to Action

Do you share similar thoughts and/or concerns?  Are you looking to help craft the future of identity management?  If so, then consider being part of the IRM Working Group or simply joining the conversation on LinkedIn or Twitter.

 


February 28, 2015

Nat Sakimura: Seminar: Practical Privacy Protection for Businesses – The Personal Information Protection Act Is No Get-Out-of-Jail-Free Card [Technorati links]

February 28, 2015 09:53 PM

Tomorrow, March 2, at OpenID BizDay #8, we will hold a roundtable on "practical privacy protection considerations for businesses," with Professor Masatomo Suzuki of Niigata University and Hiromitsu Takagi of AIST as our guests. As moderator I will ask a range of questions, and I hope that in doing so we can bring out how businesses should face privacy in their activities. Incidentally, the reason the OpenID Foundation Japan is doing this at all is that OpenID is a framework for obtaining consent plus a framework for providing attributes.

Plans are tentative and often change, but for now I intend to ask the following. Doesn't just reading the list make you excited?!

Oh, and by the way, this is a paid event. Event registration is here.

Q.1 There seem to be many relevant laws: the Personal Information Protection Act (to be amended this year), the Penal Code, the Consumer Contract Act, the law of obligations (also to be amended this year), tort law, and so on. Please explain how they relate to one another. Even if the Personal Information Protection Act says something is fine, there seem to be quite a few cases where another law says it is not, so complying with the Personal Information Protection Act does not appear to be a get-out-of-jail-free card. Please speak to that as well.

For example, although it was apparently dropped from the current amendment, even if changes of purpose were allowed, the Consumer Contract Act prohibits changes that disadvantage the consumer, and the same goes for the law of obligations. If you charge ahead just because the Personal Information Protection Act says it is OK, there seem to be quite a few cases where other laws will catch you. For example:

When thinking about legal compliance, all of these need to be taken into account. We will have our guests explain these relationships as well.

Q.2 I think a company's goal in doing business is, more than anything legal, to raise its brand value and have its products and services valued more highly; yet there seems to be quite a gap between that and the current discussion around the Personal Information Protection Act. Do you have views on why that is?

A company operating internationally has to look not only at domestic law but also at the laws of other countries, which is quite a burden. On top of that, when actually doing business it is not enough just to comply with the law; what matters is winning consumers' trust, in other words establishing your brand. That treats legal compliance as a given and goes beyond it. In fact, international standards describe how to reach that level, but listening to the public debate, that perspective seems to have dropped out entirely. What is the situation here?

Q.3 What exactly is a "specific individual"? I hear from various quarters that this was a point of considerable contention in the current amendment as well, and that it is being used to limit the scope of "personal information" as much as possible. Could you explain this concept in a bit more detail?

At this point we may jump to an explanation of the concept of linkability in ISO/IEC 29100.

Q.4 Does narrowing the scope of "personal information" actually make sense for businesses? If you consider protecting brand value as well, narrowing what you take into account seems, if anything, to increase risk.

If all you care about is compliance with the Personal Information Protection Act, I can somewhat understand this urge to "limit the scope as much as possible," but as noted above that is not enough, and frankly it leaves me quite uneasy. It is the complete opposite of, for example, the ISO/IEC 29100 privacy framework, created by companies and government representatives from nearly 80 countries. There, personally identifiable information (PII) is defined very broadly as "any information that (a) can be used to identify the PII principal to whom such information relates, or (b) is or might be directly or indirectly linked to a PII principal" [1], and it even devotes an entire clause to how to flush out hidden PII. On top of that, it says to assess the privacy impact that arises from how that PII is used and to take measures according to the risk level. For example, sharing business card information within a department for contact purposes is low risk, so moderate measures are enough, whereas health consultation information entrusted to you warrants very strong measures. If you also consider damage to brand value, I think this approach is far more practical.

Q.5 What should disclosure, notification, and consent prior to changes in terms of service look like?

Google, for instance, kept publicizing and notifying users relentlessly for months, while other operators quietly slip changes through. Yet it is mostly the former that gets criticized, which feels oddly unbalanced. How far in advance of a change should an operator start making the details of the change thoroughly known?

Q.6 "Anonymously processed information" is apparently being newly introduced this time... "Anonymous processing" that does not even require opt-out seems likely to end up being something even more restricted than statistical aggregation, and that already seems acceptable under current law... Could you explain this in detail?

Something feels off to me about the background of this idea and where it was debated. It seems to start from the so-called three FTC requirements, but I get the sense they have been badly misunderstood. Those requirements were never about being allowed to provide information to arbitrary parties; behind them lies Section 5 of the FTC Act, and the point of accepting the three requirements is to make Section 5 invocable against both the data provider and the recipient. As for the first requirement, "de-identification," the FTC goes to great lengths beforehand to say that there is no safe de-identification that rules out re-identification, so it does not matter that much whether it is technically well done; the significance lies in making the company declare that it has done it and, under the second and third requirements, declare that it takes responsibility for not re-identifying the data itself and for not letting recipients do so, which is what makes Section 5 of the FTC Act enforceable. Talking about this in Japan, which has no Section 5 of the FTC Act, is another matter. If we were to go down this road, we would presumably need to amend the Antimonopoly Act so that the Fair Trade Commission could intervene.

Q.7 On cross-border data transfers: there is talk that a globally operating company bringing data on EU-resident employees to Japan and doing performance evaluations in Japan could be problematic. Is that so? What should be done to do this safely?

Well, one answer is to move the data to the EU and do the performance evaluations in the EU too. There is an EU subsidiary anyway, and you could even just make that the headquarters, so from the company's perspective it arguably does not matter much.

Q.8 Among the amendment items to the Personal Information Protection Act there is apparently one that "adds a record-keeping obligation on both the provider and the recipient when data is provided to third parties"... How far do we have to go?

Thinking about actual practice: say attributes are shared via OpenID Connect / OAuth. The IdP side should be recording where attributes were provided. The RP side, at least nominally, should be recording this too. After that, however, in many cases the data gets dumped into a database with no record of how it arrived, and on top of that the RP also receives new information directly from the person along the way. At that point you can no longer tell where data came from or for what purpose, so systems like this will likely need considerable rework. It is a textbook example of how skipping privacy by design ends up costing dearly later; the ISO/IEC 29101 privacy architecture framework likewise says to design for this properly from the very first stage...

[1] SOURCE: ISO/IEC 29100. 2.9 PII = any information that (a) can be used to identify the PII principal to whom such information relates, or (b) is or might be directly or indirectly linked to a PII principal

February 27, 2015

Katasoft: Exploring Microservices Architectures with Stormpath [Technorati links]

February 27, 2015 04:30 PM

One of the biggest development trends over the last few years is a move towards architectures based on API services. While it’s increasingly common for new applications to start fresh with a services-friendly framework and functionality offloaded to API services, it’s much less common to rebuild traditional applications in a service-oriented architecture. It’s harder, there’s much more risk, and there’s legacy mayhem to contend with. We applaud people who do it, and particularly the audacious people who do it for the love of Java development.

Tony Wolski has been exploring the movement from monolithic Java web applications to microservices-based architectures and documenting his odyssey and lessons learned on his (fascinating) blog.

One system he’s working on was built in a traditional structure: JSF on the frontend and Hibernate and MySQL on the backend, running on Tomcat, and deployed to Jelastic. Bit by bit, Wolski has been re-architecting an application, following Rich Manalang’s Four Principles Of Modern Web Development. Central to his efforts has been the extrapolation of services into API’s, following the third principle of “Create and use your own REST API.”

Transitioning to API Services

Replacing functionality with REST-JSON-based APIs gives him a lot more deployment flexibility and allows for more agile development. “I can make changes to the microservice without having to package up and deploy the whole thing again,” Wolski explained. His aspiration is to deploy all the backend services separately, with executable JARs that are able to connect with any UI.

Initially, Wolski had built a custom user management system but it was the first functionality he decoupled to a standalone API, using Stormpath. “With the old user management I would actually have to write the reset passwords link. With Stormpath I just hook into the API and it’s really straightforward. It takes that all away.”

“I was pretty impressed with the way that Stormpath took all of that custom-built stuff out of my hands,” he said, adding that it “takes away a bit of the application that really has nothing to do with the application. It lets me focus on the actual business logic.”

Instead of building his own API service for user management, Wolski saved several weeks of time in actual development, as well as time in research and design of user features. “I use it because it saves me time to not think about it, design something and wonder whether I’m actually doing it right or if there’s a better way to get it out there.”

DropWizard, Dart, and Apache Shiro

With user infrastructure out of the way, Wolski turned his attention to the first of his planned microservices; a client management REST API selected for its limited scope. Critical to that effort was DropWizard.

DropWizard is a Java framework built specifically for developing RESTful web services, and it comes with tools many web services need: configuration, application metrics, logging, and operational tools. The redesign also gave him a chance to dig into Groovy and experiment with new technologies, like Dart and Angular.js.

To add a security layer, he implemented the Apache Shiro plugin for Stormpath. Apache Shiro is a fast growing Java security framework that plugs directly into Stormpath with an official integration. You can see how Wolski configured Shiro’s Authentication filter to work with Dart/Chromium on his blog.

The new client management API proved to be so effective at increasing productivity and maintainability that he plans to follow the same model to redesign his entire monolithic app.

Wolski had such a positive experience with his initial integration to Dropwizard, he spun out his work into a standalone sample. Today, that DropWizard API secured with Shiro+Stormpath is available on GitHub, so be sure to check out his GitHub repo.

The Future

Eventually, Wolski hopes to expand the project into a multi-tenant SaaS. To do so, he’ll turn his attention to redesigning the core quoting and conveyance functionality of his application as a REST service. Stormpath will facilitate this with its support for multitenant user models.

Stay on the lookout for the full product once it is launched!

Nishant Kaushik - Oracle: 2FA in Password Managers: Fair or Faux [Technorati links]

February 27, 2015 01:57 PM

It all started with a tweet I sent regarding the position on passwords and password managers that a member of Microsoft Research was taking in an NPR article (I’ll expand on my viewpoint in a later blog post). But one of the resulting responses I received sent me down a very interesting rabbit hole.


Faux 2FA? Of course I was intrigued. This led to an extensive and interesting Twitter discussion with Paul Moore that I have captured here on Storify. I’m not going to recap the entire debate here. But in the course of the debate, Paul Madsen did link to a post Paul Moore wrote explaining his theory of why password managers that support 2FA are not really 2FA. In it, he isn’t questioning the 2nd factor (like the use of Google Authenticator or a YubiKey) but the 1st. It is a worthwhile read, but evaluating it requires first understanding the mechanics of password managers.

Password-based Authentication Without the Password?

Most good cloud-based password managers require the user to set a Master Password, but do not store it in the user’s account or in their vault within the cloud service. It is used in the client (usually the browser add-on, or on the vault web page itself within the browser using JavaScript) to encrypt an application password before sending it to the cloud service for storage in the vault. Retrieval pulls the encrypted application password down to the client, where it is decrypted locally.

Since the Master Password is never sent to the cloud service, the question therefore arises: How does the client authenticate the user when they want to retrieve a password? For the purpose of this exercise, let’s assume the user hasn’t enabled 2FA on their password manager (tsk, tsk). One of the password managers I like, LastPass, addresses this in this support article. Put simply:

At account creation time: the client derives a “generated password” from the Master Password, and the server stores only a hash of that generated password.

Subsequently, during normal day-to-day usage: the client re-derives the generated password from the Master Password the user types in and sends it to the server, which hashes it and compares the result against the stored value.

This looks like typical username/password 1 Factor Authentication, except that what is sent to the server for it to authenticate the user isn’t the actual password but a generated password, which is fine since the server has stored the hash of the generated password. And it is that distinction between the transmission of a generated password versus the actual password that is at the heart of Paul’s contention.
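
To make the mechanics concrete, here is a rough sketch of that kind of derivation in Python. It illustrates the pattern only; the hash function, salts, and iteration counts are assumptions, not LastPass's exact parameters.

import hashlib

def derive_vault_key(master_password, username, iterations=100000):
    # The key that encrypts/decrypts vault entries; it never leaves the client.
    return hashlib.pbkdf2_hmac('sha256', master_password.encode(),
                               username.encode(), iterations)

def derive_login_hash(master_password, username):
    # One extra round over the vault key produces the "generated password"
    # that is sent to the server. The server stores only a hash of this
    # value, so neither the Master Password nor the vault key is ever
    # transmitted or stored server-side.
    vault_key = derive_vault_key(master_password, username)
    return hashlib.pbkdf2_hmac('sha256', vault_key,
                               master_password.encode(), 1)

The server only ever sees the output of derive_login_hash(), so it can authenticate the user without ever learning the Master Password or the vault encryption key.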

Much Ado About Something?

As he pointed out in the twitter debate, for this authentication flow to meet the definition of a 1FA flow per NIST 800-63, the transmitted password that the server uses to authenticate the user must be something the user knows. And his contention is that the user does not know the generated password being received by the verifier, so it doesn’t meet the requirement. Here are some reasons (in no particular order) why I contend it does.

1) The “generated password” can only be generated when the user is there to provide the Master Password to generate it. Arguments that someone other than the user can authenticate using the “generated password” without knowing the password is a red herring, because it assumes that someone other than the user can capture the “generated password” (by exploiting the plugin, or capturing the transmission) and replay it. But that is the exact same vulnerability the password itself is subject to. My assertion is that if you exclude all attack vectors that allow a 3rd party to capture a password itself, you are left with no attack vectors on the “generated password” either, giving them the same characteristics.

2) NIST 800-63 Appendix A Section A.3 (Other Types of Passwords) talks of “composite passwords” that combine randomly chosen elements with user chosen elements to ensure min-entropy. Clearly the user wouldn’t know this “composite password”, but my interpretation is that the document considers this acceptable as a password factor. By that logic, generated passwords that are based on the user chosen Master Password should be acceptable as a password factor too.

3) A user’s fingerprint is a commonly used example of a something you are factor in NIST 800-63 (even if it does not permit the use of biometrics as a token). However, in common biometric authentication flows, the actual fingerprint image isn’t transmitted and stored. Rather, a biometric template derived from the fingerprint is. So a transform based on the raw data is considered to meet the requirement to be a factor. The “generated password” is no different in that way from a biometric template.

Wasn’t This About 2FA?

Indeed. And if you accept that the “generated password” model is a valid authentication model on its own, thereby constituting a 1FA, then adding in a something you have factor like Google Authenticator should therefore be elevating it to 2FA. If you don’t accept this contention, then I guess it’s faux.

There is one wrinkle though. Paul pointed out that for an authentication model to be 2FA, both factors cannot belong to the same something you * classification. And one could read NIST 800-63 Section 6.1.2 to claim that because the browser add-on is needed to create the “generated password” for transmission, the authentication flow technically becomes a something you have flow, with the Password Manager being a Single-factor (SF) Cryptographic Device. However, I don’t think that flies since a user can use a completely different browser or device that they have never encountered before, provide the Username and Master Password, and authenticate successfully.

Interestingly enough, I’ve recently been introduced to the Unbreachable Passwords capability of CA Advanced Authentication. I wonder how the arguments here would bear on that, since the authentication server doesn’t receive the user’s memorized password there either.

So what do you think? Do I have a case? I would love to hear from folks (in the comments or on twitter) who have been heavily involved in the standards and authentication space, especially if you’ve made the effort to participate or dig into the NIST standard.


[Update 03/03/2015]: I’ve captured the follow up discussion on Twitter on Storify here.

The post 2FA in Password Managers: Fair or Faux appeared first on Talking Identity | Nishant Kaushik's Look at the World of Identity Management.

Radovan Semančík - nLight: Hacking OpenAM, Level: Nightmare [Technorati links]

February 27, 2015 09:58 AM

I have been dealing with OpenAM and its predecessors for a very long time. I remember Sun Directory Server Access Management Edition (DSAME) in the early 2000s. After many years and (at least) three rebrandings, the product was finally released as OpenSSO. That's where Oracle struck and killed the product. ForgeRock picked it up. And that's where the story starts to get interesting. But we will get to that later.

I worked with DSAME/SunAM/OpenSSO/OpenAM on and off during all the time it existed. A year ago one of our best partners called and asked for help with OpenAM. They needed to do some customizations. OpenAM is no longer my focus, but you cannot refuse a good partner, can you? So I agreed. The start was easy: just some custom authentication modules. But then it got a bit complicated. We figured out that the only way forward was to modify the OpenAM source code. So we did that. Several times.

That was perhaps the first time in all that long history that I needed to have a close look at the OpenAM source code. And I must honestly say that what I saw scared me:

Using some software archaeology techniques, I estimate that the core of the current OpenAM originated between 1998 and 2002 (it has collections, but no logging API and no generics). The better part of the code is stuck in that time as well. So now we have this huge pile of badly structured, poorly documented and obsolete code that was designed back when people still worried about Y2K. Would you deploy that into your environment?

I guess that most of these problems were caused by the original Sun team. For example, JAX-RPC was already deprecated when Sun released OpenSSO, but it was never replaced. The Java logging API had been available for many years, but the code was never migrated to it. Anyway, that is what one would expect from a closed-source software company such as Sun. But when ForgeRock took over, I expected that they would do more than just take the product, re-brand it and keep it barely alive on life support. ForgeRock should have invested in a substantial refactoring of OpenAM, but it obviously hasn't. ForgeRock has been the maintainer of OpenAM for five years. That is a lot of time to do what had to be done, but the product is technologically still stuck in the early 2000s.

I also guess that the support fees for OpenAM are likely to be very high. Maintaining 2M lines of obsolete code is not an easy task; it looks like it takes approximately 40 engineers to do it (plus other support staff). ForgeRock also has a mandatory code review process for every code modification. I experienced that process first-hand when we were cooperating on OpenICF. This process heavily impacts efficiency, and that was one of the reasons why we separated from the OpenICF project. All of this is likely to be reflected in support pricing. My other guess is that the maintenance effort is very likely to increase. I think that all the chances to efficiently re-engineer the OpenAM core are gone now. Therefore I believe that OpenAM is a development dead end.

I quite liked OpenSSO and its predecessors in the early 2000s. At that time the product was slightly better than the competition. The problem is that OpenAM is mostly the same as it was ten years ago. The world has moved on, but OpenAM hasn't. I have been recommending DSAME, Sun Identity Server, Sun Java System Access Manager, OpenSSO and also OpenAM to our customers, but I will not do it any more. And looking back, I have to publicly apologize to all the customers to whom I have ever recommended OpenAM.

Everything in this post is just my personal opinion, based on more than a decade of experience with DSAME/SunAM/OpenSSO/OpenAM. But these are still just opinions, not facts. Your mileage may vary. You do not need to believe me. OpenAM is open source. Go and check it out yourself.

(Reposted from https://www.evolveum.com/hacking-openam-level-nightmare/)

Ludovic Poitou - ForgeRock: Why I love my job! [Technorati links]

February 27, 2015 09:19 AM

At ForgeRock, I have multiple reasons to enjoy what I do. I am responsible for two products: OpenDJ, the LDAP directory service, and OpenIG, the Identity Gateway, and I also manage the French subsidiary. But what really gets me excited in the morning is that I get to work with very smart and passionate people!

Jean-Noël, one of the engineers on the OpenDJ development team, has a passion for beautiful code and loves refactoring and cleaning up existing code. In his personal time, he started to automate his process in Eclipse, then turned it into an Eclipse plugin, and finally made the code available as an open source project: AutoRefactor. Now, in the office, most of the engineers using Eclipse are also using the AutoRefactor plugin.

So when Jean-Noël got to present his work at our local Java User Group (the AlpesJUG), the rest of the team went along and supported him. As one of the other engineers has a passion for photography (which I share), we got this amazing picture gallery and set of souvenirs for everyone:

AutoRefactor Session at the AlpesJUG (Feb 24, 2015)

Photos by Bruno Lavit – Click to go to the picture gallery

PS: It also helps that we are working in a great environment where we can afford to do this (from time to time) during our lunch break!



Filed under: General Tagged: AlpesJUG, Eclipse, ForgeRock, france, grenoble, java, jug, opensource, worklife
February 26, 2015

Mike Jones - Microsoft: JWK Thumbprint -03 draft incorporating additional feedback [Technorati links]

February 26, 2015 05:13 PM

A new JWK Thumbprint draft has been posted that addresses additional review comments by James Manger and Jim Schaad. Changes included adding a discussion on the relationship of JWK Thumbprints to digests of X.509 values. No normative changes resulted.

The specification is available at:

An HTML formatted version is also available at:

Kuppinger Cole: Brian Puhl, Principal Technology Architect for Microsoft's internal IT, will give a Best Practice Talk at EIC 2015 [Technorati links]

February 26, 2015 01:41 PM
In European Identity and Cloud Conference

Brian: "Migrating to the cloud can be hard; mergers and acquisitions of companies is hard; M&A of hybrid cloud enabled companies is very hard."

Kuppinger Cole: 23.04.2015: Lean, Intelligent IAM processes for the ABC - Agile Business, Connected [Technorati links]

February 26, 2015 09:23 AM
In KuppingerCole

The constantly accelerating pace of change in today's businesses and their requirements influence all types of organizations, their business and operational processes and the underlying IT. Keeping up to speed with agile, innovative businesses and their requirements increases the demand for intelligent IAM processes.
February 25, 2015

Julian Bond: Google Play Music has upped its limit to uploading 50k tracks instead of 20k. This may take some time... [Technorati links]

February 25, 2015 07:57 PM
Google Play Music has upped its limit to uploading 50k tracks instead of 20k. This may take some time!

It's still annoyingly hard to upload .pls or .m3u playlist files.

And I still find it hard to understand the benefits of uploading your music to the cloud just so you can download it again and play it through a web music player that's a lot less effective than a local player.

http://www.engadget.com/2015/02/25/google-play-music-50000/
 Google Play Music now lets you store 50,000 songs in the cloud »
Even if you're not paying for All Access or YouTube Music Key, Google Play can be a useful way to stream your personal music collection.

[from: Google+ Posts]

Kuppinger Cole: Gemalto feels secure after attack – the rest of the world does not [Technorati links]

February 25, 2015 04:43 PM
In Martin Kuppinger

In today's press conference regarding last week's publications on a possible compromise of Gemalto SIM cards through the theft of keys, the company confirmed security incidents during the time frame mentioned in the original report. It's difficult to say, however, whether their other security products have been affected, since significant parts of the attack, especially in the really sensitive part of their network, did not leave any substantial traces. Gemalto therefore concludes that there were no such attacks.

According to the information published last week, back in 2010 a joint team of NSA and GCHQ agents carried out a large-scale attack on Gemalto and its partners. During the attack, they obtained secret keys that are integrated into SIM cards at the hardware level. Having the keys, it's possible to decrypt mobile phone calls as well as create copies of these SIM cards and impersonate their users on mobile provider networks. Since Gemalto, according to their own statements, produces 2 billion cards each year, and since many other companies have been affected as well, we are facing the possibility that intelligence agencies are now capable of global mobile communication surveillance using simple and nonintrusive methods.

It's entirely possible that Gemalto is correct with their statement that there is no evidence for such a theft. Too much time has passed since the attack, and a significant part of the logs from the affected network components and servers, which are needed for the analysis of such a complex attack, have probably already been deleted. Still, this attack, just like the theft of so-called "seeds" from RSA in 2011, makes it clear that manufacturers of security technologies have to monitor and upgrade their own security continuously in order to minimize the risks. Attack scenarios are becoming more sophisticated – and companies like Gemalto have to respond.

Gemalto itself recognizes that more has to be done for security and incident analysis: "Digital security is not static. Today's state-of-the-art technologies lose their effectiveness over time as new research and increasing processing power make innovative attacks possible. All reputable security products must be re-designed and upgraded on a regular basis." In other words, one can expect that the attacks were at least partially successful – not necessarily against Gemalto itself, but against their customers and other SIM card manufacturers. There is no reason to believe that new technologies are secure. According to the company's spokesperson, Gemalto is constantly facing attacks, and the outer layers of their protection have been repeatedly breached. Even if Gemalto does maintain a very high standard of security, the constant risks of new attack vectors and stronger attackers should not be underestimated.

Unfortunately, no concrete details were given during the press conference about which changes to their security practices are already in place and which are planned, other than a statement regarding continuous improvement of these practices. However, until the very concept of a "universal key", in this case the encryption key on a SIM card, is fundamentally reconsidered, such keys will remain attractive targets both for state and state-sponsored attackers and for organized crime.

Gemalto considers the risk for the secure part of their infrastructure low. Sensitive information is apparently kept in isolated networks, and no traces of unauthorized access to these networks have been found. However, the fact that there were no traces of attacks does not mean that there were no attacks.

Gemalto has also repeatedly pointed out that the attack only affected 2G network SIMs. There is, however, no reason to believe that 3G and 4G networks are necessarily safer, especially not against massive attacks by intelligence agencies. Another alarming sign is that, according to Gemalto, certain mobile service providers are still using insecure transfer methods. Sure, they are talking about "rare exceptions", but it nevertheless means that unsecured channels still exist.

The incident at Gemalto has once again demonstrated that the uncontrolled actions of intelligence agencies in the area of cyber security pose a threat not only to fundamental constitutional principles such as privacy of correspondence and telecommunications, but to the economy as well. The image of companies like Gemalto, and thus their business success and enterprise value, is at risk from such actions.

Even more problematic is that the knowledge of other attackers grows with each newly published attack vector. Stuxnet and Flame have long been well analyzed. It can be assumed that the intelligence agencies of North Korea, Iran and China, as well as criminal groups, studied them long ago. The act can be compared to the leaking of atomic bomb designs, with a notable difference: you do not need plutonium, just a reasonably competent software developer, to build your own bomb. Critical infrastructures are thus becoming more vulnerable.

In this context, one should also consider the idea of German state and intelligence agencies to procure zero-day exploits in order to carry out investigations of suspicious persons' computers. Zero-day attacks are called that because code to exploit a newly discovered vulnerability is available before the vendor even becomes aware of the problem, leaving them literally zero days to fix it. In reality, this means that attackers are able to exploit a vulnerability long before anyone else discovers it. Now, if government agencies are keeping the knowledge about such vulnerabilities to create their own malware, they are putting the public and businesses in great danger, because one can safely assume that they won't be the only ones having that knowledge. After all, why would sellers of such information make their sale only once?

With all due respect for the need for states and their intelligence agencies to respond to the threat of cyber-crime, it is necessary to consider two potential problems stemming from this approach. On one hand, it requires a defined state control over this monitoring, especially in light of the government’s new capability of nationwide mobile network monitoring in addition to already available Internet monitoring. On the other hand, government agencies finally need to understand the consequences of their actions: by compromising the security of IT systems or mobile communications, they are opening a Pandora’s Box and causing damage of unprecedented scale.

Kuppinger Cole: Gemalto still feels secure – the rest of the world does not [Technorati links]

February 25, 2015 12:29 PM
In Martin Kuppinger

In a press conference today on last week's publications about a possible compromise of Gemalto SIM cards through the theft of keys, Gemalto acknowledged that there have been incidents – but whether other products were really unaffected cannot be said, because significant parts of the attack, especially in the really sensitive parts of the network, could not be traced. Gemalto concludes from this that there were no such attacks.

According to the information that became public last week, the NSA and GCHQ carried out a large-scale attack on Gemalto and its partners in 2010. In the process, they captured secret keys that are integrated into SIM cards at the hardware level. With these keys, copies of the SIM cards can potentially be created, which would allow the intelligence agencies to tap into the calls of any mobile phone that uses such a SIM card. Since Gemalto, by its own account, produces around 2 billion of these SIM cards per year, and since quite a few other companies were affected as well, this is about the possibility that intelligence agencies can eavesdrop on mobile communication across the board, in a simple and untraceable way.

There is much to suggest that Gemalto is right with its basic statement that there is no evidence of such a theft. It happened too long ago, and a considerable part of the log data from the affected network components and servers, which would be needed to analyze such a complex attack, has presumably long since been deleted. However, this attack, just like the theft of so-called "seeds" from RSA in 2011, makes it clear that manufacturers of security technologies must continuously review and improve their own security in order to reduce the risks. Attack scenarios are becoming ever more sophisticated – which is why companies like Gemalto have to react as well.

Gemalto itself even pointed out that there are always new security risks: "Digital security is not static. Today's state-of-the-art technologies lose their effectiveness over time; new research and increasing processing power make innovative attacks possible. All reputable security products must be redesigned and upgraded on a regular basis." Put simply: there obviously were attacks, and there is good reason to believe that they were at least partially successful – not necessarily against Gemalto itself, but against Gemalto's customers. There is therefore no reason to assume that new technologies are secure. Moreover, a Gemalto spokesperson himself pointed out that the company is under constant attack and that at least the outer layers of protection have repeatedly been breached. Even if Gemalto maintains a very high standard of security, the risks posed by ever new forms of attack and ever more capable attackers must not be underestimated.

Unfortunately, no concrete statements were made at the press conference about whether and to what extent changes to the security measures have already been made or are planned, beyond a reference to the continuous improvement of these measures. More fundamentally, concepts that involve such "master keys" – in this case for encrypting information on SIM cards – need to be reconsidered, because these master keys are, of course, an attractive target both for state and state-sponsored attackers and for organized crime.

Gemalto rates the risks for the secure part of its infrastructure as low. The really sensitive information is said to reside in isolated networks, and no traceable access to these sensitive areas has been found. However, the fact that attacks cannot be traced does not mean that they did not take place. According to Gemalto, there are also no risks for newer mobile networks: only 2G networks were affected by this particular incident, and the problems arose primarily at mobile network operators. Still, that does not mean that 3G and 4G networks are really secure.

Also worrying is that, according to Gemalto, insecure transfer methods are still in use with some mobile providers. Gemalto spoke of "rare exceptions" – which, conversely, means that such channels still exist.

The incident at Gemalto shows once more that the uncontrolled actions of intelligence agencies in the area of cyber security endanger not only fundamental constitutional principles such as the privacy of correspondence and telecommunications, and the relationship between nominally friendly states – after all, a French company was attacked here, allegedly on behalf of and with the support of American and British intelligence agencies – but also the economy. The image of companies like Gemalto, and with it their business success and enterprise value, is put at risk by such actions. Gemalto itself rightly notes here that the conduct of the intelligence agencies is neither acceptable nor comprehensible.

Far more problematic, however, is another aspect: with every new attack pattern that becomes known – and sooner or later most of it becomes known – the knowledge of other attackers grows as well. Stuxnet and Flame have long been thoroughly analyzed. One can assume that the intelligence agencies of North Korea, Iran or China have long since learned from them, as has organized crime. In its quality, this conduct is comparable to publishing the construction plans of atomic bombs, with the difference that you do not need plutonium, only reasonably capable software developers, to build the bomb. Critical infrastructure is thus becoming ever more vulnerable.

In this context, one should also assess the idea of German government agencies and intelligence services of acquiring code for zero-day attacks in order to carry out investigations on the computer systems of suspects. Zero-day attacks are attacks where, once a vulnerability in an operating system, a browser or other software becomes known, zero days remain to react, because the exploit code is already available. In practice, zero-day attacks are really those in which vulnerabilities have long been exploited before anyone other than the attackers discovers them. If government agencies now use the knowledge of such vulnerabilities to develop their own malware for attacks, they expose the general public and the economy to a massive risk, because one must of course assume that others will discover these vulnerabilities as well. And apart from that: why would the seller of such information sell or use it only once?

With all due understanding for the need of states and their intelligence agencies to react to the threat of cyber-crime and state-sponsored cyber-attacks, a rethink is required in two respects. On the one hand, there must be defined state oversight of this surveillance, especially when states have the capability of blanket monitoring, as they now do for mobile phones and have long had for the Internet. On the other hand, the government agencies involved must finally understand the consequences of their actions: whoever compromises the security of IT systems or of mobile communication opens Pandora's box.

Drummond Reed - Cordance: T.Rob on the Samsung AdHub Privacy Policy – Have We Reached a Privacy Waterloo? [Technorati links]

February 25, 2015 03:56 AM

One of my favorite bloggers in the Internet identity/security/privacy/personal data space, T.Rob Wyatt, just posted an exposé of what the Samsung privacy policy really means when it comes to using Samsung devices and their integrated AdHub advertising network.

I can tell you right now: I’ll never buy a Samsung smart-ANYTHING until that policy is changed. Full stop.

If every prospective Samsung customer does the same thing—and tells Samsung this right out loud, like I’m doing right now—then we’d finally see some of these policies changing.

Because it would finally hit them in the pocketbook.


February 24, 2015

Phil Hunt - Oracle: A 'Robust' Schema Approach for SCIM [Technorati links]

February 24, 2015 05:55 PM
This article was originally posted on the Oracle Fusion Blog, Feb 24, 2015. Last week, I had a question about SCIM's (System for Cross-domain Identity Management) approach to schema. How does the working group recommend handling message validation? Doesn't SCIM have a formal schema? To be able to answer that question, I realized that the question was about a different style of schema than SCIM

IS4U: FIM2010: Filter objects on export [Technorati links]

February 24, 2015 03:51 PM

Intro

FIM allows you to filter objects on import through filters in the connector configuration. The same functionality is not available on export. There are two methods available to provision a selected set of objects to a target system through synchronization rules. This article briefly describes these two mechanisms and also describes a third approach using provisioning code.

Synchronization Rules

Synchronization rules allow codeless provisioning. They also give you control over the population of objects you want to create in a given target system.

Triplet

The first way of doing this is by defining a set of objects, a synchronization rule, a workflow that adds the synchronization rule to an object, and a Management Policy Rule (MPR) that binds them together. In the set definition you can define filters, so you can select a limited population of objects by configuring the correct filter on the set.

Scoping filter

The second method defines the filter directly on the synchronization rule, so you do not need a set, workflow and MPR. You simply define the conditions the target population needs to satisfy before objects can be provisioned to the target system.

Coded provisioning

Coded provisioning allows for very complex scenarios, and it is also the only option on projects where you use only the Synchronization Engine. What follows is only a portion of a more complex provisioning strategy:

Sample configuration file

<Configuration>
  <MaConfiguration Name="AD MA">
    <Export-Filters>
      <Filter Name="DepartmentFilter" IsActive="true">
        <Condition Attribute="Department" Operation="Equals" IsActive="true">Sales</Condition>
      </Filter>
    </Export-Filters>
  </MaConfiguration>
</Configuration>

Sample source code

The following code is not functional on its own, but it gives you an idea of what the complete implementation can look like:
private bool checkFilter(MVEntry mventry, Filter filter)
{
  foreach (FilterCondition condition in filter.Conditions)
  {
    // Return false if one of the conditions is not true.
    if (!checkCondition(mventry, condition))
    {
      return false;
    }
  }
  return true;
}

 

private bool checkCondition(MVEntry mventry, FilterCondition condition)
{
  string attributeValue = condition.Attribute;
  if (mventry[attributeValue].IsPresent)
  {
    if (mventry[attributeValue].IsMultivalued)
    {
      foreach (Value value in mventry[attributeValue].Values)
      {
        bool? result = 
          condition.Operation.Evaluate(value.ToString());
        if (result.HasValue)
        {
          return result.Value;
        }
      }
      return condition.Operation.DefaultValue;
    }
    else
    {
      bool? result = condition.Operation.Evaluate(mventry[attributeValue].Value.ToString());
      if (result.HasValue)
      {
        return result.Value;
      }
      return condition.Operation.DefaultValue;
    }
  }
  return condition.Operation.DefaultValue;
}

Kuppinger Cole: Operational Technology: Safety vs. Security – or Safety and Security? [Technorati links]

February 24, 2015 02:45 PM
In Martin Kuppinger

In recent years, the area of “Operational Technology” – the technology used in manufacturing, in Industrial Control Systems (ICS), SCADA devices, etc. – has gained the attention of Information Security people. This is a logical consequence of the digital transformation of businesses as well as concepts like the connected (or even hyper-connected) enterprise or “Industry 4.0”, which describes a connected and dynamic production environment. “Industry 4.0” environments must be able to react to customer requirements and other changes by better connecting them. More connectivity is also seen between industrial networks and the Internet of Things (IoT). Just think about smart meters that control local power production that is fed into large power networks.

However, when Information Security people start talking about OT Security there might be a gap in common understanding. Different terms and different requirements might collide. While traditional Information Security focuses on security, integrity, confidentiality, and availability, OT has a primary focus on aspects such as safety and reliability.

Let’s just pick two terms: safety and security. Safety is not equal to security. Safety in OT is considered in the sense of keeping people from harm, while security in IT is understood as keeping information from harm. Interestingly, if you look up the definitions in the Merriam-Webster dictionary, they are more or less identical. Safety there is defined as “freedom from harm or danger: the state of being safe”, while security is defined as “the state of being protected or safe from harm”. However, in the full definition, the difference becomes clear. While safety is defined as “the condition of being safe from undergoing or causing hurt, injury, or loss”, security is defined as “measures taken to guard against espionage or sabotage, crime, attack, or escape”.

It is a good idea to work on a common understanding of terms first, when people from OT security and IT security start talking. For decades, they were pursuing their separate goals in environments with different requirements and very little common ground. However, the more these two areas become intertwined, the more conflicts occur between them – which can be best illustrated when comparing their views on safety and security.

In OT, there is a tendency to avoid quick patches, software updates etc., because they might result in safety or reliability issues. In IT, staying at the current release level is mandatory for security. However, patches occasionally cause availability issues – which stands in stark contrast to the core OT requirements. In this regard, many people from both sides consider this a fundamental divide between OT and IT: the “Safety vs. Security” dichotomy.

However, with more and more connectivity (even more in the IoT than in OT), the choice between safety and security is no longer that simple. A poorly planned change (even as simple as an antivirus update) can introduce enough risk of disruption of an industrial network that OT experts will refuse even to discuss it: “people may die because of this change”. However, in the long term, not making necessary changes may lead to an increased risk of a deliberate disruption by a hacker. A well-known example of such a disruption was the Stuxnet attack in Iran back in 2007. Another much more recent event occurred last year in Germany, where hackers used malware to get access to a control system of a steel mill, which they then disrupted to such a degree that it could not be shut down and caused massive physical damage (but, thankfully, no injuries or death of people).

When looking in detail at many of the current scenarios for connected enterprises and – in consequence – connected OT or even IoT, this conflict between safety and security isn’t an exception; every enterprise is doomed to face it sooner or later. There is no simple answer to this problem, but clearly, we have to find solutions and IT and OT experts must collaborate much more closely than they are (reluctantly) nowadays.

One possible option is limiting access to connected technology, for instance, by defining it as a one-way road, which enables information flow from the industrial network, but establishes an “air gap” for incoming changes. Thus, the security risk of external attacks is mitigated.

However, this doesn’t appear to be a long-term solution. There is increasing demand for more connectivity, and we will see OT becoming more and more interconnected with IT. Over time, we will have to find a common approach that serves both security and safety needs or, in other words, both OT security and IT security.

Ludovic Poitou - ForgeRock: About auditing LDAP operations… [Technorati links]

February 24, 2015 07:58 AM

Many years ago, when I started working on LDAP directory services, we needed to have some auditing of the operations occurring on the server. So the server had an "Access" log, which contained a message when an operation was received and one when it was returned to the client, the latter including the processing time on the server side (the etime parameter). On Netscape and Sun directory servers, the etime was measured in seconds. This format allowed us to detect requests that were taking a long time, or were started but not finished.

In OpenDJ, we switched the etime resolution to milliseconds, but there’s an option to set it to nano-seconds. Yet, with millisecond resolution, there are still a number of log entries with an etime value of 0. The truth is that the server is faster, but so are the machines and processors.

At a rate of 50,000 operations per second (which can easily be sustained on my laptop), having two messages per operation does generate a lot of data to write to disk. That's why we introduced a new audit log format, not well advertised I must say, in OpenDJ 2.6.0. To enable the new format, use the following dsconfig command:

dsconfig set-log-publisher-prop -h localhost -p 4444 -X -n \
 -D "cn=directory manager" -w password \
 --publisher-name File-Based\ Access\ Logger  --set log-format:combined

And now, instead of two lines per operation, there is a single one.

Before:

[23/Feb/2015:08:56:31 +0100] SEARCH REQ conn=0 op=4 msgID=5 base="cn=File-Based Access Logger,cn=Loggers,cn=config" scope=baseObject filter="(objectClass=*)" attrs="1.1"
[23/Feb/2015:08:56:31 +0100] SEARCH RES conn=0 op=4 msgID=5 result=0 nentries=1 etime=0
[23/Feb/2015:08:56:31 +0100] SEARCH REQ conn=0 op=5 msgID=6 base="cn=File-Based Access Logger,cn=Loggers,cn=config" scope=baseObject filter="(objectClass=*)" attrs="objectclass"
[23/Feb/2015:08:56:31 +0100] SEARCH RES conn=0 op=5 msgID=6 result=0 nentries=1 etime=0

After, in combined mode:

[23/Feb/2015:13:00:28 +0100] SEARCH conn=48 op=8215 msgID=8216 base="dc=example,dc=com" scope=wholeSubtree filter="(uid=user.1)" attrs="ALL" result=0 nentries=1 etime=0
[23/Feb/2015:13:00:28 +0100] SEARCH conn=60 op=10096 msgID=10097 base="dc=example,dc=com" scope=wholeSubtree filter="(uid=user.6)" attrs="ALL" result=0 nentries=1 etime=0

The benefits of enabling the combined log format are multiple. Less data is written to disk for each operation and fewer I/O operations are involved, resulting in overall better throughput for the server. And it allows you to keep more history of operations within the same volume of log files.
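As a small, hypothetical illustration of how the one-line format simplifies tooling (this script is not from the original post), a few lines of Node.js are enough to flag slow operations by reading the etime field from combined-format lines like the ones shown above:

// slow-ops.js - print combined-format access log lines whose etime exceeds a threshold (in ms).
const fs = require('fs');
const readline = require('readline');

const logFile = process.argv[2];
const thresholdMs = Number(process.argv[3] || 100);

const rl = readline.createInterface({ input: fs.createReadStream(logFile) });
rl.on('line', (line) => {
  const match = line.match(/\betime=(\d+)/);   // server-side processing time from the log line
  if (match && Number(match[1]) >= thresholdMs) {
    console.log(line);                         // candidate slow operation
  }
});

Run it as, for example, node slow-ops.js access 100 (adjusting the path to your access log) to list operations that took 100 ms or more.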

Do you think that OpenDJ 3.0 access log files should use the combined format by default?


Filed under: Directory Services Tagged: auditing, directory, directory-server, ForgeRock, ldap, logs, Tips
February 23, 2015

Matt Pollicove - CTI: Doing a LINUX Based Victory Lap [Technorati links]

February 23, 2015 04:04 PM

OK, I admit it: when I solve a particularly confounding problem, I like to get up, proclaim "Victory Lap!" and walk around in a small circle (sometimes two for bigger problems). I think it's important to celebrate development successes, particularly when they help to really move progress ahead. I'd like to share with you folks my latest cause for a Victory Lap.

My current project requires me to install Virtual Directory Server for HCM and GRC along with an IDM Dispatcher on a Red Hat LINUX server. Now, I've dabbled with LINUX before and even ran it as my personal operating system once upon a time, but I think it's safe to say that I was going to learn a bit during this process, and learn I did!

There are two important things to remember about LINUX that trip up someone coming from a DOS/Windows background:
IDM Default Path
IDM Default Install Path

Remember, even though those two paths have the same basic content, they are different files, since the case is different in each path. Personally I recommend using the VDS install path. I don't know that I have any terribly well-defined reason, except for the fact that the VDS configuration has more moving parts in relation to the Operating System, so it seems like it makes more sense not to rock that boat more than absolutely necessary.

For all of that it was an interesting find and one that I’m happy to share, but it was not quite worthy of taking the Victory Lap. The issues that followed on the other hand...

Along with VDS, I also installed a dispatcher for IDM. Pretty standard stuff: generate the dispatcher from the MMC Admin console, move the files over to the LINUX box via SCP, check permissions, and then edit them to reflect the new environment. I made the required changes to the shell script and tested the configuration. This brings me to another interesting difference between a Windows implementation and a LINUX implementation:

In a Windows configuration the dispatcher is tested by using the command: 


Dispatcher_Service_[Dispatcher Name].bat test checkconfig 

 however in LINUX all you need to enter is:

Dispatcher_Service_[Dispatcher Name].sh checkconfig 

Again, not quite Victory Lap worthy, and regardless, the check came out just fine and the dispatcher fired off without a problem. However, that's when the problems began. While I was able to observe through the MMC console that IDM saw the dispatcher running on the LINUX box, it would not pick up any jobs that I tried to execute through it. Each and every one would eventually time out, leaving me wondering why.

I examined all of the settings in the Dispatcher’s shell script and didn’t really find anything wrong; double checked the permissions and those seemed to be fine as well. Eventually I went to look at the Dispatcher’s PROP file. It seemed OK, but what did I know? After much consultation among my fellow IDM experts around the world, one thing stuck out as a piece of advice: 

"Since prop file is on UNIX – EVERYTHING has to point to UNIX.”

So I took another look at the files, and this entry in the PROP file caught my eye:
DSECLASSPATH=%DSE_HOME%/Java/DSE.jar:%DSE_HOME%/Java:%DSE_HOME%/Java/sapjco.jar:C:\\oracle\\product\\10.2.0\\client_1\\jdbc\\lib\\ojdbc14.jar
Hey, what's with that %DSE_HOME% reference? That's DOS syntax, not LINUX! I did a quick search and replace, swapping %DSE_HOME% for /usr/sap/IdM/identitycenter, and everything started working!
So, what were the lessons here?

Kuppinger Cole: Cryptography for the People [Technorati links]

February 23, 2015 11:19 AM
In European Identity and Cloud Conference

EIC 2015 Keynote: Dr. Jan Camenisch from IBM's Research Center near Zurich will talk about the need for encryption in the connected world and how to give the user control over his/her keys.

Kuppinger Cole: Executive View: RSA Archer GRC - 70888 [Technorati links]

February 23, 2015 09:35 AM
In KuppingerCole

RSA Archer by RSA, The Security Division of EMC, is a full-featured GRC framework providing an enterprise-wide, systemic approach to implementing Governance, Compliance and Risk Management. With its platform approach it can be continuously adapted as GRC strategies mature towards risk-oriented business processes...



Kuppinger Cole: Executive View: TITUS Classification Suite - 70951 [Technorati links]

February 23, 2015 09:15 AM
In KuppingerCole

Sharing information securely is becoming increasingly important within companies, be it to protect intellectual property, meet regulatory requirements for privacy or simply to avoid embarrassing leaks of proprietary information. While it is easy to stop access to documents and files, it is much harder to manage the sharing of such information. Shared Information Security is a topic within Cyber Security that deals with providing intelligent access control to protected resources; and it is of...

Kuppinger Cole: Advisory Note: ABAC done right - 71134 [Technorati links]

February 23, 2015 09:06 AM
In KuppingerCole

There is an increasing number of documents purporting to advise on how to migrate to an Attribute-Based Access Control environment. The real requirement is for Adaptive Policy-based Access Management. Here are some tips...


February 21, 2015

Julian Bond: Not your typical track day bike [Technorati links]

February 21, 2015 04:54 PM

Julian Bond: This is all very well and I like the style. But when will I be able to buy the outfit from M&S for £... [Technorati links]

February 21, 2015 09:14 AM
February 20, 2015

Mythics: How to Guide: Installing, Patching and Managing Oracle Solaris [Technorati links]

February 20, 2015 03:17 PM

How to Install and Use Oracle Solaris 11.2 for x86 from an ISO in VirtualBox

Published on the "OTN Garage", the official blog of…

Kuppinger Cole: The Great SIM Heist and Other News from NSA [Technorati links]

February 20, 2015 01:57 PM
In Alexei Balaganski

Even almost two years after Edward Snowden made off with a cache of secret NSA documents, the gradual ongoing publication of these materials, complemented by independent research from information security experts, has provided a unique insight into the extent of global surveillance programs run by the US intelligence agencies and their partners from various European countries. Carefully timed, they've provided an exciting and at the same time deeply disturbing read for both IT experts and the general public.

Recently, it looked as if the trickle of news regarding our friends at the NSA had almost dried up, but apparently this was just the calm before the storm. First, just a few days ago Kaspersky Lab published their extensive report on the "Equation group", a seemingly omnipotent international group of hackers active for over a decade and known to utilize extremely sophisticated hacking tools, including the ability to infect hard drive firmware. Technical details of these tools reveal many similarities with Stuxnet and Flame, both now known to have been developed in collaboration with the NSA. It was later confirmed by a former NSA employee that the agency indeed possesses and widely utilizes this technology for collecting intelligence.

And even before the IT security community was able to regain its collective breath, The Intercept, the publication run by Edward Snowden’s closest collaborators, has unveiled an even bigger surprise. Apparently, back in 2010, American and British intelligence agencies were able to carry out a massive scale breach of mobile phone encryption in a joint operation targeting telecommunication companies and SIM card manufacturers.

If we are to believe the report, they managed to penetrate the network of Gemalto, the world's largest SIM card manufacturer, shipping over 2 billion SIM cards yearly. Apparently, they not only resorted to hacking, but also ran a global surveillance operation on Gemalto employees and partners. In the end, they managed to obtain copies of the secret keys embedded into SIM cards that enable mobile phone identification in providers' networks, as well as encryption of phone calls. Having these keys, the NSA and GCHQ are, in theory, able to easily intercept and decrypt any call made from a mobile phone, as well as impersonate any mobile device with a copy of its SIM card. As opposed to previously known surveillance methods (like setting up a fake cell tower), this method is completely passive and undetectable. By exploiting deficiencies of GSM encryption protocols, they are also able to decrypt any previously recorded call, even from years ago.

Since Gemalto doesn’t just produce SIM cards, but various other kinds of security chips, there is a substantial chance that these could have been compromised as well. Both Gemalto and its competitors, as well as other companies working in the industry, are now fervently conducting internal investigations to determine the extent of the breach. It’s worth noting that according to Gemalto’s officials, they hadn’t noticed any indications of the breach back then.

A side note: just another proof that even security professionals need better security tools to stay ahead of the intruders.

Now, what lesson should security experts, as well as ordinary people, learn from this? First and foremost, everyone should understand that in the ongoing fight against information security threats, everyone is basically on their own. Western governments, which supposedly should be protecting their citizens against international crime, are revealed to be conducting the same activities on a larger and more sophisticated scale (after all, intelligence agencies possess much bigger budgets and legal protection). Until now, all attempts to limit the intelligence agencies' powers have been largely unsuccessful. The governments even go as far as to lie outright about the extent of their surveillance operations to protect them.

Another, more practical consideration is that the only solutions we can still more or less count on are complete end-to-end encryption systems where the whole information chain is controlled by users themselves, including secure management of encryption keys. Before practical quantum computers become available, breaking a reasonably strong encryption key is still much more difficult than stealing it. For any other communication channel, you should significantly reconsider your risk policies.

Kuppinger Cole: UMA and Life Management Platforms [Technorati links]

February 20, 2015 10:24 AM
In Martin Kuppinger

Back in 2012, KuppingerCole introduced the concept of Life Management Platforms. This concept aligns well with the VRM (Vendor Relationship Management) efforts of ProjectVRM; however, it goes beyond them by not focusing solely on customer-to-vendor relationships. Some other terms occasionally found include Personal Clouds (not a very concrete term, with a number of different meanings) or Personal Data Stores (which commonly lack the advanced features we expect to see in Life Management Platforms).

One of the challenges in implementing Life Management Platforms until now has been the lack of standards for controlling access to personal information and of standard frameworks for enforcing concepts such as minimal disclosure. Both aspects are now addressed.

On one hand, we see technologies such as Microsoft U-Prove and IBM Idemix being ready for practical use, which has recently been demonstrated in an EU-funded project. On the other hand, UMA, a standard that allows managing authorization for centrally stored information, is close to final. It moves control into the hands of the "data owner", instead of the service provider.

UMA is, especially in combination with U-Prove and/or Idemix, an enabler for creating Life Management Platforms based on standard and COTS technology. Based on UMA, users can control what happens with their content. They can make decisions on whether and how to share information with others. On the other hand, U-Prove and Idemix allow enforcing minimal disclosure, based on the concepts of what we called “informed pull” and “controlled push”.

Hopefully we will see a growing number of offerings and improvements to existing platforms that make use of the new opportunities UMA and the other technologies provide. As we have written in our research, there is a multitude of promising business models that respect privacy – not only business models that destroy privacy. Maybe the release of UMA is the catalyst for successful Life Management Platform offerings.

Kuppinger Cole: Advisory Note: Studie zu digitalen Risiken und Sicherheitsbewusstsein - 71252 [Technorati links]

February 20, 2015 09:54 AM
In KuppingerCole

In a worldwide online survey, KuppingerCole asked experts from the field of information security about their current perception of digital risks and security. The study points to a significantly increased perception of both threats, i.e. potential attacks, and risks.



Vittorio Bertocci - Microsoft: Identity Libraries: Status as of 02/20/2015 [Technorati links]

February 20, 2015 08:59 AM

FYI: with the release of ADAL JS 1.0, I took the opportunity to update the perma-page with the megadiagram of all our identity dev libraries. The old diagram lives here in case you're curious.

Besides flipping the "released" switch for ADAL JS, I also rearranged the layout of the entire diagram to address some feedback I got on the old one. Now it should be clearer that the libraries span a continuum of platforms, without artificial grouping. I hope this will help you find your way through our increasingly rich collection!


Nat Sakimura: SIM keys were carried off in bulk by the NSA and GCHQ without anyone noticing [Technorati links]

February 20, 2015 02:22 AM

According to a report by The Intercept on February 19, 2015 (local time) [1], the keys (Ki) stored on SIM cards shipped by Gemalto were stolen in bulk by the NSA and GCHQ. As a result, eavesdropping on the communications of mobile phones that use these SIMs had become trivial. This information was reportedly contained in the files that Snowden took with him.

The SIM card inside a mobile phone has an Authentication Key (Ki) burned into it by a "personalization company" such as Gemalto. This Authentication Key is used to authenticate the SIM on the network and to generate encryption keys. Once generated, the key is recorded on the SIM card in such a way that it cannot be extracted. However, to authenticate the SIM card on the network, the mobile carrier also has to hold the same key, so it is sent to the carrier as well. The problem was how it was sent. SIM cards are personalized in bulk and delivered to the carriers, and at that point the written Authentication Keys were apparently sent in large batches via FTP or email – using only weak encryption, and in some cases in plaintext.

GCHQ and the NSA apparently broke into Gemalto's internal network via the email accounts of Gemalto employees, figured out who was doing this critical work, intercepted those people's communications, and extracted the bulk key transfer files from them.

I am not well versed in this area, and I had no idea things were being done this way. I had simply assumed that a key pair was generated inside the chip and only the public key was sent to the carrier. To think it was done with a shared symmetric key…

Yesterday I was at a Real World Crypto debrief session, where someone on the floor commented that "cracking anything other than the cryptographic primitives is so much easier than cracking the primitives themselves that it hardly matters whether a primitive is weak" – and this is exactly that observation playing out in practice.

Lessons

…I suppose.

[1] https://firstlook.org/theintercept/2015/02/19/great-sim-heist/

Mike Jones - Microsoft: JWK Thumbprint -02 draft incorporating WGLC feedback [Technorati links]

February 20, 2015 12:40 AM

Nat Sakimura and I have updated the JSON Web Key (JWK) Thumbprint draft to incorporate feedback received during JOSE working group last call. Changes were:

The specification is available at:

An HTML formatted version is also available at:

February 19, 2015

Kuppinger Cole: Windows 10 will support FIDO standards for strong authentication [Technorati links]

February 19, 2015 11:26 AM
In Alexei Balaganski

At KuppingerCole, we have been following the progress of the FIDO alliance for quite some time. Since their specifications for scalable and interoperable strong authentication were published last year, FIDO has already had several successful deployments in collaboration with such industry giants as Samsung, Google and Alibaba. However, probably their biggest breakthrough was announced just a few days ago by none other than Microsoft. According to that announcement, Microsoft's upcoming Windows 10 will include support for FIDO standards to enable strong and password-free authentication for a number of consumer and enterprise applications.

We knew, of course, that Microsoft has been working on implementing a new approach to identity protection and access control in their next operating system. Moving away from passwords towards stronger and more secure forms of authentication has been declared one of their top priorities for Windows 10. Of course, solutions like smartcards and OTP tokens have existed for decades; however, in the modern heterogeneous and interconnected world, relying on traditional enterprise PKI infrastructures or limiting ourselves to a single vendor's solution is obviously impractical. Therefore, a new kind of identity is needed, one which works equally well for traditional enterprises and in consumer and web scenarios.

Now, unless you’ve been entirely avoiding all news from Microsoft in the recent years, you should have probably already guessed their next move. Embracing an open standard to allow third party manufacturers to develop compatible biometric devices and providing a common framework for hardware and software developers to build additional security into their products instead of building another “walled garden” isn’t just a good business decision, it’s the only sensible strategy.

Microsoft has joined FIDO alliance as a board member back in December 2013. Since then, they have been actively contributing to the development of FIDO specifications. Apparently, a significant part of their designs will be included in the FIDO 2.0 specification, which will then be incorporated into the Windows 10 release. Unfortunately, it’s a bit too early to talk about specific details of that contribution, since FIDO 2.0 specifications are not yet public.

However, it is already possible to get a peek at some of the new functionality in action. The current Windows 10 Technical Preview already provides several integration scenarios for Windows Sign-in, Azure Active Directory and a handful of major SaaS services like Microsoft's own Office 365 and partners like Salesforce, Citrix and Box. Using Azure Active Directory, it's already possible to achieve end-to-end strong two-factor authentication completely without passwords. The Windows 10 release will add support for on-premise Active Directory integration as well as integration with consumer cloud services.

And, of course, since this authentication framework will be built upon an open standard, third party developers will be able to quickly integrate it with their products and services, security device manufacturers will be able to bring a wide array of various (and interoperable) strong authentication solutions to the market and enterprise users will finally be able to forget the words “vendor lock-in”. If this isn’t a win-win situation, I don’t know what is.

Vittorio Bertocci - Microsoft: Introducing ADAL JS v1 [Technorati links]

February 19, 2015 09:50 AM

Less than 4 months ago I wrote at length about the first preview of ADAL JS, a new library meant to help you take advantage of Azure AD to secure your SPA apps and consume Web APIs from JavaScript.

Since then we've been iterating on the library surface and architecture, ingesting your feedback and your numerous direct contributions. The library has been stable for a few weeks now, hence we decided to make ADAL JS v1 generally available.

I am very, very excited about this release. If I had to summarize why, I can identify 3 main reasons:

For a quick overview of what we are releasing today, please head to the announcement post on the team blog. If you want to dig deeper, stay with me… and I’ll show you how deep the rabbit hole goes.

What Is ADAL JS

ADAL JS is a JavaScript library which offers you a super easy programming model for

You can find a high level intro in this video.
If you are using AngularJS we provide very convenient primitives in line with Angular’s model. However, you can use ADAL JS with any SPA stack – or no stack at all, as long as the architecture is SPA.

Using ADAL JS is very simple. It boils down to the following steps:

Adding a Reference to ADAL JS

Your app will need a reference to the ADAL JS library files. As is customary for JS libraries, we support any style you like:

>You can get the bits from our CDN. If your app uses Angular.JS, you’ll need both of the files below:

https://secure.aadcdn.microsoftonline-p.com/lib/1.0.0/js/adal.min.js

https://secure.aadcdn.microsoftonline-p.com/lib/1.0.0/js/adal-angular.min.js

(non minified version here and here)

If you are not using angular, you just need the first file.

>You can use Bower.

Simply type

$ bower install adal-angular

>You can grab the source directly.

The adal.js source is here. The adal-angular.js source is here.

Initializing ADAL JS

Azure AD needs to know about your application before allowing users to request tokens for it. That means that you first need to register your app in the Azure AD directory containing the users you want to authenticate.
In the process, Azure AD will generate a unique identifier for your app. That identifier must be used at authentication time, so that AAD can tell that the request comes from your application.

ADAL JS offers a very simple config structure to write down that identifier, plus the specific directory you want to work with. Once you have those values in there, ADAL will use them during authentication without any extra work on your part. It looks like this:

adalProvider.init(
  {
    tenant: 'contoso.onmicrosoft.com',
    clientId: 'e9a5a8b6-8af7-4719-9821-0deef255f68e'
  },
  $httpProvider
);

More details on that later; this is just to give you an idea of the amount of initialization code required.
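For context, here is a minimal sketch of where that call typically sits in the Angular wiring; the module name is an illustrative placeholder, while 'AdalAngular' and 'adalAuthenticationServiceProvider' follow the adal-angular conventions:

// Minimal Angular wiring around adalProvider.init (the module name 'myApp' is illustrative).
var app = angular.module('myApp', ['ngRoute', 'AdalAngular']);

app.config(['$httpProvider', 'adalAuthenticationServiceProvider',
  function ($httpProvider, adalProvider) {
    adalProvider.init(
      {
        tenant: 'contoso.onmicrosoft.com',                 // the directory to authenticate against
        clientId: 'e9a5a8b6-8af7-4719-9821-0deef255f68e'   // the app id from the Azure AD registration
      },
      $httpProvider  // lets ADAL attach tokens to outgoing $http calls automatically
    );
  }]);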

Deciding When/How To Trigger Sign In/Out

Once you have initialized ADAL JS, all that’s left is deciding which parts of your application require authentication.

If you are using Angular, this is as easy as adding a requireADLogin: true property to the routes you want to protect, and the right thing will happen automatically.

If you want to trigger sign in and sign out explicitly from your own functions, you can simply wire up calls to login() or logOut() – methods provided by ADAL’s default authentication service.
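To make that concrete, here is a minimal Angular sketch that ties the pieces together. It assumes the module and service names used by the v1 Angular sample ('AdalAngular', 'adalAuthenticationServiceProvider', 'adalAuthenticationService'); the route, template and controller names are placeholders, so adapt them to your own app.

// Register the ADAL Angular module alongside ngRoute.
var app = angular.module('todoApp', ['ngRoute', 'AdalAngular']);

app.config(['$routeProvider', '$httpProvider', 'adalAuthenticationServiceProvider',
  function ($routeProvider, $httpProvider, adalProvider) {
    // Any route carrying requireADLogin: true triggers sign-in automatically.
    $routeProvider.when('/todoList', {
      controller: 'todoListCtrl',
      templateUrl: '/App/Views/TodoList.html',
      requireADLogin: true
    });

    // Same init call shown earlier; passing $httpProvider registers the interceptor
    // that attaches tokens to outgoing Web API calls.
    adalProvider.init(
      { tenant: 'contoso.onmicrosoft.com', clientId: 'e9a5a8b6-8af7-4719-9821-0deef255f68e' },
      $httpProvider
    );
  }]);

// Explicit sign in / sign out wired to the default authentication service.
app.controller('homeCtrl', ['$scope', 'adalAuthenticationService',
  function ($scope, adalService) {
    $scope.login = function () { adalService.login(); };
    $scope.logout = function () { adalService.logOut(); };
  }]);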

Those are the absolute basics. Let’s dig a bit deeper.

How ADAL JS Works

Even if you are already familiar with SPA applications, it is worth spending a moment to get on the same page about what we think of when we use the term “SPA” in the context of ADAL JS.

With “SPA” we mean a very specific application architecture: one in which the application frontend is implemented via JavaScript, and all communications with the application backend take place via Web API calls. The backend can be implemented with any dev stack and run on any platform you like, as long as it is a Web API. Here is how a canonical SPA looks:

[Diagram: a canonical SPA, with a JavaScript frontend calling a Web API backend via REST]

The user navigates to one initial location, from where he/she downloads one starting page and a collection of JavaScript files containing all of the application’s interaction logic.
As the user interacts with the application frontend after that initial download, whenever the app needs to run logic (or acquire data) on the server, the JS scripts perform REST calls to the Web API backend, sending and retrieving the required data. The same JS logic takes care of using the new data to drive the experience: displaying the new data in data-bound HTML controls, giving the illusion of navigation by programmatically rearranging the current view, and so on.
The key point is that after that initial (single) page load, all the traffic is performed through REST calls instead of full page postbacks.
That is what we mean by “SPA”. ADAL JS is optimized to make it extra easy to secure an app built with that architecture.

ADAL JS is yet another script that sits in the frontend portion of your SPA. ADAL JS is made of two distinct layers, hosted in two different files: adal.js, the core library implementing the token acquisition logic, and adal-angular.js, the AngularJS wrapper that hooks the core into routes and $http.

The general ADAL flow is very easy when you code against it and witness it in action, so easy that it might leave you with the impression that some “magic” (in the bad sense of the word) is going on. Let’s dispel that notion by describing what happens in some detail.

Requesting a Token

Once you feed ADAL JS your app coordinates (the identifier of your app and the AAD directory you want to work with), it waits for one of the triggers that will signal the need to initiate a user authentication flow.

Here there is some under-the-hood action. Once the user performs one of the actions that trigger authentication (hitting a route decorated with requireADLogin: true, or invoking a function that calls login()), ADAL JS builds a sign-in request, redirects the browser to the Azure AD authorization endpoint, and, once the user has authenticated and been redirected back, extracts the resulting id_token from the URL fragment and caches it locally.

Accessing the Backend

At this point, the user might think that the sign-in has concluded; but we are not quite there yet.

So far we have only proved that the user came back with bits that look like a token in the right place, but we have no clue whether those bits correspond to a token we like. ADAL JS itself does not validate the token: it relies on the Web API backend to do so, hence until we call the backend at least once we don’t really know for sure whether the user obtained an acceptable token.

The good news is that the Web API was meant to validate that token anyway. Although the high-level reason for which we obtained a token was to sign in the user, the way in which we implemented it was to ask for a token for accessing our own backend API. Using that token to securely access the API means attaching it in the Authorization header of every call to that API, as dictated by the OAuth2 bearer token spec.

Here is how it happens.
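In essence, the interceptor does what the following low-level sketch spells out by hand with the adal.js AuthenticationContext API; the /api/todolist endpoint is just a placeholder, and the Angular wrapper does all of this for you on every $http call.

// Look up the cached token for our own backend (its "resource" is the app's own
// clientId) and attach it as an OAuth2 bearer token on the outgoing request.
var authContext = new AuthenticationContext({
  tenant: 'contoso.onmicrosoft.com',
  clientId: 'e9a5a8b6-8af7-4719-9821-0deef255f68e'
});

var token = authContext.getCachedToken(authContext.config.clientId);

var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/todolist', true);
xhr.setRequestHeader('Authorization', 'Bearer ' + token);
xhr.onload = function () {
  // A 401 here means the backend did not accept the token (or it was missing).
  console.log(xhr.status, xhr.responseText);
};
xhr.send();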

This is probably where ADAL JS departs the most from its .NET/iOS/Android brethren. All other ADALs concern themselves exclusively with token acquisition, mostly because they have no a priori information about the circumstances in which the tokens will be used, and adding functionality there would be equivalent to adding unwarranted constraints (remember the WCF channels?).
ADAL JS can afford to go as far as attaching the right token to Web API calls because of the strong assumptions we make about the application’s architecture – specifically, about the way in which SPA frontends talk to Web APIs. That extra knowledge allows us to give you a simpler programming model. Yay!

Renewing Tokens

I simplified things for the sake of clarity, but the above is not entirely accurate. When ADAL JS detects that a request is about to go out and it’s time to attach a token to it, ADAL doesn’t simply retrieve what’s in the cache and attach it. Rather, it examines the token’s projected expiration time (saved at the time of acquisition) and, if the token is no longer valid or is just about to expire, it attempts to obtain a new one. How? I described this at length here: to summarize, ADAL JS uses an invisible iFrame to send a new token request to Azure AD. If the user is still in an active session with Azure AD (i.e. there is a valid cookie), the request will succeed, the cached token will be replaced by a new one, and the request will go out with the brand new token, the user remaining blissfully unaware that a renewal took place.
If the user is NOT in such a session, then the request cannot succeed without user interaction – which clearly cannot take place in a hidden iFrame. ADAL blocks the call and raises a failure event, which you can handle as you see fit in your app logic. A typical strategy would be to trigger a new sign in.
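With the low-level API, that strategy might look like the sketch below. It assumes the adal.js 1.0.x callback shape of acquireToken (callback(error, token)); treat the signature as an assumption and check the version you are using.

// Try a silent renewal for our own backend's resource; if the hidden iFrame
// cannot complete without user interaction, fall back to an interactive sign-in.
authContext.acquireToken(authContext.config.clientId, function (error, token) {
  if (error || !token) {
    authContext.login();   // full page redirect to Azure AD
    return;
  }
  // Token silently renewed and cached; subsequent requests will pick it up.
});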

Invoking External Web APIs via CORS

We are almost done! There is one last important scenario ADAL JS enables that I really want to mention.

Say that you want to enable your frontend to consume Web APIs that are not hosted in your own backend: APIs exposed by other developers, or even your own APIs that happen to be hosted on a domain different from the one serving your SPA files. I am of course assuming that those APIs also accept Azure AD tokens. I am also assuming that those APIs support CORS or an equivalent mechanism for allowing cross-origin script calls.
Good news! The mechanisms described above for intercepting outgoing requests and requesting tokens via hidden iFrame work just as well when applied to external APIs.

ADAL JS offers you the opportunity to declare at init time a collection (the endpoints structure) of the URLs of all the Web APIs you want to invoke. For each of those, you are also required to declare the App ID URI with which Azure AD identifies that API.
When the interceptor detects a request toward one of those URLs, it looks in the cache for a token with the corresponding App ID URI. If it finds it, it attaches it. If it does not find it, or if it is present but expired, ADAL JS uses the usual hidden iFrame to perform a request for such a token (this time an access token!). Upon success, it caches it and uses it just like it did for calls to its own backend; upon failure, it raises similar events to notify the caller.
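Declaring those external APIs is just an extension of the init call shown earlier. In the sketch below the external API URL and its App ID URI are placeholders; substitute the values from your own Azure AD app registrations.

adalProvider.init(
  {
    tenant: 'contoso.onmicrosoft.com',
    clientId: 'e9a5a8b6-8af7-4719-9821-0deef255f68e',
    // Map each external Web API base URL to the App ID URI Azure AD knows it by.
    endpoints: {
      'https://contoso-togo.azurewebsites.net/api': 'https://contoso.onmicrosoft.com/ToGoAPI'
    }
  },
  $httpProvider
);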

To summarize: thanks to the machinery we’ve put in place for dealing with calls to our own backend (again, please refer to this for details), we get the ability to invoke external APIs (almost) for free.

Samples

All the features and flows I described so far are demonstrated in a set of samples, which we released or updated today. Here is a quick jump list.

Sign In with Angular

https://github.com/AzureADSamples/SinglePageApp-DotNet

This is pretty much the same basic ToDo sample we published during the preview, updated to reflect the new object model in v1. It shows you the minimal amount of code needed to exercise the basic sign-in features of ADAL JS. It also shows you how to secure an ASP.NET Web API backend with OWIN middleware when you are expecting an id_token.

Calling API via CORS

https://github.com/AzureADSamples/SinglePageApp-WebAPI-AngularJS-DotNet

I am so, so happy to be able to finally point people to this sample!
The most common question we got during the preview period was: “how do I use ADAL JS to call an API via CORS?”
The ADAL JS repo readme does have some indications, but we know that nothing beats a sample with detailed instructions. So here you are: this sample is a clone of the first one, with one extra ASP.NET Web API (ToGo) called via CORS.

The important things to notice from the frontend standpoint are the magic CORS commands in toGoListSvc.js (nothing to do with ADAL JS; this just seems to be a general point of confusion with CORS) and the endpoints structure in app.js, keeping in mind the explanation I gave earlier about how external API calls work on the basis of the info in there.

Sign In with JS/JQuery

https://github.com/AzureADSamples/SinglePageApp-jQuery-DotNet

This sample is meant to demonstrate the use of the low-level ADAL JS API. If you find the code a bit artificial, that’s because it is: it is not very realistic to expect a developer to rewrite a SPA stack by hand instead of using Angular, React or similar.

We were very intentional about this (kudos to Danny for the patience demonstrated!) because we wanted to give you a baseline sample that shows how to use the ADAL JS primitives for common tasks directly, without mixing them with Angular artifacts and without forcing you to go through all of the adal.js code.
I want to come clean here: one of my big hopes in releasing this is that it will inspire you to write an adal-<myfavoritelibrary>.js binding and contribute it back to the community.

JavaScript Frontend + JavaScript Backend

I have been spending quite some time chatting with Mat Velloso about SPAs, ADAL JS and how to validate tokens in a Web API. Tuesday morning I was in the office that Mat shares with Elliot, scribbling on their whiteboard how Node.js Web API token validation via Simple-jwt could look – and how awesome it would be to have a Node backend sample ready by the ADAL JS v1 launch!

Little did I know that on Tuesday night I’d receive a mail from Mat saying that he had already put together a PoC. Amazing, right? We iterated on a couple of details, and you can see the results in https://github.com/matvelloso/AADNodeJWT – an app that is 100% JavaScript on both the frontend and the backend, with a simple but functional JWT validator that gets the keys and issuer value directly from the AAD discovery document. Very, very nice.
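To give a flavor of the approach (this is a sketch of the general technique, not Mat’s actual code), a Node backend can pull the issuer and signing keys from the discovery document and verify incoming bearer tokens with a generic JWT library such as the jsonwebtoken npm package. The tenant and audience values below are placeholders.

var https = require('https');
var jwt = require('jsonwebtoken');   // npm install jsonwebtoken

var tenant = 'contoso.onmicrosoft.com';
var audience = 'e9a5a8b6-8af7-4719-9821-0deef255f68e';   // your API's client ID

// Tiny helper: GET a URL and parse the JSON body.
function getJson(url, cb) {
  https.get(url, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { cb(JSON.parse(body)); });
  });
}

// Wrap an x5c certificate from the JWKS document into PEM form for jwt.verify.
function toPem(x5c) {
  return '-----BEGIN CERTIFICATE-----\n' +
         x5c.match(/.{1,64}/g).join('\n') +
         '\n-----END CERTIFICATE-----\n';
}

// 1) read issuer and jwks_uri from the discovery document,
// 2) download the signing certificates,
// 3) verify tokens against each candidate certificate.
getJson('https://login.microsoftonline.com/' + tenant + '/.well-known/openid-configuration',
  function (config) {
    getJson(config.jwks_uri, function (jwks) {
      var certs = jwks.keys.map(function (key) { return toPem(key.x5c[0]); });

      // Call this with the value from an incoming "Authorization: Bearer <token>" header,
      // e.g. from your route handler or Express middleware.
      function validate(token) {
        for (var i = 0; i < certs.length; i++) {
          try {
            return jwt.verify(token, certs[i], { audience: audience, issuer: config.issuer });
          } catch (e) { /* wrong key or invalid token: try the next certificate */ }
        }
        return null;   // not a token this API accepts
      }
    });
  });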

Thanks

Just four months ago, to the question “How can I protect a SPA with Azure AD?” I had to reply “You can’t”.
Today we are announcing the v1 of a full-fledged library that does that, and more.

Those kinds of feats don’t happen by themselves, I assure you. ADAL JS is the result of the relentless efforts of people like Omer Cansizoglu, the engineer who designed and wrote the bulk of ADAL JS; Danny Strockis, who wrote most of the samples; Mat Velloso, who gave lots of feedback and contributed to samples; and many others. But in fact, you know who else deserves credit? YOU. In less than four months of existence on GitHub, ADAL JS benefitted from many contributions and received a lot of great feedback, which made a huge difference in the final shape of the product. In particular, thanks to DanWahlin, BartVDACodeware, BenGale, brianyingling, cicorias, manospasj, SPDoctor, ThorstenHans and zachcarroll for your contributions. I hope you are satisfied with how ADAL JS came out, and I can’t wait to see what you will achieve with it. Happy coding!

February 18, 2015

MythicsPart 2 of 3: Automatic Data Optimization (ADO) in Oracle Database 12c [Technorati links]

February 18, 2015 06:45 PM

In the first of this series, the history of data compression was discussed as well as several methods used to compress data. In…

KatasoftUpdates to Stormpath Python Support [Technorati links]

February 18, 2015 03:00 PM

Stormpath Python Support

At Stormpath, we really love our Python users. Over the past year we’ve made:

In short, we’ve been working hard to not only improve our Python user experience, but also improve the overall quality and feel of our libraries and integrations.

Many of the suggestions for improvements and bug fixes – as well as solutions – come from our community.

Recent Stormpath-Python Changes

In the past several Python / Flask / Django releases we’ve tremendously improved the speed at which our libraries work, reduced the number of lurking bugs, and improved error handling in a number of ways:

If you aren’t using the latest versions of our Python libraries, now would be a good time to upgrade!

If you’d like to see our specific improvements, you can check out our:

And if you have any suggestions, be sure to create an issue on the relevant Github repo!

Python Plans for This Year

In terms of what you can expect in the near future, here’s what you’ll be seeing soon:

We look forward to helping you build more apps, faster and more securely, this year.

If you’re not already using Stormpath to secure your user data, now would be a great time to get started!

February 17, 2015

Gluu5 Reasons you need OpenID Connect and UMA in your IAM Stack [Technorati links]

February 17, 2015 08:26 PM

5_reasons

In the last 15 years, there have been too many standards for digital authentication and authorization. Some have seen more adoption than others, but none have provided a “silver bullet” solution to enable secure, universal resource federation at Internet scale. There is still no “one protocol fits all” solution; however, don’t tell that to our newest contender: OAuth2.

2014 was a bad year for security. Now that we’re in 2015, we’re being put through a gauntlet of trial-by-fire situations: device proliferation, sophisticated hacking schemes, and migration to the cloud mean there is a bigger attack surface than ever for both people and organizations. Simplifying security enough for a person to manage requires centralization. Securing every website and device cannot be a “one-off” task, or security management will become about as practical as reading the privacy policy on every website. Can OAuth2 come to the rescue and enable a person to manage permissions for everything in one place?

On its own OAuth2 is a framework, and does not specify how to implement a particular use case. OAuth2 defines several flows for person and device authorization. It does not specify anything about token formats (JSON or XML?), encryption formats, or other specific implementation details. To do this you create a “profile” of OAuth2. The two most notable profiles are 1) OpenID Connect, which defines person authentication, client authentication, and client registration, and 2) the User Managed Access (UMA) protocol, which defines a profile of OAuth2 that enables a policy enforcement point (e.g. a web server) to authorize a request against a central policy decision point (i.e. the authorization server).

Facebook was an early adopter of OAuth2, and implemented some of the first applications that asked a person to authorize something. Over the years, these patterns were adopted by others in the consumer IDP industry, like Google and Microsoft. The best practices and gotchas were documented, shared with everyone, and have become OpenID Connect. Adopting OpenID Connect makes sense for your domain because you can take advantage of the best practices established by the best in the industry for managing their OAuth2 authentication services.

This is where profiles of OAuth2, like OpenID Connect and UMA, succeed in standardizing solutions for today’s distributed authentication and authorization challenges. It’s just too complicated to roll your own OAuth2 security service. Sure, older protocols like SAML and CAS may still provide tactical value for your organization, but here are 5 reasons why OpenID Connect and UMA should be a top priority for your modern identity and access management infrastructure.

1) Alignment with OAuth2 standards is critical for mobile, IoT and web security.

By aligning with OAuth2 you can support both today’s and tomorrow’s access management challenges, including mobile single sign-on, multi-factor adaptive authentication, and federated access to third-party cloud resources.

2) OpenID Connect levels the playing field for identity providers and developers.

By supporting OpenID Connect your organization can essentially run the same federation infrastructure as the world’s most trusted identity provider, Google. If any one organization has seen and dealt with today’s most sophisticated authentication challenges, it’s undoubtedly Google. In addition, OpenID Connect is essentially a JSON/REST architecture, which is light on the wire and well understood by modern web developers.

3) OpenID Connect includes discovery

As OpenID Connect continues to gain adoption, gone will be the days of the NASCAR login. Replacing all those various “log in with” buttons will be one OpenID Connect button that enables any user to use any email at any domain for authentication–assuming that the chosen domain supports OpenID Connect and the user attached to that email has the necessary attributes (i.e. group affiliation, title, etc.) needed to access the resource. Simply make a request to https://<your-domain>/.well-known/openid-configuration, which enables a domain to publish its public keys and other configuration information needed by developers. For example, check out Gluu’s OpenID Connect Discovery Page.
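As a rough sketch of what that looks like from a client, the snippet below fetches a provider’s discovery document and reads a few of the standard fields; the host name is a placeholder, and the provider must allow cross-origin requests for this to work from a browser.

// Fetch an OpenID Connect discovery document and pull out the
// endpoints and key material a client cares about.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://idp.example.com/.well-known/openid-configuration', true);
xhr.onload = function () {
  var config = JSON.parse(xhr.responseText);
  console.log(config.issuer);                  // who issues the tokens
  console.log(config.authorization_endpoint);  // where to send the user to log in
  console.log(config.token_endpoint);          // where to redeem authorization codes
  console.log(config.jwks_uri);                // where the signing keys are published
};
xhr.send();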

4) UMA enables organizations to launch new, more secure APIs that provide a top-line opportunity, not just a cost.

If you can’t control who or what can access an API, and when, then you can’t charge for its use. UMA provides a standard way for API owners to protect and control access to APIs in a distributed, scalable way.

5) UMA enables trust elevation by letting your organization set policies that require certain credentials, for instance two-factor authentication, for certain APIs.

A single authorization point is no longer sufficient for serious website security. To limit the damage done by hackers, domains now need a multi-layered security approach where authorization is continually re-checked depending on which resource is accessed, by which person, on which device, from where, and when.

—–

A strong, centralized, and modern identity and access management infrastructure is one of the most important security tools available today. The majority of breaches are due to compromised credentials, and aligning with modern standards that support today’s requirements for authentication and authorization can greatly reduce your exposure to fraud.

While IAM software has historically been expensive and proprietary, there are now open source options available (like the Gluu Server!) that enable domains to enforce better security at a more affordable total cost of ownership (TCO).

If you don’t want your organization plastered on the front page of tomorrow’s newspaper due to an embarrassing breach, do yourself a favor and upgrade your identity and access management infrastructure today.

Radovan Semančík - nLightEvolveum Winter 2015 [Technorati links]

February 17, 2015 03:09 PM

The Evolveum team is great. I cannot put it in any other way. It is the best team that I have ever worked with. I have a long history of projects that I worked on: big projects, small projects, corporate projects, academic projects, consulting projects, deployment projects, development projects ... and some of these were extremely exciting and they had really good people on board. But none of the project teams was anything like the Evolveum team. Not even close.

What makes our team this great? I suppose it is a combination of several factors. First of all we have the best people. Each team member has a slightly different personality. Each of us has their own little quirks. But all the team members support the common goal. We work together, not against each other. And it is perhaps this precious combination of human characters and the atmosphere of cooperation that makes our team mostly self-organized. The team does not have any strong central coordination. Oh yes, I'm formally "The Architect". But I do not give orders. I do not distribute the work in the team. Nobody does. Team members are doing the work because they want to. Also the amount of coordination that I do is close to zero. It mostly amounts to answering questions and discussing ideas. Everybody somehow naturally knows what to do.

If anyone had told me five years ago that this kind of self-organization is a real thing, I would not have believed it. But it is real. And it is unbelievably efficient. The MidPoint project formally started later than its competitors, but our development pace is significantly faster. MidPoint is now the biggest and most comprehensive open source identity management system. We have left our competitors behind, thanks to our excellent team.

Our team does not work together in one office. We are distributed in space, and sometimes it looks like we are also distributed in time. We are used to cooperating remotely. Therefore it is quite a rare occasion when the team meets in the flesh. Like this one:

Evolveum team ... and part of the families

Chene Slovaque

We met for a chat and a glass (well, glasses, actually) of local wine. There are several excellent winemakers on the slopes of the Carpathian mountains. And we have commandeered a cellar that belongs to one of them ...

Unfortunately, not all the team members were able to come. But most of them did. And I had something to say: I had to say how thankful I am to be part of all of this. And I would like to repeat it here. I thank every member of our team. Every single one: thank you for being with us. There were hard times and you stayed with us. You did your best. So thank you all. And thank you to your families and friends who support you. You are the best team that I have ever been part of. And that means something.

(Reposted from https://www.evolveum.com/evolveum-winter-2015/)