April 17, 2014

Ping Talk - Ping IdentityThis Week in Identity: Privacy Policies Gone Wild [Technorati links]

April 17, 2014 07:44 PM
<p><span style="line-height: 1.62;">So you love Cheerios and you're not afraid to like the brand online, download coupons from the Web site, and sacrifice your legal rights for it. </span><span style="line-height: 1.62;">What?</span></p> <p>People scratch their heads over the privacy policies on social sites like Facebook and Google, but here is the first evidence of how those policies could warp as virtual and physical worlds blend.</p> <p>General Mills recently introduced its new privacy policy, including "legal terms" that prevent those who demonstrate affinity for the company, such as by interacting with the brand online, from later suing the company if an issue arises. People who have a dispute over products are restricted to using informal negotiation via email or going through binding arbitration to seek relief.</p> <p>"Although this is the first case I've seen of a food company moving in this direction, others will follow -- why wouldn't you?" said Julia Duncan, director of federal programs and an arbitration expert at the American Association for Justice, a trade group representing plaintiff trial lawyers. "It's essentially trying to protect the company from all accountability, even when it lies, or say, an employee deliberately adds broken glass to a product."</p> <p>One legal expert said, "You can bet there will be some subpoenas for computer hard drives in the future." <a href="http://www.nytimes.com/2014/04/17/business/when-liking-a-brand-online-voids-the-right-to-sue.html?_r=0">The New York Times has the scoop.</a></p> <p><em>Update: <a href="http://www.nytimes.com/2014/04/18/business/general-mills-amends-new-legal-policies.html">General Mills has now amended its new policies</a>. 
</em></p> <p> <span style="line-height: 1.62;">For more scoops of identity-related goodness, read on.</span></p> <p><span style="line-height: 1.62;">General</span></p> <ul> <li><a href="http://nakedsecurity.sophos.com/2014/04/12/heartbleed-would-2fa-have-helped/?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+nakedsecurity+%28Naked+Security+-+Sophos%29">Paul Ducklin: "Heartbleed" - would 2FA have helped?</a><br />Because of the global password reset pandemic, lots of Naked Security readers have asked, "Wouldn't 2FA have helped?" You know a password. You have possession of a mobile phone that receives a one-off authentication code. We're going to focus entirely on that sort of 2FA.</li> <li><a href="http://blog.cloudflare.com/the-results-of-the-cloudflare-challenge">The Results of the CloudFlare Challenge</a><br />Earlier today we announced the <a href="https://www.cloudflarechallenge.com/heartbleed">Heartbleed Challenge</a>. We set up an nginx server with a vulnerable version of OpenSSL and challenged the community to steal its private key. The world was up to the task: two people independently retrieved private keys using the Heartbleed exploit.</li> <li><a href="http://www.modernhealthcare.com/article/20140416/BLOG/304169995/1-in-5-healthcare-workers-share-passwords-survey-warns">Joseph Conn: 1 in 5 healthcare workers share passwords, survey warns</a><br />More than 1 in 5 healthcare workers share their passwords with colleagues, a security no-no, but healthcare security pros can take some solace that such risky business is no worse in their industry than some others. Workers in the legal trade, for example, share passwords about as often as in healthcare (22%), according to findings in a report based on a survey of 250 healthcare IT security professionals in the U.S. and another 250 in the U.K. 
</li> </ul> <p> <span style="line-height: 1.62;">APIs</span></p> <ul> <li><a href="http://blog.tsheets.com/2014/api/using-oauth-2-0-to-authenticate-with-rest-ful-web-apis.html">Using OAuth 2.0 to Authenticate with REST-ful Web APIs</a><br />By the end of this article, if you follow along you'll have an OAuth access token that you can use to interact with an API. We're going to do all of this without writing a single line of code.</li> <li><a href="http://www.3scale.net/2014/04/the-five-axioms-of-the-api-economy-axiom-1/">Craig Burton and Steven Willmott: The Five Axioms of the API Economy, Axiom #1 -- Everything and Everyone will be API-enabled</a><br />The API Economy is a phenomenon that is starting to be covered widely in technology circles and is spreading well beyond, with many companies now investing in API-powered business strategies.</li> </ul> <p> <span style="line-height: 1.62;">IoT</span><span style="line-height: 1.62;"> </span></p> <ul> <li><a href="http://user.wordpress.com/2013/11/24/pebble-steals-your-email-address-from-an-unsubscribed-form/">Alex Ewerlof: Pebble steals your email address from an unsubscribed form</a><br />Pebble makes smart watches - the kind of watch with a digital display that connects to your phone to show messages and information shared via an application installed on the phone. Their website promises that it "can" do a lot and I have no doubt that there's at least one thing it can do great: stealing my information!</li> <li><a href="http://qz.com/156075/internet-of-things-will-replace-the-web/">Christopher Mims: How the "internet of things" will replace the web</a><br />Most of us don't recognize just how far the internet of things will go, from souped-up gadgets that track our every move to a world that predicts our actions and emotions. 
In this way, the internet of things will become more central to society than the internet as we know it today.</li> <li><a href="http://www.forrester.com/home/">Where will you be affixing your next sensor?</a><br /><i><a href="https://www.pingidentity.com/blogs/pingtalk/assets_c/2014/04/0414%20This%20week%20in%20identity%20Wearable%20graphic-383.html" onclick="window.open('https://www.pingidentity.com/blogs/pingtalk/assets_c/2014/04/0414%20This%20week%20in%20identity%20Wearable%20graphic-383.html','popup','width=620,height=673,scrollbars=no,resizable=no,toolbar=no,directories=no,location=no,menubar=no,status=no,left=0,top=0'); return false"></a><img alt="0414 This week in identity Wearable graphic.png" src="https://www.pingidentity.com/blogs/pingtalk/0414%20This%20week%20in%20identity%20Wearable%20graphic.png" width="620" height="673" class="mt-image-center" style="text-align: center; display: block; margin: 0 auto 20px;" /></i></li> </ul> <p><span style="line-height: 1.62;">Events</span></p> <ul> <li><a href="http://www.infosec.co.uk/">Info Sec UK</a><br />April 29-May 1; London<br />More than 13,000 attendees at Europe's largest free-to-attend conference. Identity management, mobile, managed services and more.</li> <li><a href="http://www.internetidentityworkshop.com/">IIW</a><br />May 6-8; Mountain View, Calif.<br />The Internet Identity Workshop, better known as IIW, is an un-conference that happens at the Computer History Museum in the heart of Silicon Valley.</li> <li><a href="http://www.gluecon.com/2014/">Glue Conference 2014</a><br />May 21-22; Broomfield, Colo.<br />Cloud, DevOps, Mobile, APIs, Big Data -- all of the converging, important trends in technology today share one thing in common: developers. 
</li> <li><a href="http://www.kuppingercole.com/book/eic2014">European Identity &amp; Cloud Conference 2014</a><br />May 13-16, 2014; Munich, Germany<br />The place where identity management, cloud and information security thought leaders and experts get together to discuss and shape the future of secure, privacy-aware, agile, business- and innovation-driven IT.</li> <li><a href="http://www.gartner.com/technology/summits/na/catalyst/">Gartner Catalyst - UK</a><br />June 17-18; London<br />A focus on mobile, cloud, and big data with separate tracks on identity-specific IT content as it relates to the three core conference themes.</li> <li><a href="http://bit.ly/1eyMQQ3">Cloud Identity Summit 2014</a><br />July 19-22; Monterey, Calif.<br />The modern identity revolution is upon us. CIS converges the brightest minds across the identity and security industry on redefining identity management in an era of cloud, virtualization and mobile devices.</li> <li><a href="http://www.gartner.com/technology/summits/na/catalyst/">Gartner Catalyst - USA</a><br />Aug. 11-14; San Diego, CA<br />A focus on mobile, cloud, and big data with separate tracks on identity-specific IT content as it relates to the three core conference themes.</li> </ul>

KatasoftSocial Login: Facebook & Google in One API Call [Technorati links]

April 17, 2014 03:01 PM

Integrating to Facebook, Google, and other social providers can be a pain. Do you want to deal with Facebook and Google tokens and their idiosyncrasies every time you build a new app? Probably not.

We at Stormpath frequently get requests to automate social login and integration for our customers so that they don’t have to build it themselves. Well, we’ve done just that. Hooray! This post is about some of the design challenges and solutions we worked through while implementing this feature for our customers.

Social Login with Stormpath – Quick Description

Stormpath’s first OAuth integration makes it easy to connect to Google and Facebook in one simple API call. It’s a simple and secure way to retrieve user profiles and convert them into Stormpath accounts, so that no matter which service you’re using, you have one simple user API.

Goals and Design Challenges

My primary goal was to make this easy for developers. At the same time, it needed to be robust enough so that Stormpath could use it as the foundation to integrate with future identity providers like Twitter and GitHub.

This came with some design challenges:

To solve this problem, Stormpath allows you to create a new password for a social account in Stormpath. If you want, the user can specify a password upon first registering and/or by initiating a password reset flow. Ultimately, the user can choose how they want to log in regardless of how they registered.

Making Google Access Tokens Painless

Getting an access token for a Google user in a web server application is not as easy as one might hope. Once the end-user has authorized your application, Google will send an “authorization code” as a query parameter to the “redirect_uri” you specified in the developers console when you created your “Google project”. Finally, you’ll have to exchange the “authorization code” for an access token.

Of course, each of these calls requires its own set of headers, parameters, etc. Fun times.
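To make the "exchange code" step concrete, here is a minimal Python sketch of the request body a web server sends to Google's token endpoint. This is illustrative only: the endpoint and parameter names follow Google's OAuth 2.0 web-server flow, and the `client_id`/`client_secret`/`redirect_uri` values are placeholders.

```python
from urllib.parse import urlencode

# Google's OAuth 2.0 token endpoint for the web-server flow (at the time of writing)
GOOGLE_TOKEN_ENDPOINT = "https://accounts.google.com/o/oauth2/token"

def build_token_exchange_body(code, client_id, client_secret, redirect_uri):
    """Form-encoded body for swapping an authorization code for an access token."""
    return urlencode({
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
        "grant_type": "authorization_code",
    })
```

POSTing that body (as `application/x-www-form-urlencoded`) returns JSON containing the `access_token` — exactly the plumbing Stormpath's integration hides from you.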

We wanted to reduce this burden for developers, so our Google integration conveniently automates the “exchange code” flow for you. This allows you to POST the authorization code and then receive a new (or updated) account, along with the access token, which can be used for further API calls.


At Stormpath one of our main responsibilities is securing user data. When it comes to social integration, we ensure that Facebook and Google client secrets are encrypted using strong AES 256 (CBC) encryption, using secure-random Initialization Vectors. Every encryption key is tenant-specific, so you can guarantee that your encrypted secrets are only accessible by you.

Also, Facebook Login Security recommends that every server-to-server call be signed to reduce the risk of misuse of a user access token in the event it’s stolen. If your access token is stolen and you don’t require all your server calls to be signed, the thief can use your application to send spam or read users’ private data.

Securing Facebook requests makes your application less vulnerable to attacks, which is why we recommend enabling the Require proof on all calls setting for your Facebook application. Stormpath does this by default.

How does this work? Signing a call to Facebook just means adding the “appsecret_proof” parameter to every server request you make.

The value of the parameter is the SHA-256 HMAC of the user’s access token, keyed with the Facebook application secret. Finally, the generated bytes are encoded as hexadecimal characters.

appsecret_proof_value = Hex.encodeToString(hmac(SHA-256, access_token, app_secret))
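In Python, the same computation fits in a few lines of stdlib code (a sketch: the token and secret below are placeholders; per Facebook's scheme, the app secret is the HMAC key and the access token is the message):

```python
import hashlib
import hmac

def appsecret_proof(access_token: str, app_secret: str) -> str:
    # HMAC-SHA-256 of the user's access token, keyed with the app secret,
    # hex-encoded -- the value Facebook expects in the "appsecret_proof" parameter.
    return hmac.new(app_secret.encode("utf-8"),
                    access_token.encode("utf-8"),
                    hashlib.sha256).hexdigest()
```

The result is a 64-character hex string that you attach to every server-side Graph API request.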

How To Get Started With Stormpath Social Integration

To use Google or Facebook with Stormpath, follow these three steps:

  1. Create a Facebook or Google Directory in Stormpath to mirror social accounts. You can do this via the Stormpath Admin Console, or our REST API using a POST like this:

POST https://api.stormpath.com/v1/directories?expand=provider
Content-Type: application/json;charset=UTF-8

{
  "name" : "my-google-directory",
  "description" : "A Google directory",
  "provider": {
    "providerId": "google"
  }
}

  2. Assign the created directory to your application in Stormpath.
  3. Populate your directory with social accounts from Google or Facebook using the application’s accounts endpoint.

That is it! Your application can now access social accounts. And you didn’t have to touch any OAuth!
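For step 1, the REST call above can be sketched in Python with nothing but the standard library. This builds (but does not send) the request; the endpoint and JSON body come straight from the example above, while the Authorization header value is a placeholder for your own API-key credentials.

```python
import json
from urllib import request

DIRECTORIES_URL = "https://api.stormpath.com/v1/directories?expand=provider"

def build_directory_request(auth_header_value):
    """Construct the directory-creation POST shown in step 1 (not sent here)."""
    body = json.dumps({
        "name": "my-google-directory",
        "description": "A Google directory",
        "provider": {"providerId": "google"},
    }).encode("utf-8")
    req = request.Request(DIRECTORIES_URL, data=body, method="POST")
    req.add_header("Content-Type", "application/json;charset=UTF-8")
    req.add_header("Authorization", auth_header_value)
    return req
```

Calling `urllib.request.urlopen` on the returned request (with real credentials) would create the directory.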

Future Stormpath releases will support additional social account providers. Please give us your feedback and let us know which ones we should release next!

Kevin MarksFragmentions - linking to any text [Technorati links]

April 17, 2014 12:42 PM

A couple of weeks ago, I went to a w3c workshop about annotations on the web. It was an interesting day, hearing from academics, implementers, archivists and publishers about the ways they want to annotate things on the web, in the world, and in libraries. The more I listened, the more I realised that this was what the web is about. Each page that links to another one is an annotation on it.

Tim Berners-Lee's invention of the URL was a brilliant generalisation that means we can refer to anything, anywhere. But it has had a few problems over time. The original "Cool URLs don't change" has given way to Tim's "eventually every URL ends up as a porn site".

Instead of using URLs, we can lean on Google's huge success: searching for text can be more robust than linking. If I want to point you to Tom Stoppard's quote from The Real Thing:

I don’t think writers are sacred, but words are. They deserve respect. If you get the right ones in the right order, you can nudge the world a little or make a poem which children will speak for you when you’re dead.

the search link is more resilient than linking to Mark Pilgrim's deleted post about it, which I linked to in 2011.

Another problem is that linking in HTML is defined to address pages as a whole, or fragments within them, but only if the fragments are marked up as an id on an element. I can link to a blog post within a page by using the link:


because the page contains markup:

<div class="post-body entry-content" id="post-body-90336631" >

But to do that I had to go and inspect the HTML and find the id, and make a link specially, by hand.

What if instead we combined these two ideas:

I've named these "fragmentions".

To tell these apart from an id link, I suggest using a double hash - ## for the fragment, and then words that identify the text. For example:


means "go to that page and find the words 'annotate the web' and scroll to show them"
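The resolution logic is simple enough to sketch in a few lines. This Python version (hypothetical; Jonathan Neal's actual script is JavaScript) takes a page's text and a fragmention link, and returns the character offset to scroll to:

```python
from urllib.parse import unquote_plus

def find_fragmention(page_text, link):
    # Everything after the double hash "##" is the phrase to locate.
    # "+" and percent-escapes decode to the literal words.
    if "##" not in link:
        return -1
    phrase = unquote_plus(link.split("##", 1)[1])
    return page_text.find(phrase)  # offset of the phrase, or -1 if absent
```

A browser script would then scroll the first match into view (and could highlight it), degrading to a plain page load when the phrase isn't found.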

If you click the link, you'll see that it works. That's because when I mentioned this idea in the indiewebcamp IRC channel, Jonathan Neal wrote a script to implement this, and I added it to my blog and to kevinmarks.com. You can add it to your site too.

However, we can't get every site to add this script. So, Jonathan also made a Chrome Extension so that these links will work on any site if you're running Chrome. (They degrade safely to linking to the page on other browsers).

So, try it out. Contribute to the discussion on the Indiewebcamp Fragmentions page, or annotate this page by linking to it with a fragmention from your own blog or website.

Maybe we can persuade browser writers that fragmentions should be included everywhere.

Originally posted on kevinmarks.com

April 16, 2014

Julian BondWhat a most excellent collection of images. [Technorati links]

April 16, 2014 07:56 AM

Vittorio Bertocci - MicrosoftCalling Office365 API from a Windows Phone 8.1 App [Technorati links]

April 16, 2014 07:34 AM

Did you install the preview of Windows Phone 8.1? I sure did, and it’s awesome!

Windows Phone 8.1 introduces a great new feature, which was until recently only available on Windows 8.x: the WebAuthenticationBroker (WAB for short from now on). ADAL for Windows Store leverages the WAB for all of its authentication UI rendering needs, and that saved us a tremendous amount of work compared with other platforms (such as classic .NET) on which we had to handle the UX (dialog, HTML rendering, navigation, etc.) on our own.

To give you a practical example of that, and to amuse myself during the 9.5-hour Seattle-to-Paris flight I am sitting on, I am going to show you how to use the WAB on Windows Phone 8.1 to obtain a token from Azure Active Directory: you’ll see that the savings compared with the older Windows Phone sample (where I did have to handle the UX myself) are significant. If you prefer to watch a video, rather than putting up with my logorrhea, check out the recording of the session on native clients I delivered at //BUILD just 10 days ago: the very first demo I show is precisely the same app, though I cleaned up the code a bit since then.

The WebAuthenticationBroker and the Continuation Pattern

The WAB on Windows Phone 8.1 differs from its older Windows 8.x sibling in more than the size of its rendering surface. The one difference you can’t ignore (and the reason for which you can’t just reuse ADAL for Windows Store on the phone) lies in the programming model it exposes. Note: the WAB coverage on MSDN is excellent, and I recommend you refer to it rather than relying on what I write here (usual disclaimers apply). Here I’ll just give a barebones explanation covering the essentials of getting the WAB to work with AAD.


Referring to the diagram above. The idea is that (1) whenever you call the phone WAB from your code, your app gets suspended and the WAB “app” takes over the foreground spot. The user goes through (2) whatever experience the identity provider serves; once the authentication flow comes to an end, (3) the WAB returns control to your app and disappears.
Here that’s where things get interesting. For your app, this is just another activation: you need to add some logic to detect that this activation was caused by the WAB returning from an authentication, and ensure that the values returned by the WAB are routed to the code that needs to resume the authentication logic and process them.
The idea is that your app needs an object which implements a well-known interface (IWebAuthenticationContinuable), which includes a method (ContinueWebAuthentication) meant to be used as the re-entry point at reactivation time. In the diagram above it is the page itself that implements IWebAuthenticationContinuable; the OnActivated handler (4) calls the method directly, passing the activation event arguments which will be materialized in ContinueWebAuthentication as WebAuthenticationBrokerContinuationEventArgs. Those arguments will contain the values you typically expect from the WAB, such as the authorization code produced by an OAuth code grant flow.

This is a common pattern in Windows Phone 8.1: it goes under the name “AndContinue”, from the structure of the primitives used. It is applied whenever a “system” operation (such as the WAB, but also file picking) might end up requiring a lot of resources, making it hard for an app on a low power device to keep active in memory both the app and the process handling the requested task. Once again, MSDN provides great coverage for this.

The Sample

Too abstract for your taste? Presto, let’s look at some code. Here I will skip all of the client app provisioning in AAD, as we’ve covered that task many times. If you want a refresher, just head to one of the samples on GitHub and refer to the instructions there. <LINK>

As mentioned in the title, we want an app that will invoke an Office365 API. We won’t do anything fancy with the results, as I just want to show you how to obtain and use a suitable token. If you want to get a trial of Office 365, check out this link <LINK>. Also, if you don’t want to set up a subscription you can easily repurpose this sample to call any other API (such as the Graph or your own).

Ready? Go! Create a new blank Windows Phone 8.1 app. Make sure to pick up the store flavor. Add a button for triggering the API call.

In your main page, add the declaration for your IWebAuthenticationContinuable. Note that you can decide to return a value, if you so choose.

 interface IWebAuthenticationContinuable
 {
    void ContinueWebAuthentication(WebAuthenticationBrokerContinuationEventArgs args);
 }


That done, add it to the page declaration as an implemented interface, and add the logic for requesting tokens and using them via the continuation model. We’ll flesh those stubs out in a moment.

public sealed partial class MainPage : Page, IWebAuthenticationContinuable
{
    private void btnInvoke_Click(object sender, RoutedEventArgs e)
    {
        RequestCode();
    }

    public async void ContinueWebAuthentication(WebAuthenticationBrokerContinuationEventArgs args)
    {
        string access_token = await RequestToken(args.WebAuthenticationResult);
        HttpClient httpClient = new HttpClient();
        httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", access_token);
        HttpResponseMessage response = await httpClient.GetAsync("https://outlook.office365.com/EWS/OData/Me/Inbox/Messages?$filter=HasAttachments eq true&$select=Subject,Sender,DateTimeReceived");
        if (response.IsSuccessStatusCode)
        {
            // ...do something with the returned messages...
        }
    }
}

The btnInvoke_Click triggers the request for a token, via the call to the yet-to-be-defined method RequestCode(). We know that requesting a code will require user interaction, hence we can expect that the call to AuthenticateAndContinue (hence the app deactivation & switch to the WAB) will take place in there. That explains why there’s nothing else after the call to RequestCode.

The ContinueWebAuthentication method implements the logic we want to run once the execution comes back from the WAB. The first line, calling the yet-to-be-defined RequestToken, takes the results from the WAB and presumably uses it to hit the Token endpoint of the AAD’s authorization server.

The rest of the method is the usual boilerplate logic for calling a REST API protected by the OAuth2 bearer token flow – still, I cannot help but marvel at the amazing simplicity with which you can now access Office resources. With that simple (and perfectly readable!) string I can obtain a list of all the messages with attachments from my inbox, and even narrow down which fields I care about.

Let’s take a look at the code of RequestCode and RequestToken.

 string Authority = "https://login.windows.net/developertenant.onmicrosoft.com";
 string Resource = "https://outlook.office365.com/";
 string ClientID = "43ba3c74-34e2-4dde-9a6a-2671b53c181c";
 string RedirectUri = "http://l";

 private void RequestCode()
 {
     // Authorization request URL, built from the constants above
     string authURL = string.Format(
         "{0}/oauth2/authorize?response_type=code&resource={1}&client_id={2}&redirect_uri={3}",
         Authority, Resource, ClientID, RedirectUri);
     WebAuthenticationBroker.AuthenticateAndContinue(new Uri(authURL), new Uri(RedirectUri), null, WebAuthenticationOptions.None);
 }

 private async Task<string> RequestToken(WebAuthenticationResult rez)
 {
     if (rez.ResponseStatus == WebAuthenticationStatus.Success)
     {
         string code = ParseCode(rez.ResponseData);
         HttpClient client = new HttpClient();
         HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post, string.Format("{0}/oauth2/token", Authority));
         // Token request body, per the OAuth2 authorization code grant
         string tokenreq = string.Format(
             "grant_type=authorization_code&code={0}&client_id={1}&redirect_uri={2}",
             code, ClientID, Uri.EscapeDataString(RedirectUri));
         request.Content = new StringContent(tokenreq, Encoding.UTF8, "application/x-www-form-urlencoded");
         HttpResponseMessage response = await client.SendAsync(request);
         string responseString = await response.Content.ReadAsStringAsync();

         var jResult = JObject.Parse(responseString);
         return (string)jResult["access_token"];
     }
     else
     {
         throw new Exception(String.Format("Something went wrong: {0}", rez.ResponseErrorDetail.ToString()));
     }
 }

 private string ParseCode(string result)
 {
     int codeIndex = result.IndexOf("code=", 0) + 5;
     int endCodeIndex = result.IndexOf("&", codeIndex);
     // Return the authorization code as a string
     return result.Substring(codeIndex, endCodeIndex - codeIndex);
 }


This is mostly protocol mechanics, not unlike the equivalent logic in the older Windows Phone samples I discussed on these pages.

RequestCode crafts the request URL for the Authorization endpoint and passes it to the WAB, calling AuthenticateAndContinue. By now you know what will happen; the app will go to sleep, and the WAB will show up – initialized with the data passed here. MUCH simpler than having to create an in-app auth page and handling navigation by yourself.

RequestToken and its associated utility function ParseCode retrieve the authorization code from the response data returned by the WAB, construct the request for the Token endpoint, hit it, and parse (via JSON.NET, finally available for Windows Phone 8.1! For my //BUILD demo I had to use the data contract serializer, bleah) the access token out of AAD’s response.

If you paid attention to the explanation to how the WAB continuation pattern works, you know that there’s still something missing: the dispatching logic that upon (re)activation routes the WAB results to ContinueWebAuthentication. Open the App.xaml.cs file, locate the OnActivated handler and add the following.

protected async override void OnActivated(IActivatedEventArgs e)
{
    var rootFrame = Window.Current.Content as Frame;
    var wabPage = rootFrame.Content as IWebAuthenticationContinuable;
    if (wabPage != null)
    {
        wabPage.ContinueWebAuthentication(e as WebAuthenticationBrokerContinuationEventArgs);
    }
}


Now, I know that my friends in the Windows Phone 8.1 team will frown super-hard at the above. For starters: to do things properly, you should be prepared to be reactivated by multiple *AndContinue operations. In general, the documentation provides nice classes (like the ContinuationManager) you can use to handle those flows with more maintainable code than the hack I have put together here. My goal here (and during the //BUILD session) was to clarify the new WAB behavior with the least amount of code. Once that is clear to you, I encourage you to revisit the above and re-implement it using the proper continuation practices.

Aaanyway, just to put a bow on this: here’s what you see when running the app. I landed in Paris and I can finally connect to the cloud.

The first page:


Pressing the button triggers RequestCode, which in turn calls WebAuthenticationBroker.AuthenticateAndContinue and causes the switch to the WAB:


Upon successful auth, we get back a token and we successfully call Exchange online:


Ta-dah!


Windows Phone 8.1 is a great platform, and WAB is a wonderful addition that will make us identity people very happy. The continuation model will indeed impose a rearranging of the app flow. We are looking at ways in which we can abstract away some of the details for you, so that you can keep operating with the high level primitives you enjoy in ADAL without (too much) abstraction leakage. Stay tuned!

Julian BondNext time somebody tries to tell you that big pharma is hiding medical cures, or the illuminati, sorry... [Technorati links]

April 16, 2014 07:27 AM
Next time somebody tries to tell you that big pharma is hiding medical cures, or the illuminati, sorry, the 1%, are manipulating world society, or big oil invaded Iraq, or similar conspiratorial bullshit, just say: 

"That's all a bit 'lizard people', isn't it?"
[from: Google+ Posts]

Julian BondAfter the Big Bang is it Space-Time that expands or the distances between the things in it? [Technorati links]

April 16, 2014 07:23 AM
After the Big Bang is it Space-Time that expands or the distances between the things in it?

Something I continue to have trouble getting my head round is the idea that there are bits of the universe that are so far apart (and accelerating away from each other) that there hasn't been enough time since the Big Bang for light to travel between them. So there's a kind of quantum foam of light cones that can't interact. But if nothing can travel faster than the speed of light, then how did these bits of stuff get further apart than light could travel in the available time?  

http://en.wikipedia.org/wiki/Metric_expansion tries to explain this and I think I'm beginning to get it. It also helpfully points out that lots of highly qualified physicists have trouble understanding this as well, so it's not just me! There are bits of it that still feel like handwavium. In particular it feels a bit like http://en.wikipedia.org/wiki/Copenhagen_interpretation in that it's only difficult to think about because you're treating the equations as objective reality. It's all very well to say that it's space-time that's expanding, not the stuff in it, but, but...
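A back-of-envelope sketch of why this doesn't violate relativity: under Hubble's law the recession speed is v = H0 · d, so beyond the Hubble radius c/H0 proper distances grow faster than light can cross them, even though nothing moves through space faster than c. (The H0 value below is an assumed conventional round number.)

```python
# Hubble's law: recession speed v = H0 * d.
H0 = 70.0            # km/s per megaparsec (assumed round value)
c = 299792.458       # speed of light, km/s

# Beyond this distance, the expansion of the metric itself carries
# regions apart faster than light could travel between them.
hubble_radius = c / H0   # in megaparsecs

def recession_speed(distance_mpc):
    return H0 * distance_mpc  # km/s
```

So two regions farther apart than this never exchanged light: it's the metric that expanded between them, not objects racing through space.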
 Metric expansion of space - Wikipedia, the free encyclopedia »
Basic concepts and overview[edit]. Overview of metrics[edit]. Main article: Metric (mathematics). To understand the metric expansion of the universe, it is helpful to discuss briefly what a metric is, and how metric expansion works.

[from: Google+ Posts]
April 15, 2014

Julian BondWhat do we want? [Technorati links]

April 15, 2014 08:29 AM
What do we want?
Evidence based medicine.

When do we want it?
After full, transparent publication of all trial results both future and historical, peer review and without being encumbered by long term patents.

And we want our governments to subsidise this for the good of society as a whole and to properly enforce the rules with realistic penalties. And without the market being hopelessly skewed by mandated oligopolies bought with high priced lobbying. And without government money being wasted on high priced stockpiles that do nothing. (like Tamiflu: here's looking at you, Roche).

As the article points out, EU regulations pushing for greater transparency on clinical trials are a good thing, but not if they ignore historical results and are never enforced.

 Clinical trials and tribulations: a role for Europe | The Pirate Party »
It's hard to imagine a better fairy-tale villain than a big pharma company. There's something undeniably sinister about these vast, faceless titans with their unfathomable methods and international reach; so much so that it's sometimes an effort to remember that, actually, they're the ones who ...

[from: Google+ Posts]
April 14, 2014

CA on Security ManagementBeware the UnDead Password [Technorati links]

April 14, 2014 10:14 PM
Recently I took my daughters to see the RiffTrax Live showing of Night of the Living Dead.  RiffTrax is a group of three guys who show movies and goof on them (You can get more information here).  Night of the...


KatasoftMultiTenant User Management- the Easy Way [Technorati links]

April 14, 2014 08:29 PM

Building a multi-tenant SaaS isn’t easy, but in a world where your customers expect on-demand services and your engineering team wants a central codebase, multitenancy offers tremendous value. 

The hardest part is user management. Multi-tenant applications come with special user considerations:

As you might have guessed, Stormpath’s data model natively supports multi-tenant user management out-of-the-box. You don’t have to worry about building or managing data partitions yourself, and can focus on building your app’s real features. 

But, how do you build it? We’ve created a comprehensive Guide to Building Multi-tenant Apps and this post will specifically focus on how to model user data for multi-tenancy. We will also show how to build a multi-tenant application faster and more securely with Stormpath, a cloud-hosted user management service that easily supports multi-tenant user models.

What is a Multi-Tenant application?

Unlike most web applications that support a single company or organization with a tightly-coupled database, a multi-tenant application is a single application that services multiple organizations or tenants simultaneously. Multi-tenant apps need to ensure each Tenant has its own private data partition so the data is cleanly segmented from other tenants. The challenge: very few modern databases natively support tenant-based data partitioning. 

Devs must figure out how to do this either by using separate physical databases or by creating virtual data partitions in application code. Due to infrastructural complexities at scale, most engineering teams avoid the separate-database approach and implement virtual data partitions in their own application code.
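To make the virtual-partition idea concrete, here is a minimal sketch (an illustrative schema, not Stormpath code) using SQLite: one shared physical table, partitioned "virtually" by a tenant_id column that every query must carry.

```python
import sqlite3

# One shared in-memory table for all tenants; the tenant_id column is the
# virtual partition boundary (illustrative schema, not Stormpath's).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (tenant_id TEXT, email TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)", [
    ("customerA", "jsmith@customerA.com"),
    ("customerB", "jdoe@customerB.com"),
])

def users_for_tenant(tenant_id):
    # Every query the application issues must carry the tenant filter;
    # forgetting it anywhere leaks data across tenants.
    rows = db.execute(
        "SELECT email FROM users WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
    return [email for (email,) in rows]

print(users_for_tenant("customerA"))  # ['jsmith@customerA.com']
```

The fragility is visible even in this toy: the partition exists only as long as every single query remembers the `WHERE tenant_id = ?` clause.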

Our Guide to Building Multi-tenant Apps goes into deep detail on how to set up tenants and their unique identifiers. In this post, we will dive straight into setting up user management for your multi-tenant application.

Multi-Tenant User Management

Why use Stormpath for Multi-Tenant Applications?

Aside from the security challenges that come with partitioning data, setting up partitioning schemes and data models takes time. Very few, if any, development frameworks support multi-tenancy, so developer teams have to build out multi-tenant user management themselves.

Stormpath’s data model supports two different approaches for multi-tenant user partitioning. But first, a little background.

Stormpath Data Model Overview

Most application data models assign user Accounts and groups directly to the application. For example:

Traditional Application User Management Model:

              +----->| Account |
              | 1..* +---------+
+-------------+      ^
| Application |      |
+-------------+      v
              | 1..* +-------+
              +----->| Group |

But this isn’t very flexible and can cause problems over time – especially if you need to support more applications or services in the future.

Stormpath is more powerful and flexible. Instead of tightly coupling user accounts and applications, Accounts and Groups are ‘owned’ by a Directory, and an Application can reference one or more Directories dynamically:

Stormpath User Management Model:

                                 +----->| Account |
                                 | 1..* +---------+
+-------------+ 1..* +-----------+      ^
| Application |----->| Directory |      |
+-------------+      +-----------+      v
                                 | 1..* +-------+
                                 +----->| Group |

A Directory isn’t anything complicated – think of it as simply a ‘top level bucket for Accounts and Groups’. Why did we do it this way?

This directory-based model supports two approaches for partitioning multi-tenant user data:

Approach 1: Single Directory with a Group-per-Tenant

Recommended for most multi-tenant applications.

This design approach uses a single Directory, which guarantees Account and Group uniqueness. A Tenant is represented as a Group within a Directory, so you would have (at least) one Group per Tenant.

For example, let’s assume a new user, jsmith@customerA.com, signs up for your application. Upon submission, you would:

  1. Insert a new Account in your designated Directory. This will be a unique account.
  2. Generate a compatible subdomain name for their tenant and create an equivalent Group in your designated Directory. Your ‘Tenant’ record is simply a Group in a Stormpath Directory.
  3. Assign the just-created jsmith@customerA.com Account to the new Group. Any other Accounts added over time to this Group will also immediately be recognized as users for that Tenant.
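The three steps above can be sketched with a toy in-memory model (plain Python, not the Stormpath SDK; all names are illustrative):

```python
# Toy model of the Group-per-Tenant design: a single Directory owns all
# Accounts (unique directory-wide) and one Group per tenant.
class Directory:
    def __init__(self):
        self.accounts = {}  # email -> account record, unique across the directory
        self.groups = {}    # tenant (group) name -> set of member emails

    def create_account(self, email):
        if email in self.accounts:
            raise ValueError("account already exists: " + email)
        self.accounts[email] = {"email": email}
        return self.accounts[email]

    def create_group(self, name):
        self.groups.setdefault(name, set())

    def add_to_group(self, email, name):
        self.groups[name].add(email)

directory = Directory()
directory.create_account("jsmith@customerA.com")             # step 1: unique account
directory.create_group("customerA")                          # step 2: tenant == group
directory.add_to_group("jsmith@customerA.com", "customerA")  # step 3: assign to tenant
```

Any account later added to the "customerA" group is immediately recognized as a user of that tenant, exactly as step 3 describes.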

We cover the many benefits of the Single Directory approach - as well as how to implement it - in the Multi-Tenant Guide, but at a high level, this approach has the following benefits:

The Single Directory, Group-per-Tenant approach is the simplest model, easiest to understand, and provides many desirable features suitable for most multi-tenant applications. Read more.

Approach 2: Directory-per-Tenant

In Stormpath, an Account is unique only within a Directory. This means:

Account jsmith@gmail.com in Directory A

is not the same identity record as

Account jsmith@gmail.com in Directory B.

As a result, you could create a Directory in Stormpath for each of your tenants, and your user Account identities will be 100% separate. With this Directory-per-Tenant approach, your application’s user Accounts are only unique within a tenant (Directory), and users could register for multiple tenants with the same credentials.
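Per-directory uniqueness can be sketched in a few lines (again a toy illustration, not SDK code):

```python
# Directory-per-Tenant: each tenant gets its own directory, so the same
# email can exist independently in two tenants.
directories = {"tenantA": {}, "tenantB": {}}

def register(tenant, email, profile):
    directory = directories[tenant]
    if email in directory:
        raise ValueError("duplicate account within this tenant")
    directory[email] = profile

register("tenantA", "jsmith@gmail.com", {"name": "J. Smith (work)"})
register("tenantB", "jsmith@gmail.com", {"name": "J. Smith (personal)"})  # allowed

# A second registration inside the SAME tenant, however, would raise.
```

The two `jsmith@gmail.com` records are fully independent identity records, which is exactly the behavior the Directory-per-Tenant model buys you.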

Directory-per-Tenant is an advanced data model that offers more flexibility, but at the expense of simplicity. This is the model we use at Stormpath, and it is only recommended for more advanced applications or those with special requirements. 

As a result, we don’t cover the approach in further detail here. If you feel the Directory-per-Tenant approach might be appropriate for your project, and you’d like some advice, just email support@stormpath.com. We are happy to help you model out your user data, whether or not Stormpath is the right option for your application.

We’re Always Here to Help

Whether you’re trying to figure out multi-tenant approaches for your application or have questions about a specific Stormpath API, we’re always here to help. Please feel free to contact us at support@stormpath.com.

IDMGOV: FICAM TFS TEM on Identity Resolution Needs for Online Service Delivery [Technorati links]

April 14, 2014 07:50 PM
The FICAM Trust Framework Solutions (TFS) Program is convening public and private sector experts in identity proofing, identity resolution and privacy for an Identity Resolution Needs for Online Service Delivery Technical Exchange Meeting (TEM) on 5/1/14 from 9:00 AM - 5:00 PM EST in Washington, DC.


Save the 5/1/14 date! In-person attendance and early registration (due to limited space) are recommended.

Register Now!

Event Location: GSA, 1800 F St NW, Washington, DC 20405

In-person event logistics information will be provided to registered attendees. Remote attendance information will be made available to registered attendees who are not able to attend in-person.

Questions? Please contact the FICAM TFS Program at TFS.EAO@gsa.gov


Identity attributes that are used to uniquely distinguish between individuals (versus describing individuals) are referred to as identifiers. Identity resolution is the ability to resolve identity attributes to a unique individual (i.e., no other individual has the same set of attributes) within a particular context.
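As a toy illustration of that definition (hypothetical records, not data from any study): an attribute bundle resolves an identity only if exactly one individual in the population matches it.

```python
# Hypothetical population; the bundle (name, date of birth, ZIP) stands in
# for the kinds of attribute bundles the NASPO study evaluates.
people = [
    {"name": "Ana Ruiz", "dob": "1980-02-01", "zip": "20405"},
    {"name": "Ana Ruiz", "dob": "1980-02-01", "zip": "20001"},
    {"name": "Bo Chen",  "dob": "1975-07-19", "zip": "20405"},
]

def resolves(bundle, attrs):
    """True if the given attributes pick out exactly one individual."""
    matches = [p for p in people if all(p[a] == bundle[a] for a in attrs)]
    return len(matches) == 1

# Name + DOB alone is ambiguous in this population; adding ZIP resolves it.
print(resolves({"name": "Ana Ruiz", "dob": "1980-02-01"}, ["name", "dob"]))  # False
print(resolves({"name": "Ana Ruiz", "dob": "1980-02-01", "zip": "20405"},
               ["name", "dob", "zip"]))                                      # True
```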

Within the context of enabling high value and sensitive online government services to citizens and businesses, the ability to uniquely resolve the identity of an individual is critical to delivering government benefits, entitlements and services.

As part of the recent update to FICAM TFS, we recognized the Agency need for standardized approaches to identity resolution in our Approval process for Credential Service Providers (CSPs) and Identity Managers (IMs).

The study done by the NASPO IDPV Project, "Establishment of Core Identity Attribute Sets & Supplemental Identity Attributes – Report of the IDPV Identity Resolution Project (February 17, 2014)" is currently being used as an industry based starting point for addressing this need. The study proposed 5 equivalent attribute bundles that are sufficient to uniquely distinguish between individuals in at least 95% of cases involving the US population.


However, the FICAM TFS Program recognizes that the NASPO IDPV study is a starting point, not the end. As such, we are convening this TEM to:


If you have expertise in identity resolution, identity proofing and related privacy aspects, and have data-backed research and results to share on this topic, we are interested in hearing from you. Please contact us at TFS.EAO@gsa.gov by COB 4/16/14 with your proposed discussion topic.

DRAFT AGENDA for 05/01/2014

The TEM will seek to address this topic across three dimensions: (1) Identity Resolution (2) Privacy and (3) Business Models / Cost.

09:00 AM - 09:30 AM Attendee Check-In
09:30 AM - 09:55 AM Welcome & TEM Overview/Goals

10:00 AM - 10:10 AM FICAM TFS Level Set on Resolution
10:15 AM - 10:40 AM Agency Viewpoint Panel
10:45 AM - 11:10 AM Industry Viewpoint Panel
11:15 AM - 11:45 AM Resolution Discussion / Q&A

11:45 AM - 01:00 PM LUNCH (On your own) & NETWORKING

01:00 PM - 01:10 PM FICAM TFS Level Set on Privacy
01:15 PM - 01:40 PM Agency Viewpoint Panel
01:45 PM - 02:10 PM Industry Viewpoint Panel
02:15 PM - 02:45 PM Privacy Discussion / Q&A

02:45 PM - 03:00 PM BREAK

03:00 PM - 03:10 PM FICAM TFS Level Set on Business Models / Cost
03:15 PM - 03:40 PM Agency Viewpoint Panel
03:45 PM - 04:10 PM Industry Viewpoint Panel
04:15 PM - 04:45 PM Business Models / Cost Discussion and Q&A

04:45 PM - Event Wrap-up

Sign up for our notification list @ http://www.idmanagement.gov/trust-framework-solutions to be kept updated on this and future FICAM TFS news, events and announcements.

:- by Anil John
:- Program Manager, FICAM Trust Framework Solutions

Courion: Re-set Your Passwords, Early & Often [Technorati links]

April 14, 2014 12:29 PM

Access Risk Management Blog | Courion

Jason Mutschler: On Monday, April 7th, OpenSSL disclosed a bug in its software that allows data, which can include unencrypted usernames and passwords, to be collected remotely from memory by an attacker.  OpenSSL is the most popular open-source SSL (Secure Sockets Layer) implementation, and the software is used by many popular websites such as Yahoo, Imgur, Stack Overflow, Flickr and Twitpic.  Many of these popular websites have been patched; however, as of this writing some, including Twitpic, remain vulnerable.

Several tools have become available to check whether an individual website is vulnerable. We recommend that you check whether the websites you use are affected before logging in.  Once a website you log into is no longer vulnerable, you should reset your password, since it may have been captured while the server was vulnerable.  The bug is also present in some client software, and a malicious web server could be used to collect data from memory on client machines running that software.

This particular vulnerability has been present since 2012 and underscores the need to look beyond typical perimeter defenses and continuously monitor for unusual behavior within your network.  Persistent attackers will continue to find creative ways to breach the perimeter, and detecting abnormal use of valid credentials is becoming extremely important.

By the way, Courion websites, including the Support Portal and the CONVERGE registration page remain unaffected by this vulnerability.


April 13, 2014

Anil John: Standardizing the RP Requirements for Identity Resolution [Technorati links]

April 13, 2014 06:00 PM

When a credential from an outsourced CSP shows up at the front door of an RP, the RP needs two pieces of information: first, an answer to the question “Are you the same person this credential was issued to?” and second, information to uniquely resolve and enroll the credential holder at the RP. We have more or less standardized the first bit, but have not been as mindful about the second.

I have my own opinions as to why this has not been done before:

At the same time, I do believe that in order to deliver public sector services, it is critical to address this issue. But it needs to be done in a manner that looks at the world as it exists and not as we would wish it to be, which in the U.S. means that:

To make this happen will require three things:

  1. A clear understanding by the RP of the various approaches it can utilize to enroll users
  2. An understanding of the contexts in which IP/proprietary approaches have a role in identity resolution, e.g. at the "identity proofing component"
  3. Development and standardization of the quantitative criteria used by the RP to evaluate the information it needs for identity resolution


This blog post, Standardizing the RP Requirements for Identity Resolution, first appeared on Anil John | Blog. These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Julian Bond: Global Warming won't be as bad as the IPCC predict and will peak at the low end of their predictions... [Technorati links]

April 13, 2014 06:42 AM
Global Warming won't be as bad as the IPCC predict and will peak at the low end of their predictions.

Because society will have collapsed by then.

So that's all good then!


ps. Have you noticed how 2030 is no longer the far future? The doomsayers are predicting major disruption by 2030 which is now only ~15 years away.
 Oil Limits and Climate Change - How They Fit Together »
We hear a lot about climate change, especially now that the Intergovernmental Panel on Climate Change (IPCC) has recently published another report. At the same time, oil is reaching limits, and thi...

[from: Google+ Posts]
April 11, 2014

ForgeRock: ForgeRock Software Not Affected by ‘Heartbleed’ Security Flaw [Technorati links]

April 11, 2014 09:33 PM

A few days ago, it was announced that there is a major vulnerability in OpenSSL, known as Heartbleed. ForgeRock customers running enterprise software will not be affected by this vulnerability.

Important notes:

The post ForgeRock Software Not Affected by ‘Heartbleed’ Security Flaw appeared first on ForgeRock.

Mike Jones - Microsoft: JSON Web Key (JWK) Thumbprint Specification [Technorati links]

April 11, 2014 12:47 AM

I created a new, simple spec that defines a way to create a thumbprint of an arbitrary key, based upon its JWK representation. The abstract of the spec is:

This specification defines a means of computing a thumbprint value (a.k.a. digest) of JSON Web Key (JWK) objects analogous to the x5t (X.509 Certificate SHA-1 Thumbprint) value defined for X.509 certificate objects. This specification also registers the new JSON Web Signature (JWS) and JSON Web Encryption (JWE) Header Parameters and the new JSON Web Key (JWK) member name jkt (JWK SHA-256 Thumbprint) for holding these values.

The desire for this came up in an OpenID Connect context, but it’s of general applicability, so I decided to submit the spec to the JOSE working group. Thanks to James Manger, John Bradley, and Nat Sakimura for the discussions that led up to this spec.
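A rough sketch of the kind of computation the spec defines (this mirrors the construction that was later standardized in RFC 7638; the exact required members per key type are defined in the spec, and the key value below is illustrative): keep only the required members for the key type, serialize them in lexicographic order with no insignificant whitespace, then SHA-256 and base64url-encode.

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk, required_members):
    # Serialize only the required members, keys sorted lexicographically,
    # with no insignificant whitespace, then SHA-256 and base64url-encode
    # without padding.
    subset = {m: jwk[m] for m in required_members}
    canonical = json.dumps(subset, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Example symmetric key (illustrative value); for "oct" keys the required
# members are "k" and "kty". Optional members like "alg" are excluded,
# so they do not change the thumbprint.
key = {"kty": "oct", "k": "GawgguFyGrWKav7AX4VKUg", "alg": "HS256"}
print(jwk_thumbprint(key, ["k", "kty"]))
```

Because optional members are excluded, two JWK representations of the same key material yield the same thumbprint, which is the whole point of the value.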

The specification is available at:

An HTML formatted version is also available at:

April 10, 2014

Gluu: Impact of Heartbleed for Gluu Customers [Technorati links]

April 10, 2014 05:12 PM

This blog provides a good analysis to understand the impact of Heartbleed: http://www.gluu.co/cacert-heartbleed

If you are running a Shibboleth IDP fronted by an Apache HTTPD server, the private SAML IDP key in the JVM’s memory (i.e. tomcat) would not be exposed to the Apache httpd process.

However, if the web server’s private key is compromised, then effectively you have HTTP, not HTTPS!

Password credentials could have leaked. After patching and re-keying the server, people should be advised to reset their password credentials.

I think this is the biggest impact.

It highlights the cost of our societal over-reliance on passwords, basically the cost of doing nothing. Passwords stolen from one site are used elsewhere, so even if your web server wasn’t compromised, a person may have used the same password on a server that was. The integrity of password authentication has managed to slip to a new all-time low.


Kuppinger Cole: Enterprise Single Sign-On - is there still a need for it? [Technorati links]

April 10, 2014 03:48 PM
In KuppingerCole Podcasts

In this KuppingerCole Webinar, we will look at Enterprise Single Sign-On (E-SSO) and the alternatives. Starting with the use cases for single sign-on and related scenarios, we will analyze the technical alternatives. We look at various aspects, such as implementation time and reach in terms of applications, users, and devices, and compare the alternatives.

Watch online

Gluu: CACert Heartbleed Notification [Technorati links]

April 10, 2014 02:45 PM
I received this note from CACert today. It provides a good overview of the Heartbleed vulnerability.

See also Shibboleth Security Advisory

Dear customer,

There is news [1] about a bug in OpenSSL that may allow an attacker to leak arbitrary information from any process using OpenSSL. [2]

We contacted you because you have subscribed to receive general announcements, or because you have had a server certificate since the bug was introduced into the OpenSSL releases and are especially likely to be affected by it. CAcert is not responsible for this issue, but we want to inform the members who are especially likely to be vulnerable or otherwise affected.

Good news:
==========
Certificates issued by CAcert are not broken, and our central systems did not leak your keys.

Bad news:
=========
Even so, you may be affected. Although your keys were not leaked by CAcert, the keys on your own systems might have been compromised if you were or are running a vulnerable version of OpenSSL.

To elaborate on this:
=====================
The central systems of CAcert and our root certificates are not affected by this issue. Regrettably, some of our infrastructure systems were affected by the bug. We are working to fix them and have already completed work on the most critical ones. If you logged into those systems within the last two years (see the list in the blog post), you might be affected!

Unfortunately, given the nature of this bug, we have to assume that the certificates of our members may be affected if they were used in an environment with a publicly accessible OpenSSL connection (e.g. Apache web server, mail server, Jabber server, ...). The bug was open in OpenSSL for two years - from December 2011 - and was introduced in stable releases starting with OpenSSL 1.0.1. When an attacker can reach a vulnerable service, he can abuse the TLS heartbeat extension to retrieve arbitrary chunks of memory by exploiting a missing bounds check. This can lead to disclosure of your private keys, resident session keys and other key material, as well as all volatile memory contents of the server process, like passwords, transmitted user data (e.g. web content) and other potentially confidential information.

Exploiting this bug does not leave any noticeable traces, so for any system which is (or has been) running a vulnerable version of OpenSSL you must assume that at least the server keys it used are compromised and therefore must be replaced by newly generated ones. Simply renewing existing certificates is not sufficient! Please generate NEW keys with at least 2048-bit RSA or stronger!

As mentioned above, this bug can be used to leak passwords, so you should consider changing your login credentials on potentially compromised systems, as well as any other system where those credentials might have been used, as soon as possible. An (incomplete) list of commonly used software which includes or links to OpenSSL can be found at [5].

What to do?
===========
- Ensure that you upgrade your system to a fixed OpenSSL version (1.0.1g or above).
- Only then create new keys for your certificates.
- Revoke all certificates which may be affected.
- Check which services you have used within the last two years that may have been affected.
- Wait until you think those environments have been fixed.
- Then (and only then) change your credentials for those services. If you do it too early, i.e. before the sites are fixed, your data may be leaked again. So be careful when you do this.

CAcert's response to the bug:
=============================
- We updated most of the affected infrastructure systems and created new certificates for them. The remaining ones will follow soon.
- We used this opportunity to upgrade to 4096-bit RSA keys signed with SHA-512. The new fingerprints can be found in the list in the blog post. ;-)
- With this email we contact all members who had active server certificates within the last two years.
- We will keep you updated in the blog.

A list of affected and fixed infrastructure systems and new information can be found at: https://blog.cacert.org/2014/04/openssl-heartbleed-bug/

Links:
[1] http://heartbleed.com/
[2] https://www.openssl.org/news/secadv_20140407.txt
[3] https://security-tracker.debian.org/tracker/CVE-2014-0160
[4] http://www.golem.de/news/sicherheitsluecke-keys-auslesen-mit-openssl-1404-105685.html
[5] https://www.openssl.org/related/apps.html

Kuppinger Cole: Leadership Compass: Identity Provisioning - 70949 [Technorati links]

April 10, 2014 12:32 PM
In KuppingerCole

Identity Provisioning is still one of the core segments of the overall IAM market. Identity Provisioning is about provisioning identities and access entitlements to target systems. This includes creating and managing accounts in such connected target systems and associating the accounts with groups, roles, and other types of administrative entities to enable entitlements and authorizations in the target systems. Identity Provisioning is...

April 09, 2014

Katasoft: Lightweight Authentication and Authorization for MQTT with Stormpath [Technorati links]

April 09, 2014 06:29 PM

This article originally appeared on the HiveMQ blog. A huge ‘Thank You’ to their team for the plugin and writeup!


Authentication and authorization are key aspects for every Internet of Things application. When using MQTT, topic permissions are especially important for most public-facing MQTT brokers. Learn how you can use Stormpath with HiveMQ to set up fine grained security for your MQTT service in minutes.

For the impatient: You can download the Stormpath HiveMQ plugin here.

Challenges of Authentication and Authorization in the Internet of Things

Security is a big concern in the age of the Internet of Things. More than ever, personal and sensor information is transferred over the Internet: for example, data about conditions and status in our home or company, as well as chat messages or status updates revealing our current activity and location. In the wrong hands, this kind of information can be exploited to harm people and companies.

Often the problem with architecting for security is not awareness of the challenges and risks, but the implementation of the necessary security measures. Most developers are focused on building applications, and not everybody has deep know-how in implementing secure authentication or authorization.

Stormpath to the Rescue

Stormpath is a User Management API for developers, built for user authentication and authorization in traditional web applications. It is also a perfect fit for Internet of Things applications – no more reinventing the wheel with a manual implementation of user and permission models for your applications. Stormpath stores all user credentials in a centralized, cloud-based directory, and users can be assigned to different groups and granted fine-grained permissions.

Stormpath provides role-based access control by adding users to one or more groups, which is ideal for permissions inside one application. To create user accounts, groups and so on, Stormpath provides a REST API, SDKs for Java, PHP, Ruby and Python, and an easy-to-use Web UI. More details can be found in the extensive documentation on their website. Another important aspect for IoT applications is the constant availability of all services. The basic version of Stormpath is free and is ideal for prototyping and small applications; it does not provide any guarantees on uptime, though. For enterprise and production usage, Stormpath provides short response times on support requests and 100% availability SLAs.

Use Stormpath for MQTT Authentication and Authorization

So how can we leverage Stormpath to create authentication and authorization for MQTT clients?

First of all, let’s have a look at its architecture.


The figure shows us that Stormpath is organized in different tenants and each tenant has a cloud directory, which can be accessed by a REST API. The API can be used by a variety of applications. Inside the cloud directory are accounts, groups, directories and applications.

We can use the Stormpath structure to associate MQTT clients with accounts. That means whenever a new MQTT client connects, we query Stormpath to check whether an account with the MQTT username and password exists, and only then let the client connect. This handles the authentication scenario pretty straightforwardly.

(Figure: HiveMQ Stormpath schema)

The authorization behavior can be achieved using Stormpath groups. If an authenticated client wants to publish a message, the MQTT broker can look up all groups of that particular account, which represent the topics (including wildcards) the client is allowed to use. For example, if a client wants to publish to home/livingroom/temperature, the MQTT broker gets all the account's groups from Stormpath (say, home/livingroom/#) and checks whether the topic matches the client's permissions. If the client were only in the group home/livingroom/light, permission to publish would be denied.
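The topic-matching step above can be sketched in a few lines (Python for brevity; HiveMQ and its plugins are Java). Per MQTT semantics, '+' matches exactly one topic level and '#' matches all remaining levels:

```python
# MQTT topic-filter matching, as used for topic permissions:
# '+' matches one level, '#' matches the rest of the topic.
def topic_matches(filter_, topic):
    f, t = filter_.split("/"), topic.split("/")
    for i, level in enumerate(f):
        if level == "#":              # multi-level wildcard: match the rest
            return True
        if i >= len(t):               # topic ran out of levels
            return False
        if level != "+" and level != t[i]:
            return False
    return len(f) == len(t)           # no wildcard: lengths must agree

print(topic_matches("home/livingroom/#", "home/livingroom/temperature"))      # True
print(topic_matches("home/livingroom/light", "home/livingroom/temperature"))  # False
```

A broker-side permission check then reduces to asking whether any of the account's group names matches the topic being published.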

This described behavior is implemented in our Stormpath Plugin for HiveMQ, which retrieves the necessary authentication and authorization permission from Stormpath.

Using the Stormpath HiveMQ Plugin

Now it is time to get the Stormpath HiveMQ plugin in place and see how simple it is to authenticate a client from Stormpath.

General Setup

stormpath.apiKey.id: <Your API key goes here>
stormpath.apiKey.secret: <Your API key secret goes here>



Choose the directory that corresponds to the application name you set in the property file. The username and password must match the credentials provided by the MQTT client (directory, username, first name, last name, email and password are mandatory fields).


Hint: At this point the client can’t publish or subscribe to any topic, because the permission defaults to deny.


More than a proof of concept

While configuring permissions via the Stormpath Web UI is easy and sufficient for a proof of concept, it may be tedious for real applications to maintain all permissions by hand. And here is where Stormpath really excels in conjunction with HiveMQ: you can update all permissions and accounts via the REST API, and all changes are automatically applied to your HiveMQ instance. You could integrate Stormpath easily with your user-registration backend and automatically add the correct topic permissions to HiveMQ. Imagine you had a HiveMQ cluster up and running – you could update all the permissions automatically, without touching the brokers at all.


As we have seen, the setup of Stormpath and HiveMQ is done in minutes and now you have a directory for authentication and authorization in place that can be easily modified by the Web UI and programmatically – while HiveMQ is running!

Anil Saldhana - Red Hat: JBoss Community Projects (including WildFly AS): OpenSSL HeartBleed Vulnerability [Technorati links]

April 09, 2014 06:29 PM
I want to take this post to summarize that "JBoss community projects including WildFly Application Server are not directly affected by the OpenSSL HeartBleed Vulnerability".

JBossWeb APR

JBossWeb APR functionality requires OpenSSL 0.9.7 or 0.9.8, versions that are not affected by this vulnerability.

I have consulted the Red Hat Security Response Team before posting this note. We continue to monitor the situation.
Feel free to report any anomalies using http://www.jboss.org/security

We do recommend taking the appropriate precautions.

Please use the links in the references section for gauging indirect exposure to the HeartBleed vulnerability.

Indirect exposure may be possible:


Please refer to the following articles for more information:

Official OpenSSL Advisory: https://www.openssl.org/news/secadv_20140407.txt
HeartBleed Information: http://www.heartbleed.com

Red Hat Official Announcement: https://access.redhat.com/site/announcements/781953

CVE:  https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-0160

Amazon Web Services Advisory: https://aws.amazon.com/amazon-linux-ami/security-bulletins/ALAS-2014-320/

Official Linux Distribution Pages


Christopher Allen - Alacrity: Advice to SysAdmins & Managers about Heartbleed Bug in SSL [Technorati links]

April 09, 2014 06:15 PM

Christopher Allen - Alacrity: General Advice about the Heartbleed Bug in SSL [Technorati links]

April 09, 2014 06:00 PM

Phil Hunt - Oracle: Standards Corner: Basic Auth MUST Die! [Technorati links]

April 09, 2014 03:57 PM
Basic Authentication (part of RFC 2617) was developed along with HTTP/1.1 (RFC 2616) when the web was relatively new. This specification envisioned that user-agents (browsers) would ask users for their user-id and password and then pass the encoded information to the web server via the HTTP Authorization header. The Basic Auth approach quickly died in popularity in favour of form-based login, where
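For background on what that Authorization header actually carries (illustrative credentials): base64 is a reversible encoding, not encryption, which is much of why the post wants the scheme dead.

```python
import base64

# What a Basic Auth header contains: "user:password", base64-encoded.
user, password = "alice", "s3cret"          # illustrative credentials
token = base64.b64encode(f"{user}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"

# Reversing it takes one line, so anyone who observes the header (absent
# TLS) recovers the password directly.
recovered = base64.b64decode(token).decode()
print(header)
print(recovered)  # alice:s3cret
```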

Julian Bond: There's really only one choice left for the US Republican Party: [Technorati links]

April 09, 2014 03:05 PM
There's really only one choice left for the US Republican Party:

Vote Putin-Palin in 2016!


 www.antipope.org/charlie/pix/Vladimir-Putin-riding-a-bear.jpeg »

[from: Google+ Posts]

Kuppinger Cole: Migrating away from your current Identity Provisioning solution [Technorati links]

April 09, 2014 11:30 AM
In KuppingerCole Podcasts

Many organizations are currently considering migrating away from their current Identity Provisioning solution. There are many reasons to do so: vendors were acquired and the roadmap changed; the requirements have changed and the current solution no longer appears to be a perfect fit; a lot of money has been spent for little value; the solution does not suit the new requirements of managing external users and access to Cloud services...

Watch online
April 08, 2014

Kuppinger Cole: 18.06.2014: Moving from Prohibition to Trust: Identity Management in the On Premises and Cloud Era [Technorati links]

April 08, 2014 11:43 PM
In KuppingerCole

Managing and governing access to systems and information, both on-premises and in the cloud, needs to be well architected to embrace and extend existing building blocks and help organizations move forward towards a more flexible, future-proof IT infrastructure.

Ping Talk - Ping Identity: Bulletin: Ping Identity Unaffected by Heartbleed [Technorati links]

April 08, 2014 11:01 PM
<p><em><span style="line-height: 1.62;">(Updated April 15 to include recommendation to update shared credentials)</span></em></p> <p><span style="line-height: 1.62;">While the OpenSSL Heartbleed bug continues to feed a patching frenzy across the Internet, those using PingFederate, PingOne and/or PingAccess can rest easy.</span></p> <p>None of our platforms is vulnerable to the bug. <span>No updates or patches are required. However, customers that share certificates across applications and platforms, including PingFederate, should exercise due diligence on their non-Ping platforms. Ping recommends that c</span><span style="line-height: 1.62;">redentials at risk be changed out. The change would include any private keys, passwords, shared secrets, and any other credentials on the application that might be used for authentication to PingFederate, or that have some other shared usage within PingFederate.</span></p> <p><span style="line-height: 1.62;">Ping&apos;s Security Engineering confirms that PingFederate does not use the affected software. </span><span style="line-height: 1.62;">But for the sake of transparency, customers should note that we do distribute and use OpenSSL with our Apache Integration Kit for Windows; however, our package does not contain the vulnerable code, we don&apos;t use it to run HTTPS, and it&apos;s not a method that is exposed.</span></p> <p>In addition, our Apache Integration Kit for Linux is dependent on the OS&apos;s OpenSSL library, but we do not distribute the library - we just use it, and we aren&apos;t using the library in a way that is exposed. However, <span style="line-height: 1.62;">PingFederate may be exposed indirectly to Heartbleed when configurations of PingFederate incorporate certificates created or used by another application or platform that has been compromised, e.g. a shared certificate. 
Follow our <a href="https://www.pingidentity.com/support/solutions/index.cfm/Heartbleed-and-Ping-Identity-products">recommendations listed here</a>.</span></p> <p>In addition, Beau Christensen, Ping&apos;s director of infrastructure operations, confirmed that Ping Identity&apos;s cloud services, notably PingOne, are not affected by the Heartbleed vulnerabilities. He said that as a precautionary measure, "we are forcing credential updates across all systems, and are rotating public certificates and keys." <a href="https://status.pingidentity.com/incidents/jyxrz26bwph9" style="line-height: 1.62;">His full report is available here.</a></p> <p>Also, the engineering team for PingAccess, our <span>mobile, Web and API access management platform, </span>confirmed it was not affected by the bug.</p> <p><em>Brian Whitney, Beau Christensen, Paul Marshall, Stephen Edmonds, Andrew King, Bill Jung, Yang Yu and John Fontana contributed to this blog.</em></p> <p></p> <p><img alt="OpenSSL Ping sso. cleared.png" src="https://www.pingidentity.com/blogs/pingtalk/OpenSSL%20Ping%20sso.%20cleared.png" width="620" height="346" class="mt-image-center" style="text-align: center; display: block; margin: 0 auto 20px;" /></p>

Ping Talk - Ping IdentityThis Week in Identity: That Flushing Sound is Trust Leaving the Building [Technorati links]

April 08, 2014 11:00 PM
<p><span style="line-height: 1.62;">The Heartbleed bug landed an MMA-style left hook on the Internet&apos;s security jaw this week. Zulfikar Ramzan, chief technology officer at Elastica, says<img alt="this_week_in_identity-sm logo.png" src="https://www.pingidentity.com/blogs/pingtalk/this_week_in_identity-sm%20logo.png" width="200" height="76" class="mt-image-right" style="float: right; margin: 0 0 20px 20px;" /> </span><a href="http://venturebeat.com/2014/04/09/heartbleed-broken-trust/?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+Venturebeat+%28VentureBeat%29" style="line-height: 1.62;">Heartbleed cast a shadow over beliefs that the Internet is safe for transactions.</a><span style="line-height: 1.62;"> "For people to be able to transact with confidence online, they had to believe that SSL was sacrosanct." Sadly, it was not.</span></p> <p><a href="http://techcrunch.com/2014/04/09/heartbleed-the-first-consumer-grade-exploit/?ncid=rss&amp;utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29" style="line-height: 1.62;">John Biggs: Heartbleed, The First Security Bug With A Cool Logo<br /></a><span style="line-height: 1.62;">Heartbleed was one of the first "branded" exploits, a computer bug that has been professionally packaged for easy mass consumption.
How did Heartbleed.com happen?</span></p> <p><a href="http://xkcd.com/1353/" style="line-height: 1.62;">xkcd&apos;s stick-figure look at Heartbleed<br /></a>Exploits aren&apos;t funny, but in the stick-figure world anything is fair game.</p> <p><span style="line-height: 1.62;">To stem the bleeding, read on...</span></p> <p><span>General</span><span style="line-height: 1.62;"> </span></p> <ul> <li><a href="http://www.independentid.com/2014/04/standards-corner-basic-auth-must-die.html">Phil Hunt: Standards Corner: Basic Auth MUST Die!</a><br /> Basic Authentication (part of RFC2617) was developed along with HTTP1.1 (RFC2616) when the web was relatively new. This specification envisioned that user-agents (browsers) would ask users for their user-id and password and then pass the encoded information to the web server via the HTTP Authorization header.</li> <li><a href="http://blog.aniljohn.com/2014/04/context-and-identity-resolution.html?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+AnilJohn+%28Anil+John+%7C+Blog%29">Anil John: Context and Identity Resolution</a><br /> If identity is defined as a set of attributes that uniquely describe an individual, identity resolution is the confirmation that an identity has been resolved to a unique individual within a particular context. In a federation environment, identity resolution is a means to an end, namely <a href="http://blog.aniljohn.com/2013/08/planning-for-user-enrollment-in-a-federation-redux.html">user enrollment</a>. This blog post looks at identity resolution in two separate contexts: at the identity proofing component and at the RP.</li> <li><a href="https://www.pingidentity.com/blogs/cto-blog/index.cfm/2014/04/warning---explicit-content.cfm">Paul Madsen: Warning!
Explicit (Authentication) Content</a><br /> Today&apos;s authentication mechanisms are explicit and discontinuous - on some schedule (depending on the resource being accessed) we demand users stop what they are doing (e.g. doing work for us or buying stuff from us) and <i>login</i> - a distinct and unappreciated operation.</li> </ul> <p><span style="line-height: 1.62;"> </span><span style="line-height: 1.62;">APIs</span><span style="line-height: 1.62;"> </span></p> <ul> <li><a href="http://blog.programmableweb.com/2014/04/08/seven-key-messages-from-nordic-apis-that-got-developers-talking/">Mark Boyd: Seven Key Messages From Nordic APIs that Got Developers Talking</a><br />Presentations by Travis Spencer (<a href="http://twobotechnologies.com/">Twobo Technologies</a>) and David Gorton (<a href="http://pingidentity.com/">Ping Identity</a>) shared the latest advances in API neo-security frameworks. Currently, most industry players with an eye to best-practice identity management and user authentication are using OAuth 2 and SAML. OpenID Connect is still seen as "the new kid on the block."</li> <li><a href="https://www.youtube.com/watch?v=zhbm_MtSYlg&amp;utm_content=buffere12eb&amp;utm_medium=social&amp;utm_source=twitter.com&amp;utm_campaign=buffer">Toward 1 million APIs (video)</a><br />API growth is accelerating, with many organizations launching and using APIs. However, we&apos;re still in the tens of thousands or low hundreds of thousands of APIs, and many are not publicly accessible. What happens when we reach millions of APIs - and how do we get there? A panel at the API Strategy &amp; Practice Conference in Amsterdam talks about future API challenges.
Hosted by Steven Willmott, CEO of 3scale.</li> </ul> <p><span>Privacy</span></p> <ul> <li><a href="http://www.scmagazine.com/govwin-iq-hacked-payment-card-data-of-25000-deltek-customers-at-risk/article/342005/?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+SCMagazineNews+%28SC+Magazine+News%29">Kim Zetter: The Feds Cut a Deal With In-Flight Wi-Fi Providers, and Privacy Groups Are Worried</a><br />According to a letter Gogo, the in-flight Wi-Fi provider, submitted to the Federal Communications Commission, the company voluntarily exceeded the requirements of the Communications Assistance for Law Enforcement Act, or CALEA, by adding capabilities to its service at the request of law enforcement. The revelation alarms civil liberties groups, which say companies should not be cutting deals with the government that may enhance the ability to monitor or track users.</li> <li><a href="http://www.theregister.co.uk/2014/04/07/internet_inception_security_vint_cerf_google_hangout/">John Leyden: Vint Cerf wanted to make internet secure from the start, but secrecy prevented it</a><b><br /></b>"I worked with the National Security Agency on the design of a secured version of the internet but we used classified security technology at the time and I couldn&apos;t share that with my colleagues.
If I could start over again I would have introduced a lot more strong authentication and cryptography into the system."</li> <li><a href="http://www.securityweek.com/german-nsa-panels-chairman-quits-spat-over-snowden">German NSA Panel&apos;s Chairman Quits in Spat Over Snowden</a><b><br /></b>The chairman of a new German parliamentary panel probing mass surveillance by the NSA abruptly quit on Wednesday, rejecting opposition demands that the body question fugitive US intelligence leaker Edward Snowden.<span> </span></li> </ul> <p>IoT<span> </span></p> <ul> <li><a href="http://www.wired.com/2014/04/this-brilliant-internet-connected-washer-is-a-roadmap-for-the-internet-of-things/">Kyle VanHemert: This Brilliant Washing Machine Is a Roadmap for the Internet of Things</a><br />There couldn&apos;t be a more perfect example of our absurd obsession with the internet of things than the connected washing machine. Nothing so concisely symbolizes just how ludicrous our mania for connectivity has become as a smartphone app that helps you wash your socks.</li> </ul> <p>Events</p> <ul> <li><a href="http://www.infosec.co.uk/">Info Sec UK</a><br />April 29-May 1; London<br />More than 13,000 attendees at Europe&apos;s largest free-to-attend conference. Identity management, mobile, managed services and more.</li> <li><a href="http://www.internetidentityworkshop.com/">IIW</a><br />May 6-8; Mountain View, Calif.<br />The Internet Identity Workshop, better known as IIW, is an un-conference that happens at the Computer History Museum in the heart of Silicon Valley.</li> <li><a href="http://www.gluecon.com/2014/">Glue Conference 2014</a><br />May 21-22; Broomfield, Colo.<br />Cloud, DevOps, Mobile, APIs, Big Data -- all of the converging, important trends in technology today share one thing in common: developers.
</li> <li><a href="http://www.kuppingercole.com/book/eic2014">European Identity &amp; Cloud Conference 2014</a><b><br /></b>May 13-16; Munich, Germany<br />The place where identity management, cloud and information security thought leaders and experts get together to discuss and shape the future of secure, privacy-aware, agile, business- and innovation-driven IT.</li> <li><a href="http://www.gartner.com/technology/summits/na/catalyst/">Gartner Catalyst - UK</a><br />June 17-18; London<br />A focus on mobile, cloud, and big data with separate tracks on identity-specific IT content as it relates to the three core conference themes.</li> <li><a href="http://bit.ly/1eyMQQ3">Cloud Identity Summit 2014</a><br />July 19-22; Monterey, Calif.<br />The modern identity revolution is upon us. CIS converges the brightest minds across the identity and security industry on redefining identity management in an era of cloud, virtualization and mobile devices.</li> <li><a href="http://www.gartner.com/technology/summits/na/catalyst/">Gartner Catalyst - USA</a><br />Aug. 11-14; San Diego, CA<br />A focus on mobile, cloud, and big data with separate tracks on identity-specific IT content as it relates to the three core conference themes.</li> </ul>

Kuppinger ColeThe Heartbleed Bug in OpenSSL – probably the most serious security flaw in years [Technorati links]

April 08, 2014 03:27 PM
In Alexei Balaganski

As just about every security-related publication has reported today, a critical vulnerability in OpenSSL was discovered yesterday. OpenSSL is a cryptographic software library that provides SSL/TLS encryption functionality for network traffic all over the Internet. It’s used by the Apache and nginx web servers, which together serve well over half of the world’s web sites; it powers virtual private networks, instant messaging networks and even email. It’s also widely used in client software, devices and appliances.

Because of a bug in the implementation of the TLS Heartbeat extension, remote attackers can potentially read chunks of an affected server’s memory and obtain different kinds of sensitive information, including the server’s private keys. The most embarrassing part is that the bug has been present in OpenSSL releases dating back to 2012. Specifically, all OpenSSL versions from 1.0.1 to 1.0.1f are vulnerable. The bug has been fixed in version 1.0.1g, released yesterday.

Needless to say, the potential consequences of this vulnerability are huge. A remote attacker can theoretically extract a server’s private key and then easily decrypt any past or future SSL-encrypted traffic from that server without leaving any trace of an attack. This means that simply patching the vulnerability is not enough: all services handling sensitive information also have to change their private keys and reissue their SSL certificates.

For more information, I recommend checking out heartbleed.com; to see whether your server is vulnerable, use this test. You can also check the OpenSSL version installed on your server directly.
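Checking the version directly can be done programmatically as well. A minimal sketch in Python - it assumes OpenSSL's 1.0.x letter-suffix versioning scheme, and `ssl.OPENSSL_VERSION` only reports the library your Python interpreter is linked against, not necessarily the one your web server uses:

```python
import re
import ssl

def is_heartbleed_vulnerable(version: str) -> bool:
    """Return True if an OpenSSL version string falls in 1.0.1 .. 1.0.1f."""
    m = re.match(r"1\.0\.1([a-z]?)", version)
    if not m:
        # Other branches (0.9.8, 1.0.0, 1.0.2+) never shipped the bug
        return False
    # 1.0.1 (no suffix) through 1.0.1f are vulnerable; 1.0.1g is the fix
    return m.group(1) < "g"

# Example: inspect the OpenSSL build behind this interpreter
print(ssl.OPENSSL_VERSION)  # e.g. "OpenSSL 1.0.1f 6 Jan 2014"
print(is_heartbleed_vulnerable(ssl.OPENSSL_VERSION.split()[1]))
```

Comparing the single letter suffix works here because OpenSSL 1.0.1 patch releases increment one letter at a time.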

If your server is vulnerable, the first step should be updating OpenSSL to 1.0.1g – all major Linux and *BSD distributions have already made updates available. If that’s not possible, you can recompile OpenSSL from source with heartbeat support disabled (the -DOPENSSL_NO_HEARTBEATS compile flag).

Unfortunately, it’s not possible to detect whether your server has already been attacked using this bug. So, to be on the safe side, you should consider reissuing your SSL certificates with new private keys.

One rather sad consequence of the whole Heartbleed debacle is that it delivered a serious blow to a major claim of open source proponents: that open source software is inherently more secure because more people can inspect its source code and find vulnerabilities. While that is potentially true, in reality few people will do that for a large-scale project like OpenSSL just out of curiosity. I can only hope that someone will finally have a good reason to sponsor a proper security audit of OpenSSL and other open source security software.

Since keys and certificates have to be replaced anyway, this is also a good opportunity for service providers to increase their SSL key length for stronger encryption.

Gluu2FA for every site on the Internet? [Technorati links]

April 08, 2014 02:57 PM

You’ve probably seen http://twofactorauth.org.

This site totally misses the point. I think Walmart should be congratulated for not rolling out 2FA: a tightly bundled solution that just solves two-factor authentication for their website (which I almost never visit) or in their stores (which I am almost never in) would help no one. Nice work, Walmart!

The list I’d like to see is of websites that enable me to specify where I want to be authenticated, and hopefully with what mechanism. I can choose a domain for my website and email. Why shouldn’t I be allowed to choose how and where I authenticate?

For many people this domain would be Google.com or Facebook.com. We already have social creds, so in many cases these are a good choice. In other cases, I might want to use my work email to identify my home domain. For example, if I am using a SaaS business application, my work might even be paying for it, so it makes sense that they’d want to control access.

The problem is that, in the past, it wasn’t clear what standard websites should adopt to enable distributed authentication. Finally, the answer is clear: OpenID Connect. This standard has the backing of Microsoft, Google and enterprise security vendors, and already has plenty of open source implementations and libraries, such as the OX OpenID Connect Provider.
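In OpenID Connect terms, "choosing where I authenticate" means a relying party can locate my provider's published metadata from my identifier. A minimal sketch - real deployments first resolve the user's issuer via WebFinger, which this collapses into a simple domain lookup, and the addresses are placeholders:

```python
# Sketch: derive the OpenID Connect Discovery metadata URL for the domain a
# user has chosen as their identity provider. Per the Discovery spec, provider
# configuration is published under /.well-known/openid-configuration.
def discovery_url(email: str) -> str:
    domain = email.rsplit("@", 1)[1]
    return f"https://{domain}/.well-known/openid-configuration"

print(discovery_url("alice@example.com"))
# https://example.com/.well-known/openid-configuration
```

A site fetching that document learns the provider's authorization, token, and userinfo endpoints, so it never has to hard-code support for any particular identity provider.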

If the authors of http://twofactorauth.org had done their research, they would have discovered that the main reason websites don’t use two-factor authentication is deployment difficulty. A large enterprise like Walmart needs to identify people acting as its employees, customers, and partners, and its IT infrastructure comprises numerous web services, both internal and third party. Tightly coupling one type of authentication to one application does not really address the security concern.

Ironically, increasing security is an inconvenience to the customer; the best usability is not authenticating me at all. We should congratulate the websites that use authentication intelligently to mitigate network security risks. We should not be congratulating knee-jerk adoption of technology that enhances neither usability nor security for their site, or for the Internet in general.

Kuppinger ColeLeadership Compass: Enterprise Key and Certificate Management - 70961 [Technorati links]

April 08, 2014 12:17 PM
In KuppingerCole

Enterprise Key and Certificate Management (EKCM) is made up of two niche markets that are converging. This process continues and, as with any major change in IT market segments, is driven by customer requirements, which in turn are driven by security and compliance needs. Until recently, compliance has been the bigger driver, but increasingly, in the days of cloud and mobile technology, it is the security of data in storage - that is, data in the hands of others; security...

Kuppinger ColeHow NOT to protect your email from snooping [Technorati links]

April 08, 2014 09:22 AM
In Alexei Balaganski

Since the documents leaked last year by Edward Snowden revealed the true extent of the NSA’s powers to dig into people’s personal data around the world, protecting Internet communications has become a topic of utmost importance for government organizations, businesses and private individuals alike. This is especially true for email, one of the most widely used Internet communication services.

Email is one of the oldest Internet services still in use (the SMTP protocol was published in 1982). It is based on a set of inherently insecure protocols and by design cannot provide reliable protection against many types of attacks: hacking, eavesdropping, forged identities, spam, phishing – you name it. Yes, there have been numerous developments to improve the situation over the years: transport layer security, anti-spam and anti-malware solutions, even text encryption. However, all of them are considered optional add-ons to the core set of services, since maintaining backwards compatibility with decades-old systems prevents us from enforcing new security-aware standards and protocols.

For the same reason we cannot just abandon email and switch to new, more secure communication services: most of our correspondents still use email only. Companies providing secured email services have existed for over a decade, but their adoption rates have always been low. Security experts have been fighting this inertia for years, educating the public, developing new protocols and services and pushing for stronger regulations. Alas, people are lazy; they always tend to choose convenience over security.

At least, it was like that until last year. Thanks to Snowden, people suddenly realized that their confidential communications are not just theoretically vulnerable to hacking or other illegal activities. In fact, nearly all their communications are routinely siphoned to huge government datacenters, where they are stored, analyzed and matched to other sources of private information. Even worse, all this is completely legal under current laws, and Internet communications providers are forced to silently cooperate with intelligence services – no hacking required.

Finally, people started to take notice. Finally, not just corporate IT managers, but informed consumers have come to understand that the only reliable protection against all kinds of eavesdropping is end-to-end encryption. Unfortunately, it seems that not everyone understands what exactly “end-to-end encryption” is.

What motivated me to write this post was an article titled “Google encrypts all Gmail communications to protect users from NSA snooping”. Several German email providers, such as GMX and web.de, used the same rhetoric when they announced similar functionality. Even De-Mail, a paid service from the German government, does not offer mandatory encryption.

Of course, this statement could not be further from reality. Yes, forcing all users to connect to a webmail service over encrypted SSL is good news. In fact, I would even recommend using a tool like HTTPS Everywhere to enable SSL automatically on many major websites, because it makes browsing more secure and protects against man-in-the-middle attacks that could steal your passwords.

However, when it comes to email, SSL only protects the “first mile” of your message’s journey to its destination. As soon as it reaches your provider’s mail server, it is stored on disk completely unencrypted, open to snooping by server administrators, secret services or hackers. When the message is relayed to the next mail server, chances are the transport channel won’t be encrypted either, simply because the other server does not support it. Along the way, your mail will be read and analyzed by multiple servers and other devices (anti-spam services, anti-malware appliances, firewalls with deep packet inspection and so on). Any of these devices can store a copy for later use or simply collect metadata in the form of logs.
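This hop-by-hop weakness is visible at the protocol level: a sending server only gets an encrypted channel if the next server advertises the optional STARTTLS capability in its EHLO reply. A minimal sketch of that check (the reply text is a made-up example; against a live server, Python's `smtplib` does the same thing with `smtp.ehlo()` followed by `smtp.has_extn("starttls")`):

```python
# Sketch: SMTP transport encryption is opt-in per hop. Each relay inspects the
# next server's multi-line EHLO reply; only if STARTTLS appears among the
# advertised extensions can that hop be upgraded to an encrypted channel.
def advertises_starttls(ehlo_reply: str) -> bool:
    """Check a raw multi-line EHLO reply for the STARTTLS capability."""
    return any(
        line[4:].strip().upper().startswith("STARTTLS")
        for line in ehlo_reply.splitlines()
        if line[:4] in ("250-", "250 ")  # continuation / final reply lines
    )

reply = "250-mail.example.com\n250-SIZE 35882577\n250-STARTTLS\n250 HELP"
print(advertises_starttls(reply))  # True: this hop can be encrypted
```

When the check fails, a typical relay simply falls back to plaintext delivery rather than refusing to send - which is exactly why SSL to your own provider guarantees nothing about the rest of the route.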

For companies like Google, being able to look through your emails is fundamental to their business model: they need to serve you the most relevant ads to increase their revenues. They can do it legitimately, because it’s part of their terms of service, and they will even share collected information with third parties. No amount of transport encryption will change that.

Companies building their business model on trust and aiming to provide a truly secure service, both in technical and legal terms, face a different kind of problem. They can simply be forced to hand all master keys over to the government, rendering all encryption useless. Thanks to Ladar Levison of Lavabit, we now know that, too.

Therefore, in my opinion, the only reasonable method of securing email currently available is to use a desktop mail program with some form of public key encryption, encrypting all outgoing mail directly on your computer and decrypting it directly on the recipient’s computer. Unfortunately, the protocols currently in use (the most common being S/MIME and OpenPGP) are mutually incompatible, and most mail programs require third-party add-ons to implement them. In addition, before you can encrypt your messages, you need to exchange encryption keys with the other party over a secure channel (not email!). And, of course, you should always keep in mind that the mere fact that you are using encryption may attract the attention of secret services: an honest man has nothing to hide, doesn’t he? Unfortunately, the way email works, it cannot provide any kind of plausible deniability, since message metadata are never encrypted. That’s probably one of the reasons for the recent surge in popularity of ephemeral messaging services like Threema or Telegram, which at least claim not to keep any traces of your messages on their servers. Whether you should trust these claims is, of course, another difficult question…
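The division of labor described above - encrypt with the recipient's public key on the sender's machine, decrypt with a private key that never leaves the recipient's machine - can be illustrated with textbook RSA and deliberately tiny numbers. This is a toy sketch only; real S/MIME or OpenPGP adds padding, hybrid encryption and key management on top:

```python
# Toy textbook RSA (NOT real OpenPGP/S-MIME): the sender needs only the public
# key (n, e); the private exponent d stays on the recipient's computer.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+ modular inverse)

def encrypt(m: int) -> int:        # runs on the sender's computer
    return pow(m, e, n)

def decrypt(c: int) -> int:        # runs on the recipient's computer
    return pow(c, d, n)

message = 42
ciphertext = encrypt(message)
assert decrypt(ciphertext) == message
# Every mail server along the route sees only `ciphertext`, never `message`.
```

Contrast this with transport encryption: there the plaintext reappears at every intermediate hop, while here nothing between the two endpoints ever holds the key needed to read the message.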

By the way, the future of encryption and privacy-enabling technologies will be a big topic during our upcoming European Identity & Cloud Conference. Leading experts will join Ladar Levison himself to discuss technical, political and legal challenges. You should be there as well!

Kuppinger ColeIBM’s Software Defined Environment [Technorati links]

April 08, 2014 09:18 AM
In Mike Small

In IBM’s view, the kinds of IT applications that organizations create are changing from internal-facing to external-facing systems. IBM calls these “systems of record” and “systems of engagement” respectively. Systems of record are the traditional applications that ensure the internal aspects of the business run smoothly and the organization is financially well governed. Systems of engagement exploit the new wave of technology being used by customers and partners, in the form of social and mobile computing. In IBM’s opinion, a new approach to IT, which it calls SDE (Software Defined Environments), is needed to cater for this change.

According to IBM, these systems of engagement are being developed to enable organizations to get closer to their customers and partners, to better understand their needs and to better respond to their issues and concerns. They are therefore vital to the future of the business.

However, the way these systems of engagement are developed, deployed and exploited is radically different from that for systems of record. The development methodology is incremental and highly responsive to user feedback. Deployment requires IT infrastructure that can quickly and flexibly respond to use by people outside the organization. Exploiting these applications requires emerging technologies like Big Data analytics, which can place unpredictable demands on the IT infrastructure.

In response to these demands IBM has a number of approaches; for example, in February I wrote about how IBM has been investing billions of dollars in the cloud. IBM also offers something it calls SDE (Software Defined Environment). IBM’s SDE custom-builds business services by leveraging the infrastructure according to workload types, business rules and resource availability. Once these business rules are in place, resources are orchestrated by patterns – best practices that govern how to build, deploy, scale and optimize the services that these workloads deliver.

IBM is not alone in this approach; others, notably VMware, are heading in the same direction.

In the IBM approach – abstracted and virtualized IT infrastructure resources are managed by software via API invocations.   Applications automatically define infrastructure requirements, configuration and Service Level expectations.  The developer, the people deploying the service as well as the IT service provider are all taken into account by the SDE.

This is achieved by building the IBM SDE on software and standards from the OpenStack Foundation, of which IBM is a member. IBM has added specific components and functionality to OpenStack to fully exploit IBM hardware and software, including drivers for IBM storage devices, PowerVM, KVM and IBM network devices. IBM has also included some IBM “added value” functionality: management API additions, scheduler enhancements, management console GUI additions, and a simplified install. Since the IBM SmartCloud offerings are also based on OpenStack, this makes cloud bursting into IBM SmartCloud (or any other cloud based on OpenStack) easier, except where there is a dependency on the added-value functionality.

One of the interesting areas is the support provided by the Platform Resource Scheduler for workload placement. The supported policies make it possible to place workloads in a wide variety of ways, including packing workloads onto the fewest physical servers or spreading them across several, load balancing and memory balancing, and keeping workloads physically close or physically separate.
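The pack-versus-spread distinction is easy to illustrate. The sketch below is a hypothetical greedy placement function for intuition only, not IBM's actual scheduler API:

```python
# Hypothetical sketch of two placement policies like those a resource
# scheduler supports: "pack" fills the fewest hosts, "spread" balances
# load across all of them. Not IBM's actual Platform Resource Scheduler API.
def place(workloads, hosts, capacity, policy="pack"):
    """Assign each (name, size) workload to a host; returns {name: host}."""
    used = {h: 0 for h in hosts}
    placement = {}
    for name, size in workloads:
        if policy == "pack":
            # most-loaded host that still fits -> fewest physical servers used
            candidates = [h for h in hosts if used[h] + size <= capacity]
            host = max(candidates, key=lambda h: used[h])
        else:  # "spread": always pick the least-loaded host
            host = min(hosts, key=lambda h: used[h])
        used[host] += size
        placement[name] = host
    return placement

wl = [("web1", 2), ("web2", 2), ("db", 3)]
print(place(wl, ["hostA", "hostB"], capacity=8, policy="pack"))
print(place(wl, ["hostA", "hostB"], capacity=8, policy="spread"))
```

With "pack" all three workloads land on one host (good for powering servers down); with "spread" they alternate across hosts (good for resilience and headroom) - the same trade-off the scheduler's policies express.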

IBM sees organizations moving to SDEs incrementally rather than in a big-bang approach. The stages it sees are virtualization, elastic data scaling, elastic transaction scaling, policy-based optimization and finally application-aware infrastructure.

In KuppingerCole’s opinion, SDCI (Software Defined Computing Infrastructure) is the next big thing; Martin Kuppinger wrote about this at the end of 2013. IBM’s SDE fits into this model and has the potential to let end-user organizations make better use of their existing IT infrastructure and gain greater flexibility to meet changing business needs. It is good that IBM’s SDE is based on standards; however, there is still a risk of lock-in, since the standards in this area are incomplete and still emerging. My colleague Rob Newby has also written about the changes organizations need to make to successfully adopt SDCI. In addition, a full implementation will require a significant measure of technical expertise.

For more information on this subject there are sessions on Software Defined Infrastructure and a Workshop on Negotiating Cloud Standards Jungle at EIC May 12th to 16th in Munich.

Ben Laurie - Apache / The BunkerFruity Lamb Curry [Technorati links]

April 08, 2014 09:14 AM

My younger son, Oscar, asked me to put bananas into the lamb curry I was planning to cook. Which inspired this:

Chopped onions
Diced ginger
Star anise and ground spices
Diced leg of lamb
Raisins
Banana
Dried apricot
Lemon
Greek yoghurt
Ghee
Salt

Fry the onions in the ghee. Add ginger and ground spices and fry for a minute more, then add the diced lamb and brown. Add the raisins, banana (sliced), dried apricot (roughly chopped) and lemon (cut into eighths, including skin) and some yoghurt. Cook on a medium heat until the yoghurt begins to dry out, then add some more. Repeat a couple of times (I used most of a 500ml tub of greek yoghurt). Salt to taste. Eat. The lemon is surprisingly edible.

I served it with saffron rice and dal with aubergines.

April 07, 2014

Christopher Allen - AlacrityTo be persuasive, you need to understand "Identity Protective Cognition" [Technorati links]

April 07, 2014 08:35 PM

Christopher Allen - AlacrityWorld Backup Day… [Technorati links]

April 07, 2014 06:30 PM

Radiant LogicDiversity Training: Dealing with SQL and non-MS LDAP in a WAAD World [Technorati links]

April 07, 2014 06:22 PM

Welcome to my third post about the recently announced Windows Azure Active Directory (AKA the hilariously-acronymed “WAAD”), and how to make WAAD work with your infrastructure. In the first post, we looked at Microsoft’s entry into the IDaaS market, and in the second post we explored the issues around deploying WAAD in a Microsoft-only environment—chiefly, the fact that in order to create a flat view of a single forest to send to WAAD, you must first normalize the data contained within all those domains. (And let’s be honest, who among us has followed Microsoft’s direction to centralize this data in a global enterprise domain???)

It should come as no surprise that I proposed a solution to this scenario: using a federated identity service to build a global, normalized list of all your users. Such a service integrates all those often overlapping identities into a clean list with no duplicates, packaging them up along with all the attributes that WAAD expects (usually a subset of all the attributes within your domains). Once done, you can use DirSync to upload this carefully cleaned and crafted identity to the cloud—and whenever there’s a change to any of those underlying identities, the update is synchronized across all relevant sources and handed off to DirSync for propagation to WAAD. Such an infrastructure is flexible, extensible, and fully cloud-enabled (more on that later…). Sounds great, right? But what about environments where there are multiple forests—or even diverse data types, such as SQL and LDAP?
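The normalization step described above can be pictured as a small merge routine keyed on a correlation attribute. This sketch is purely illustrative - the attribute names, the `mail` correlation key, and the "first non-empty value wins" rule are hypothetical choices, not RadiantOne's or WAAD's actual schema or logic:

```python
# Hypothetical sketch of identity normalization: merge overlapping user
# records from several sources into one de-duplicated list, keeping only
# the attribute subset the cloud directory expects.
WANTED = ("mail", "displayName", "department")

def normalize(*sources):
    """Correlate records on lower-cased 'mail'; first non-empty value wins."""
    merged = {}
    for source in sources:
        for record in source:
            entry = merged.setdefault(record["mail"].lower(), {})
            for attr in WANTED:
                if entry.get(attr) is None:
                    entry[attr] = record.get(attr)
    return list(merged.values())

ad_forest = [{"mail": "Ann@corp.com", "displayName": "Ann A."}]
sun_ldap = [{"mail": "ann@corp.com", "department": "Sales"}]
print(normalize(ad_forest, sun_ldap))  # one merged record, no duplicate
```

The same user appearing in an AD forest and a Sun/Oracle directory collapses to a single record carrying attributes from both - exactly the clean, duplicate-free list you would then hand to DirSync for upload.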

Bless this Mess: A Federated Solution for Cleaning Up ALL Your Identity

So far, we’ve talked about normalizing identities coming from different domains in a given forest. But the virtualization layer that allows us to easily query and reverse-engineer existing data, then remap it to meet the needs of a new target such as WAAD, is not limited to a single forest and its domains: the same process also allows you to reorganize many domains belonging to many different forests. In fact, this approach would be a great way to meet that elusive target of creating a global enterprise domain out of your current fragmentation.

But while you’re federating and normalizing your AD layer, why stop there? Why not extend SaaS access via WAAD to the parts of your identity that are not stored within AD? What about all those contractors, consultants, and partners stored in your aging Sun/Oracle directories? Or those identities trapped in legacy Novell or mainframe systems? And what about essential user attributes that might be captured in one of these non-AD sources?

As you can see below, all these identities and their attributes can be virtualized, transformed, integrated, then shipped off to the cloud, giving every user easy and secure access to the web and SaaS apps they need.

Creating a Global Image of all Your Identities


Credentials: Sometimes, What Happens On-Premise Should Stay On-Premise

So we’ve seen how we can get to the attributes related to identities from many different silos and turn them into a cloud-ready image. But there’s still one very important piece that we’ve left out of the picture. What about credentials? They’re always the hardest part—should we sync all those &#@$ passwords, along with every &%!?# password change, over the Internet? If you’re a sizable enterprise integrating an array of SaaS applications, that’s a recipe for security breaches and hack attacks.

But fortunately, within Microsoft’s hybrid computing strategy, we can now manage our identities on-premise, while WAAD interfaces with cloud apps and delegates the credential-checking back to the right domain in the right forest via our good friend ADFS. Plus, ADFS even automatically converts the Kerberos ticket to a SAML token (well, it’s a bit more complex than that, but that’s all you need to know for today’s story).

The bottom line here is that you’ve already given WAAD the clean list of users, as well as the information it needs to route the credential-checking back to your enterprise AD infrastructure, using ADFS. So WAAD acts as a global federated identity service, while delegating the low-level authentication back to where it can be managed best: securely inside your domains and forests. (And I’m happy to say that we’ve been preaching the gospel of on-premise credential checks for years now, so it’s great to see mighty Microsoft join the choir. ;) )

While this is very exciting, we still face the issue of all those identities not managed by Microsoft ADFS. While I explained above how a federated identity layer based on virtualization can help you normalize all your identities for use by WAAD, there’s still one missing link in the chain: how does WAAD send those identities back to their database or Sun/Oracle directory for the credential checking phase? After all, ADFS is built to talk to AD—not SQL or LDAP. Luckily, federation standards allow you to securely extend this delegation to any other trusted identity source. So if you have a non-MS source of identities in your enterprise and you can wrap them through a federation layer so they work as an IdP/secure token service, you’re in business. Extend the trust from ADFS to your non-AD subsystem through an STS and—bingo—WAAD now covers all your identity, giving your entire infrastructure secure access to the cloud.
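As a rough illustration of that routing idea, here is a hypothetical Python sketch. The endpoint URLs and source labels are invented for the example; they are not real ADFS or RadiantOne configuration:

```python
# Hypothetical routing table: the federation layer looks at where a
# user's identity actually lives and delegates authentication to the
# matching trusted token service. All endpoints below are made up.

IDP_ROUTES = {
    "ad":   "https://adfs.corp.example/adfs/ls",    # AD forests via ADFS
    "ldap": "https://sts.corp.example/ldap-idp",    # Sun/Oracle directory via an STS
    "sql":  "https://sts.corp.example/sql-idp",     # database-backed identities via an STS
}

def pick_idp(user_record):
    """Return the token-service endpoint that should check this user's
    credentials, based on the source recorded when the global identity
    list was correlated."""
    source = user_record.get("source", "ad")   # default to the AD/ADFS path
    return IDP_ROUTES[source]

# A contractor stored in an aging Sun/Oracle directory is sent to the
# LDAP-fronting STS rather than to ADFS:
print(pick_idp({"mail": "jdoe@corp.example", "source": "ldap"}))
```

The point of the sketch is only the dispatch step: credentials never leave the silo that owns them; the federation layer just knows which trusted IdP to ask.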

How WAAD, ADFS, and RadiantOne CFS Work Together

We call this component “CFS” within our RadiantOne architecture, and with CFS and our VDS, you have a complete solution for living a happy, tidy, and secure life in the hybrid world newly ordained by Microsoft…(cue the choir of angels, then give us a call if you’d like to discuss how we can make this happen within your infrastructure…). :)

Thanks, as always, for reading my thoughts on these matters. And feel free to share yours in the comments below.

← Part 2: Hybrid Identity in the MS World

The post Diversity Training: Dealing with SQL and non-MS LDAP in a WAAD World appeared first on Radiant Logic, Inc.

KatasoftMemoirs of a Developer Evangelist - My Personal Goals [Technorati links]

April 07, 2014 06:00 PM

Randall - Developer Evangelist at Large

As a Developer Evangelist at Stormpath, one of my jobs is to help increase developer adoption of our User Management API.

In this post, I’ll share my personal goals with you, what I’m doing to reach them, and what I’m learning (as a series!). If you’re interested in developer evangelism, the inner workings of a start-up, growing your developer-centric business, or just excited to learn more about Stormpath, this series of posts is for you.

So, What is a Developer Evangelist Anyways?

My job, as a developer evangelist, is to make Stormpath popular. That’s it. The overarching goal is to make sure that programmers who need Stormpath are able to find it easily, use it, and love it!

As you can imagine, there are lots of ways to be a successful developer evangelist. There’s no magic bullet or direct path to success — it’s all a guess!

My job currently entails helping developers integrate with Stormpath, and doing whatever I can to ensure developers are able to build cool stuff with Stormpath.

As part of that work, I help with library and framework integrations, build sample applications, attend conferences and events, talk with a lot of different programmers, and of course, blog.

NOTE: It is a dream job, in case you were wondering ^^

And, since my job puts me in contact with so many different people, I also share whatever feedback I get with our engineering team so we can continue iterating on Stormpath’s core product, and make a better service for developers.

My Personal Goals

To kickstart this series, I’d like to formally clarify what my personal goals are as a Developer Evangelist at Stormpath:

Having Fun

“Find a job you love and you’ll never work a day in your life” - Confucius

First and foremost, my personal goal here at Stormpath is to have a lot of fun — among other things, this means I’m focusing on:

My reasons aren’t entirely selfish — I know that if I’m loving what I’m doing, then I’ll do a great job — and I didn’t come here to be mediocre!

While it may sound idealistic, fun is my top priority >:)

Helping Developers Build Stuff

My next priority is to be genuinely helpful to developers — and help them build cool stuff.

When I was talking with my buddy John Sheehan (former Twilio evangelist, and all around awesome dude) about what worked for him as an evangelist — he essentially told me this:

“Help people build stuff.”

Which makes a lot of sense to me. He went on to explain that if you’re able to help someone out (fix a bug in their project, help them accomplish something, help them get some code launched — whatever), you’ve done your job for the day.

I really like this idea.

So — I’m taking it to heart! Whenever I’m working on projects, or attending events, the main thing I’m thinking about is: “How can I help someone do something awesome today?”

If I can’t find a way to help someone build a project, or write some cool code — I’m failing at my task.

This is quickly becoming one of my favorite aspects of the job.

By building personal connections with lots of people, I hope that some of these people will eventually learn about (and potentially use) Stormpath.

It’s obviously not a form of “scalable marketing”, but it’s genuine, nice, and most importantly: fun!

Luckily for me, this is completely in line with how the rest of Stormpath team thinks, and the brand we’re trying to build.

Making Developers’ Lives Easier

This point is pretty important: I want to help make my fellow developers’ lives easier.

I plan to do this in several ways:

These points are more or less self-explanatory. By abstracting away lots of the boring and tedious components of software development, I’m hoping to make some programmers enjoy their lives just a little bit more than they already do.

I can’t tell you how many times I’ve been building a project, ended up Googling some problem, and found a really lovely open source library which not only solved the problem for me, but solved it well! It’s such a great feeling to offload responsibility and complexity to a third party, particularly when the problem being solved is something boring!

Nobody wants to write boilerplate code all day long!

To Come…

I’m still new to the role and learning a ton from our team, other evangelists, and our customers. In my next post, I’ll cover some of the more important things I’ve learned in my first few months as a developer evangelist.

Stay tuned!

− Randall

Christopher Allen - Alacritydanah boyd asks "Is Oculus Rift Sexist?" [Technorati links]

April 07, 2014 04:00 PM

CourionSeminar April 10th to Discuss Integrated Solution for Managing Identities & Access In Cloud, On-Premises [Technorati links]

April 07, 2014 02:12 PM

Access Risk Management Blog | Courion

Doug Mow

UPDATE: Due to unforeseen circumstances, security correspondent Frank Gardner will not be participating in this seminar.

If you reside in or near London, consider joining us on Thursday April 10th at 9:00 a.m. at the Milbank Tower for what promises to be an interesting seminar.

Courion and Ping Identity executives will discuss how an integrated solution for managing user identities and access to resources both in the cloud and on-premises can provide the ability to quickly and properly authenticate users and provide access, while still enabling you to manage risk and maintain compliance.

The event will conclude with a panel of experts available to address your questions and discuss strategies and solutions.

To register for the event, click here.

For live updates, follow @CourionEMEA.


Julian BondThe best thing about the end of life of Win XP is that Microsoft will stop:- - forcing a monthly reboot... [Technorati links]

April 07, 2014 07:36 AM
The best thing about the end of life of Win XP is that Microsoft will stop:-
- forcing a monthly reboot
- popping up messages about updates being available
- filling up the hard disk with update roll back files
[from: Google+ Posts]

Kaliya Hamlin - Identity WomanBC Government Innovation in eID + Citizen Engagement. [Technorati links]

April 07, 2014 02:48 AM

I wrote an article for Re:ID about the BC Government's Citizen Engagement process that they did for their eID system.

Here is the PDF: reid_spring_14-BC

Kaliya Hamlin - Identity WomanBig Data and Privacy [Technorati links]

April 07, 2014 02:13 AM

On Friday I responded to the Government "Big Data" Request for Comment.

I will get to posting the whole thing in blog form - for now here is the PDF. BigData-Gov-2



April 06, 2014

Christopher Allen - AlacrityGood Advice from "A Modern Designer's Canvas" [Technorati links]

April 06, 2014 07:00 PM

Ian YipDoing business in Asia: five etiquette tips [Technorati links]

April 06, 2014 04:05 AM
I contributed a piece to Australian BRW late last month that had nothing to do with IT security, but I thought it may be of interest to those of you who are new to doing business with Asia and would like somewhere to start.

It's quite general, but large mainstream publications want content that will appeal to the masses, not niche pieces that few people will care about. So, if you're an expert on Asia, none of what I've written will be new.

Here's a teaser:
"Business etiquette in western countries is similar enough that we get away with most things. The little quirks are normally overlooked or forgiven, using the “not from around here” explanation. Asia however, is a slightly different animal."
Check out the full article on BRW. 
April 05, 2014

Christopher Allen - AlacrityHow to Hold an Unpopular Opinion [Technorati links]

April 05, 2014 07:00 PM

Anil JohnContext and Identity Resolution [Technorati links]

April 05, 2014 03:00 PM

If identity is defined as a set of attributes that uniquely describe an individual, identity resolution is the confirmation that an identity has been resolved to a unique individual within a particular context. In a federation environment, identity resolution is a means to an end; namely user enrollment. This blog post looks at identity resolution in two separate contexts, at the identity proofing component and at the RP.
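As a small illustration of that definition, here is a Python sketch of resolution against a toy population. The names, attributes, and matching rule are made up for the example:

```python
def resolve(attributes, population):
    """Identity resolution: return the single individual matched by the
    attribute set, or None if the attributes do not resolve uniquely."""
    matches = [p for p in population
               if all(p.get(k) == v for k, v in attributes.items())]
    return matches[0] if len(matches) == 1 else None

people = [
    {"name": "Ann Lee", "dob": "1980-02-03", "zip": "20001"},
    {"name": "Ann Lee", "dob": "1975-07-09", "zip": "20001"},
]

# Name alone is ambiguous in this population...
assert resolve({"name": "Ann Lee"}, people) is None
# ...but name plus date of birth resolves to one unique individual.
assert resolve({"name": "Ann Lee", "dob": "1980-02-03"}, people) is not None
```

Note how the answer depends on the population being searched: the same attribute set can resolve uniquely in one context and ambiguously in another, which is exactly why the proofing component and the RP may need different attribute sets.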

My earlier blog post on Identity Establishment, Verification and Validation provided a description of those terms. Given that, some things to keep in mind:

This leads to the following question. Given the different contexts, is the set of attributes required by the RP for identity resolution the same as the set of attributes used by the identity proofing component when it does identity resolution?

Some initial thoughts that may lead to an answer:


This blog post, Context and Identity Resolution, first appeared on Anil John | Blog. These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

April 04, 2014

Radiant LogicContext is Coming: The Move from IdM to Identity Relationship Management and the Internet of Things [Technorati links]

April 04, 2014 11:16 PM

How a Federated ID Hub Helps You Secure Your Data and Better Serve Your Customers

Welcome back to my series on bringing identity back to IAM. Today we’re going to take a brief look at what we’ve covered so far, then surf the future of our industry, as we move beyond access to the world of relationships, where “identity management” will help us not only secure but also know our users better—and meet their needs with context-driven services.

We began by looking at how the wave of cloud services adoption is leading to a push for federation—using SAML or OpenID Connect as the technology for delivering cloud SSO. But as I stressed in this post, for most medium-to-large enterprises, deploying SAML will require more than just federating access. By federating and delegating the authentication from the cloud provider to the enterprise, your organization must act as an identity provider (IdP)—and that’s a formidable challenge for many companies dealing with a diverse array of distributed identity stores, from AD and legacy LDAP to SQL and web services.

It’s becoming clear that you must federate your identity layer, as well. Handling all these cloud service authentication requests in a heterogeneous and distributed environment means you’ll have to invest some effort into aggregating identities and rationalizing your identity infrastructure. Now you could always create some point solution for a narrow set of sources, building what our old friend Mark Diodati called an “identity bridge.” But how many of these ad hoc bridges can you build without a systematic approach to federating your identity? Do you really want to add yet another brittle layer to an already fragmented identity infrastructure, simply for the sake of expediency? Or do you want to seriously rationalize your infrastructure instead, making it more fluid and less fragile? If so, think hub instead of bridge.

Beyond the Identity Bridge: A Federated Identity Hub for SSO and Authorization

This identity hub gives you a federated identity system where identity is normalized—and your existing infrastructure is respected. Such a system offers the efficiency of a “logical center” without the drawbacks of inflexible modeling and centralization that we saw with, say, the metadirectory. In my last post, we looked at how the normalization process requires some form of identity correlation that can link global IDs to local IDs, tying everything together without having to modify existing identifiers in each source. Such a hub is key for SSO, authorization, and attribute provisioning. But that’s not all the hub gives you—it’s also a way to get and stay ahead of the curve, evolving your identity to meet new challenges and opportunities.
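A minimal Python sketch of such a correlation table might look like this. The source names and identifiers are purely illustrative, not any real directory's data:

```python
# A toy correlation table: each global ID maps to the unmodified local
# identifiers held by each source, so nothing in the sources changes.

correlation = {
    "global-0001": {"ad": "CN=Jane Doe,DC=corp", "sql": 4312, "ldap": "uid=jdoe"},
}

def local_ids(global_id):
    """Look up every local representation of a person, leaving the
    identifiers in each source exactly as they are."""
    return correlation.get(global_id, {})

def global_id_for(source, local_id):
    """Reverse lookup: which global identity owns this local record?"""
    for gid, ids in correlation.items():
        if ids.get(source) == local_id:
            return gid
    return None

assert global_id_for("sql", 4312) == "global-0001"
```

The two lookup directions are the whole trick: the forward map lets you gather a person's attributes from every silo, and the reverse map lets any application's local record be tied back to the global identity.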

The Future’s Built In: The Hub as Application Integration Point and Much More

Another huge advantage of federating your identity? Now that you can tie back the global ID to all those local representations, the hub can act as a key integration point for all your applications. Knowing who’s who across different applications allows you to bring together all the specific aspects of a person that have been collected by those applications. So while it begins as a tool for authentication, the hub can also aggregate attributes about a given person or entity from across applications. So yes, the first win beyond authentication is also in the security space: those rich attributes are key for fine-grained authorization. But security is not our only goal. I would contend that this federated identity system is also your master identity table—yes, read CDI and MDM—which is essential for application integration. And if you follow this track to its logical conclusion, you will move toward the promised land of context-aware applications and semantic representations. I’ve covered this topic extensively, so rather than repeat myself, I will point you to this series of posts I did last spring—think of it as Michel’s Little Red Book on Context… ;)

So the way we see it here at Radiant, the emergence of the hub puts you on the path toward better data management and down the road to the shining Eldorado of semantic integration, where your structured and unstructured data comes together to serve you better. But you don’t have to wait for that great day to realize a return—your investment starts to pay off right away as you secure your devices and cloud services.

Immediate ROI That Ripples Across Your Infrastructure

Final Notes: Storage that Scales and the Pillars of Identity Relationship Management

Of course, to make all this happen, you’ll need a big data-driven storage solution that scales to support all those myriad queries and demands. And that’s why we’re so excited about our upcoming HDAP release.

But with freedom comes a lot of responsibility. If you can correlate information based on identity, what does that mean for privacy and, ultimately, for freedom? Even though we know that technology is neutral, the way it’s used can be anything but, which is why we are joining Kantara in their IRM Pillars Initiative, to be sure that we’re doing the right things and following best practices and standards when it comes to identity, security, and the Internet of Things.

Thanks, once again, for reading through this series—I’m so glad to have a forum where I can take an in-depth look at such topics, along with great readers who come along for the ride, giving me lots of essential feedback and plenty to think about. Please let me know if you have any questions or would like to discuss the future of identity. I love a good-spirited debate!

← Part 3: Identity at the Center

The post Context is Coming: The Move from IdM to Identity Relationship Management and the Internet of Things appeared first on Radiant Logic, Inc.

Mark Dixon - Oracle#YJJ Architecture: Devices on the Jeep [Technorati links]

April 04, 2014 09:10 PM

The following diagram illustrates how the sensors I proposed would map onto the general Oracle Internet of Things reference architecture I recently discussed.

At the first level, this diagram shows possible raw sensors and the device controllers responsible for configuring and monitoring the sensors. The gateway device would aggregate the data and forward it in either raw or summarized form to the data ingest function in the cloud. Intermediate storage at the gateway level would allow the Jeep to continue to operate in cases where wireless communication is not available. The gateway would also provide local APIs that could be consumed by a user interface app on an iPad via a Wi-Fi connection.
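A rough Python sketch of that store-and-forward behavior at the gateway might look like the following; every name here is illustrative, not any actual Oracle IoT API:

```python
# Sketch of the gateway's store-and-forward behavior: readings are
# buffered locally and flushed to the cloud ingest function only when
# the wireless link is up. All names are made up for the example.

class Gateway:
    def __init__(self, uplink):
        self.uplink = uplink      # callable that ships a batch to the cloud
        self.buffer = []          # intermediate storage while offline

    def ingest(self, reading):
        """Accept a reading from a device controller."""
        self.buffer.append(reading)

    def flush(self, link_up):
        """Forward buffered data when connectivity is available;
        otherwise keep operating on local storage alone."""
        if link_up and self.buffer:
            self.uplink(self.buffer)
            self.buffer = []

sent = []
gw = Gateway(uplink=sent.extend)
gw.ingest({"sensor": "oil_temp", "value": 92})
gw.flush(link_up=False)    # no signal on the trail: data stays local
gw.flush(link_up=True)     # back in coverage: the batch is forwarded
```

The same buffer could also feed the local Wi-Fi APIs for the iPad app, so the dashboard keeps working even when the cloud is unreachable.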


Of course, a lot more detail is needed.  Each little subsystem could become quite complex. What fun!

Roll on Yellow Jeep Journey!


Christopher Allen - AlacrityCountries Learning to Manipulate Social Media [Technorati links]

April 04, 2014 07:02 PM

Christopher Allen - AlacrityJustine Musk says “If you don’t tell your story, someone else will tell it for you.” [Technorati links]

April 04, 2014 06:41 PM

Julian BondThe #FridayNightCocktail [Technorati links]

April 04, 2014 05:44 PM
The #FridayNightCocktail

This one has no name yet; feel free to suggest one. It's the martini glass version of a variation on a Boulevardier.

- 60ml Bourbon
- 15ml Aperol
- 15ml Carpano Antica (Probably the best red Vermouth)
- Dash of orange bitters
- Stirred, martini glass, orange twist.

Somewhat like a Manhattan, somewhat like a Boulevardier. Somewhat like a Valentino. Softer than any of them but still a manly drink!
[from: Google+ Posts]

Kuppinger ColeExecutive View: NextLabs Control Center - 70847 [Technorati links]

April 04, 2014 09:21 AM
In KuppingerCole

NextLabs is a US-based vendor with headquarters in San Mateo, CA, and a strong footprint as well in the APAC (Asia/Pacific) region. The company focuses on what they call “Information Risk Management”. In fact, the focus is more on Information Risk Mitigation, i.e. practical solutions allowing better protection of critical information...