May 03, 2016

Kuppinger Cole: Executive View: PointSharp Mobile Gateway - 71508 [Technorati links]

May 03, 2016 08:32 AM

by Alexei Balaganski

PointSharp Mobile Gateway is an enterprise mobility solution that provides strong authentication and easy, yet secure, mobile access to Microsoft Exchange and Skype for Business, both on-premise and in the cloud.  

Kuppinger Cole: Executive View: CyberArk Privileged Threat Analytics - 71540 [Technorati links]

May 03, 2016 07:11 AM

by Ivan Niccolai

CyberArk’s latest major release of Privileged Threat Analytics is a capable and focused solution for the mitigation of threats caused by the abuse or misuse of privileged system accounts and entitlements. With the addition of several key features, Privileged Threat Analytics now provides real-time network threat detection and automated response capabilities. 

April 29, 2016

Kuppinger Cole: Jun 14, 2016: Authentication, Access, Assets: The Triple A of securing sensitive systems and information [Technorati links]

April 29, 2016 12:36 PM
In more than two thirds of all cyber breaches, a misused privileged account serves as the entrance gate. Historically, managing privileged access focused on protecting privileged accounts by securing and managing passwords. But today, simply rotating passwords isn’t enough to defend against increasingly sophisticated cyberattacks. When it comes to securing privileged systems and data, organizations need to broaden their focus on controlling Authentication, Access and Assets.

Kuppinger Cole: Executive View: Atos DirX Identity V8.5 - 70896 [Technorati links]

April 29, 2016 08:21 AM

by Ivan Niccolai

Atos DirX Identity encompasses a rich feature set for all areas of Identity Management and Governance. Its comprehensive business- and process-driven approach includes very strong modelling capabilities for the organisational structure and user relationships, thus providing the foundation for a business-centric, rather than technology-centric, approach to identity management.

Kuppinger Cole: Enforcing Fine Grained Access Control Policies to Meet Legal Requirements [Technorati links]

April 29, 2016 06:00 AM
Attribute Based Access Control (ABAC) solutions provide an organization with the power to control access to protected resources via a set of policies. These policies express the increasingly complicated legal and business environments in which companies operate these days. However, due to the number of moving parts, it becomes harder to understand the effect a policy change might have in a complex policy set. These moving parts include the policies themselves, attribute values and the specific queries under consideration.

April 28, 2016

Katasoft: Developer-Friendly SAML Single Sign On Support [Technorati links]

April 28, 2016 03:00 PM

Stormpath recently added support for SAML (Security Assertion Markup Language) user management, including both Service Provider (SP) initiated and Identity Provider (IdP) initiated authentication. (SAML is an XML-based standard for securely exchanging authentication and authorization information between entities.)

Instead of working with XML or even directly with SAML itself (which none of us wants to do), Stormpath allows you to support SAML login by just adding some configuration to our SDK and the Stormpath console. From there, your applications can consume SAML assertions from any SAML IdP.

SAML Terminology and Roles

Within a SAML workflow, the IdP (Identity Provider) is the data store that holds an application’s account information, including usernames and passwords. The IdP is responsible for password reset, two-factor authentication, and all other user management functions. Well-known enterprise IdPs utilizing SAML include Okta, Ping, ADFS, OneLogin, Salesforce, and Shibboleth.

The SP (Service Provider) is your application which, by utilizing the Stormpath API, can integrate with these IdPs without the headache of working directly with XML or SAML itself. Our integration with the IdP allows your application to grab an identity for any login or access request made and determine who each user agent is and what they should be allowed to do. User agents are the end users of your application, the people (and accounts) making those login or access requests.
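For context, the identity and attribute information an IdP asserts travels inside a SAML assertion. A heavily trimmed, illustrative example, following the SAML 2.0 schema (a real assertion also carries an Issuer, Conditions, and an XML signature):

```xml
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <!-- Identifies the authenticated user -->
  <saml:Subject>
    <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
      tom@example.com
    </saml:NameID>
  </saml:Subject>
  <!-- Attributes the IdP asserts about the user -->
  <saml:AttributeStatement>
    <saml:Attribute Name="firstName">
      <saml:AttributeValue>Tom</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
```

This is the XML that Stormpath consumes on your behalf, so your application never has to parse it.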

Stormpath-assisted SAML Connection vs. Direct Connection to Individual IdPs

So, why would an application want to use Stormpath for SAML connections, rather than connecting to those IdPs directly?

For us, it comes down to flexibility and ease of use. By offloading the burden of user identity management, including authentication and authorization, to Stormpath, your team resources can remain focused on building the core functionality of your application.

As a developer, you can grab the Okta SDK and add it to your application, but what happens when you find yourself needing to support multiple IdPs? The SDKs you get from Okta or OneLogin are designed to work only with their own product, either managing the users stored in that IdP or working with it indirectly as an IdP within your application.

In real life, most SaaS applications need to support multiple IdPs, and they also need to segment users across organizations. You could build that, but then you take on the burden of consuming the XML, working with someone else’s third party library, and overcoming any lack of inherent flexibility in each individual IdP.

Multi-tenancy Across Organizations and Identity Providers



The Stormpath API offers a simple, built-in install path to manage the burden of integrating with any IdP. We also provide configurable data mapping, which brings clarity to the chaos of how information may be conveyed from multiple tenants.

Flexible Support for Both Service and Identity Provider Initiated SAML Login Workflows

Stormpath’s SAML features are designed to support both Service and Identity Provider initiated workflows. End users can access the IdP portal first and then be automatically authenticated for the Stormpath-backed application. Or they can enter through the Stormpath-backed application and automatically be authenticated for all the applications attached to the IdP.

This flexibility allows our clients to create applications that deliver a unified and seamless SSO experience for end users, without any custom code. Stormpath-backed applications can authenticate users without requiring a separate login. Like all features at Stormpath, SAML support comes with pre-built customer screens and workflows through ID Site.

Intelligent Configuration-based Attribute Mapping

Configuration-based attribute mapping enables seamless, intelligent mapping of data from different IdPs to one consistent data model, thus allowing them to assert account attributes into your application. For example, if one IdP uses variable “firstName=Tom” and another IdP says “fn=Tom,” Stormpath can map both to a variable called “givenName” within your application.
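Stormpath performs this mapping server-side based on your configuration; conceptually, though, the mapping is just a per-IdP rename table. A language-neutral sketch in Python (the names and structure here are illustrative, not Stormpath's actual configuration format):

```python
# Sketch of configuration-based attribute mapping. Illustrative only:
# Stormpath performs this server-side, and this is not its config format.
ATTRIBUTE_MAP = {
    "idp_okta": {"firstName": "givenName", "lastName": "surname"},
    "idp_legacy": {"fn": "givenName", "ln": "surname"},
}

def normalize_attributes(idp_id, assertion_attributes):
    """Rename IdP-specific attribute names to one consistent data model."""
    mapping = ATTRIBUTE_MAP.get(idp_id, {})
    return {mapping.get(name, name): value
            for name, value in assertion_attributes.items()}

# Both IdPs end up asserting the same normalized attribute:
print(normalize_attributes("idp_okta", {"firstName": "Tom"}))  # {'givenName': 'Tom'}
print(normalize_attributes("idp_legacy", {"fn": "Tom"}))       # {'givenName': 'Tom'}
```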



To learn more about what’s under the hood with our SAML integration, check out the Product Documentation or watch our webinar!

For information on integrating the Stormpath API into your application, email us and talk directly with one of our architects.

The post Developer-Friendly SAML Single Sign On Support appeared first on Stormpath User Identity API.

Kuppinger Cole: Executive View: BeyondTrust PowerBroker - 71504 [Technorati links]

April 28, 2016 10:17 AM

by Ivan Niccolai

BeyondTrust’s PowerBroker product family provides a well-integrated solution with a broad range of capabilities for the mitigation of threats caused by the abuse or misuse of privileged system accounts and entitlements, on endpoints as well as server systems. With dedicated products for major system architectures, PowerBroker provides deep support for privilege management on Windows, Unix/Linux as well as Mac systems.

Kuppinger Cole: Executive View: Gigya Customer Identity Management Suite - 71529 [Technorati links]

April 28, 2016 07:22 AM

by Matthias Reinwarth

A feature-rich customer identity management platform providing strong analytics and tools for business-oriented decision-making processes while enabling compliance with legal and regulatory requirements and an adequately high level of security.

Kuppinger Cole: Executive View: SAP Enterprise Threat Detection - 71181 [Technorati links]

April 28, 2016 07:10 AM

by Martin Kuppinger

In these days of ever-increasing cyber-attacks, organizations have to move beyond preventative actions towards detection and response. This no longer applies to the network and operating system level only, but involves business systems such as SAP. Identifying, analyzing, and responding to threats is a must for protecting the core business systems.

Kuppinger Cole: Executive View: Balabit Shell Control Box - 71570 [Technorati links]

April 28, 2016 06:42 AM

by Alexei Balaganski

Balabit Shell Control Box is a standalone appliance for controlling, monitoring and auditing privileged access to remote servers and network devices. Shell Control Box provides a transparent and quickly deployable PxM solution without the need to modify existing infrastructure or change business processes.

April 27, 2016

Mark Dixon - Oracle: On April 27, 4977 BC, the universe was created!? [Technorati links]

April 27, 2016 04:22 PM

Do we really understand space-time?  Two interesting articles have recently crossed my virtual desk.

In the first, it was reported:

On this day in 4977 B.C., the universe is created, according to German mathematician and astronomer Johannes Kepler (1571-1630), considered a founder of modern science.


Best known for his theories explaining the motion of planets, Kepler first observed the visible universe with his naked eye, and later with a telescope similar to the one used by Galileo Galilei.

While Kepler’s estimate of the age of the universe may have been based upon the best scientific understandings of his day, we now consider his view quaint and short-sighted.  We now have such sophisticated equipment for measuring time and distance that a current estimate of the age of the universe has been pegged at 13.77 billion years.

But will that current estimate meet the test of time (pun intended)?

In the second article, “Why Space and Time Might be an Illusion,” George Musser stated:

The ordinary laws of physics, operating within time, are inherently unable to explain the beginning of time. According to those laws, something must precede the big bang to set it into motion. Yet nothing is supposed to precede it. A way out of the paradox is to think of the big bang not as the beginning but as a transition, when space crystallized from a primeval state of spacelessness.


What will ultimately explain this paradox? What will give us a really accurate picture of the age of the universe? Perhaps “string theory, loop quantum theory, causal-set theory,” or perhaps something else?

When you take a step back from the dispute, you notice all agree on one essential lesson: the space-time that we inhabit is a construction. It is not fundamental to nature, but emerges from a deeper level of reality. In some way or other, it consists of primitive building blocks — “atoms” of space — and takes on its familiar properties from how those building blocks are assembled.

Atoms of space? Will the textbooks of a future generation speak of them as casually as we currently discuss carbon or plutonium atoms?  Just what will the future hold for our progressive knowledge of how space and time really work?  

I predict that at some future date, scientists will look back on our day and proclaim, “How quaint, but short-sighted were the theories of physics in 2016!”

Mark Dixon - Oracle: Kuppinger Cole: Computer-Centric Identity Management [Technorati links]

April 27, 2016 03:16 PM

Yesterday, I enjoyed attending a webcast entitled “Computer-Centric Identity Management.” Led by Ivan Niccolai, Lead Analyst at Kuppinger Cole, the presentation was subtitled “From Identity Management to Identity Relationship Management: The changing relationship between IAM, CRM and Cybersecurity.”

I found the presentation to be concise, informative, and thought-provoking – particularly the concept that the IAM practitioner must transition from the role of “protector” to “enabler”.

I think the following diagram does a good job of illustrating the relationships people have with organizations, mobile communication devices and other devices in the growing world of IoT. Identity Relationships are critical in enabling the potential of Digital Transformation.


Katasoft: Lumen And Stormpath As Your Mobile Backend [Technorati links]

April 27, 2016 03:00 PM

I am happy to announce that we have now added Lumen to Stormpath’s PHP integrations. This integration requires minimal setup and about five minutes to get a PHP backend up and running for your mobile applications – exciting! With our Lumen integration, you can quickly set up user registration and user authentication using OAuth tokens.

This tutorial will show you how to set up a new Lumen project and configure it for use with your mobile application. I will cover two different ways to install Lumen and guide you through the configuration and setup of your Lumen project.

We will go from creating a new Lumen project all the way to your first call with one of our mobile SDKs: authenticating a user against the /oauth/token endpoint, which returns OAuth tokens for all future requests against your application. We have also provided middleware you can use to check for authenticated users on a route.

Let’s get started! You can sign up for a free Stormpath account here.

Lumen Tutorial Overview

This tutorial assumes a few items:

Install Lumen

Lumen is built on the same principle as Laravel: make things easy for developers. Installing Lumen (or Laravel) is one of the easiest framework setups I have ever come across. There are two main ways to install Lumen for your project, and I will cover both of them here.

Install Lumen with Composer

Composer is a great tool, and the tool of choice here at Stormpath for all of our PHP integrations. Creating a project with Composer takes a single command, run from your terminal:

composer create-project --prefer-dist laravel/lumen lumen-stormpath-mobile-backend

Understanding what this command does for you is important. The first part, composer create-project, tells Composer that you are creating a new project with the following parameters. --prefer-dist forces installation from the package dist, which can speed up the process and also prevent errors if you have not set up Git correctly. The next part tells Composer which package to use, in our case laravel/lumen. Finally, lumen-stormpath-mobile-backend is the directory where Composer will install the project. This can be anything you want, but for this tutorial we will use this location in all examples.

Install Lumen with Lumen Installer

My personal preference when installing Lumen or Laravel is to use the dedicated installers, which make it much simpler to quickly start a new project. The installer itself is installed through Composer. To make it available from anywhere on the system, we install it as a global dependency, which requires one extra word in the typical composer require command:

composer global require "laravel/lumen-installer"

This tells Composer to require the laravel/lumen-installer package (which can be found on GitHub) globally. Once Composer has finished installing it, you will be able to run the following from anywhere on your system to create a new Lumen project. You only need to do this once; it will work for all future Lumen projects.

lumen new lumen-stormpath-mobile-backend

Keep in mind that you need to run this command from the parent folder of where you want the project installed; a folder with the name you supply will be created inside your current directory.

During the installation, the lumen command issues a few different commands for you: it downloads the newest stable version of Lumen and runs composer install, prepping Lumen on your system. Once this is done, you have Lumen installed.

Test Lumen Installation

The final step of creating your Lumen project is to test that it is working. If you are familiar with Laravel, you know of php artisan. One of the nice tools from the Laravel environment is the php artisan serve command, which spins up a development server for you. Sadly, this was not carried over into the Lumen framework, but that is OK, because PHP has a built-in server in its command-line tool. Change your directory to the /public folder of the project, then run:

php -S localhost:8000

Running this command will tell PHP to start its internal server using the hostname localhost on port 8000. You can pick any hostname/ip and port combination you would like. For this tutorial, we will assume localhost:8000 as our point of entry for the mobile backend.

The artisan command is a command-line tool that bundles many helpers built into Lumen and Laravel.

Install Stormpath Lumen Package

The goal at Stormpath is to create useful tools that are easy to use and understand, and the Stormpath Lumen package is no exception. With very few changes (only one line of code, if you don't count the Composer edits), you can install the stormpath/lumen package and have it accepting requests.

Set Up Composer To Install Stormpath Lumen

The first step in this process is to edit the composer.json file in your project. Open the file and find the require section, which lists the packages your project needs to run. Once you find it, add the following:

"stormpath/lumen": "^0.1"

Stormpath follows SemVer for releases. At the time of writing this article, the stormpath/lumen package was at release 0.1.1. The ^0.1 constraint will install any release in the 0.1.x series (at or above 0.1.0, but below 0.2.0); for pre-1.0 packages, the caret only allows patch-level updates. Once stormpath/lumen has a 1.x release, you will have to update your require statement to reflect that change, since under SemVer a 1.0.0 release may introduce backwards-incompatible changes. Because of this, and as a general practice, you should never blindly update your Composer dependencies without testing them first.

This tells Composer to find the package named stormpath/lumen at a release in the 0.1.x series, at or above version 0.1.0. Once you have added this to your composer.json file, run composer update from the command line to bring the dependency into your project. Composer will download our files into your vendor folder and add them to its autoloader.
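Caret behaviour for pre-1.0 packages trips people up, so here is a tiny illustrative checker. This is just the SemVer caret rule as Composer documents it, not Composer's actual resolver:

```python
# Illustrative SemVer caret check for constraints like "^0.1".
# Not Composer's real resolver; just the documented caret rule.
def caret_satisfies(constraint, version):
    base = tuple(int(p) for p in constraint.lstrip("^").split("."))
    v = tuple(int(p) for p in version.split("."))
    floor = base + (0,) * (3 - len(base))  # e.g. ^0.1 -> 0.1.0
    if base[0] == 0:
        # For 0.x constraints the caret stays within the same minor
        # series: ^0.1 means >=0.1.0 and <0.2.0
        return v[:2] == base[:2] and v >= floor
    # For >=1.0 constraints it stays within the same major series.
    return v[0] == base[0] and v >= floor

print(caret_satisfies("^0.1", "0.1.1"))  # True
print(caret_satisfies("^0.1", "0.2.0"))  # False
```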

At this point, you may be asking: why not just run composer require stormpath/lumen? That is a great question, and the answer comes down to some dependency conflicts and the way Composer handles installing packages that way. You can find all the details in a ticket we raised on Composer's GitHub issues when we ran into this situation.

Install the Package

As I stated before, this part requires only a single line of code. We need to tell Lumen to use the Stormpath Lumen service provider, which is done a little differently than in Laravel. Open the file at bootstrap/app.php and scroll down until you find the section of service providers. After all the other service providers (they will be commented out, and that is fine), add the Stormpath Lumen service provider.
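In Lumen, providers are registered in bootstrap/app.php with $app->register(). A sketch of what that line looks like (the provider class name below is an assumption for illustration; use the exact class name from the stormpath/lumen README):

```php
<?php
// bootstrap/app.php, after the commented-out default service providers.
// The class name is assumed for illustration; check the stormpath/lumen
// documentation for the actual provider class.
$app->register(Stormpath\Lumen\Support\StormpathServiceProvider::class);
```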


This will tell Lumen to load our service provider and initialize the Stormpath Lumen application.

Configure Application For Mobile API

Out of the box, the Stormpath Lumen package is already set up for an API, mobile or not. You will have to provide an API key to the service provider, along with your application href from Stormpath. To do this, you first need to sign up with Stormpath. Once you do, a confirmation email will be sent, and you can then log into the Stormpath dashboard.

Generate And Install API Keys

Once you are in the Stormpath Dashboard, if it is a new account, click on Generate API Keys on the right side of the screen; you will be prompted to download a file containing your keys. If it is an existing account, you can either use your current API keys or click on Manage API Keys to generate new ones. Once you have your API keys, you need to install them: open the .env file in the root of the Lumen project and add the following lines
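The variable names below follow the STORMPATH_CLIENT_* convention Stormpath uses in its other integrations; they are an assumption here, so verify them against the stormpath/lumen documentation:

```
# .env (values come from the API key file you downloaded;
# variable names assumed from Stormpath's usual naming convention)
STORMPATH_CLIENT_APIKEY_ID=your_api_key_id
STORMPATH_CLIENT_APIKEY_SECRET=your_api_key_secret
```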


Add Application Configuration

The next step is to add your application href to the Lumen project. You can find it in the Applications section of the Stormpath dashboard. Find the application you want to use for your Stormpath Lumen project, or create a new one, then click on the application name and copy the href shown on the screen. Open your .env file again and add the following
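Again assuming Stormpath's usual variable naming (verify against the package docs), the entry looks like:

```
# .env (variable name assumed; the href value is the one you
# copied from the Stormpath dashboard)
STORMPATH_APPLICATION_HREF=https://api.stormpath.com/v1/applications/your_app_uid
```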


Once you have done both of the preceding steps, you need to restart your PHP server: press ctrl+c in the terminal to stop the current server, then run php -S localhost:8000 to start it again. This will load your new environment variables, and you are ready to begin making calls to your application.

If you need to change any of the default functionality of the package, you can publish the default YAML configuration file that our package uses. From the command line, in the root of your project, run php artisan stormpath:publish. This will generate a new file in the root of your project called stormpath.yaml, which allows you to enable or disable features of the integration to fit your needs.

Authenticate Mobile Accounts

We are now ready to start making calls to the Stormpath Lumen backend. The Stormpath Lumen package is an API-only integration, meaning it will only respond to you in JSON. All of the typical login and registration endpoints are set up and ready to accept requests.

The one route that works a little differently for mobile is authentication. At first you may think to just send a POST request to /login; however, this is not the recommended way of doing things. Instead, we suggest making a POST request to the /oauth/token route, which returns an access_token and a refresh_token for you to use on all further requests. The request to this endpoint looks like the following

POST /oauth/token HTTP/1.1
Host: localhost:8000
Accept: application/json
Content-Type: application/json
Cache-Control: no-cache

{
    "password": "superP4ssw0rd!",
    "username": "",
    "grant_type": "password"
}

This will return a result that looks like the following:

{
    "access_token": "eyJraWQiOiIxUE4zRlhJMFU3OUUyTUhDRjZYVVlHVTRaIiwiYWxnIjoiSFMyNTYifQ.eyJqdGki...3In0.i1diirJdpcQh1TA8oIya8-86tes_xlauaTwsuKS67gY",
    "expires_in": 3600,
    "refresh_token": "eyJraWQiOiIxUE4zRlhJMFU3OUUyTUhDRjZYVVlHVTRaIiwiYWxnIjoiSFMyNTYifQ.eyJqdGki...AxfQ.oMEcM9T1K8SptKxKLaUYiJ37whlvhGVoFcRDLAxzjw8",
    "token_type": "Bearer"
}

A refresh token will only be available if it is enabled on the Stormpath application, which can be configured in the application's token policy. By default, refresh tokens are enabled, but we give you the option to disable them.

Protect Your Routes

Once you begin building out your mobile backend, you can use the middleware filters we provide on your routes. These can be added to your routes in the typical Lumen way and are already initialized and available to you. For routes you want only authorized users to see, add the stormpath.auth middleware to the route. When the access_token is passed as a Bearer token in the Authorization header, our integration will take over and make sure the user is authorized to view the route based on that token. For routes that you want only guests to see or access, use the stormpath.guest middleware, which checks that the user has not passed an access_token in the header.
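Conceptually, the stormpath.auth check starts with requiring a well-formed Bearer token in the Authorization header. A language-neutral sketch of just that header-parsing step (the real middleware also validates the token's signature and expiry against Stormpath):

```python
# Conceptual sketch of the header check behind stormpath.auth and
# stormpath.guest. Illustrative only; the real middleware also
# validates the token itself.
def extract_bearer_token(headers):
    """Return the access_token from an Authorization header, or None."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return None  # no token: stormpath.guest allows, stormpath.auth rejects
    return auth[len("Bearer "):]

print(extract_bearer_token({"Authorization": "Bearer eyJraWQ.example"}))  # eyJraWQ.example
print(extract_bearer_token({}))  # None
```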

Integrate With Stormpath Mobile SDKs

Our mobile SDK team has been hard at work making sure their SDKs work with all of our integrations. They have created some great SDKs for iOS and Android, which are now available for you to use. Our goal for the integrations we create is to let you swap in any integration you want without much configuration change in your mobile application: we all use the same endpoints and naming conventions, making it easy to get up and running with any integration we make. Find more information on how to use these SDKs in the Android documentation or the iOS documentation.

Further Reading

Want to learn more about adding user authentication to PHP applications? Check out these posts:

I hope you enjoyed this tutorial and would love to hear your thoughts. Leave your comments below, or contact us directly. Keep up to date with what we are doing on Twitter @goStormpath.


The post Lumen And Stormpath As Your Mobile Backend appeared first on Stormpath User Identity API.

WAYF News: WAYF to change metadata in May 2016 [Technorati links]

April 27, 2016 12:20 PM

To whom it may concern, regarding your technical connection to WAYF - Where Are You From.

(If you find that someone else in your organisation is a more suitable recipient of this correspondence, please send us their name, email, and phone number.)

This is a notification about upcoming changes to the technical connection to WAYF, which will affect all connected web-based services as well as connected institutions.

A detailed description of what needs to be changed will follow in the coming week. The purpose of this email is to notify you, so you can allocate resources for change management in the near future.

The changes must be applied between May 9th and May 30th, 2016.

The background for the changes is WAYF’s introduction of a hardware security module (HSM) for handling cryptographic keys. The HSM system is already running, using the old keys, which must now be changed.

This implies that all connected services and institutions must update the SAML metadata about WAYF, in order to ‘move’ to the new setup with the new keys.

We take this opportunity to inform you that WAYF will stop checking the signatures of SAML authentication requests, to align better with international practice, without lowering the security of the connected services.

WAYF will also remove the double-signing of both SAML assertions and responses: only the responses will be signed.

As a matter of due diligence, we note that WAYF has no formal responsibility for your local SAML implementation, e.g. simpleSAMLphp or ADFS. That being said, we will do our best to make the process as smooth as possible. Please send any inquiries related to the metadata update to:

Kind regards

David Simonsen
Head of WAYF - Where Are You From

April 26, 2016

Kuppinger Cole: Customer-centric Identity Management [Technorati links]

April 26, 2016 10:37 PM
While most organizations are at least good enough in managing their employee identities, dealing with millions of consumer and customer identities imposes a new challenge. Many new identity types, various authenticators from social logins to device-related authenticators in smartphones, risk mitigation requirements for commercial transactions, the relationship with secure payments, customer retention, new business models and thus new requirements for interacting with customers: The challenge has never been that big.

Katasoft: Tutorial: Build an ASP.NET Core Application With User Authentication [Technorati links]

April 26, 2016 04:00 PM


We’re thrilled to announce our open-source ASP.NET Core authentication library is now available! What’s the deal with ASP.NET Core, you ask?

ASP.NET Core 1.0 (formerly ASP.NET 5 or “vNext”) is the latest version of ASP.NET. Instead of building incrementally on ASP.NET 4, Microsoft opted to do a full rewrite of the ASP.NET stack. The end result is a leaner and more modular framework than ever before.

What’s changed? For starters, MVC and Web API have been unified into a single pipeline. Dependency injection is provided out of the box. And, most exciting of all, ASP.NET is now cross-platform!

Not only does this mean native hosting on Linux (woot!), but the modular design of the framework gives you more flexibility to use exactly the components that you want. Like Entity Framework but don’t want to use SQL Server? Not a problem! Not a fan of IIS? Kestrel is blazingly fast and works great with nginx.

How about an application with full-fledged user authentication, no database required? In this tutorial, you’ll learn how to scaffold a basic ASP.NET Core MVC application and plug in Stormpath user authentication with two lines of code.

Let’s get started.

What is Stormpath?

Stormpath is an API service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications.

Our API enables you to:

In short: we make user account management a lot easier, more secure, and more scalable than what you’re probably used to.

Ready to get started? Register for a free developer account here.

Creating a New ASP.NET Core Project

First, create a new project using the ASP.NET Core template in Visual Studio.

  1. Click on File – New Project.
  2. Under Visual C# – Web, pick the ASP.NET Web Application template.
  3. On the New ASP.NET Project dialog, pick the Web Application template under ASP.NET 5 Templates.
  4. Click Change Authentication and pick No Authentication. You’ll be adding it yourself!

If you prefer the command line, you can use the ASP.NET Yeoman Generator to scaffold a new project instead:

  1. Run yo aspnet.
  2. Pick the Web Application Basic [without Membership and Authorization] template. Done!

Get Your API Credentials

Your ASP.NET application will need a few pieces of information in order to communicate with Stormpath:

The best way to provide these is through environment variables. You can also hardcode the values into your application code, but we recommend environment variables as a best practice for security.

To save the Stormpath credentials (and Application URL) as environment variables, follow these steps:

  1. If you haven’t registered for Stormpath, create a free developer account.
  2. Log in to the Admin Console and use the Developer Tools section on the right side of the page to create and download an API key file.
  3. Open the file up in Notepad (or your favorite editor). Using the command line (or PowerShell), execute these commands:
setx STORMPATH_CLIENT_APIKEY_ID "[value from properties file]"
setx STORMPATH_CLIENT_APIKEY_SECRET "[value from properties file]"
  4. Back in the Stormpath Admin Console, navigate to Applications and find the default application that’s created for you when you sign up for Stormpath (called “My Application”). Select it and copy the Href value.
  5. Save the Application URL as another environment variable:
setx STORMPATH_APPLICATION_HREF "[your Application href]"

Install the Stormpath ASP.NET Core Middleware

This part’s easy! The Stormpath.AspNetCore NuGet package contains everything you need to use Stormpath from your application. Install it using the NuGet Package Manager, or from the Package Manager Console:

PM> install-package Stormpath.AspNetCore

Add Stormpath to Startup.cs

At the top of your Startup.cs file, add this line:

using Stormpath.AspNetCore;

Next, add Stormpath to your services collection in ConfigureServices():

public void ConfigureServices(IServiceCollection services)
{
    // Add Stormpath's services to the container:
    services.AddStormpath();

    // Other service code here
}

Finally, inject Stormpath into your middleware pipeline in Configure():

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    // Logging and static file middleware (if applicable)

    // Add Stormpath to the pipeline:
    app.UseStormpath();

    // MVC or other framework middleware here
}

Note: Make sure you add Stormpath before other middleware you want to protect, such as MVC.

With only two lines of code, the Stormpath middleware automatically handles registration, login, logout, password reset, and email verification. Pretty cool! By default, this functionality is exposed on the /register and /login routes (and so on).

Customizing the Default View

When a user logs in, the Stormpath middleware will set Context.User automatically. You can use the IsAuthenticated property in your views to show different content depending on whether the user is logged in.

In your Views/Shared/_Layout.cshtml file, replace the existing navbar with one that will update when a user logs in:

<div class="navbar-collapse collapse">
    <ul class="nav navbar-nav">
        <!-- Existing navigation links (Home, About, etc.) go here -->
    </ul>
    @if (Context.User.Identity.IsAuthenticated)
    {
        <form id="logout_form" action="/logout" method="post">
            <ul class="nav navbar-nav navbar-right">
                <li><p class="navbar-text">Hello, <b>@Context.User.Identity.Name</b></p></li>
                <!-- The link submits the logout form above -->
                <li><a onclick="document.getElementById('logout_form').submit()" style="cursor: pointer;">Log Out</a></li>
            </ul>
        </form>
    }
    else
    {
        <ul class="nav navbar-nav navbar-right">
            <li><a href="/login">Log In or Register</a></li>
        </ul>
    }
</div>

Bask in the Glory of Easy ASP.NET Core Authentication

Time to try it out!

  1. Fire up your application and open it in a browser.
  2. Click the Log In or Register link in the navbar. You’ll be redirected to the default Stormpath login view.
  3. Click Create an Account and fill out the form. Enter the same credentials in the login form to log in with the new account.
  4. You’ll be redirected back to your application. Check the navbar. You’re logged in!
  5. Log out again. Easy as pie.

Updating the navbar is cool, but what about protecting access to controllers that only your logged-in users should use? You’re in luck…

Add an MVC Controller Secured by Stormpath

The Stormpath middleware plugs right into the ASP.NET authentication system, which means you can use the [Authorize] attribute to protect your routes and actions with ease. To demonstrate, add a new MVC controller to allow logged-in users to view their profile:

  1. Right-click on the Controllers folder in the Solution Explorer and choose Add – New Item. Pick the MVC Controller Class template. Name the new file ProfileController.cs.
  2. Add the [Authorize] attribute above the class declaration:
[Authorize]
public class ProfileController : Controller
{
    // GET: /<controller>/
    public IActionResult Index()
    {
        return View();
    }
}
  3. Create a new folder under Views called Profile. Create a new Razor view named Index.cshtml using the MVC View Page template. Use the @inject directive to get the Stormpath IAccount object for the logged-in user:
@inject Stormpath.SDK.Account.IAccount Account

@{
    ViewData["Title"] = "Your Profile";
}

<h3>Profile Details</h3>

<p><b>Username:</b> @Account.Username</p>

<p><b>Email:</b> @Account.Email</p>

<p><b>Full Name:</b> @Account.FullName</p>

<p><b>First Name:</b> @Account.GivenName</p>

<p><b>Last Name:</b> @Account.Surname</p>

<p><b>ID:</b> @Account.Href</p>

  4. Run your application again, and ensure that you are logged out.
  5. Try accessing /profile. You’ll be redirected to the login page. When you log in, you’ll automatically be redirected to the Profile view. Awesome!

Go Forth and Code

As you’ve seen, Stormpath makes it super simple to add authentication to an ASP.NET Core application with only a few lines of code. The middleware and dependency injection patterns used in ASP.NET Core are powerful and can be used to build complex applications very efficiently. And, everything is portable across Windows, Linux, and Mac!

If you want to learn more, check out these resources:

Of course, feel free to grab the full code for this project and play around with it. If you have any questions, leave me a comment below!

The post Tutorial: Build an ASP.NET Core Application With User Authentication appeared first on Stormpath User Identity API.

Mark Dixon - Oracle2016 Data Breach Investigations Report [Technorati links]

April 26, 2016 02:39 PM


Verizon’s 2016 Data Breach Investigations Report (DBIR) is now available to download:

The 2016 dataset is bigger than ever, examining over 100,000 incidents, including 2,260 confirmed data breaches across 82 countries. With data provided by 67 contributors including security service providers, law enforcement and government agencies, this year’s report offers unparalleled insight into the cybersecurity threats you face.


Julian BondOne more time, with feeling. Farr festival is a boutique electronic dance festival on July 14-15-16... [Technorati links]

April 26, 2016 01:39 PM
One more time, with feeling. Farr festival is a boutique electronic dance festival on July 14-15-16. Near Baldock on the A1, 30 miles N of London.

For just a little longer, the ticket link below is for weekend camping tickets at a heavy discount.

[from: Google+ Posts]

Rakesh RadhakrishnanThreat Centric Cloud Compliance and Security [Technorati links]

April 26, 2016 12:53 AM
To me, compliance-centric security offers a solid baseline, and a must for today's cloud offerings such as Google Cloud, which has achieved an impressive list of compliance certifications, including:
1). Comprehensive ISO 27001 (for the systems, applications, people, technology, processes and data centers serving Google Cloud Platform)
2). ISO 27017, specific to cloud services
3). ISO 27018, specific to cloud privacy
4). SOC 2
5). SOC 3
6). FedRAMP (ATO)
7). HIPAA compliance (BAA)
8). EU Data Protection Directive (EU Model Contract)
Beyond compliance, enterprises moving to the cloud (such as Google Cloud Platform) need to understand the shared responsibility model and leverage "secure by design", "secure development", "secure deployment", "securing run time data" and "secure diagnostics" - the 5 SD principles - to move towards a target state that is more threat centric.
1). Secure by design - involves "password-free", "cookie-free", "stateless", "agentless" and "zero footprint" designs.
2). The development processes need to be secure (regardless of whether one uses Eclipse, Maven, IntelliJ or any other tool) when leveraging Google PaaS.
3). The deployment model (devops) must support standards like STRIDE, OCTAVE and SCAP.
4). The run time environment must use a "threat centric CASB" like Palerra for Google Apps (XML, API, and data).
5). The diagnostics should be secure via support for virtual firewalls that are FIPS certified and have identity in the stack (traceability).
Join Cloud Security Alliance Silicon Valley today to learn more about what Palerra CASB can do for Google Cloud Platform and Vidder Precision Access for clouds.

Of course, there are nearly a dozen CASBs (cloud access security brokers) that partner with Google for specific solutions - like secure Gmail (CipherCloud), Google Apps (Netskope), and Google Drive (Skyhigh). My favorite CASB for Google Cloud is FireLayers, as it also supports XACML 3. This is huge: expressing all policies in a standards-based XML form not only allows for threat-centric access exceptions - aka dynamic policies (STIX COA and XACML) - it also allows for streamlined auditing of apps hosted in the cloud.

Christopher Allen - AlacrityThe Path to Self-Sovereign Identity [Technorati links]

April 26, 2016 12:00 AM

Today I head out to a month-long series of events associated with identity: I’m starting with the 22nd (!) Internet Identity Workshop next week; then I’m speaking at the blockchain conference Consensus about identity; next I am part of the team putting together the first ID2020 Summit on Digital Identity at the United Nations; and finally I'm hosting the second #RebootingWebOfTrust design workshop on decentralized identity.

At all of these events I want to share a vision for how we can enhance the ability of digital identity to enable trust while preserving individual privacy. This vision is what I call “Self-Sovereign Identity”.

Why do we need this vision now? Governments and companies are sharing an unprecedented amount of information, cross-correlating everything from viewing habits to purchases, to where people are located during the day, to where they sleep at night, and with whom they associate. In addition, as the Third World enters the computer age, digital citizenship is providing Third World residents with greater access to human rights and to the global economy. When properly designed and implemented, self-sovereign identity can offer these benefits while also protecting individuals from the ever-increasing control of those in power, who may not have the best interests of the individual at heart.

But what exactly do I mean by “Self-Sovereign Identity”?

You Can’t Spell Identity without an “I”

Identity is a uniquely human concept. It is that ineffable “I” of self-consciousness, something that is understood worldwide by every person living in every culture. As René Descartes said, Cogito ergo sum: I think, therefore I am.

However, modern society has muddled this concept of identity. Today, nations and corporations conflate driver’s licenses, social security cards, and other state-issued credentials with identity; this is problematic because it suggests a person can lose his very identity if a state revokes his credentials or even if he just crosses state borders. I think, but I am not.

Identity in the digital world is even trickier. It suffers from the same problem of centralized control, but it’s simultaneously very balkanized: identities are piecemeal, differing from one Internet domain to another.

As the digital world becomes increasingly important to the physical world, it also presents a new opportunity; it offers the possibility of redefining modern concepts of identity. It might allow us to place identity back under our control — once more reuniting identity with the ineffable “I”.

In recent years, this redefinition of identity has begun to have a new name: self-sovereign identity. However, in order to understand this term, we need to review some history of identity technology:

The Evolution of Identity

The models for online identity have advanced through four broad stages since the advent of the Internet: centralized identity, federated identity, user-centric identity, and self-sovereign identity.

Phase One: Centralized Identity (administrative control by a single authority or hierarchy)

In the Internet’s early days, centralized authorities became the issuers and authenticators of digital identity. Organizations like IANA (1988) determined the validity of IP addresses and ICANN (1998) arbitrated domain names. Then, beginning in 1995, certificate authorities (CAs) stepped up to help Internet commerce sites prove they were who they said they were.

Some of these organizations took a small step beyond centralization and created hierarchies. A root controller could anoint other organizations to each oversee their own hierarchy. However, the root still had the core power — they were just creating new, less powerful centralizations beneath them.

Unfortunately, granting control of digital identity to centralized authorities of the online world suffers from the same problems caused by the state authorities of the physical world: users are locked in to a single authority who can deny their identity or even confirm a false identity. Centralization innately gives power to the centralized entities, not to the users.

As the Internet grew, as power accumulated across hierarchies, a further problem was revealed: identities were increasingly balkanized. They multiplied as web sites did, forcing users to juggle dozens of identities on dozens of different sites — while having control over none of them.

To a large extent, identity on the Internet today is still centralized — or at best, hierarchical. Digital identities are owned by CAs, domain registrars, and individual sites, and then rented to users or revoked at any time. However, for the last two decades there’s also been a growing push to return identities to the people, so that they actually could control them.

Interlude: Foreshadowing the Future

PGP (1991) offered one of the first hints toward what could become self-sovereign identity. It introduced the 'Web of Trust'1, which established trust for a digital identity by allowing peers to act as introducers and validators of public keys2. Anyone could be a validator in the PGP model. The result was a powerful example of decentralized trust management, but it focused on email addresses, which meant that it still depended on centralized hierarchies. For a variety of reasons, PGP never became broadly adopted.

Other early thoughts appeared in “Establishing Identity without Certification Authority” (1996), a paper by Carl Ellison that examined how digital identity was created3. He considered both authorities such as Certificate Authorities and peer-to-peer systems like PGP as options for defining digital identity. He then settled on a method for verifying online identity by exchanging shared secrets over a secure channel. This allowed users to control their own identity without depending on a managing authority.

Ellison was also at the heart of the SPKI/SDSI project (1999) 4 - 5. Its goal was to build a simpler public infrastructure for identity certificates that could replace the complicated X.509 system. Although centralized authorities were considered as an option, they were not the only option.

It was a beginning, but an even more revolutionary reconception of identity in the 21st century would be required to truly bring self-sovereignty to the forefront.

Phase Two: Federated Identity (administrative control by multiple, federated authorities)

The next major advancement for digital identity occurred at the turn of the century when a variety of commercial organizations moved beyond hierarchy to debalkanize online identity in a new manner.

Microsoft’s Passport (1999) initiative was one of the first. It imagined federated identity, which allowed users to utilize the same identity on multiple sites. However, it put Microsoft at the center of the federation, which made it almost as centralized as traditional authorities.

In response, Sun Microsystems organized the Liberty Alliance (2001). They resisted the idea of centralized authority, instead creating a "true" federation, but the result was instead an oligarchy: the power of centralized authority was now divided among several powerful entities.

Federation improved on the problem of balkanization: users could wander from site to site under the system. However, each individual site remained an authority.

Phase Three: User-Centric Identity (individual or administrative control across multiple authorities without requiring a federation)

The Augmented Social Network (2000) laid the groundwork for a new sort of digital identity in their proposal for the creation of a next-generation Internet. In an extensive white paper6, they suggested building “persistent online identity” into the very architecture of the Internet. From the viewpoint of self-sovereign identity, their most important advance was “the assumption that every individual ought to have the right to control his or her own online identity”. The ASN group felt that Passport and the Liberty Alliance could not meet these goals because the “business-based initiatives” put too much emphasis on the privatization of information and the modeling of users as consumers.

These ASN ideas would become the foundation of much that followed.

The Identity Commons (2001-Present) began to consolidate the new work on digital identity with a focus on decentralization. Their most important contribution may have been the creation, in association with the Identity Gang, of the Internet Identity Workshop (2005-Present) working group. For the last ten years, the IIW has advanced the idea of decentralized identity in a series of semi-yearly meetings.

The IIW community focused on a new term that countered the server-centric model of centralized authorities: user-centric identity. The term suggests that users are placed in the middle of the identity process. Initial discussions of the topic focused on creating a better user experience7, which underlined the need to put users front and center in the quest for online identity. However, the definition of a user-centric identity soon expanded to include the desire for a user to have more control over his identity and for trust to be decentralized8.

The work of the IIW has supported many new methods for creating digital identity, including OpenID (2005), OpenID 2.0 (2006), OpenID Connect (2014), OAuth (2010), and FIDO (2013). As implemented, user-centric methodologies tend to focus on two elements: user consent and interoperability. By adopting them, a user can decide to share an identity from one service to another and thus debalkanize his digital self.

The user-centric identity communities had even more ambitious visions; they intended to give users complete control of their digital identities. Unfortunately, powerful institutions co-opted their efforts and kept them from fully realizing their goals. Much as with the Liberty Alliance, final ownership of user-centric identities today remains with the entities that register them.

OpenID offers an example. A user can theoretically register his own OpenID, which he can then use autonomously. However, this takes some technical know-how, so the casual Internet user is more likely to use an OpenID from one public web site as a login for another. If the user selects a site that is long-lived and trustworthy, he can gain many of the advantages of a self-sovereign identity — but it could be taken away at any time by the registering entity!

Facebook Connect (2008) appeared a few years after OpenID, leveraging lessons learned, and thus was several times more successful largely due to a better user interface9. Unfortunately, Facebook Connect veers even further from the original user-centric ideal of user control. To start with, there’s no choice of provider; it’s Facebook. Worse, Facebook has a history of arbitrarily closing accounts, as was seen in their recent real-name controversy10. As a result, people who access other sites with their “user-centric” Facebook Connect identity may be even more vulnerable than OpenID users to losing that identity in multiple places at one time.

It’s central authorities all over again. Worse, it’s like state-controlled authentication of identity, except with a self-elected “rogue” state.

In other words: being user-centric isn’t enough.

Phase Four: Self-Sovereign Identity (individual control across any number of authorities)

User-centric designs turned centralized identities into interoperable federated identities with centralized control, while also respecting some level of user consent about how to share an identity (and with whom). It was an important step toward true user control of identity, but just a step. To take the next step required user autonomy.

This is the heart of self-sovereign identity, a term that’s coming into increased use in the ‘10s. Rather than just advocating that users be at the center of the identity process, self-sovereign identity requires that users be the rulers of their own identity.

One of the first references to identity sovereignty occurred in February 2012, when developer Moxie Marlinspike wrote about “Sovereign Source Authority”11. He said that individuals “have an established Right to an ‘identity’”, but that national registration destroys that sovereignty. Some ideas are in the air, so it’s no surprise that almost simultaneously, in March 2012, Patrick Deegan began work on Open Mustard Seed, an open-source framework that gives users control of their digital identity and their data in decentralized systems12. It was one of several "personal cloud" initiatives that appeared around the same time.

Since then, the idea of self-sovereign identity has proliferated. Marlinspike has blogged how the term has evolved13. As a developer, he shows one way to address self-sovereign identity: as a mathematical policy, where cryptography is used to protect a user’s autonomy and control. However, that’s not the only model. Respect Network instead addresses self-sovereign identity as a legal policy; they define contractual rules and principles that members of their network agree to follow14. The Windhover Principles For Digital Identity, Trust and Data15 and Evernym’s Identity System Essentials16 offer some additional perspectives on the rapid advent of self-sovereign identity since 2012.

In the last year, self-sovereign identity has also entered the sphere of international policy17. This has largely been driven by the refugee crisis that has beset Europe, which has resulted in many people lacking a recognized identity due to their flight from the state that issued their credentials. However, it’s a long-standing international problem, as foreign workers have often been abused by the countries they work in due to the lack of state-issued credentials.

If self-sovereign identity was becoming relevant a few years ago, in light of current international crises its importance has skyrocketed.

The time to move toward self-sovereign identity is now.

A Definition of Self-Sovereign Identity

With all that said, what is self-sovereign identity exactly? The truth is that there’s no consensus. As much as anything, this article is intended to begin a dialogue on that topic. However, I wish to offer a starting position.

Self-sovereign identity is the next step beyond user-centric identity and that means it begins at the same place: the user must be central to the administration of identity. That requires not just the interoperability of a user’s identity across multiple locations, with the user’s consent, but also true user control of that digital identity, creating user autonomy. To accomplish this, a self-sovereign identity must be transportable; it can’t be locked down to one site or locale.

A self-sovereign identity must also allow ordinary users to make claims, which could include personally identifying information or facts about personal capability or group membership18. It can even contain information about the user that was asserted by other persons or groups.

In the creation of a self-sovereign identity, we must be careful to protect the individual. A self-sovereign identity must defend against financial and other losses, prevent human rights abuses by the powerful, and support the rights of the individual to be oneself and to freely associate19.

However, there’s a lot more to self-sovereign identity than just this brief summation. Any self-sovereign identity must also meet a series of guiding principles — and these principles actually provide a better, more comprehensive, definition of what self-sovereign identity is. A proposal for them follows:

Ten Principles of Self-Sovereign Identity

A number of different people have written about the principles of identity. Kim Cameron wrote one of the earliest “Laws of Identity”20, while the aforementioned Respect Network policy21 and W3C Verifiable Claims Task Force FAQ22 offer additional perspectives on digital identity. This section draws on all of these ideas to create a group of principles specific to self-sovereign identity. As with the definition itself, consider these principles a departure point to provoke a discussion about what’s truly important.

These principles attempt to ensure the user control that’s at the heart of self-sovereign identity. However, they also recognize that identity can be a double-edged sword — usable for both beneficial and maleficent purposes. Thus, an identity system must balance transparency, fairness, and support of the commons with protection for the individual.

  1. Existence. Users must have an independent existence. Any self-sovereign identity is ultimately based on the ineffable “I” that’s at the heart of identity. It can never exist wholly in digital form. This must be the kernel of self that is upheld and supported. A self-sovereign identity simply makes public and accessible some limited aspects of the “I” that already exists.
  2. Control. Users must control their identities. Subject to well-understood and secure algorithms that ensure the continued validity of an identity and its claims, the user is the ultimate authority on their identity. They should always be able to refer to it, update it, or even hide it. They must be able to choose celebrity or privacy as they prefer. This doesn’t mean that a user controls all of the claims on their identity: other users may make claims about a user, but they should not be central to the identity itself.
  3. Access. Users must have access to their own data. A user must always be able to easily retrieve all the claims and other data within his identity. There must be no hidden data and no gatekeepers. This does not mean that a user can necessarily modify all the claims associated with his identity, but it does mean they should be aware of them. It also does not mean that users have equal access to others’ data, only to their own.
  4. Transparency. Systems and algorithms must be transparent. The systems used to administer and operate a network of identities must be open, both in how they function and in how they are managed and updated. The algorithms should be free, open-source, well-known, and as independent as possible of any particular architecture; anyone should be able to examine how they work.
  5. Persistence. Identities must be long-lived. Preferably, identities should last forever, or at least for as long as the user wishes. Though private keys might need to be rotated and data might need to be changed, the identity remains. In the fast-moving world of the Internet, this goal may not be entirely reasonable, so at the least identities should last until they’ve been outdated by newer identity systems. This must not contradict a “right to be forgotten”; a user should be able to dispose of an identity if he wishes and claims should be modified or removed as appropriate over time. To do this requires a firm separation between an identity and its claims: they can't be tied forever.
  6. Portability. Information and services about identity must be transportable. Identities must not be held by a singular third-party entity, even if it's a trusted entity that is expected to work in the best interest of the user. The problem is that entities can disappear — and on the Internet, most eventually do. Regimes may change, users may move to different jurisdictions. Transportable identities ensure that the user remains in control of his identity no matter what, and can also improve an identity’s persistence over time.
  7. Interoperability. Identities should be as widely usable as possible. Identities are of little value if they only work in limited niches. The goal of a 21st-century digital identity system is to make identity information widely available, crossing international boundaries to create global identities, without losing user control. Thanks to persistence and autonomy these widely available identities can then become continually available.
  8. Consent. Users must agree to the use of their identity. Any identity system is built around sharing that identity and its claims, and an interoperable system increases the amount of sharing that occurs. However, sharing of data must only occur with the consent of the user. Though other users such as an employer, a credit bureau, or a friend might present claims, the user must still offer consent for them to become valid. Note that this consent might not be interactive, but it must still be deliberate and well-understood.
  9. Minimalization. Disclosure of claims must be minimized. When data is disclosed, that disclosure should involve the minimum amount of data necessary to accomplish the task at hand. For example, if only a minimum age is called for, then the exact age should not be disclosed, and if only an age is requested, then the more precise date of birth should not be disclosed. This principle can be supported with selective disclosure, range proofs, and other zero-knowledge techniques, but non-correlatibility is still a very hard (perhaps impossible) task; the best we can do is to use minimalization to support privacy as best as possible.
  10. Protection. The rights of users must be protected. When there is a conflict between the needs of the identity network and the rights of individual users, then the network should err on the side of preserving the freedoms and rights of the individuals over the needs of the network. To ensure this, identity authentication must occur through independent algorithms that are censorship-resistant and force-resilient and that are run in a decentralized manner.
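To make the minimalization principle above concrete, here is a toy sketch (a hypothetical claim format, not any real identity protocol or library): a verifier that only needs to confirm a holder meets a minimum age should receive a derived yes/no claim, never the underlying date of birth:

```python
from datetime import date

def derive_minimal_claim(date_of_birth: date, minimum_age: int, today: date) -> dict:
    """Derive the least-revealing claim that answers the verifier's question.

    Instead of disclosing the exact date of birth, disclose only the boolean
    fact the verifier asked for ("is the holder at least N years old?").
    """
    years = today.year - date_of_birth.year
    # Subtract one year if the birthday hasn't occurred yet this year.
    if (today.month, today.day) < (date_of_birth.month, date_of_birth.day):
        years -= 1
    return {"claim": f"age_at_least_{minimum_age}", "value": years >= minimum_age}

# The verifier learns the answer, not the birth date itself.
claim = derive_minimal_claim(date(1990, 6, 15), 18, today=date(2016, 5, 3))
print(claim)  # {'claim': 'age_at_least_18', 'value': True}
```

In a real system the derived claim would also need to be signed by an issuer the verifier trusts; zero-knowledge techniques such as range proofs achieve the same minimal disclosure without revealing even the derived integer.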

I seek your assistance in taking these principles to the next level. I will be at the IIW conference this week, at other conferences this month, and in particular I will be meeting with other identity technologists on May 21st and 22nd in NYC after the ID2020 Summit on Digital Identity. These principles will be placed into GitHub, and we hope to collaborate with all those interested in refining them through the workshop, or through GitHub pull requests from the broader community. Come join us!


The idea of digital identity has been evolving for a few decades now, from centralized identities to federated identities to user-centric identities to self-sovereign identities. However, even today exactly what a self-sovereign identity is, and what rules it should recognize, aren’t well-known.

This article seeks to begin a dialogue on that topic, by offering up a definition and a set of principles as a starting point for this new form of user-controlled and persistent identity of the 21st century.


The following terms are relevant to this article. These are just a subset of the terms generally used to discuss digital identity, and have been minimized to avoid unnecessary complexity.

Authority. A trusted entity that is able to verify and authenticate identities. Classically, this was a centralized (or later, federated) entity. Now, this can also be an open and transparent algorithm run in a decentralized manner.

Claim. A statement about an identity. This could be: a fact, such as a person's age; an opinion, such as a rating of their trustworthiness; or something in between, such as an assessment of a skill.

Credential. In the identity community this term overlaps with claims. Here it is used instead for the dictionary definition: "entitlement to privileges, or the like, usually in written form"23. In other words, credentials refer to the state-issued plastic and paper IDs that grant people access in the modern world. A credential generally incorporates one or more identifiers and numerous claims about a single entity, all authenticated with some sort of digital signature.

Identifier. A name or other label that uniquely identifies an identity. For simplicity's sake, this term has been avoided in this article (except in this glossary), but it's generally important to an understanding of digital identity.

Identity. A representation of an entity. It can include claims and identifiers. In this article, the focus is on digital identity.

Thanks To…

Thanks to various people who commented on early drafts of this article. Some of their suggestions were used word for word, some were adapted to the text, and everything was carefully considered. The most extensive revisions came from comments by Shannon Appelcline, Dave Crocker, Anil John, and Drummond Reed. Other commentators and contributors include: Doc Searls, Kaliya Young, Devon Loffreto, Greg Slepak, Alex Fowler, Fen Labalme, Justin Newton, Markus Sabadello, Adam Back, Ryan Shea, Manu Sporny, and Peter Todd. I know much of the commentary didn't make it into this draft, but the discussion on this topic continues…

Image by John Hain licensed CC0

The opinions in this article are my own, not my employer's nor necessarily the opinions of those that have offered commentary on it.

1 Callas, Jon and Phil Zimmermann. 2015. “The PGP Paradigm”. #RebootingWebOfTrust Design Workshop.
2 Appelcline, Crocker, Farmer, and Newton. 2015. “Rebranding the Web of Trust”. #RebootingWebOfTrust Design Workshop.
3 Ellison, Carl. 1996. “Establishing Identity without Certification Authorities”. 6th USENIX Security Symposium.
4 Ellison, C. 1999. “RFC 2692: SPKI Requirements”. IETF.
5 Ellison, C., et al. 1999. “RFC 2693: SPKI Certificate Theory”. IETF.
6 Jordan, Ken, Jan Hauser, and Steven Foster. 2003. “The Augmented Social Network: Building Identity and Trust into the Next-Generation Internet”. Networking: A Sustainable Future.
7 Jøsang, Audun and Simon Pope. 2005. “User Centric Identity Management”. AusCERT Conference 2005.
8 Verifiable Claims Task Force. 2016. “[Editor Draft] Verifiable Claims Working Group Frequently Asked Questions”. W3C Technology and Society Domain.
9 Gilbertson, Scott. 2011. “OpenID: The Web’s Most Successful Failure”. Webmonkey.
10 Hassine, Wafa Ben and Eva Galperin. “Changes to Facebook’s ‘Real Name’ Policy Still Don’t Fix the Problem”. EFF.
11 Marlinspike, Moxy. 2012. “What is ‘Sovereign Source Authority’?” The Moxy Tongue.
12 Open Mustard Seed. 2013. “Open Mustard Seed (OMS) Framework”. ID3.
13 Marlinspike, Moxy. 2016. “Self-Sovereign Identity”. The Moxy Tongue.
14 Respect Network. 2016. “The Respect Trust Framework v2.1”.
15 Graydon, Carter. 2014. “Top Bitcoin Companies Propose the Windhover Principles – A New Digital Framework for Digital Identity, Trust and Open Data”. CCN.
16 Smith, Samuel M. and Dmitry Khovratovich. 2016. “Identity System Essentials”. Evernym.
17 Dahan, Mariana and John Edge. 2015. “The World Citizen: Transforming Statelessness into Global Citizenship”. The World Bank.
18 Identity Commons. 2007. “Claim”. IDCommons Wiki.
19 Allen, Christopher. 2015. “The Four Kinds of Privacy”. Life With Alacrity blog.
20 Cameron, Kim. 2005. “The Laws of Identity”.
21 Respect Network. 2016. “The Respect Trust Framework v2.1”.
22 Verifiable Claims Task Force. 2016. “[Editor Draft] Verifiable Claims Working Group Frequently Asked Questions”. W3C Technology and Society Domain.
23 "Definition of Credential".
April 25, 2016

Vittorio Bertocci - MicrosoftAzure AD at Techorama 2016 [Technorati links]

April 25, 2016 08:13 AM


It’s hard to believe, but it has been about 5 years since I last visited Belgium to talk about identity. I think the last time was for TechDays 2011. Well, thanks to the awesome Mike Martin – I am back! Smile

I will be presenting a couple of sessions at Techorama 2016, at Utopolis Mechelen. Here they are:

Identity as service – developing for the web

Tuesday, May 3 • 11:15 – 12:15

In this session I will discuss how to take advantage of Azure AD and MSA (formerly known as LiveID) to secure your web apps and access Web APIs, such as the Microsoft Graph. I will mostly focus on ASP.NET, but I will touch on other platforms as well.

Identity as service – developing for devices

Wednesday, May 4 • 11:15 – 12:15

In this session I will focus on mobile, native and rich clients – it will be the first time I dive deeper into our brand new unified developer libraries, announced just a few weeks ago at //build/. I also had a chance to chat about it for literally 3.5 minutes with some Belgian guys; you can see the recording here Smile


I should be in town from Sunday. If you want to meet before or after the sessions, for identity purposes or even just for a Duvel, ping me on Twitter or via the contact form.

Looking forward to being there!

Kuppinger ColeJun 07, 2016: Data Loss Prevention Best Practice – Applying User-driven Data Classification [Technorati links]

April 25, 2016 07:23 AM
The first step in protecting intellectual property and sensitive information is to classify it. This can be accomplished manually via author classification or automatically via content filtering. Some tools simplify the process and provide greater governance.
April 24, 2016

Rakesh RadhakrishnanThreat IN based AuthN controls, Admission controls and Access Controls [Technorati links]

April 24, 2016 04:51 PM
For large enterprises evaluating next-generation threat intelligence platforms (indicator-of-compromise detection tools) such as FireEye, Fidelis and Sourcefire, one of the KEY evaluation criteria is how much of the threat intelligence generated can act as actionable intelligence. This requires extensive integration of the Threat IN platform with several control systems in the network and at the end points. It may also include several "COA" (recommended courses of action) in the STIX XML attribute set, based on the malware detected. This approach paves the way for enterprises to mature their security architecture into one that is threat intelligence centric and adaptive to such Threat IN. This integration with a Threat IN platform and a threat analytics platform can range from:
Mobile end points and APT integration, similar to FireEye and AirWatch or FireEye and MobileIron
Kudos to FireEye for an amazing set of security controls integrations and their support for STIX. Integrating security systems together for cross-control coordination is very COOL! Threat IN standards such as TAXII, STIX and CybOX allow for XML-based expression of an "indicator of compromise" (STIX) and secure straight-through integration (TAXII). Since dozens of vendors have started expressing AC policies in XACML (from IBM Guardium, to NextLabs DLP and FireLayers Cloud Data Controller, to Layer 7 XML/API firewalls and Queralt PAC/LAC firewalls), it is only natural to expect a STIX profile to support XACML (hopefully an effort from OASIS in 2015). The extensibility of XACML allows for the expression of ACL, RBAC, ABAC, RiskADAC and TBAC policies, and the policy combination algorithms in XACML can easily extend to "deny overrides" when it comes to real-time threat intelligence. This approach will allow enterprises to capture Threat IN and implement custom policies based on that IN in XACML, without one-off integrations and vendor LOCK IN! Similar to the approach proposed by OASIS here. It's good to see many vendors supporting STIX, including Splunk, Tripwire and many more.
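As a rough illustration of that straight-through idea, the sketch below turns an indicator of compromise into a deny-overrides rule. The element and attribute names are deliberately simplified assumptions; real XACML 3.0 uses namespaced Policy/Rule/Target/Match elements with DataType and combining-algorithm URIs, and real STIX carries far richer structure.

```python
from xml.etree import ElementTree as ET

def ioc_to_xacml_deny(ioc_type: str, ioc_value: str) -> str:
    """Generate a simplified deny rule from an indicator of compromise.

    Hypothetical, abbreviated schema for illustration only.
    """
    policy = ET.Element("Policy", {
        "PolicyId": f"threat-in-{ioc_type}-{ioc_value}",
        # Deny-overrides so threat intelligence wins over existing grants.
        "RuleCombiningAlgId": "deny-overrides",
    })
    rule = ET.SubElement(policy, "Rule",
                         {"RuleId": "block-compromised", "Effect": "Deny"})
    target = ET.SubElement(rule, "Target")
    match = ET.SubElement(target, "Match", {"Attribute": ioc_type})
    match.text = ioc_value
    return ET.tostring(policy, encoding="unicode")

# A SIEM passes an IOC (e.g. a compromised source IP) to the policy layer.
print(ioc_to_xacml_deny("source-ip", "203.0.113.7"))
```

The design point is the one made above: because the policy is generated, not hand-wired, any control that consumes XACML can react to the IOC without a one-off vendor integration.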

Why do we need such Integrated Defense?

Simply having breach detection capabilities will not suffice; we need both prevention and tolerance as well.

One threat use case model can include every possible technology put to use: for example, cloneable JSON bots (JSON over XMPP) infused from fast-flux domains, shipped Trojan Zebra style (parts of the code delivered randomly), assembled at a pre-determined time (leveraging a zero-day vulnerability) along with an external commander, appearing in the server-side components of the network, making secure indirect object references (the opposite of IDOR) because they have already captured the direct object names (over time, via big data bots doing reconnaissance), to exfiltrate sensitive data in seconds and go dormant in minutes. One malware use case that is a bot/botnet, C&C, zero day, Trojan Zebra and an APT leveraging big data, all combined.

You don't know what hit you (at the application layer, it is distributed code via distributed injections that was cloned and went dormant in seconds), where it came from (not traceable to IP addresses or domains, thanks to fast-flux domains), how it entered (Trojan Zebra), when it came alive (zero day), how persistent it can be (cloning), what path it took (distributed) or what data it stole.

This is the reality of today! It is quite difficult to catch in a VEE (virtual execution environment) as well. What is needed is a combination of advanced threat detection (threat intelligence and threat analytics), responsive/dynamic threat prevention systems (Threat IN based access controls, including format-preserving encryption and fully homomorphic encryption) and short-lived, stateless, self-cleansing, threat-tolerant systems as well (which can include web containers and app containers). SCIT-like technology used for continually maintaining high-integrity network security services (rebooting from a trusted image) is very critical, to ensure that the preventive controls at the data layer will work (consistent, cohesive and coordinated data-object-centric policies in DLP, DB firewalls and data tokenization engines).

If data is the crown jewel the thieves are after, imagine: you know your community gates are breached and you have a warning. You would lock down your house and ship that "pricey diamond" via a tunnel to the east coast, wouldn't you? Even if the thief barges through your front door, gets to the safe box and breaks it open, it is ONLY to find the diamond GONE! (That's intrusion-tolerant design.) Fortunately, in the digital world that's quite possible: a data center is identified with an IOC by a FireEye or Fidelis; instantly the AC policies from AuthN to end point to app and data change, while at the same time the DBMS hosting the sensitive data (replicated to an unbreached DR site) is quiesced and the data is purged.

Dynamic Defensive Designs #1

Rakesh RadhakrishnanTrending Towards Threat Centric ESA [Technorati links]

April 24, 2016 04:51 PM
When I started this blog titled "Identity Centric ESA" almost a decade back, my motive was to highlight how Enterprise Security Architecture, given the hyper-distribution in IT (cloud models and mobile end points), had to evolve from "Network Centric Security Architectures" heavily focused on perimeter defense (outside in) and "Application+Data Centric Security Architectures" heavily driven by compliance requirements (inside out) into one that is "IAM Centric" (Identity & Access Management), with the advent of SAML, XACML, OAuth, etc. Integrated identity intelligence has become pervasive in cloud stacks today and in networking (such as Cisco ISE), essentially maturing the ESA models that pervaded the 80s and 90s. Now, in 2015-2020, it is obvious that ESA will be driven by a threat-centric model (like Cisco's slogan, "Threat Centric Security"), and by that what I mean is the following:

  1. NG SIEM tools with Big Data technologies will be Threat Intelligence Aware and Risk/Behavior Intelligence Aware (making them STIS - Security Threat IN Systems)
  2. Threat Analytics with Big Data Technologies will feed this Threat Intelligence (STIX) to all Security Controls
  3. This Intelligence driven ESA - will involve Real Time Response (actionable intelligence) and be fine grained in terms of recommended set of actions
  4. Threat IN will be integrated into the IAM stack for "design time", "provision time" and "run time" IAM control response
  5. Threat IN will be integrated with Network Security controls and Application+Data Controls (pervasive integration)
  6. This creates the full loop integration that is required in security systems and is IN (intelligence driven)
  7. Policy Automation and Dynamic Policy Generation+Combinations will mature the Enterprise Security Architecture to respond real time (hence STIX- XACML specs) 
All industry verticals (health care, transportation, banking and more) will benefit from this NG maturity model in Enterprise Security Architecture (watch out for a book on Threat Centric ESA in 2015). The threat-centric model is a natural evolution from the earlier "network centric" and "identity centric" models and builds on top of those approaches. Like the question of what is the center of a spherical ball, from a multi-dimensional perspective the threat-centric model actually leverages the earlier models for a better level of capability and maturity. It will be awesome to watch this take shape rapidly between 2015 and 2020!

Rakesh RadhakrishnanShort Lived Stateless and Self Cleansing [Technorati links]

April 24, 2016 04:51 PM
A few weeks back I blogged about Dynamic Defensive Design Patterns for application security. Check out Waratek, one of the innovative companies at RSA 2015 (with multiple approved patents), for self-protecting, self-diagnosing and self-testing Java apps. Check out their video and white paper. It is similar to machine learning at the container level.

Rakesh RadhakrishnanDominant Defensive Design Principles 5 [Technorati links]

April 24, 2016 04:51 PM
End points, the entry points of today, have robust technologies to embrace these ideas: zero-footprint and stateless devices and password/cookie-free designs.

One can lose a device, yet no data is lost, no security-sensitive apps reside on it (zero footprint and stateless) and there are no end point session cookies with which to hijack a session (dumb display devices or smart display devices).

A sample end to end security process flow includes;

Step 1  Device provision time (SIM, IMEI, SW, etc.)
Step 2  Authentication (FIDO based & SAML profiles)
Step 3  Isolation (end point shim) + VPN layer, location, secure browser, RDP, IPsec, etc.
Step 4  MGW/STS OAUTH tokens for native app SSO
Step 5  Mobile DLP (mobile content protection) via ICAP
Step 6  Inbound and outbound URI validation against malware (ICAP)
Step 7  Mobile APT (client side and server side)
Step 8  Run-time mobile apps in data center
Step 9  ISE generates posture (suspect, quarantine and good)

and the respective maturity levels measured by;

Level 1:  Basic MDM and mobile malware protection – access ONLY to trivial services
Level 2:  Mobile MFA (FIDO) with mobile SSO (for native mobile apps using OAUTH API)
Level 3:  Mobile end point posture based network admission controls (mobile and VPN layer and network context: within enterprise Ethernet LAN, international locations, WiFi guest LAN, etc.)
Level 4:  Mobile isolation (SHIM) driven VPN that leverages 3 and 4 + RDP (secure remote desktop)
Level 5:  Comprehensive and consistent integrated control – end to end auditable – globally and continually optimized

75% of advanced malware attacks leverage a password (a credential) as the attack vector. With the FIDO Alliance's work and IAPP's efforts, in a cloud model and a BYOD world we have no choice but to move towards strong multi-factor recognition as the way forward.

Rakesh RadhakrishnanDominant Defensive Design Principles 4 [Technorati links]

April 24, 2016 04:51 PM
All IAAS stacks, in a private cloud or a public cloud, are hosted in a network (the data center network). While a full-blown network security design and architecture will consist of network-to-network (NNI) interface designs like GRE/L2TP, remote LAN designs, remote network access and more, the focus of these defensive designs is on the data center networking that acts as the host for IAAS. Fundamentally, the dominant design principles are "defense in depth" from a perimeter protection perspective and "context-driven admission control" from a data center (internal routing) perspective. The 9 layers, or 9 steps, to securing the perimeter that leverage the defense-in-depth principle include:

Step 1  Design time – end to end vulnerability scans and secure configurations
Step 2  End to end pen tests and tuning based on test results
Step 3  Stealth mode – sniffer and scrubber for network DDOS mitigation (can be a Sec AAS)
Step 4  External firewall – port, packet and protocol level filters and DMZ rules
Step 5  Network IDS and IPS – between external and internal firewalls, for intrusions
Step 6  Internal firewall setting up L3 (ASA/AnyConnect) and L5 (TLS) VPNs after device validation
Step 7  Network services DDOS mitigation – NTP, DHCP, etc.
Step 8  Identity and application aware (trust TAGS) based data center routing (see risk based routing in the next 3 slides)
Step 9  SIEM and continuous internal integrity checks and egress web proxy

Fundamentally, the integrity of the packets keeps increasing (ingress) as they pass one step after the other. Step 1, end to end vulnerability scans, for example, might seem like a design time function; however, with vendors offering on-premises and secure SAAS VM tools, it becomes a continuous process. So does network penetration testing. Network DDOS mitigation is typically a security-as-a-service offering today that the cloud DC operator has to offer at a minimum.

The second dominant principle is context-aware next-generation admission control within the data center. Each subject's user context, device context and access network context (WiFi, wired LAN, location, etc.) are all taken into account when admission into specific VMs in VLANs is managed by an identity and risk aware engine. This today must be a standard offering of cloud IAAS vendors and their respective data centers.

This type of approach is also discussed in a Cisco Arbor BYOD security paper. What's also critical to understand is the extent to which these network-facing security systems integrate with one another. This includes MFA (multi-factor authentication) platforms like SecureAuth with Cisco ASA and AnyConnect, RSA 2FA with SecureAuth (as one 2FA mechanism), Cisco ASA/AnyConnect with Citrix NetScaler (for VDI and TLS-level VPN into VDI), all these contexts carried over to Cisco ISE, and more.

Rakesh RadhakrishnanDominant Defensive Design Principles 3 [Technorati links]

April 24, 2016 04:50 PM
Once you have apps developed with secure-by-design principles and data objects with privacy-baked-in principles, you are looking at hosting them in a private or a public IAAS, or a hybrid (for example, a private cloud for production and a public cloud for DR). The two primary principles here are "identity in the STACK" and leveraging the abstraction between the hypervisor and the virtual machines.

A sample end to end flow of security processes in an IAAS model - such as OpenStack and Cloudfoundry includes;

Step 1  The hypervisor goes through IDS/IPS (intrusion tolerance) at boot time (the hypervisor is accessible only via a segregated interface to the control networks)
Step 2  When a Linux or Windows OS boots up as a VM on the hypervisor, appropriate malware/virus checks are completed
Step 3  Virtual machines (on ESX) are controlled with NSX-like firewalls for protocols, communication, processes and more (secure software-defined networking)
Step 4  All privileged access management to the OS is handled via a command control firewall that handles RBAC and XACML (like BeyondTrust)
Step 5  All 4 layers report log data for FIPS forensics (traceability) and are identity and policy aware
Step 6  All privileged access management to the DBMS is handled via a DB firewall that handles RBAC and XACML
Step 7  All privileged access is routed to a specific network segment (control plane) via Cisco ISE-like solutions (for end to end id/policy in the stack), leveraging separate paths (NICs, IP addresses, routes, etc.). The idea is that application traffic cannot reach the virtual machines and hypervisors beneath it (command injections cannot execute: LDAP injection, SQL injection, OS command injection and more).
The 5 levels of maturity one can attain in the IAAS security space are:

Level 1:  Silos of layered malware firewalls – hypervisor centric, OS/VM centric, JVM centric, network centric, DB FW, etc.
Level 2:  Integrated firewalls – for example, SQL injection or command injection based WL and BL across firewalls – cross coordination
Level 3:  Fine grained command level AC at OS and DBMS (privileged administrators)
Level 4:  Identity integrated into the stack for end to end forensics
Level 5:  Comprehensive and consistent automated policies – end to end auditable – globally and continually optimized at the infrastructure layers

While the virtual machines run the apps and the data processing for the secure SAAS app, their addressable interfaces are segregated to their own NICs and paths, which are typically NAT'ed to a publicly resolvable IP address. There typically will be no path for advanced threats to permeate into the hypervisor layer via these applications. Everything from the VM and above, as a STACK, can potentially be self-cleansing when designed as short-lived and stateless services.
All privileged administrative tasks can take a different path and will comply with ISO 27002 security standards for data center operations and administration. They need not take a Cisco ISE-like NG NAC path and can involve lower-level networking (such as L2TP).

Rakesh RadhakrishnanDominant Defensive Design Principles 2 [Technorati links]

April 24, 2016 04:50 PM
The second area is applications (web applications, web services, REST services, etc.). The majority of the defensive design mechanisms are built in at design and development time and validated with vulnerability testing and penetration testing tools (such as HP WebInspect), in defense layers 1 and 2. For example, if the application code has NO data abstraction layer and makes frequent insecure direct object references, there is no point in inserting a run-time application firewall. The design has to accommodate a DAL, and as part of penetration testing all insecure direct object references have to be removed from the code. These types of applications are also STATELESS (REST APIs) and hence can accommodate self-cleansing code, by instrumenting the code to quiesce itself and restart from a trusted clean image (using SCIT-like technologies).
At run time, typical mitigation controls include:

Step 1   An F5 BIG-IP-like LB terminates TLS/SSL and decrypts the payload
Step 2   An F5 ASM-like OWASP FW inspects the payload (HTML, scripts, XML, JSON, SQL, SOAP, etc.)
Step 3   A Layer 7-like XML/API FW inspects the XML payload for conformance, schema validation and API
Step 4   Layer 7 interacts with SiteMinder to establish an SM session based on a SAML assertion
Step 5   Layer 7 interacts with an Axiomatics-like FGES to make authorization calls for RBAC
Step 6   The validated XML payload is sent to the Apache web server
Step 7   The Apache web server processes the request and sends it to the REST API in WebLogic
Step 8   WebLogic web plus REST application executions
Step 9   WebLogic makes authorization calls to Axiomatics (ABAC and method level)
Step 10  The processed XML is sent to JMS
Step 11  The back end business process retrieves the XML message
Step 12  The back end business process logic executes
Step 13  The back end business process publishes an XML message
Step 14  JMS dequeues the XML message
Step 15  The processed XML is re-used by REST calls
Step 16  The Apache web server sends the message to F5 ASM via Layer 7 for validation
Step 17  F5 BIG-IP encrypts the outbound message
Step 18  The DAL converts XML to SQL to store in the DB for audit
Step 19  Consolidated and stored in the DB repository, including logs
Step 20  The DB firewall is leveraged for masking of PII
Step 21  The TLS payload is also shipped to the Sec SAAS vendor for APT/ATP (from a sensor)
Step 22  The Sec SAAS vendor inspects the code for advanced malware (bots, botnets, C&C, APT, etc.)
Step 23  If an IOC is detected, the respective OWASP/WAF FW, XML/API FW and DB FW are notified
Step 24  Short-lived, stateless, self-cleansing web containers (code integrity)
Step 25  Short-lived, stateless, self-cleansing app containers (code integrity)

This type of SAAS security design is well aligned with the idea of "password-FREE, cookie-FREE, zero-footprint, stateless end points". Everything from the client UI code to the API code resides in the SAAS space and is delivered post security checks (role-based UI rendering, for example). Steps 21 to 25 are critical for security-sensitive apps that are also targets of advanced threats, and they can leverage a virtual execution environment and self-cleansing application containers.
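The DAL and IDOR point made at design time can be sketched as an indirect reference map: the client only ever sees opaque, per-session tokens, never the direct object names. This is a minimal, hypothetical sketch (class and method names are my own), not a full data abstraction layer:

```python
import secrets

class IndirectReferenceMap:
    """Per-session map of opaque tokens to direct object references,
    so raw keys (file paths, row IDs) never reach the client."""

    def __init__(self):
        self._by_token = {}

    def reference_for(self, direct_ref: str) -> str:
        # Issue a random, unguessable token in place of the real key.
        token = secrets.token_urlsafe(8)
        self._by_token[token] = direct_ref
        return token

    def resolve(self, token: str) -> str:
        # Unknown tokens fail closed instead of leaking key structure.
        if token not in self._by_token:
            raise PermissionError("unknown object reference")
        return self._by_token[token]

session_map = IndirectReferenceMap()
token = session_map.reference_for("/accounts/4711/statement.pdf")
assert session_map.resolve(token) == "/accounts/4711/statement.pdf"
```

Because the map is built per session from objects the user is entitled to, guessing another user's object name buys an attacker nothing.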

Rakesh RadhakrishnanDominant Defensive Design Principles [Technorati links]

April 24, 2016 04:50 PM
For 10 years (2001-2010) I worked on integrating IAM (Identity & Access Management) with a broad spectrum of security tools (at Sun/Oracle), both internal and partner solutions. Now, for 5 years, I have had the privilege of seeing enterprise-wide patterns in security designs (in banking, healthcare, etc.). What I see as the dominant defensive design principles are described below and followed up with one blog post on each area.

Core data layer: The notion of privacy baked in and intrusion-tolerant data should be the norm today. It's the Achilles' heel. Mature organizations know what their security-sensitive data is and where it resides. Data need not be decrypted at any point in time (collection, transmission, storage, etc.), and even when decrypted in memory at process time, only via trusted execution (by processes that are trusted by an execution engine). Security-sensitive data collection is minimized and de-identification is the norm. Policies are embedded within the data objects, and data purging and data movement rules must allow for intrusion tolerance (storage is network-disconnected from a data center where intrusions are detected). DDS (data-defined storage) plus big data technologies can be used for such movements. As suggested in this article by Raja Patel on data identification and data classification, the 9 layers of defense are described in the graphic.

Typical processes involve:

Step 1    end point UI Data Collection Integrity checks
Step 2   end point Device DLP
Step 3  Data integrity during data transmission (VPN and Message/XML sec)
Step 4   Data validation by an XML Firewall (parameter validations)
Step 5   Cloud Data Tokenization (for inbound and outbound with FPE and FHE)
Step 6  IRM for Data in Documents
Step 7   DRM for Data in Media files
Step 8  DB Firewall for Data in RDBMS and non structured DBMS
Step 9   XACML policies for Tagged data (PII, PCI and PHI)

The 5 levels of Maturity typically in this space are:

Level 1:  Non-externalized – built-in legacy entitlement code within applications, some basic RBAC
Level 2:  Externalized ABAC policies (in the respective application -roles- and DB entitlement -tags- FW)
Level 3:  Externalized ABAC (context) policies augmented with Risk IN
Level 4:  All risk-sensitive applications with security or compliance sensitive data (PII, PCI, PHI, etc.) use externalized risk-based entitlement augmented with password-free, cookie-free, stateless, zero-footprint client code
Level 5:  Comprehensive and consistent automated policies – end to end auditable – globally and continually optimized (continuous loop of entitlement with SIEM risk intelligence and threat intelligence based policies – STIX and XACML)
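The cloud data tokenization named in Step 5 can be sketched in vault style: sensitive values are swapped for random surrogates of the same shape, and only the vault can map a token back. This is an illustrative assumption of my own, not a product's API; production deployments would use format-preserving encryption (e.g. FF1) or an HSM-backed vault rather than an in-memory dict.

```python
import secrets

class TokenVault:
    """Vault-style tokenization sketch: a random surrogate replaces the
    sensitive value, so downstream systems never hold the real data."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, pan: str) -> str:
        # Random digits of the same length, so downstream schemas and
        # field validations still fit (a poor man's format preservation).
        token = "".join(secrets.choice("0123456789") for _ in pan)
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only callers with access to the vault can recover the value.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert len(token) == 16
assert vault.detokenize(token) == "4111111111111111"
```

The intrusion-tolerance property follows from the design: a breached application tier holds only surrogates, while the vault (or FPE key) lives behind the DB firewall and XACML policies described above.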

Rakesh RadhakrishnanTrending Towards Threat based Access Controls [Technorati links]

April 24, 2016 04:50 PM
I blogged a few years back about the opportunity large IT companies (IBM, Cisco, Oracle, Microsoft, etc.) had around supporting XACML natively, or XACML expressions of policies for import and export. I recently blogged about STIX IOC as an input for dynamically generated policies as well, across these tiers. Today the majority of SAAS apps are REST API based, with a JSON or XML construct for data exchange. These types of application designs offer the opportunity to deliver device-based and role-based UI rendering, role-based and attribute-based access to application logic, TAG- and metadata-based access to data in databases, and more: ACL, CSV, RBAC, ABAC and TagBAC, all of which can be expressed in XACML.

I am delighted to see IBM taking the lead in terms of supporting XACML in:

a) Secure Access Manager/Gateway for Mobile end points, that expresses these policies in XACML.
b) Embedded XACML PEP/PDP in DataPower XML/API Gateway (& Data Tokenization)
c) XACML based PDP for Application's fine grained Access Controls -Tivoli PM
d) XACML support in IBM Guardium DB firewall for extraction, access and exception policies
e) potential support for XACML in Secure Network Protection (NG context admission controls)

If a SIEM tool (like Splunk or QRadar) can pass STIX IOCs to these XACML products, we get true dynamic threat-intelligence-based defense. This approach infuses NEW LIFE into XACML, which has the inherent capability to be a dynamic policy automation language, as the policies themselves are auto-generated (based on metadata and relevant policy combinations). It is the only distributed policy model to support such dynamism.

Yeah RIGHT, XACML is dead... What a foolish controversy and debate!

I did a keynote last year at a CISO event calling on all CISOs to demand such standards from the vendors! Wake up, CISO world! Kudos to IBM for all the efforts in this direction!

The threat-intelligence-driven STIX IOC (Indicator of Compromise) can be around an end point IP address, a server/VM IP address, an end point device posture or a server/VM posture, an XML object or a SQL statement, or a URI/URL or a specific API, and that IOC can act as the intelligence data to respond accordingly with policy changes.

This is not rocket science: similar to the DLP-NAC XACML profiles, you can expect to see STIX IOC XACML profile specifications from OASIS in 2015!

April 23, 2016

Drummond Reed - CordanceZootopia Is My Happy Place [Technorati links]

April 23, 2016 07:14 AM

I don’t think I’ve had such a good time at the movies since Little Miss Sunshine. If you just want to smile—and laugh—and clap—and feel like dancing all over the theatre—don’t miss this. And don’t watch it at home (which you will want to do a thousand times) until you’ve had the full movie theatre experience.

As my wife and I were walking out, one of the ushers said, “This movie should be required viewing in America.” To which I said—with a completely straight face, “I can’t believe it only got 98% on Rotten Tomatoes”.

It’s a 100.

April 22, 2016

Kuppinger ColeEverything about Consumer Identity Management [Technorati links]

April 22, 2016 09:59 AM
By now, most companies are able to handle their employees' identities securely. Yet managing customer identities, whose numbers often run into the millions, still poses a challenge for most companies. More identities, access via social logins, more flexibility in authentication (for example via the capabilities built into smartphones), requirements for risk mitigation in eCommerce, integration with secure payment systems, customer retention, new business models, and new requirements for interacting with customers: the number of challenges for companies has never been greater.

Kuppinger ColeMulti-Factor, Adaptive Authentication Security Cautions [Technorati links]

April 22, 2016 09:00 AM

by Ivan Niccolai

KuppingerCole has written previously about the benefits of adaptive authentication and authorization, and the need for authentication challenges that go beyond the password. These benefits fall largely into the categories of an improved user experience, since the user is only presented with multiple authentication challenges based on risk and context, as well as improved security, precisely due to the use of multi-factor, multi-channel authentication challenges.

However, these multi-factor authentication challenges only offer additional security if the multiple channels used for the authentication challenges are sufficiently separated. Some examples of common approaches to multi-factor authentication include one-time passwords sent via SMS message, or smartphone applications which function as soft tokens generating time-limited passwords. These are generally a good idea and do offer additional security benefits. But if the application that depends on multi-factor authentication as an additional security measure is itself a mobile application, then the lack of separation between the channels used for multi-factor authentication vitiates the possible security benefits of MFA.
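Those soft tokens typically implement the standard TOTP algorithm (RFC 6238): server and app share a secret and each independently derives a short-lived code from the current 30-second window, which is exactly why a compromised phone that holds the secret or displays the code undermines the second factor. A minimal sketch of the algorithm:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (HOTP)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the server and the soft token derive the same code from the shared
# secret and the current time window, with no channel separation involved.
assert totp(b"shared-secret", at=59) == totp(b"shared-secret", at=31)
```

Note the security property: the code proves possession of the secret, not of a separate channel, so when the secret and the protected application live on the same compromised device, the "second factor" collapses.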

Security researchers have recently demonstrated how a compromised Android or iOS smartphone can be manipulated by attackers to capture the additional step-up authentication password from the smartphone itself. This is one of the outstanding challenges of anywhere computing. Another attack that is unaffected by the additional security provided by multi-factor authentication is the man-in-the-browser (MITB) attack. With this type of attack, a malicious actor gains control of a user’s browser via a browser exploit. The user then logs into, for example, online banking, and successfully completes all necessary multi-factor authentication challenges. When the user performs a high-risk action such as an electronic funds transfer, the hijacked browser can be used by the attacker to substitute the form data the user is inputting. In this example, the sum could be redirected to a stranger’s bank account.

With the MITB attack, the user is seen by the accessed service as fully authenticated, but since the browser itself has been compromised, any action the user could have done legitimately can also appear to have been done by the attacker.

With a user’s smartphone already receiving emails and being used for browsing, the additional use of smartphones for multi-factor authentication must be carefully considered. Otherwise, it only provides the illusion of security. These attacks do not make adaptive, multi-factor authentication useless, but they do show that there is no single mitigation approach that allows an organization to ignore the ever-evolving cybersecurity threat landscape.

Tactical security approaches here include careful selection and separation of authentication channels when MFA is used, as well as the use of additional web service and browser scripting protection approaches which have been developed to mitigate MITB attacks.

Yet the strategic solution remains an approach that is not solely focused on prevention. With the digital transformation well underway, it is difficult to control employee endpoints, and almost impossible to control consumer endpoints. A strategic, holistic security approach should focus on prevention, detection and response, an approach known as Real-Time Security Intelligence. It should also focus on data governance, regardless of the location of the information asset, an approach known as Information Rights Management.

Unknown and sophisticated attack vectors will persist, and balancing security and user experience does remain a challenge, but the RTSI approach recognizes this and does not ever assume that a system or approach can be 100% immune to vulnerabilities.

April 21, 2016

GluuGluu Server CE 2.4.3 is now available! [Technorati links]

April 21, 2016 06:48 PM

Today we are pleased to announce general availability of the Gluu Server Community Edition (CE) 2.4.3, our industry leading free open source software (FOSS) identity and access management (IAM) platform. 

Deploy the Gluu Server 2.4.3

Upgrade to the Gluu Server 2.4.3

Here are some of the notable updates included in Gluu Server CE 2.4.3:

A full list of changes and updates can be found in the Gluu Server 2.4.3 release notes.

Gluu Server 2.4.3 is our best release yet.

We hope you like it! 

Questions or Feedback?

Use our support portal or schedule a meeting!


Kuppinger ColeExecutive View: Balabit syslog-ng - 71571 [Technorati links]

April 21, 2016 12:51 PM

by Alexei Balaganski

The Balabit syslog-ng product family provides technologies that allow collecting, processing, and analyzing log events from a variety of different sources, going well beyond the standard syslog component. The products are relevant both as a complement to and a replacement for standard SIEM solutions.

Kuppinger ColeExecutive View: Balabit Blindspotter - 71572 [Technorati links]

April 21, 2016 10:06 AM

by Alexei Balaganski

Blindspotter is a real-time analytics solution, which identifies external and internal security threats by detecting anomalies in user behavior.

April 20, 2016

ForgeRockPrivacy Matters [Technorati links]

April 20, 2016 09:12 PM

Pew Research released an interesting report earlier this year focusing on American attitudes towards privacy and information sharing in the digital world. Every day, people have to choose if they want to share their personal information with businesses and governments in exchange for access to a product or service. Pew found that willingness to share personal data depended greatly on the context. For many respondents, for example, it was important that they could trust the organization before they were willing to share data with them. Even then, there were concerns over how secure their personal information would be in the hands of the business or government agency. Pew also found that Americans don’t trust the privacy and security of records in the digital age and are concerned about how their data is collected, stored and accessed, and for how long it is retained. The safety and security of personal data has become a high profile subject for Americans following major data breaches at companies like Anthem Health and AT&T.

Pew presented survey participants with different scenarios involving data sharing including examples focused on healthcare, insurance, and IoT. It was interesting to see the variability between the scenarios in how willing people were to share their data. Check out the full report for detailed results and the wide range of explanations that people provided.

Why the difference in opinion on data sharing across these scenarios? Pew found that a significant number of people made the choice on whether to share data based upon three factors: the attractiveness of the trade-off (sharing data in exchange for a product or service), whether they trusted the organization with their data, and what might happen after the organization had their data. Willingness to share data greatly depended on whether people thought that they could trust the organization with their information. Without trust, people were more likely to withhold data. A significant concern was that their data would be used for purposes other than the one initially intended. Many of the focus groups that Pew spoke to were concerned about the difficulty of understanding what information is being collected online, who is collecting the data, and how it is used.

A striking quote from one of the respondents was “The data isn’t really the problem. It’s who gets to see and use that data that creates problems”. At ForgeRock, this really hits home. Since our inception we’ve been focused on making sure organizations know “who’s who, what’s what, and who gets access to what” (our favorite quote from Scott McNealy). Today, with our ForgeRock Identity Platform and features such as User-Managed Access (UMA), we can help customers do this as well. With UMA, organizations can give customers control over who can access their personal data, for how long, for what purpose, and over what device. This can give them greater peace of mind that their data will not be misused and build a trusted, long-lasting relationship with customers.

The ForgeRock Identity Platform can help organizations to build digital trust with customers by building secure relationships between users, devices, and connected things. ForgeRock provides organizations with the tools needed to address the privacy and information sharing concerns expressed in the Pew report. For example, we’re currently working with Philips to secure patient data collected by their ecosystem of connected medical devices. Encouraging data sharing is important for organizations because it increases operational efficiency, serves as the foundation for many modern products and services, and provides great insight into customer preferences that is valuable for driving revenue. To do this, you need your customers to trust you.

Complementing Pew’s research on American attitudes towards privacy, we commissioned TechValidate to conduct a global data privacy survey of the extended ForgeRock community of IT professionals. Among the results that we discovered:

For the complete report, download it here.

For more information about how the ForgeRock Identity Platform can help you to build trust with your users in the digital age, check out our website.

To learn more about User-Managed Access visit

Confirming Pew’s finding that trust is an important factor in facilitating data sharing, Accenture has identified “digital trust” as one of the key trends shaping the market in the coming years.

Our Digital Trust Pledge

Blog: The Importance of Trust

The post Privacy Matters appeared first on

Matthew Gertner - AllPeersThe growing need for IT-trained staff in businesses [Technorati links]

April 20, 2016 05:46 PM

IT-trained staff are a must in today's economy ... photo by CC user PeteLinforth on pixabay


IT has changed the landscape for businesses around the world. Businesses promote themselves on the internet, sell goods and services, place and take orders online, and access the latest data when drawing up new strategies. This means that your business, as a modern enterprise, needs to have people on board with knowledge of IT if it is to stay ahead of the competition.

If businesses are to fight off the chasing pack, then they need to have staff that are well trained in various skills, and in particular IT. There are still a significant number of businesses, especially small enterprises, that are not training their employees, and that is a big mistake. If your business is in that position, you should take note and start training your staff.

Many companies nowadays insist that new recruits have some basic IT or computing knowledge, but as the technology becomes increasingly sophisticated, they need to be kept up to date with the latest changes. Then there are the older members of staff who may have joined a particular company before computing and IT became a vital component in the modern economy. Unless they receive at least basic IT training, these people are, quite frankly, holding your company back. Also, should they decide to move on to pastures new, they will find their lack of IT knowledge will be a severe handicap when seeking fresh opportunities.

Obviously, the best way to ensure that your company does not fall behind in relation to IT is to adequately train staff in this discipline. You can send staff on training courses or have them trained in-house. No doubt, this will be expensive, but in the long term, it is money well spent. Also, with IT becoming ever more sophisticated, you should consider periodic refresher courses.

One problem that might prevent your company recruiting skilled IT staff is the fact there is a skills shortage in this field. It is a problem that is worsening, particularly in areas such as cybersecurity. This is likely to mean that if you manage to get hold of highly trained IT specialists, it could put a strain on your wage bill. One way around this predicament is to use the services of IT contractors. IT contractors can be used while you are waiting for staff to be trained, or whenever you feel the need for their specialist services.

IT contractors are freelancers, and many of them honed their skills while working as employees for companies. They are highly trained for their roles, so if you hire one, you will not need to be concerned about having to retrain them. Their daily rates tend to be higher than IT-trained employees, but you only have to pay them for the duration of the contract. The tax implications of utilizing a contractor will need to be explored, and there are companies such as Crystal Umbrella that can advise on this.

Because IT continues to play such a key role in business, it is highly important that enterprises such as yours have IT-trained staff on the payroll.

The post The growing need for IT-trained staff in businesses appeared first on All Peers.

Nat SakimuraOn "the music video that may make enemies of every MV creator" and the essence of creation [Technorati links]

April 20, 2016 04:42 PM

An article[1] titled "A music video that may make enemies of every MV creator" came my way. It introduced "Music Video", a music video by Okazaki Taiiku.









Okazaki Taiiku: BASIN TECHNO (first-press limited edition, with DVD)

New From: ¥2,296 In Stock

This title will be released on May 18, 2016.

Copyright © 2016 @_Nat Zone All Rights Reserved.

Kuppinger ColeFueling Digital Innovation with Customer Identities [Technorati links]

April 20, 2016 12:04 AM
Identity management has become far more than a key component for defining security and access controls. Understanding customers’ identities through all of their interactions with an organization is key to developing strong and enduring relationships across multiple channels. Combining information from various sources (registration forms, devices, social accounts, etc.) to provide optimal user experiences is now a prerequisite for customer-facing enterprises.

April 19, 2016

Bill Nelson - Easy IdentityPerforming Bulk Operations in OpenIDM [Technorati links]

April 19, 2016 02:27 PM

OpenIDM does not support bulk operations out of the box. One way to do this, however, is to obtain a list of IDs that you want to perform an operation on and then loop through the list, performing the desired operation on each ID.

Yes this is a hack, but let’s be honest, isn’t life just one big set of hacks when you think about it?

Here are the steps.

Suppose for instance that you want to delete all managed users in OpenIDM that have a last name of “Nelson”. The first step is to obtain a list of those users; which you can easily do using a cURL command and an OpenIDM filter as follows:

curl -u openidm-admin:openidm-admin 'http://localhost:8080/openidm/managed/user?_queryFilter=(sn+eq+"Nelson")&_fields=_id&_prettyPrint=true'

This returns a listing of all managed objects that match the filter as follows.

  "result" : [ {
    "_id" : "ed979deb-2da2-4fe1-a309-2b7e9677d931",
    "_rev" : "5"
    "_id" : "581c2e13-d7c4-4fff-95b8-2d1686ef5b9c",
    "_rev" : "1"
    "_id" : "1295d5db-c6f8-4108-9842-06c4cde0d4eb",
    "_rev" : "3"
  } ],
  "resultCount" : 3,
  "pagedResultsCookie" : null,
  "totalPagedResultsPolicy" : "NONE",
  "totalPagedResults" : -1,
  "remainingPagedResults" : -1

But most of the data returned is extraneous for the purposes of this exercise; we only want the “_id” values for these users. To obtain this information, you can pipe the output into a grep command and redirect the output to a file as follows:

curl -u openidm-admin:openidm-admin 'http://localhost:8080/openidm/managed/user?_queryFilter=(sn+eq+"Nelson")&_fields=_id&_prettyPrint=true' | grep "_id" >> bulkOperationIDs.txt

This will produce a file that looks like this:

     "_id": "ed979deb-2da2-4fe1-a309-2b7e9677d931",
     "_id": "581c2e13-d7c4-4fff-95b8-2d1686ef5b9c",
     "_id": "1295d5db-c6f8-4108-9842-06c4cde0d4eb"

(yes, there are leading spaces in that output).

You are still not done yet, as you need to strip off all the extraneous characters and get the file down to just the values of the “_id” attribute. You can probably devise a cool sed script, or find an awesome regular expression for the grep command, but it is just as easy to simply edit the file and perform a couple of global search/replace operations:

:1,$ s/ "_id": "//g
:1,$ s/",//g

The above example demonstrates a global search/replace operation in the “vi” editor – the best damn editor on God’s green earth!
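If you’d rather skip the grep/sed gymnastics entirely, a short Python sketch (an alternative to the approach above, assuming the response shape shown earlier) can parse the JSON response and emit just the IDs:

```python
import json

def extract_ids(response_text):
    """Return the _id values from an OpenIDM query response."""
    payload = json.loads(response_text)
    return [entry["_id"] for entry in payload.get("result", [])]

# Sample response in the shape returned by the queryFilter request above
sample = '''{
  "result": [
    {"_id": "ed979deb-2da2-4fe1-a309-2b7e9677d931", "_rev": "5"},
    {"_id": "581c2e13-d7c4-4fff-95b8-2d1686ef5b9c", "_rev": "1"}
  ],
  "resultCount": 2
}'''

for _id in extract_ids(sample):
    print(_id)
```

Pipe the cURL output into a script like this and redirect it to the same file, and you end up with one clean ID per line.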

However you approach it, the goal is to get the file to consist of only the IDs as follows:

ed979deb-2da2-4fe1-a309-2b7e9677d931
581c2e13-d7c4-4fff-95b8-2d1686ef5b9c
1295d5db-c6f8-4108-9842-06c4cde0d4eb


Now that you have this file, you can perform any operation you would like on it using a simple command line script tied into the appropriate cURL command. For instance, the following would perform a GET operation on all entries in the file (it is HIGHLY recommended that you do this before jumping right into a DELETE operation):

for i in `cat bulkOperationIDs.txt`; do curl -u openidm-admin:openidm-admin -X GET "http://localhost:8080/openidm/managed/user/$i?_prettyPrint=true"; done

Once you feel comfortable with the response, you can change the GET operation to a DELETE and kick the Nelsons to the proverbial curb.

Kuppinger ColeExecutive View: AirWatch Content Locker - 71505 [Technorati links]

April 19, 2016 12:27 PM

by Graham Williamson

For organizations trying to provide an attractive user experience while protecting corporate information, the continuing rise in popularity of mobile devices, connecting from both inside and outside the corporate network, is a trend that can be frustrating. For organizations with intellectual property and sensitive information that must be shared between staff and business partners, a solution to protect restricted data and documents from inadvertent release to unauthorized personnel is required. Help is at hand.

April 18, 2016

Matthew Gertner - AllPeersNorth American Travel Destinations You Can Only Get to by Boat [Technorati links]

April 18, 2016 04:02 PM

Some of the world’s most breathtaking locations are only accessible to people with boats. They are the remote isles that are unlike anything on the mainland. Often people immediately envision locales in the South Pacific, Australia and Europe, but there are just as many exceptional spots across North America.

The four travel destinations below provide something unique for boaters that are looking for adventure and relaxation.

A Few Things to Address Before Planning a Boat Trip

Before you can consider taking a vacation on your boat, you have to make sure it’s ready for the trip:

Safety Measures

Every safety precaution should be taken out on the water. You’ll want to make sure your boat is fully equipped with:

The Flooring

One of the most overlooked spots on a boat is the flooring. However, it can be one of the features that affect comfort and maintenance the most. If your floors are worn or showing other signs of wear and tear, it’s a good time to replace them.

Materials made for marine flooring will make the trip more enjoyable because they are easier to clean. One of the best options on the market is luxury woven vinyl. It’s extremely durable but just as attractive.

The Sails

If your vessel is a sailboat the integrity of the sails is a top priority. Many sail issues are preventable, particularly if you catch them early.

Structural and Mechanical Damage

If anything is not 100% structurally and/or mechanically sound it should be addressed before you head out. Depending on where you travel it can be difficult to make repairs once you’re already underway. That’s why you’ll want to have essential tools on board just in case you experience problems en route.

Boat-In Camping at Santa Catalina Island

The only way to get to Santa Catalina Island off the coast of Southern California is by boat. Once you get to the island there are 17 boat-in campsites that are only accessible via boat or kayak. If you are looking for seclusion and spectacular views of the Pacific this is how you want to rough it.

The nine boat-in camping areas are all on the leeside of the island. They are all primitive, which means the necessities are up to you. There’s also a strict no campfire rule so you’ll need to have a BBQ or stove for cooking. If you don’t have camping equipment on your boat you can get rental equipment from the Two Harbors Visitor Center. At just $20 per adult per night it’s an amazing way to save money on vacation.

Hidden Beach Mexico

Off the coast of Puerto Vallarta there is a string of volcanic islands within Banderas Bay. Hidden Beach, also known as Playa del Amor, is the most popular place on the islands. It’s a natural wonder in the way it’s carved out of the island’s surface. It’s as if a portion collapsed just inland, and the result is a circular beach that can only be seen from the air.

It will take about two hours to reach the island from the mainland. You then have to anchor just offshore and kayak through a channel to get to the secluded beach. Hidden Beach is definitely a must-see for anyone that’s traveling along Mexico’s Pacific coast.

Forbes Island

If you are cruising around the San Francisco Bay, Forbes Island is a great place to stop and spend a few hours. It’s a man-made island that has a lighthouse, sandy beaches and a fine dining restaurant. The views of downtown, the Golden Gate Bridge and Alcatraz are absolutely stunning. The photo ops alone are worth pulling up to a dock and planting your feet on land. You’ll also get to see the world’s largest outboard motor, which is used to power the island.

Guana Island

Guana is one of the few privately owned islands in the Caribbean. It’s part of the British Virgin Islands and known for giving visitors a taste of what the islands were like more than a hundred years ago. If you’re among the lucky few that get to come ashore Guana Island you’ll get to explore 850 acres of lush tropical land, including seven pristine beaches.

Staying on the island is so exclusive that there is typically 30 acres for every person at any given time. You can enjoy the peace and quiet of being away from it all, or you can make use of the private facilities. Activities include tennis, croquet, snorkeling, yoga and day trips to neighboring islands.

The post North American Travel Destinations You Can Only Get to by Boat appeared first on All Peers.

Nat SakimuraFlash report: the 2016 Menuhin Competition gala concert [Technorati links]

April 18, 2016 01:20 PM

At the 2016 Menuhin Competition, held in London to mark the 100th anniversary of the birth of founder Yehudi Menuhin, Yesong Sophie Lee (12) of the United States won the junior division and Ziyu He (16) of China won the senior division.

Menuhin Competition 2016 Winners Yesong Sophie Lee (12) & Ziyu He (16)



At the gala concert held at the Royal Festival Hall on the 17th, the two performed Vivaldi’s Concerto from The Four Seasons (Summer) and the 3rd movement of Dvořák’s Violin Concerto respectively, plus, as an encore, a contemporary piece for solo violin by a Salzburg composer.

The Vivaldi, with the 12-year-old Yesong Sophie Lee leading a chamber-sized orchestra, was a performance of fine nuance and wide dynamic range that completely changed my opinion of Vivaldi’s Four Seasons. You could vividly sense the players treasuring their time as they built the music together with the young soloist.

Ziyu He, for his part, gave a commanding performance that hardly seemed the work of a sixteen-year-old. That said, the piece he played as an encore was even better than the concerto.


In the second half, Julia Fischer, winner of the junior division of the 1995 Menuhin Competition, performed Bartók’s Violin Concerto No. 1, followed by the orchestra alone in Tchaikovsky’s Francesca da Rimini, Op. 32.

The orchestra was seated American style: first violins, second violins, violas, cellos. With this layout the high strings are concentrated on the left, which strikes me as questionable. It may have been at the direction of conductor Diego Matheuz, but unlike the recent trend toward clear separation between sections, this was an orchestra whose strings in particular blended together: for better or worse, a twentieth-century sound.

Copyright © 2016 @_Nat Zone All Rights Reserved.

April 17, 2016

Gerry Beuchelt - MITRELinks for 2016-04-16 [] [Technorati links]

April 17, 2016 07:00 AM
April 15, 2016

Matthew Gertner - AllPeersNew documentary aims to slay cultural stereotypes surrounding unmarried Chinese women [Technorati links]

April 15, 2016 08:36 PM

The lives of unmarried Chinese women have value, even if they are beyond the age of 25 ... photo by CC user Huchuansong on wikimedia commons

Over the years, women in Western countries have fought hard to change how they are perceived by society, as they longed for the same opportunities that were afforded to men.

While there are still some areas where the females lag behind their male counterparts, women in places like the United States, Canada, Europe, and elsewhere in the developed world have far more latitude with regards to determining their own life path compared with their ancestors just a century ago.

However, in places like China, the struggle for basic rights of self-determination is only now entering a formative stage, as females there still face massive discrimination from society and even their closest friends and family.

In this country and others throughout the Far East, if they aren’t married to a man by the age of 25, they are considered to be leftover women, or Sheng Nu.

With the internet making social change easier than it ever has been before, women across this populous nation are aiming to put a regressive social tradition to bed, once and for all.

Educated, employed, and married to a successful man: all by the age of 25

Today’s expectations for the Chinese female are far more ambitious than they have been in previous generations; while this seems like great news on the surface, there is one expectation that trumps all others and places an almost insurmountable amount of pressure on their shoulders: to get married at a young age.

From grade school to university, the Chinese education system places crushing academic workloads on the backs of its students, and when it comes time to enter the labor force, entry-level employees find themselves working exceedingly long hours in the pursuit of promotions.

However, none of these pressures are quite as taxing on the spirit of Chinese females as the generations-old expectation that they should be married to a man by their 25th birthday.

It doesn’t matter if they pass school with Straight A’s, or move up in their law firm at an unprecedented pace: if they don’t have a wedding ring on their finger by the day they turn 25, Chinese society deems them Sheng Nu, or as undesirable leftovers.

A society that values saving face over self-determination

In this modern age, some women don’t care about what Chinese society thinks of them, so they forge on with their professional ambitions without caring what random people on the street or even their family and friends think about their personal life.

However, their parents, who were raised with a different set of social values, worry about how their neighbors and friends will view them or their children.

Out of concern for their own reputation and that of their offspring, they often take the drastic step of crafting a poster ad touting their daughter’s desirable attributes and head to one of many marriage markets that can be found in Chinese cities across the nation.

After finding a single male that they deem to be suitable for their daughter, they stage an intervention where they are introduced and are heavily encouraged to enter into a relationship with each other.

Understandably, this comes as a shocking surprise to the present generation of Chinese females, many of whom have cultivated the expectation that they would be able to chart the course of not just their career, but their love lives as well.

As angry as they may be, they often go along (reluctantly) with these arrangements in the interest of keeping the peace. Predictably, this leads to relationships that are either sub-par or severely strained.

Thankfully, change is coming

A new social paradigm is beginning to form in China, as women across the country have used internet forums to exchange ideas and obtain support from like-minded females and allies.

Despite the overwhelming pressure from Chinese society for them to conform to long-held social norms, these women are standing strong, as they are able to lean on each other for social support.

Even so, the problem of maintaining and keeping relationships with family members and friends that don’t agree with their stance is a persistent problem for Chinese females.

A documentary to win hearts and minds

This April, a documentary film was released with the intention of blowing the cover off the regressive marriage markets that exists in 21st century China. It also hopes to begin a conversation that aims to encourage the idea that Chinese females have the same right to determine their own life path as males currently do.

This movie follows several brave Chinese women and their parents as they wrestle with the same question that countless families in this nation have grappled with over the past few years: should a woman be considered left over because they choose to forgo marriage in pursuit of other life goals?

It is the position of this film that they are not, as true beauty lies within, and females are irrevocably human at their core, and thus, they should have the same rights as their male counterparts.

The post New documentary aims to slay cultural stereotypes surrounding unmarried Chinese women appeared first on All Peers.

April 14, 2016

Ian GlazerWhy is the Identity leg of the stool missing? [Technorati links]

April 14, 2016 04:59 PM

[Many thanks to Gerry Gebel for giving me the nucleus for this post]

In the midst of the ongoing privacy and security conversation, I pointed out last week that identity is the missing leg of the security/privacy stool. Identity is both a means of expressing privacy requirements and a necessary set of security controls, as well as a key to delighting customers and driving business engagement. A colleague pointed out that while security and privacy might be different halves of the same coin, identity is the coin itself. I’m not sure I fully agree with that, but it gets at a sentiment I share.

The use and protection of identity data has strong footing in both the privacy and security worlds. And yet identity and identity management professionals are not first-class members of the conversation. Why is that? One reason, in my opinion, is that we didn’t expect the industry to stand alone for the duration.

The inevitable absorption into business process that never happened

Speaking as an identity professional, I don’t think we claimed our seat at the table because, in part, we didn’t expect to be around IT for so long. 10 to 15 years ago there was a thought that identity would be subsumed by larger, adjacent business process engines. Human resource management, for example, should have absorbed identity management, at least for employee identity. I still remember the Catalyst in San Francisco where the Burton Group identity team (I was just a newbie in the audience at the time) had Oracle and SAP talk about their plans (or lack thereof) for synergy between HRMS and IAM. What was clear to Burton Group was that the systems that managed your job role and responsibilities ought to be managing that in both the on- and offline worlds.

Employee identity really ought to be a function of HR and an extension of HRMSs, with identity professionals becoming the technical arm of HR. Some companies tried this approach, even putting their technical role management programs within HR, but for political, organizational, and cultural reasons it did not last.

If HR was to be the home of employee identity, then what of customer identity? Looking to the business process engines that manage customer information, one could see CRM systems absorbing customer identity functions. In such a world, the teams overseeing sales, service, and marketing processes would be the voice of the customer and their business process engines would deliver the identity functionality the customer needed.

In both scenarios the job of “standalone” identity management technology and professionals would be greatly diminished. The path forward for professionals in such a world was to become technical HR, Sales, Service, Marketing, etc professionals, acting as business system analysts serving their constituency or delivering architectures and process integrations to allow identity information to flow and be useful. These worlds did not fully materialize.

The time systems management was going to rule the world but didn’t

If identity management wasn’t going to be sucked into HR or the like, it orbited dangerously close to systems management. In some regards, employee-centric identity was borne from systems management and that set the (wrong) tone for identity for two+ decades. Remember that BMC, CA, and IBM Tivoli were some of the largest user provisioning vendors in their day. They took a systems management approach in which they tried to manage everything about a system including its users. Users were a byproduct of the AIX box you were managing – talk about a user-centric anti-pattern.

In more modern times, ITIL/ITSM groups asserted that identity management is a part of their world. Building user accounts, after all, is an IT service. There’s something to that argument and although it can serve access request scenarios well it leaves out access cert, federation, and a whole slew of other identity functions. But still, systems management could have absorbed identity management.

And while we as a profession waited to see which business or IT process world we would align with, we missed the opportunity to grow our own voice, stake out our own territory, and professionalize our industry.

Professionalizing Identity

The identity management industry never professionalized. Unlike security and privacy, who both have organizations to nurture their industries and professionals, identity management has no such thing. We turn to vendors, implementation partners, analysts, and peers in our region for advice for everything from architecture to tips and tricks to getting a project funded to building a career in identity management and everything in between. Certainly all of those can be good sources of information, but it is a piecemeal approach. We need a tide to lift all boats.

I’m still thinking through the notion of what a professionalized identity management industry would look like, how it would work, and whether it is a good idea in the first place. Hopefully by the European Identity Conference this May, I’ll have worked some of this out, but until then this is what I can share: the identity management leg of the stool didn’t get sawed off by a corporate rival. It didn’t get installed because we lacked the confident voice to say, “Identity management is crucial to both business and security.”

I think it’s time we changed that.

Kuppinger ColeConsumer-Focused Identity Management [Technorati links]

April 14, 2016 07:02 AM
Consumer expectations of Identity and Access Management (IAM) - even if they don't know what it is - are evolving and growing ever higher. The ability to use social media accounts to gain access to various services has revolutionised the way consumers see the space. Increasingly, banks and telcos and other traditional businesses with large user bases are finding it hard to grapple with the IAM needs of the services they deliver. What's worse, these organisations are missing out on opportunities to build deep, engaging relationships with their customers through an archipelago-like siloed approach to customer identity.

Bill Nelson - Easy IdentityConfiguring OpenIDM on a Read Only File System in 10 Easy Steps [Technorati links]

April 14, 2016 01:55 AM


During the normal course of events, OpenIDM writes or updates various files on the file system to which it is installed.  This includes log files, audit files, process IDs, configuration files, and even cached information.  There are times, however, when you find yourself needing to deploy OpenIDM to a read-only file system – one to which you cannot write typical data.

Fortunately, OpenIDM is flexible enough to allow such an installation; you just need to make some adjustments to various settings to accommodate this.

The following information provides details on how to configure OpenIDM on a read only file system.  It includes the types of information that OpenIDM writes by default, where it writes the information, and how you can alter the default behavior – and it does it in just 10 EASY Steps (and not all of them are even required!).

Note:  The following steps assume that you have a shared (mounted) folder at /idm, that you are using OpenIDM 4.0, it is running as the frock user, and that the frock user has write access to the /idm folder.

1. Create external folder structure for logging, auditing, and internal repository information.

$ sudo mkdir -p /idm/log/openidm/audit
$ sudo mkdir /idm/log/openidm/logs
$ sudo mkdir -p /idm/cache/openidm/felix-cache
$ sudo mkdir -p /idm/run/openidm

2. Change ownership of the external folders to the “frock” user.

$ sudo chown -R frock:frock /idm/log/openidm
$ sudo chown -R frock:frock /idm/cache/openidm
$ sudo chown -R frock:frock /idm/run/openidm


Note: OpenIDM writes its audit data (recon, activity, access, etc.) to two locations by default: the filesystem and the repo. This is configured in the conf/audit.json file.


3. Open the conf/audit.json file and verify that OpenIDM is writing its audit data to the repo as follows (note: this is the default setting):

"handlerForQueries" : "repo",

4. Open the conf/audit.json file and redirect the audit data to the external folder as follows:

"config" : {
"logDirectory" : "/idm/log/openidm/audit"

After making these changes, this section of the audit.json will appear as follows (note the updated logDirectory value):

"auditServiceConfig" : {
    "handlerForQueries" : "repo",
    "availableAuditEventHandlers" : [ ... ]
},
"eventHandlers" : [
    {
        "name" : "csv",
        "class" : "",
        "config" : {
            "logDirectory" : "/idm/log/openidm/audit",
            "topics" : [ "access", "activity", "recon", "sync", "authentication", "config" ]
        }
    }
]
As an alternative, you can disable the writing of audit data altogether by setting the enabled flag to false for the appropriate event handler(s). The following snippet from the audit.json demonstrates how to disable file-based auditing.

"eventHandlers" : [
    {
        "name" : "csv",
        "enabled" : false,
        "class" : "",
        "config" : {
            "logDirectory" : "/audit",
            "topics" : [ "access", "activity", "recon", "sync", "authentication", "config" ]
        }
    }
]


Note: OpenIDM writes its logging data to the local filesystem by default. This is configured in the conf/ file.


5. Open the conf/ file and redirect OpenIDM logging data to the external folder as follows:

java.util.logging.FileHandler.pattern = /idm/log/openidm/logs/openidm%u.log


Note: OpenIDM caches its Felix files in the felix-cache folder beneath the local installation. This is configured in the conf/ file.


6. Open the conf/ file and perform the following steps:

a. Redirect OpenIDM Felix Cache to the external folder as follows:

# If this value is not absolute, then the felix.cache.rootdir controls
# how the absolute location is calculated. (See buildNext property)${felix.cache.rootdir}/felix-cache

b. Define the relative path to the Felix Cache as follows:

# The following property is used to convert a relative bundle cache
# location into an absolute one by specifying the root to prepend to
# the relative cache path. The default for this property is the
# current working directory.

After making these changes, this section of the will appear as follows (note the new felix.cache.rootdir value):

# If this value is not absolute, then the felix.cache.rootdir controls
# how the absolute location is calculated. (See buildNext property)${felix.cache.rootdir}/felix-cache

# The following property is used to convert a relative bundle cache
# location into an absolute one by specifying the root to prepend to
# the relative cache path. The default for this property is the
# current working directory.


Note: During initial startup, OpenIDM generates a self-signed certificate and stores its security information in the keystore and truststore files as appropriate. This is not possible in a read-only file system, however. As such, you should generate a certificate ahead of time and make it part of your own deployment.


7. Update keystore and truststore files with certificate information and an updated password file as appropriate. The process you choose to follow will depend on whether you use a self-signed certificate or obtain one from a certificate authority.


Note: On Linux systems, OpenIDM creates a process ID (PID) file on startup and removes it during shutdown. The location of the PID file is defined in both the start script ( and the shutdown script ( By default, the PID file is written to the $OPENIDM_HOME folder.


8. Open the script and update the location of the process ID file by adding the following line immediately after the comments section of the file:


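A minimal sketch of such a line, assuming the start script tracks its PID file location in an OPENIDM_PID variable (verify the variable name against your copy of the scripts) and using the writable folder created in step 1:

```shell
# Sketch: point the PID file at the writable /idm mount instead of
# $OPENIDM_HOME. The OPENIDM_PID variable name is an assumption --
# check the start and shutdown scripts for the actual name used.
OPENIDM_PID=/idm/run/openidm/
```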
9. Repeat Step 8 with the script.


Note: OpenIDM reads configuration file changes from the file system by default. If your environment allows you to update these files during the deployment process of a new release, then no additional changes are necessary. However, if you truly have a read only file system (i.e. no changes even to configuration files) then you can disable the monitoring of these configuration files in the next step. Keep in mind, however, that this requires that all configuration changes must then be performed over REST.


10. Open the conf/ file and disable the monitoring (and subsequent loading) of JSON configuration file changes by uncommenting the following line:



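Once file monitoring is disabled, configuration objects are replaced over the REST interface instead. A hedged sketch, assuming a local OpenIDM 4.0 instance with the default admin credentials and port (the endpoint, credentials, and the abbreviated payload are assumptions; a real update would supply the full configuration object for that endpoint):

```shell
# Sketch: update the audit configuration over REST instead of editing
# conf/audit.json on disk. Credentials and endpoint are OpenIDM
# defaults and are assumptions for this example. A PUT replaces the
# whole config object, so a complete audit configuration would
# normally be supplied here rather than this abbreviated payload.
cat > /tmp/audit-update.json <<'EOF'
{
  "auditServiceConfig" : {
    "handlerForQueries" : "repo"
  }
}
EOF

# Push the new configuration to the config endpoint of a running instance
curl --silent --request PUT \
  --header "X-OpenIDM-Username: openidm-admin" \
  --header "X-OpenIDM-Password: openidm-admin" \
  --header "Content-Type: application/json" \
  --data @/tmp/audit-update.json \
  http://localhost:8080/openidm/config/audit \
  || echo "OpenIDM not reachable (expected outside a live deployment)"
```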

April 13, 2016

Kuppinger ColeApr 26, 2016: EIC 2016 Webinar about Customer-Centric Identity Management [Technorati links]

April 13, 2016 01:08 PM
While most organizations are at least good enough at managing their employee identities, dealing with millions of consumer and customer identities imposes a new challenge. Many new identity types, various authenticators from social logins to device-related authenticators in smartphones, risk mitigation requirements for commercial transactions, the relationship with secure payments, customer retention, new business models and thus new requirements for interacting with customers: the challenge has never been greater.

Kuppinger ColeApr 21, 2016: EIC 2016: Everything about Consumer Identity Management [Technorati links]

April 13, 2016 12:59 PM
By now, most companies are able to handle their employees' identities securely. But managing customer identities, which often number in the millions, still poses a challenge for most organizations. More identities, access via social logins, more flexibility in authentication (for example, via the capabilities built into smartphones), risk mitigation requirements in eCommerce, integration with secure payment systems, customer retention, new business models, and new requirements for interacting with customers: the number of challenges for companies has never been greater.

April 12, 2016

ForgeRockTop Challenges in IAM Program [Technorati links]

April 12, 2016 11:31 PM

Editor’s note: Ashley Stevenson, Identity Technology Director in ForgeRock’s Office of the CTO and head of our Federal business unit, was a panelist on the Federal Executive Forum radio show in late 2015. He appeared on the Identity & Access Management In Government Progress & Best Practices panel alongside execs from Dell, Symantec and immixGroup, and officials from the Department of Homeland Security, the Department of Defense and NIST. We’re running excerpts from Ashley’s remarks alongside clips from the show. Here’s the fifth clip:

Top Challenges in IAM Program

I think we’ve heard overall that the great challenge with identity is that the scope is so broad that you have to tackle all of it to a certain extent to meet your security and your functionality goals because the attacker will find the weakest link. So we’ve talked about governance, we’ve talked about single sign-on. If you think about fine-grained authorization you could have someone authenticated to the highest level, but if they have too many rights that they’re not supposed to have, there’s still a risk of an unintentional or intentional insider threat causing a problem. And also the challenge is, because identity is connected to everything, it’s the ability, the need to find the needle in the haystack of the interconnected relationships between all the people and the access and the credentials to get insight as to what’s going on to either give a better user experience or to mitigate potential threats and discover what’s happening. So the challenge is all of these pieces need to come together to get to the goals that we actually need to reach, and it’s a lot of different moving parts that need to come together.


Kuppinger ColeEasy and Secure User Access to Sensitive Information [Technorati links]

April 12, 2016 11:19 PM
In the first part of this webinar, Martin Kuppinger, Founder and Principal Analyst at KuppingerCole, will describe the concept of adaptive authentication and Adaptive Policy-based Access Management (APAM). He will also explain why it is crucial for proper access to information that authentication is dynamically changed and adjusted to the circumstances. In the second part, Reinier van der Drift, former President and CEO at Authasas, now a part of Micro Focus, will present a one-stop-solution that provides users with consistent, easy-to-use and secure access from various devices to sensitive data stored in their company’s own data center as well as by third parties and in the cloud.

Kuppinger ColeAdvisory Note: Integrating Security into an Agile DevOps Paradigm - 71125 [Technorati links]

April 12, 2016 07:44 AM

by Matthias Reinwarth

Developing secure and robust applications and deploying them continuously and cost-effectively? All organizations, whether digital or undergoing a digital transformation, are facing these challenges, though the answers are not straightforward. This document describes agile approaches to system development and delivery. It discusses why and how organizations should embed strong principles for security into their development/operations approach.