October 01, 2014

GluuGluu Server 1.9 is released! [Technorati links]

October 01, 2014 10:24 PM


The documentation on how to build the Gluu Server on CentOS is here: http://www.gluu.co/ce-centos

Notes from Mike Schwartz:

The Gluu Server is an amalgamation of open source components. In the past, no one could build the Gluu Server from our open source instructions. It was just too darn hard. The only way you could get a Gluu Server was if we built it for you.

We were never going to get massive adoption of our platform unless deployment was a lot easier. So in May, we started working on a new package based installation of the Gluu Server for Red Hat and Debian based Linux distributions. We released the first binaries of the “Gluu Server Community Edition” at OSCON in July for CentOS and Ubuntu.

However, we were still getting multiple requests per day from large organizations around the world, some of whom were our customers, who got stuck on the install. There were a lot of places where you could get stuck if everything didn’t go right.

That triggered me to start a new project, “Community Edition Setup.” The idea of this project was to write a fancy Python script that performed an initial configuration after the RPM was installed. There was some initial skepticism at Gluu about my approach, but that didn’t stop me from trying. Once I got close enough and it seemed feasible, it became an around-the-clock effort by the whole team for the last few weeks until we got to the finish line today. Although we’ve had several attempts to make the Gluu Server installation easy, I think we finally got it right this time.

The size of the Gluu .rpm|.deb distribution is around 553MB right now. That sounds large, but compared to the size of a VM, or even a Docker image, which is around 1GB, it’s actually pretty compact. (Yes, we’re testing a Docker Gluu Server, soon available on https://hub.docker.com)

As with the previous Gluu Server Community release, the only components included are oxAuth (OpenID Connect and UMA endpoints), oxTrust (policy administration point… the web GUI), and Gluu OpenDJ. Also similar to the original release, the CentOS release is first, and Ubuntu will follow very soon.

What’s on the very near-term roadmap? This week, we are working to add the Shibboleth SAML IDP and Asimba SAML Proxy to the stack. We also want to release more tools that help with the configuration and management of clustered Gluu Server deployments. These extra components will be in the 2.0 Gluu Server, hopefully by month end.

Thanks to everyone who provided feedback, and probably struggled with the initial release of the Gluu Server Community Edition. Now that we’ve moved over to GitHub for both code and documentation, everyone at Gluu is hoping that we can get more contributions from the community. If you want to help out, just submit patches to the GitHub project or ask to be added to the project. We have also revamped the Gluu Support site: https://support.gluu.org. Asking questions on the forum also helps us build the documentation. So please do!

The work is not yet done. Much editing and perfecting is needed on the documentation at http://gluu.org/docs. As previously announced, we are using MkDocs to publish the documentation on GitHub. English majors and technical writers, please help us make the docs clear for everyone.

Other .rpm and .deb distributions are soon to follow, including mod_ox, an Apache HTTPD server plugin that enables directives to protect folders using the OpenID Connect and UMA profiles of OAuth2. Also expected later this year: a plugin for nginx.

KatasoftBuilding and Deploying a Simple Express.js App with Stormpath and Heroku [Technorati links]

October 01, 2014 03:00 PM

Today we’re going to be building a simple web app with Express.js, Stormpath, and Heroku.

The app we’re going to build is really basic. It will:

And that’s it!

Want to watch a video instead of reading this article? If so, you can just watch our YouTube video instead of continuing!

Prerequisites

Before getting started, you’ll need to sign up for Heroku. If you aren’t already familiar with them — Heroku is an application platform that lets you easily deploy web apps.

It won’t cost you any money — throughout this article we’ll be deploying our web application live using Heroku’s free tier =)

The rest of this article assumes you already have a Heroku account and have installed the Heroku CLI tool.

We’ll also be using Stormpath in this article (that’s us!). Stormpath is a service which stores your user accounts securely, and makes user registration / login / logout / a bunch of other stuff much simpler. Stormpath will handle all user accounts in this sample application, greatly reducing the amount of code we need to write.

Creating a Project

To begin, let’s create a new project!

Open up your terminal, go into a directory you like, then create a new project directory:

$ mkdir stormpath-test

Next, you’ll want to go into your new project directory and initialize it as a Git repository:

$ cd stormpath-test
$ git init

Now that we’ve created our project directory, let’s go ahead and create a new Heroku app:

$ heroku create

The above command will create a new application on Heroku (this is where we’ll eventually deploy our project live).

Then we’ll set an environment variable as well:

$ heroku config:set STORMPATH_SECRET_KEY=xxx

NOTE: Be sure to replace xxx with a random string of characters! Just type some random characters for a second or two — kbj25lhj23415k5h4243lhj643h63l2h3j634 would be fine, for example.

What we’re doing above is setting a new environment variable for our Heroku application named STORMPATH_SECRET_KEY. This is used by Stormpath later on to securely encrypt your users’ web sessions.
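If you’d rather not mash the keyboard, one alternative (assuming the openssl command-line tool is available, as it is on most systems) is to generate the random value programmatically:

```shell
# Generate 32 random bytes, hex-encoded (a 64-character string),
# suitable for use as the session secret.
SECRET=$(openssl rand -hex 32)
echo "$SECRET"
```

You can then pass the value along with `heroku config:set STORMPATH_SECRET_KEY="$SECRET"`.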

Lastly, we need to provision the Stormpath Heroku addon. This is totally free, and can be installed by running:

$ heroku addons:add stormpath

Initializing our Node App

Since we’ll be writing our web app using Express.js, the Node.js web framework, we now need to properly initialize our Node web app!

To do this, you’ll need to first initialize a Node module:

$ npm init

This will prompt you to answer several questions: just fill them in with any filler information you’d like.

Once you’re done with this — you’ll have a new file named package.json in your directory — this holds your Node module’s information.

Since you now have a base package.json file to work with, let’s go ahead and install all of the required Node modules we’ll be using in our project:

$ npm install express express-stormpath --save

The above command will install both:

- express, the Express.js web framework itself
- express-stormpath, the library that integrates Stormpath with Express

It will also modify your package.json and add those two libraries as dependencies. This will later be used by Heroku to install and provision your web app.
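For reference, after the install your package.json will list both libraries as dependencies. A rough sketch of what it might look like at this point (the name, version, and exact semver ranges below are illustrative, not prescriptive):

```json
{
  "name": "stormpath-test",
  "version": "0.0.1",
  "description": "A simple Express.js app using Stormpath and Heroku.",
  "main": "index.js",
  "dependencies": {
    "express": "4.x",
    "express-stormpath": "1.x"
  }
}
```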

Writing the App

Now that we’ve got our Node module setup, and our Heroku app created, let’s actually write our Express.js code!

Create a new file named index.js and insert the following code:

// Import required modules.
var express = require('express');
var stormpath = require('express-stormpath');

// Initialize our Express app.
var app = express();

// Configure Stormpath.
app.use(stormpath.init(app, {
  application: process.env.STORMPATH_URL,
  redirectUrl: '/dashboard',
}));

// Generate a simple home page.
app.get('/', function(req, res) {
  res.send("Hey there! Thanks for visiting the site! Be sure to <a href='/login'>login</a>!");
});

// Generate a simple dashboard page.
app.get('/dashboard', stormpath.loginRequired, function(req, res) {
  res.send('Hi: ' + req.user.email + '. Logout <a href="/logout">here</a>');
});

// Listen for incoming requests and serve them.
app.listen(process.env.PORT || 3000);

The above code is commented for clarity, but let’s quickly recap what is happening here.

Firstly, at the top of the file we’re importing the two libraries we need to make this application work: express and express-stormpath:

// Import required modules.
var express = require('express');
var stormpath = require('express-stormpath');

Next, we’re creating an Express application object — this is the base of all Express.js web applications:

// Initialize our Express app.
var app = express();

After that, we’re initializing the Stormpath library and telling Express how to use it:

// Configure Stormpath.
app.use(stormpath.init(app, {
  application: process.env.STORMPATH_URL,
  redirectUrl: '/dashboard',
}));

This code is going to do a few things for us:

In the next two code blocks we’re writing some Express routes which render a home page (/) and a dashboard page (/dashboard) for users:

// Generate a simple home page.
app.get('/', function(req, res) {
  res.send("Hey there! Thanks for visiting the site! Be sure to <a href='/login'>login</a>!");
});

// Generate a simple dashboard page.
app.get('/dashboard', stormpath.loginRequired, function(req, res) {
  res.send('Hi: ' + req.user.email + '. Logout <a href="/logout">here</a>');
});

The first route is simply linking users to the login page.

The second route is a bit special — it is:

Stormpath automatically provides you with a special object, req.user, inside of all your Express.js route code once a user has logged in.

You can use this to access a user’s information:

Lastly, we’re telling our Express application to run a web server and accept all incoming user requests:

// Listen for incoming requests and serve them.
app.listen(process.env.PORT || 3000);

NOTE: When running web apps on Heroku, you’ll be assigned a random port number to run on. This is why we’re listening on a port specified via an environment variable.

Final Touches

Now that we’ve written our Express.js app, let’s make some final touches before deploying our site live, and testing it out!

Firstly, let’s create a new file called Procfile, with the following contents:

web: node index.js

This Procfile tells Heroku how to run our web app. When we deploy our application live, Heroku will get a copy of our code, then simply run node index.js to start our webserver.

We’ll also create a .gitignore file and add the following contents:

node_modules

This ensures that we won’t accidentally store our node modules in our Git repository.

Lastly, you’ll want to run:

$ git add .
$ git commit -m 'First commit!'

This will commit your project into Git.

We’re now ready to deploy our app!

Deploying our App

To deploy the app to Heroku, all we need to do is push our Git code to Heroku:

$ git push heroku master

This will send our code to Heroku, and automatically start up our new web app!

Once this process has finished, simply run:

$ heroku open

And Heroku will open your browser and show you your brand new web application — live! If you share this URL with someone else, they’ll be able to view it just fine.

Testing our App

Feel free to click around and explore the web app!

As you can see, you should now be able to easily create new user accounts, log in, and log out of the web app.

If things worked properly, you should be able to do the following:

Stormpath Heroku Express Demo

Recap

In just a few minutes, using Express.js, Stormpath, and Heroku, we were able to build and deploy a basic Node.js website that:

Not bad, right?!

If you found this useful, please be sure to check out our official Stormpath Addon Documentation on Heroku! It covers lots more things like:

MythicsAnswers to Common Questions on Java Versions/Editions [Technorati links]

October 01, 2014 02:31 PM

In the past several months, our team has received many inquiries about Java Virtual Machine (JVM) and Java Development Kit (JDK) versions and…

September 30, 2014

MythicsIntroducing Oracle Documents Cloud Service [Technorati links]

September 30, 2014 07:03 PM

Overview

This year's Oracle OpenWorld is all about cloud. Our own Brent Seaman just wrote an article entitled, Oracle…

Julian BondReality needs less cowbell, say cows. And I for one, agree. [Technorati links]

September 30, 2014 06:51 PM
Reality needs less cowbell, say cows. And I for one, agree.

But it does need a saxophone on the bridge.

http://www.geek.com/geek-cetera/it-turns-out-that-cowbells-make-cows-miserable-1605552/
 It turns out that cowbells make cows miserable | Geek-Cetera | Geek.com »
A field of cowbell-equipped-cows may create a soothing soundscape of wind and chimes, but what’s soothing to us doesn’t translate to the cows. Though Christopher Walken and internet humor from over 14 years ago require [...]

[from: Google+ Posts]

Julian BondSome things to think about re Fermi's paradox. Life should be all over the galaxy, let alone the universe... [Technorati links]

September 30, 2014 02:23 PM
Some things to think about re Fermi's paradox. Life should be all over the galaxy, let alone the universe. So, where the heck is everybody?

Well, maybe: on any Earth-type planet, if there is insufficient stored carbon available, the species will not be able to develop a technologically advanced society due to insufficient energy for the development of enough complexity. But if there is sufficient stored carbon available, the species will inevitably destroy itself.

It's just thermodynamics, innit. 

http://www.paulchefurka.ca/Fermi.html
 Solving Fermi's Paradox »
The Fermi paradox is the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilization and humanity's lack of contact with, or evidence for, such civilizations. The basic points of the argument, made by physicists Enrico Fermi and Michael H.

[from: Google+ Posts]

Kuppinger ColeGlobalSign acquires Ubisecure, plans to win the IoE market [Technorati links]

September 30, 2014 02:10 PM
By Alexei Balaganski

GlobalSign, one of the world’s biggest certificate authorities and a leading provider of digital identity services, has announced today that it has acquired Ubisecure, a Finnish privately held software development company specializing in Identity and Access Management solutions.

Last year, KuppingerCole recognized Ubisecure as a product leader in our Leadership Compass on Access Management and Federation. Support for a broad range of authentication methods, including national ID cards and banking cards, as well as integrated identity management capabilities with configurable registration workflows, were noted as the product’s strengths. However, it is the solution’s focus on enabling identity services on a large scale, targeted at governments and service providers, that KuppingerCole considers Ubisecure’s primary strength.

Unfortunately, until recently the Helsinki-based company was only present in EMEA (mainly in the Nordic countries), evidently lacking the resources to maintain a strong partner network. GlobalSign’s large worldwide presence, with 9 international offices and over 5,000 reseller partners, provides a unique opportunity to bring Ubisecure’s technology to a global market quickly and with little effort.

GlobalSign, established in 1996, is one of the oldest and biggest, as well as reportedly the fastest growing certificate authorities on the market. After becoming a part of the Japanese group of companies GMO Internet Inc. in 2006, GlobalSign has been steadily expanding its enterprise presence with services like enterprise PKI, cloud-based managed SSL platform, and strategic collaborations with cloud service providers. With the acquisition of Ubisecure, the company is launching its new long-term strategy of becoming a leading provider of end-to-end identity services for smart connected devices, powering the so-called Internet of Everything.

Market analysts currently estimate that up to 50 billion such devices (or simply “things”) will be connecting to the Internet within the next 10 years. This may well be the largest technology market in history, with over $14 trillion at stake. Needless to say, the new trend brings critical new challenges that have to be addressed, such as device security and malware protection; probably the biggest of all, however, is going to be providing identity services on a massive scale, mediating trust for billions of online transactions between people and “things” every minute, and ensuring the safety of e-commerce, communications, and content delivery.

A company that manages to bring a service with such capabilities to the market first will definitely be in a very attractive position, and GlobalSign, with their strong background in identity-related solutions, massive existing customer base and a large partner network, is aspiring to grab that position by making Ubisecure’s innovative technology available globally. Time will tell how well they can compete against technological giants on the market, as well as against other API vendors with strong IAM background (Ping Identity and CA / Layer 7 come to mind). Still, recognizing a rare combination of innovative technology and solid market presence, we believe them to be a player in the market that is definitely worth looking at.

CourionUnmanaged & Unused Service Accounts: Your Unseen Access Risk Problem [Technorati links]

September 30, 2014 01:34 PM

Access Risk Management Blog | Courion

Josh Green

We all have skeletons in our IT closets that we’d rather forget about. In nearly every organization’s network, there is a legacy application or old piece of infrastructure that is bound to reach the end of its useful life at some point, yet plans for removal of obsolete technology typically do not exist. What we often fail to consider, however, is the fate of our service accounts associated with these aging applications and infrastructure. Unmanaged or unused service accounts represent a qualified, and in the case of Target Corporation, hugely quantifiable, risk to any organization. Continuous intelligence-based pattern recognition and monitoring using an identity and access analytics product like Courion Access Insight is the easiest and most effective way to mitigate such risk.

Service Accounts and more

Service accounts are accounts on a system that are intended to be used by software in order to gain access to and interact with other software. Correspondingly, it is common practice that passwords for such service accounts are not frequently changed, so that the loss of this interconnectivity can be avoided. These accounts are also frequently highly privileged, allowing a large number of activities to be integrated between systems.

How is this a risk if the accounts aren't meant for humans?

At its core, the Target breach was no more complicated than the hacks often seen on the news when someone has altered the message displayed on a road construction sign: an attacker finds or knows of a default service account and password that exists on the system and exploits it to gain access.

The Target breach was only slightly more complicated: attackers were aware of a service account laid down automatically by the installation of BMC software. The attackers were able to leverage that service account to elevate the privileges of a new account they created for themselves on the network, and the rest is history. The attack cost Target an estimated $2.2 billion, and highlighted that some common IT practices may not be "best" practices at all.

How can this threat be managed? How does one even identify a service account?

When the service accounts have been purposefully created, identification of these accounts can be straightforward. Naming conventions within your IAM system can be applied that mark an account as a service account. However, too often, there's no such obvious clue. This is where the pattern and trend recognition provided by an identity and access intelligence solution like Access Insight becomes key. The intelligence engine acts like a detective. It uses the circumstantial evidence about an account's activity and history to determine its purpose. The engine analyzes things like password reset history, login history, privilege patterns, ownership, and more to determine accounts that may be service accounts and which may represent a high risk of compromise.

We have quarterly compliance reviews, surely that will catch the risks, right?

Modern access governance is critical, but there are some gaps that modern attackers have learned to exploit. The biggest gap is speed. The typical organization will perform compliance reviews quarterly. These compliance reviews are great for looking back in time and reviewing what has happened, but they're not timely enough to catch an attacker red-handed.

As an analogy, consider the robbery of a bank vault. If it is discovered three months later, the knowledge of what happened doesn’t really help much. But if an alarm sounds right away and summons the police, this will help. Similarly, Access Insight gives you the tools to sound that alarm immediately, so you can understand what is happening within your network and take steps to remediate it at that moment, not in three months when the hacker is long gone with your data.

The next biggest gap is complexity. Large organizations can suffer from data overload. A compliance review may or may not catch every single service account risk in the organization, hidden somewhere amongst thousands of pages of mundane, normal accounts. Such risks are easy to overlook, and hard to find after the fact. Access Insight uses built-in algorithms combined with risk weighting you tailor to your network. This provides you with a color-coded, prioritized view of your organization’s risk.

How fast can the problem be tackled?

To assist with this problem, Courion now offers a complimentary quick scan evaluation of access risk, which leverages Access Insight, to help organizations gauge whether they have an ungoverned or unmanaged service account problem. This quick scan can often be completed in a single day and provides a prioritized view of where remedial action is needed most. Of course, fully deploying Access Insight on your network, regardless of which IAM suite you have installed, will give you the visibility, or insight, you really need through continuous monitoring to find and fix access-related risk, now and on an ongoing basis, not just at a point in time.

blog.courion.com

September 29, 2014

MythicsEnterprise Management:  One Cloud, One Tool. [Technorati links]

September 29, 2014 06:45 PM

As the Enterprise adopts Cloud technology to reduce spend, an unintended side effect is that the subsystems that keep your business and mission critical applications…

Bill Nelson - Easy IdentityOpenDJ Attribute Uniqueness (and the Effects on OpenAM) [Technorati links]

September 29, 2014 03:37 PM

In real life we tend to value those traits that make us unique from others; but in an identity management deployment uniqueness is essential to the authentication process and should not be taken for granted.


Case in point: attributes in OpenDJ may share values that you may or may not want (or need) to be unique. For instance, the following two (different) entries are both configured with the same value for the email address:

dn: uid=bnelson,ou=people,dc=example,dc=com
uid: bnelson
mail: bill.nelson@identityfusion.com
[LDIF Stuff Snipped]
dn: uid=scarter,ou=people,dc=example,dc=com
uid: scarter
mail: bill.nelson@identityfusion.com
[LDIF Stuff Snipped]

In some cases this may be fine, but in others it may not be the desired effect, as you may need to enforce uniqueness for attributes such as uid, guid, email address, or credit card number. To ensure that attribute values are unique across directory server entries, you need to configure attribute uniqueness.

UID Uniqueness Plug-In

OpenDJ has an existing plug-in that can be used to configure unique values for the uid attribute, but this plug-in is disabled by default.  You can find this entry in OpenDJ’s main configuration file (config.ldif) or by searching the cn=config tree in OpenDJ (assuming you have the correct permissions to do so).

dn: cn=UID Unique Attribute,cn=Plugins,cn=config
objectClass: ds-cfg-unique-attribute-plugin
objectClass: ds-cfg-plugin
objectClass: top
ds-cfg-enabled: false
ds-cfg-java-class: org.opends.server.plugins.UniqueAttributePlugin
ds-cfg-plugin-type: preOperationAdd
ds-cfg-plugin-type: preOperationModify
ds-cfg-plugin-type: preOperationModifyDN
ds-cfg-plugin-type: postOperationAdd
ds-cfg-plugin-type: postOperationModify
ds-cfg-plugin-type: postOperationModifyDN
ds-cfg-plugin-type: postSynchronizationAdd
ds-cfg-plugin-type: postSynchronizationModify
ds-cfg-plugin-type: postSynchronizationModifyDN
ds-cfg-invoke-for-internal-operations: true
ds-cfg-type: uid
cn: UID Unique Attribute

Leaving this plug-in disabled can cause problems with OpenAM, however, if OpenAM has been configured to authenticate using the uid attribute (and you ‘accidentally’ create entries with the same uid value). In such cases you will see an authentication error during the login process as OpenAM cannot determine which account you are trying to use for authentication.

Configuring Uniqueness

To fix this problem in OpenAM, you can use the OpenDJ dsconfig command to enable the UID Unique Attribute plug-in as follows:

./dsconfig set-plugin-prop --hostname localhost --port 4444  \
--bindDN "cn=Directory Manager" --bindPassword password \
--plugin-name "UID Unique Attribute" \
--set base-dn:ou=people,dc=example,dc=com --set enabled:true \
--trustAll --no-prompt

This will prevent entries from being added to OpenDJ when the incoming entry’s uid conflicts with the value of any existing uid.  This addresses the situation where you are using the uid attribute for authentication in OpenAM, but what if you want to use a different attribute (such as mail) to authenticate? In such cases, you need to create your own uniqueness plug-in as follows:

./dsconfig create-plugin --hostname localhost --port 4444  \
--bindDN "cn=Directory Manager" --bindPassword password \
--plugin-name "Unique Email Address Plugin" \
--type unique-attribute --set type:mail --set enabled:true \
--set base-dn:ou=people,dc=example,dc=com --trustAll \
--no-prompt

In both cases the base-dn parameter defines the scope where the uniqueness applies. This is useful in multitenant environments where you may want to define uniqueness within a particular subtree but not necessarily across the entire server.
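For reference, the create-plugin command above produces a plug-in entry under cn=config much like the UID plug-in entry shown earlier. A sketch of roughly what it looks like (the ds-cfg-plugin-type attributes are omitted here for brevity, and attribute order may differ):

```
dn: cn=Unique Email Address Plugin,cn=Plugins,cn=config
objectClass: ds-cfg-unique-attribute-plugin
objectClass: ds-cfg-plugin
objectClass: top
ds-cfg-enabled: true
ds-cfg-java-class: org.opends.server.plugins.UniqueAttributePlugin
ds-cfg-type: mail
ds-cfg-base-dn: ou=people,dc=example,dc=com
cn: Unique Email Address Plugin
```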

Prerequisites

The uniqueness plug-in requires that you have an existing equality index configured for the attribute where you would like to enforce uniqueness.  The index is necessary so that OpenDJ can search for other entries (within the scope of the base-dn) where the attribute may already have a particular value set.

The following dsconfig command can be used to create an equality index for the mail attribute:

./dsconfig  create-local-db-index  --hostname localhost --port 4444  \
--bindDN "cn=Directory Manager" --bindPassword password --backend-name userRoot  \
--index-name mail  --set index-type:equality --trustAll --no-prompt

Summary

OpenAM’s default settings (Data Store, LDAP authentication module, etc.) use the uid attribute to authenticate and uniquely identify a user.  OpenDJ typically uses uid as the unique naming attribute in a user’s distinguished name.  When combined, it is almost assumed that you will be using the uid attribute in this manner, but that is not always the case.  You can easily run into issues when you start coloring outside of the lines and begin using other attributes (i.e. mail) for this purpose.  Armed with the information contained in this post, however, you should easily be able to configure OpenDJ to enforce uniqueness for any attribute.

 


Kuppinger ColeFirst Heartbleed, now Shellshock? [Technorati links]

September 29, 2014 07:21 AM
By Alexei Balaganski

Half a year has passed since the discovery of the dreaded Heartbleed bug, and the shock of that incident, which many have dubbed the most serious security flaw in years, has finally begun to wear off. Then, last week, the security community was shocked again when details of a new critical vulnerability in another widely used piece of software were made public after the initial embargo.

Apparently, Bash, arguably the most popular Unix shell software, used on hundreds of millions of servers, personal computers, and network devices, contains a critical bug in the way it processes environment variables, which causes unintentional execution of system commands stored in those variables (you can find a lot of articles explaining the details, ranging from pretty simple to deeply technical). Needless to say, this provides an ample opportunity for hackers to run malicious commands on affected machines, whether they are connected to the network or not. What’s worse, the bug has remained unnoticed for over twenty years, which means that huge numbers of legacy systems are affected as well (as opposed to Heartbleed, which was caused by a bug in a recent version of OpenSSL).

Given the huge number of affected devices, many security researchers have already called Shellshock “bigger than Heartbleed”. In my opinion, however, comparing these two problems directly isn’t that simple. The biggest problem with the Heartbleed bug was that it has affected even those companies that have been consistently following security best practices, simply because the most important security tool itself was flawed. Even worse, those who failed to patch their systems regularly and were still using an old OpenSSL version were not affected.

The Shellshock bug, however, is different, since Bash itself, being simply a command-line tool for system administrators, is usually not directly exposed to the Internet, and the vulnerability can only be exploited through other services. In fact, if your IT staff has been following reasonably basic security guidelines, the impact on your network will already be minimal, and with a few additional steps it can be prevented completely.

The major attack vector for this vulnerability is, naturally, CGI scripts. Although CGI is a long-outdated technology which, quite frankly, has no place on a modern web server, it’s still found on a lot of public web servers. For example, the popular Apache web server has a CGI module enabled by default, which means that hackers can use the Shellshock bug as a new means to deploy botnet clients on web servers, steal system passwords, and so on. There have already been numerous reports of attacks exploiting the Shellshock bug in the wild. Researchers also report that weaknesses in DHCP clients or SSH servers can potentially be exploited as well; however, this requires special conditions to be met and can easily be prevented by administrators.
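To make the CGI attack vector concrete, here is a harmless local simulation (not an exploit against any real server): a CGI web server copies request headers such as User-Agent into environment variables like HTTP_USER_AGENT before invoking the script, which is exactly where a malicious function definition can sneak in.

```shell
# A CGI server would copy the User-Agent header into HTTP_USER_AGENT and
# then invoke the script. On an unpatched bash, "PAYLOAD-EXECUTED" is
# printed before the script's own output; on a patched bash, only the
# script's output appears.
HTTP_USER_AGENT='() { :;}; echo PAYLOAD-EXECUTED' bash -c 'echo normal CGI output'
```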

So, what are our recommendations on dealing with Shellshock bug?

For consumers:

First of all, you should check whether your computers or network devices are affected by the bug at all. Computers running various Unix flavors are vulnerable, most importantly many Linux distributions and OS X. Obviously, Windows machines are not affected unless they have Cygwin software installed. Most embedded network devices, such as modems and routers, although Linux-based, use a different shell, BusyBox, which doesn’t have the bug. As for mobile devices, stock iOS and Android do not contain the Bash shell, but jailbroken iOS devices and custom Android firmware may have it installed.

A simple test to check whether your copy of Bash is vulnerable is this command:

env X="() { :;} ; echo vulnerable" /bin/bash -c "echo hello"

If you see “vulnerable” after running it, you know you are, and you should immediately look for a security update. Many vendors have already issued patches for their OS distributions (although Apple is still working on an official patch, there are instructions available for fixing the problem DIY-style).

For network administrators:

Obviously, you should install security updates as well, but stopping there would not be a good idea. Although a series of patches for the currently described Bash vulnerability has already been issued, researchers warn that Bash was never designed with security in mind and that new vulnerabilities may be discovered in it later. A reasonable, if somewhat drastic, option would be to replace Bash on your servers with a different shell, since just about every other shell does not interpret commands in environment variables and is therefore inherently invulnerable to this exploit.
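Before swapping shells, it is worth checking what your system actually uses; on many Linux distributions /bin/sh already points to a lighter shell such as dash, so only services that invoke bash explicitly are exposed. A quick way to check (standard commands; paths may vary by distribution):

```shell
# See what /bin/sh resolves to (often a symlink to bash or dash).
ls -l /bin/sh

# List the shells installed on this system, if the standard file exists.
[ -f /etc/shells ] && cat /etc/shells || true
```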

Another important measure would be to check all network services that can interact with Bash and harden their configurations appropriately. This includes, for example, the ForceCommand feature in OpenSSH.
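As a sketch of why ForceCommand deserves attention (assuming sshd’s documented behavior of exporting the client’s requested command string as SSH_ORIGINAL_COMMAND and running the forced command via the user’s login shell), the exposure can be simulated locally:

```shell
# Local stand-in for sshd's ForceCommand path: the client-supplied
# command string arrives in SSH_ORIGINAL_COMMAND and the forced
# command runs via bash. On a patched bash this produces no output;
# a vulnerable bash would print "exploited" despite the restrictive
# forced command.
env 'SSH_ORIGINAL_COMMAND=() { :;}; echo exploited' bash -c '/bin/true'
```

In other words, a forced command alone does not contain a vulnerable bash; the payload would run before the forced command ever executes.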

Last but not least, you should make sure that your network security tools are updated to recognize the new attack. Security vendors are already working on adding new detection tests to their software.

For web application developers:

Do not use CGI. Period.

If you are stuck with a legacy application you still have to maintain, you should at least put it behind some kind of “sanitizing proxy” service that filters out requests containing malicious environment-variable payloads. Many vendors offer specialized solutions for web application security; budget solutions using open source tools like nginx are possible as well.
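A minimal sketch of such a filter in nginx, assuming it sits in front of the legacy CGI host; the header choice and regular expression are illustrative only and would need tuning for a real deployment:

```nginx
# Inside a server/location block: reject requests whose User-Agent
# carries the characteristic Shellshock "() {" pattern. Extend the
# same check to other attacker-controlled headers as needed.
if ($http_user_agent ~ "\(\)\s*\{") {
    return 403;
}
```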

So, if the Shellshock bug can be fixed so easily, why are security researchers so worried about it? The main reason is the sheer number of legacy devices that will never be patched and will remain exposed to the exploit for years. Another burning question for IT departments: how long have hackers (or worse, the NSA) been aware of the bug, and for how long could they have been secretly exploiting it? Remember, the upper limit for this guess is 22 years!

And of course, taking an even longer perspective, the problem raises a lot of new questions regarding the latest IT fad: the Internet of Things. Now that we already have smart fridges and smart cars, and will soon have smart locks and smart thermostats installed everywhere, how can we make sure that all these devices remain secure in the long term? Vendors predict that in 10 years there will be over 50 billion “things” connected to the global network. Can you imagine patching 50 billion Bash installations? Can you afford not patching your door lock? Will you be able to install an antivirus on your car? Looks like we need to have a serious talk with IoT vendors. How about next year at our European Identity and Cloud Conference?

September 28, 2014

Anil JohnAre We Conflating Identity Verification and Compensating Controls? [Technorati links]

September 28, 2014 06:45 PM

Identity verification is the confirmation that the claimed identity information is linked to the individual making the claim. The techniques used for verification have a direct bearing on the confidence you can have in that link. But there is often a blurring between what is accepted as verification techniques and what could be considered compensating controls.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

MythicsOracle Cloud Services – What is the deal? [Technorati links]

September 28, 2014 12:24 PM

Oracle’s renewed attention to Public Cloud Services will certainly be evident in many of the topics and agenda items throughout the week at Oracle OpenWorld…

September 27, 2014

Kuppinger ColeCESG Draft Cloud Security Principles and Guidelines [Technorati links]

September 27, 2014 11:53 AM
In Mike Small

UK CESG, the definitive voice on the technical aspects of Information Security in UK Government, has published draft versions of guidance for “public sector organizations who are considering using cloud services for handling OFFICIAL information”. (Note that the guidelines are still at a draft stage (BETA) and CESG is requesting comments.)  There are already many standards that exist or are being developed around the security of cloud services (see: Executive View: Cloud Standards Cross Reference – 71124), so why is this interesting?

Firstly, there is an implied prerequisite that the information being held or processed has been classified as OFFICIAL. KuppingerCole’s advice is very clear: the first step to cloud security is to understand the risk by considering the business impact of loss or compromise of data.  CESG publishes a clear definition of OFFICIAL, which is the lowest level of classification and covers “ALL routine public sector business, operations and services”.  To translate this into business terms, the guidelines are meant for cloud services handling day-to-day operational services and data.

Secondly, the guidelines are simple, clear and concise, and simple is more likely to be successful than complex. There are 14 principles that apply to any organization using cloud services.  The principles are summarized as follows:

  1. Protect data in transit
  2. Protect data stored against tampering, loss, damage or seizure. This includes consideration of legal jurisdiction as well as sanitization of deleted data.
  3. A cloud consumer’s service and data should be protected against the actions of others.
  4. The CSP (service provider) should have and implement a security governance framework.
  5. The CSP should have processes and procedures to ensure the operational security of the service.
  6. CSP staff should be security screened and trained in the security aspects of their role.
  7. Services should be designed and developed in a way that identifies and mitigates security threats.
  8. The service supply chain should support the principles.
  9. Service consumers should be provided with secure management tools for the service.
  10. Access to the service should be limited to authenticated and authorized individuals.
  11. External interfaces should be protected.
  12. CSP administration processes should be designed to mitigate risk of privilege abuse.
  13. Consumers of the service should be provided with the audit records they need to monitor their access and the data.
  14. Consumers have responsibilities to ensure the security of the service and their data.

Thirdly, there is detailed implementation advice for each of these principles.  As well as providing technical details for each principle, it describes six ways in which the customer can obtain assurance.  These assurance approaches can be used in combination to increase confidence.  The approaches are:

  1. Service provider assertions – this relies upon the honesty, accuracy and completeness of the information from the service provider.
  2. Contractual commitment by the service provider.
  3. Review by an independent third party to confirm the service provider’s assertions.
  4. Independent testing to demonstrate that controls are correctly implemented and objectives are met in practice. Ideally this and 3 above should be carried out to a recognised standard. (Note that there are specific UK government standards here but for most commercial organizations these standards would include ISO/IEC 27001, SOC attestations to AICPA SSAE No. 16/ ISAE No. 3402 and the emerging CSA Open Certification Framework)
  5. Assurance in the service design – A qualified security architect is involved in the design or review of the service architecture.
  6. Independent assurance in the components of a service (such as the products, services, and individuals which a service uses).

These guidelines provide a useful addition to the advice that is available around the security of cloud services.  They provide a set of simple principles that are easy to understand.  These principles are backed up with detailed technical advice on their implementation and assurance.  Finally they take a risk based approach where the consumer needs to classify the data and services in terms of their business impact.

KuppingerCole has helped major European organizations to successfully understand and manage the real risks associated with cloud computing. We offer research and services to help cloud service providers, cloud security tool vendors, and end user organizations.  To learn more about how we can help your organization, just contact sales@kuppingercole.com.

September 26, 2014

WAYF NewsWAYF unaffected by “Shellshock” [Technorati links]

September 26, 2014 12:20 PM

A vulnerability in the bash shell has recently been discovered, making it possible to execute arbitrary code on Linux/Unix machines over the net. WAYF’s servers are not affected by the vulnerability, which has been dubbed Shellshock.

Kuppinger Cole20.11.2014: SAP Security made easy. How to keep your SAP systems secure [Technorati links]

September 26, 2014 11:38 AM
In KuppingerCole

Security in SAP environments is a key requirement of SAP customers. SAP systems are business critical. They must run reliably, they must remain secure – despite a growing number of attacks. There are various levels of security to enforce in SAP environments. It is not only about user management, access controls, or code security. It is about integrated approaches.
more

Mike Jones - MicrosoftJOSE -33 and JWT -27 drafts addressing Stephen Kent’s JWK comments [Technorati links]

September 26, 2014 06:48 AM

Updated JOSE and JWT drafts have been published that address JSON Web Key (JWK) secdir review comments by Stephen Kent that were inadvertently not addressed in the previous versions. Most of the changes were to the JWK draft. A few changes also had to be made across the other drafts to keep them in sync. I also added acknowledgements to several additional contributors. No breaking changes were made.

The specifications are available at:

Differences since the previous drafts can be viewed at:

HTML formatted versions are available at:

September 25, 2014

GluuThe Gluu in an NSTIC Pilot [Technorati links]

September 25, 2014 09:32 PM


Last week, there was a lot of press around the announcement of this year’s NSTIC pilots. Here at Gluu, we are excited to participate in one of these projects, and are hopeful that it will be a nice showcase for free open source software and the power of open standards for security. The goal of this blog is to shed some light on how the Gluu Server will help this project come to life. Note, these are my thoughts as CEO of Gluu, and don’t necessarily reflect the opinion of MorphoTrust, the lead contractor, NIST, the State of North Carolina, or any of the other contractors.

So what is this pilot about? In my opinion, it’s about one thing: electronic enrollment. You can think of enrollment as a kind of online registration. You know the drill–you need an account on a website, you fill out a form, pick a password, validate some “CAPTCHA”, perhaps validate your email, and you’re off to the races.

However, this ritual has a few weaknesses: there is no strong link to an actual person. With a plethora of ways for hackers (or your friends) to figure out your passwords, control of an email account hardly provides much assurance that the actual person filled out the registration form. In identity geek parlance, “identity proofing” is the process of correlating a person to an electronic credential. Email validation is a very weak form of identity proofing, sufficient only for low value transactions.

Today, in many situations, identity proofing requires you to show a printed government issued ID. As a person needs to transact more important business online, the strength of that identity-proofing process needs to increase as well. Here is an extreme example, but it makes a point. Recently I was issued a US Dept. of the Interior smart card. It was really a pain in the neck. I had to drive to Temple, TX from Austin, which is 70 miles north. This was the nearest DOI office authorized to issue these cards. I presented two forms of valid ID. At that meeting, they collected high quality biometrics (fingerprint and photo). Subsequently I was interviewed by the FBI at my office, and I provided contact information for my family and childhood friends. After background checks, my ID was ready. I asked for it to be FedEx’d. No way… I had to drive 70 miles back to Temple, TX, at which point they verified the previously collected biometrics. And after some chit-chat, I was handed my smart card–280 miles and four hours of driving later. I’ll say one thing: they were pretty darn sure that they handed that ID to Michael Schwartz. But it was an expensive and inconvenient process.

The North Carolina Food and Nutrition Services Program also needs to issue electronic credentials to citizens online. As I understand it, some people in North Carolina who need the benefits offered by this program might be quite far from a physical office. Wouldn’t it be great if there was some way we could save them the drive? There are many reasons why this makes sense. But there is only one problem: there is no alternative to the “in person” identity proof.

The magic in this pilot would be to develop an alternative to the in-person identity proof by leveraging the sensors of a mobile device. Can the camera of a mobile device collect enough data to identify me as well as a person could? It’s not that far-fetched, especially for me (now that I’ve passed age 40, let’s just say my visual acuity isn’t what it used to be…). The precedent for electronic “non-in-person” enrollment just doesn’t exist. But once it does, we could see many services that require in-person identity proofing–like voting–have a better chance of becoming a reality.

So what is the Gluu Server going to do to help make this magic happen? For those who have never heard of Gluu, we publish free open source Internet security software that is used by universities, government agencies and companies to enable Web and mobile applications to securely identify a person, and manage what information they are allowed to access.

In this pilot, there are two critical authentications: the first time you enroll, we need to identify you using information gathered from the mobile device, compared against information held by the State of North Carolina and other contextual information (like your location). This authentication might be a little bit inconvenient, but it may save you hours of driving! After this initial authentication, we will use cryptographic techniques to enable you to re-authenticate very conveniently–without even using a password.

The algorithms that do this identification (the image processing, for example) and detect fraud are proprietary. I understand that these will be supplied by MorphoTrust and the University of Texas Identity Center. The Gluu Server is used to communicate with the mobile device, and with the servers, secured inside the state environment, that analyze the data. It is the “glue” (no pun intended) between the mobile device and the backend identification engine.

Identifying a person is only half the battle. The second half is authorizing the person to access certain protected APIs that will be used by the mobile application to do its business. The Gluu Server provides a way for a domain (in this case, the State of North Carolina) to define policies that control which people, using which devices, can access which APIs. IT veterans may not be impressed. Oracle, IBM, and Computer Associates all have software that can perform this function. However, the Gluu Server is the only free open source platform that uses open standards to enable centralized access management.

Ultimately, the vision of Gluu and the vision of NSTIC are aligned: to make the Internet a safer place. It’s an honor to participate in such an effort, and we’re looking forward to serving the citizens of North Carolina to the best of our ability.

Nat SakimuraShell Shock: Critical bug in UNIX software “bash” could allow system takeover [Technorati links]

September 25, 2014 06:04 PM

The U.S. Department of Homeland Security’s Computer Emergency Readiness Team (US-CERT) has issued a warning that UNIX-based operating systems, including Linux, and Apple’s Mac OS X may be affected.

According to cybersecurity firm Trail of Bits, “Heartbleed” carried the risk of personal information such as passwords and credit card data being stolen, but unlike the bash bug it could not be used to take over a system.

Source: Critical bug in UNIX software “bash” could allow system takeover | Money News | Latest Economic News | Reuters.

This one is really bad. CGI in particular needs close attention.

Rumor has it that zsh is affected too, so if you have any scripts that shell out, you should review their contents right away…

For the details, I recommend this blog post: bashの脆弱性がヤバすぎる件 (“This bash vulnerability is way too dangerous”).

To quote:

CGI stores parameters in environment variables (*2), so tweaking an HTTP request header to something like

User-Agent: () { :; }; rm -rf /

might give you a chill. I haven’t verified it myself, though.

Source: bashの脆弱性がヤバすぎる件

Honestly, it’s terrifying.

By the way, I hear the operation of removing bash as a countermeasure is being called 抜歯 (basshi, “tooth extraction”), a pun on pulling out bash. Nicely done.

Nat SakimuraPublic comments invited on the draft ministerial ordinance on My Number notification cards and individual number cards [Technorati links]

September 25, 2014 05:32 PM

A call for public comments on the draft “Ministerial Ordinance (tentative name) on Notification Cards, Individual Number Cards, and the Provision of Specified Personal Information via the Information Provision Network System” [1][2] has been published. The comment period runs until October 22, 2014.

Under the My Number Act, it covers the format and reissuance procedures for notification cards; the format, validity period, and reissuance procedures for individual number cards; and the method of providing specified personal information via the information provision network system, including the items to be transmitted and recorded.

For example, the format of the notification card looks like this:

Appended Form No. 1 (re: Article 9)

Draft notification card

I really wish they would make this digitally readable. Even with OCR, non-standard glyphs and similar-looking characters get misread. It would be nice if they embedded the data in, say, a QR code with a digital signature; that would also enable tamper detection.

[1] http://www.soumu.go.jp/menu_news/s-news/01gyosei02_02000065.html

[2] http://search.e-gov.go.jp/servlet/Public?CLASSNAME=PCMMSTDETAIL&id=145208416&Mode=0

GluuPR from our recent NSTIC Pilot Award [Technorati links]

September 25, 2014 05:21 PM

A complete rundown of who published news about our NSTIC Pilot award, and the people on Twitter who tweeted and retweeted the announcement.

Learn more about Gluu’s involvement in the NSTIC Pilot.

Our Award

miiCard Announcement

Toopher Announcement

NIST announcement

Full Stories

Tweets

@jgrantindc just announced the 2014 #NSTIC pilots at the #GlobalIdentitySummit – congrats to @Confyrm, @MorphoTrust and @GSMA!

— NSTIC NPO (@NSTICNPO) September 17, 2014

11 retweets.

Proud to be partnering with @MorphoTrust @UTCenterforID @GluuFederation @Toopher & Debra Diener on eID NSTIC pilot #GlobalIdentitySummit

— miiCard (@miicard) September 17, 2014

8 retweets.

Third @NSTICNPO pilot: @MorphoTrust to leverage state-issued identity solutions to improve citizen services. #GlobalIdentitySummit — Center for Identity (@UTCenterforID) September 17, 2014

5 retweets.

@NSTICNPO: @jgrantindc just announced the 2014 #NSTIC pilots at the #GlobalIdentitySummit – congrats to @Confyrm, @MorphoTrust and @GSMA!

— Emma Lindley (@EmLindley) September 17, 2014

5 retweets.

Learn more about @Morphotrust‘s #NSTIC pilot grant to create a trusted eID: http://t.co/la5w7O7iUZ — Center for Identity (@UTCenterforID) September 17, 2014

2 retweets.

Our NSTIC proposal has been funded! Secure digital ID coming to NC: http://t.co/UA3o9GmK9q @MorphoTrust @Toopher @miicard @UTCenterforID

— Gluu (@GluuFederation) September 17, 2014

2 retweets.

Congrats to @Morphotrust on @NSTICNPO pilot grant award. Excited to partner with @ncdhhs @gluu @toopher @miiCard to create a trusted eID! — Center for Identity (@UTCenterforID) September 17, 2014

3 retweets.

@NSTICNPO awards pilot grant to @MorphoTrust secure ID project w/ partners @UTCenterforID @GluuFederation @miicard http://t.co/Kc5guu108H

— Toopher (@toopher) September 17, 2014

2 retweets.

.@NSTICNPO awards MorphoTrust grant to create #eID with @toopher, @GluuFederation, @miicard and @UTCenterforID http://t.co/fSHt8O8L0Z — MorphoTrust USA (@MorphoTrust) September 17, 2014

4 retweets.

3 Pilot Projects Receive Grants to Improve Online Security & Privacy http://t.co/5Gg4HmDBcQ Proud to be involved w @MorphoTrust #trustonline

— miiCard (@miicard) September 17, 2014

1 retweet.

@Toopher part of group awarded $1.47M NIST grant to pilot Secure eID, led by @MorphoTrust, State of NC http://t.co/F1OJKOmGXB @DarkReading — Alexa Leigh (@ToopherLex) September 17, 2014

2 retweets.

#FF to our @NSTICNPO pilot partners @MorphoTrust @UTCenterforID @GluuFederation @Toopher & Debra Diener http://t.co/uRI7sfX89d

— miiCard (@miicard) September 19, 2014

2 retweets.

.@MorphoTrust gets pilot grant from @usnistgov to create secure electronic ID http://t.co/GsyZZr4BCM — U.S. Commerce Dept. (@CommerceGov) September 19, 2014

2 retweets.

Kaliya Hamlin - Identity WomanFacebook so called “real names” and Drag Queens [Technorati links]

September 25, 2014 02:16 PM

So, just when we thought the Nym Wars were over, at least with Google / Google+.

Here is my post about those ending including a link to an annotated version of all the posts I wrote about my personal experience of it all unfolding.

Facebook decided to pick on the Drag Queens, including a famous group of them, the Sisters of Perpetual Indulgence.  Back then I called for people with personas to unite and work together to resist what Google was doing. It seems that now that Facebook has taken on the Drag Queens, a real version of what I called at the time the Million Persona March will happen.

One of those affected, Sister Sparkle Plenty, created this graphic and posted it on Facebook:


Facebook meets with LGBT Community Over Real Name Policy  on Sophos’ Naked Security blog.

EFF covers it with Facebook’s Real Name Policy Can Cause Real World Harm in LGBT Community.

Change.org has a petition going. Facebook Allow Performers to Use Their Stage Names on their Facebook Accounts.


CourionAn Epic Connection - Managing Access Risk for Healthcare Applications [Technorati links]

September 25, 2014 01:16 PM

Access Risk Management Blog | Courion

Nick Berents: As the leading provider of IAM solutions for healthcare organizations, Courion’s connector framework is designed to interface with a wide variety of IT systems, including popular healthcare applications from vendors such as Epic.

Healthcare institutions continue to move rapidly to adopt a range of technology solutions for improving patient outcomes and reducing costs by automating clinical information and processes.

In order to effectively address the security concerns posed by these applications, healthcare organizations turn to identity and access management solutions to ensure that users, such as physicians or billing clerks, are provided timely and efficient access to information and that their access rights are consistent with their roles and enterprise security policy. These IAM solutions require the use of connectors to various healthcare-specific and general use applications in order to create, manage and terminate user access rights in accordance with policies and regulations.

Courion recently published a technology brief for healthcare organizations interested in implementing and managing user identity profiles for Epic and other systems throughout their organization.

To download a copy of this paper, click here.

blog.courion.com

Kuppinger Cole11.11.2014: How to protect your data in the Cloud [Technorati links]

September 25, 2014 07:30 AM
In KuppingerCole

More and more organizations and individuals are using the Cloud and, as a consequence, the information security challenges are growing. Information sprawl and the lack of knowledge about where data is stored are in stark contrast to the internal and external requirements for its protection. To meet these requirements it is necessary to protect data everywhere, but especially in the Cloud. With employees using services such as iCloud or Dropbox, the risk of information being out of control and...
more
September 24, 2014

MythicsOracle SOA Suite 12c:  Enhanced Developer Productivity [Technorati links]

September 24, 2014 08:37 PM

Oracle has announced the General Availability release of its long anticipated Oracle SOA Suite 12c. This release includes a number of enhancements and features over the…

Gerry Gebel - AxiomaticsA Closer Reading of the NIST report on ABAC [Technorati links]

September 24, 2014 02:30 PM

On October 1st, I will host a webinar that focuses on the NIST Special Publication 800-162 Guide to Attribute Based Access Control (ABAC) Definition and Considerations, published January 2014. I highly recommend the report for anyone who has responsibility for, or an interest in, authorization technologies and approaches.

The NIST report is a seminal event for the industry as it is their first report on this topic. Many organizations, public and private, look to NIST for guidance on a wide range of IT topics. Having a NIST document on ABAC is a strong signal that this is a technology worthy of further examination and exploration.

In this webinar, I’ll walk through key parts of the report and add comments based on our experiences at Axiomatics. I hope to see you there and look forward to your comments and questions. Please register for the webinar here.


Ludovic Poitou - ForgeRockJoin us for the 2014 European IRM Summit, Nov 3-5 2014… [Technorati links]

September 24, 2014 01:09 PM
Photo by https://www.flickr.com/photos/tochis

There are conferences and there are Conferences. The Conferences are the ones you remember: they happened in unusual places, they had a different atmosphere, and you met lots of friendly and bright people. They are the ones you leave with the satisfaction of having learned something and received value, looking forward to coming back next year.

The IRM Summit is one of these Conferences. The next European IRM Summit takes place November 3–5, near Dublin, Ireland, at the Powerscourt Estate pictured here. It’s a two-day event where you can learn about and discuss the Identity Relationship Management space: standards, platforms, solutions… There will be many presentations, demos and trainings, plenty of time for discussions and meetings, a free half-day Kantara Initiative workshop on “Trusted IDentity Exchange (TIDX)”, and some fun. I can already hear the fiddle, the pipes and the harp, and smell the Guinness! And I hope the weather will let us enjoy the wonderful garden.

Check out the agenda and the list of speakers, and don’t wait until the last minute to register. While there are hundreds of rooms available, they are still limited in number. Last year’s summit was sold out!

I’m looking forward to seeing you in beautiful Ireland!


Filed under: Identity Tagged: conference, Dublin, europe, ForgeRock, identity, Ireland, IRM, IRMSummit, IRMSummit2014, IRMSummitEurope, openam, opendj, openidm, openig, opensource

Kuppinger ColeIntelligent Identity Management in the Cloud - A Use Case [Technorati links]

September 24, 2014 10:50 AM
In KuppingerCole Podcasts

Most organisations fail to plan identity management in the Cloud. They adopt a variety of software-as-a-service solutions each requiring its own identity repository with a periodic synchronisation that fails to provide sufficient governance over de-provisioned accounts. This webinar looks at the issues with managing identities in the Cloud and one potential solution.



Watch online

Kuppinger Cole18.11.2014: Database Security On and Off the Cloud [Technorati links]

September 24, 2014 02:14 AM
In KuppingerCole

Continued proliferation of cloud technologies offering on-demand scalability, flexibility and substantial cost savings means that more and more organizations are considering moving their applications and databases to IaaS or PaaS environments. However, migrating sensitive corporate data to a 3rd party infrastructure brings with it a number of new security and compliance challenges that enterprise IT has to address. Developing a comprehensive security strategy and avoiding point solutions for...
more
September 23, 2014

Kaliya Hamlin - Identity WomanWe “won” the NymWars? did we? [Technorati links]

September 23, 2014 11:43 PM

In mid-July, a friend called me up out of the blue and said “we won!”

“We won what?” I asked.

“Google just officially changed its policy on Real Names”

He said I had to write a post about it. I agreed, but also felt disheartened.
We won, but we didn’t: it took three years before they changed.

They also created a climate online where it was OK and legitimate for service providers to insist on real names.

For those of you not tracking the story – I, along with many thousands of people, had our Google+ accounts suspended – this post is an annotated version of all of those.

This was the Google Announcement:

When we launched Google+ over three years ago, we had a lot of restrictions on what name you could use on your profile. This helped create a community made up of real people, but it also excluded a number of people who wanted to be part of it without using their real names.

Over the years, as Google+ grew and its community became established, we steadily opened up this policy, from allowing +Page owners to use any name of their choosing to letting YouTube users bring their usernames into Google+. Today, we are taking the last step: there are no more restrictions on what name you can use.

We know you’ve been calling for this change for a while. We know that our names policy has been unclear, and this has led to some unnecessarily difficult experiences for some of our users. For this we apologize, and we hope that today’s change is a step toward making Google+ the welcoming and inclusive place that we want it to be. Thank you for expressing your opinions so passionately, and thanks for continuing to make Google+ the thoughtful community that it is.

There was lots of coverage.

Google kills real names from ITWire.

Google Raises White Flag on Real Names Policy in the Register.

3 Years Later Google Drops its Dumb Real Name Rule and Apologizes in TechCrunch.

Change Framed as No Longer Having Limitations Google Offers Thanks for Feedback in Electronista

Google Stops Forcing All Users to Use Their Real Names in Ars Technica

Most important was the “real” apology Skud wrote, the one she thought Google should have given:

When we launched Google+ over three years ago, we had a lot of restrictions on what name you could use on your profile. This helped create a community made up of people who matched our expectations about what a “real” person was, but excluded many other real people, with real identities and real names that we didn’t understand.

We apologise unreservedly to those people, who through our actions were marginalised, denied access to services, and whose identities we treated as lesser. We especially apologise to those who were already marginalised, discriminated against, or unsafe, such as queer youth or victims of domestic violence, whose already difficult situations were worsened through our actions. We also apologise specifically to those whose accounts were banned, not only for refusing them access to our services, but for the poor treatment they received from our staff when they sought support.

Everyone is entitled to their own identity, to use the name that they are given or choose to use, without being told that their name is unacceptable. Everyone is entitled to safety online. Everyone is entitled to be themselves, without fear, and without having to contort themselves to meet arbitrary standards.

As of today, all name restrictions on Google+ have been lifted, and you may use your own name, whatever it is, or a chosen nickname or pseudonym to identify yourself on our service. We believe that this is the only just and right thing to do, and that it can only strengthen our community.

As a company, and as individuals within Google, we have done a lot of hard thinking and had a lot of difficult discussions. We realise that we are still learning, and while we appreciate feedback and suggestions in this regard, we have also undertaken to educate ourselves. We are partnering with LGBTQ groups, sexual abuse survivor groups, immigrant groups, and others to provide workshops to our staff to help them better understand the needs of all our users.

We also wish to let you know that we have ensured that no copies of identification documents (such as drivers’ licenses and passports), which were required of users whose names we did not approve, have been kept on our servers. The deletion of these materials has been done in accordance with the highest standards.

If you have any questions about these changes, you may contact our support/PR team at the following address (you do not require a Google account to do so). If you are unhappy, further support can be found through our Google User Ombuds, who advocates on behalf of our users and can assist in resolving any problems.

BotGirl chimed in with her usual clear, articulate videos about the core issues.


And this talk by Alessandro Acquisti, "Why privacy matters", also surfaced.


Google has learned something from this, but it seems like other big tech companies have not.


Mike Jones - Microsoft: JOSE -32 and JWT -26 drafts addressing IETF Last Call comments [Technorati links]

September 23, 2014 10:48 PM

IETF logoNew versions of the JSON Object Signing and Encryption (JOSE) and JSON Web Token (JWT) specifications have been published incorporating feedback received in IETF Last Call comments. Thanks to Russ Housley and Roni Even for their Gen-ART reviews, to Tero Kivinen, Scott Kelly, Stephen Kent, Charlie Kaufman, and Warren Kumari for their secdir reviews, to Tom Yu for his individual review, and to James Manger and Chuck Mortimore who provided feedback based on deployment experiences, as well as to the many JOSE and OAuth working group members who pitched in to discuss resolutions. Many clarifications resulted. No breaking changes were made.
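For readers who haven't followed the JOSE work: a JWT is a signed JSON object in the JWS compact serialization, i.e. three base64url-encoded segments (header, payload, signature) joined by dots. A minimal HS256 sketch using only the Python standard library (the key and claims below are invented for illustration):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWS uses base64url without '=' padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt_hs256(claims: dict, key: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    # Signing input is header.payload; the HMAC of it becomes the third segment
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

token = make_jwt_hs256({"iss": "https://example.com", "sub": "alice"}, b"secret-key")
print(token.count("."))  # 2: header.payload.signature
```

Real deployments should use a vetted JOSE library rather than hand-rolled crypto; the sketch only illustrates the wire format these drafts standardize.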

The specifications are available at:

HTML formatted versions are available at:

Kuppinger Cole: So Your Business is Moving to the Cloud - Will it be Azure or Naked Cloud? [Technorati links]

September 23, 2014 04:29 PM
In KuppingerCole Podcasts

Most companies do not plan their migration to the cloud. They suddenly find that their organisation is already using multiple cloud services, each of which seemed like a good idea at the time, but which together form a disparate approach to cloud services with no strategic vision, a significant training burden and little governance over their cloud-based applications and infrastructure.



Watch online

Kuppinger Cole: 16.10.2014: IAM for the user: Achieving quick-wins in IAM projects [Technorati links]

September 23, 2014 02:55 PM
In KuppingerCole

Many IAM projects struggle or even fail because demonstrating their benefit takes too long. Quick-wins that are visible to the end users are a key success factor for any IAM program. However, just showing quick-wins is not sufficient, unless there is a stable foundation for IAM delivered as result of the IAM project. Thus, building on an integrated suite that enables quick-wins through its features is a good approach for IAM projects.
more

September 21, 2014

Anil John: Who Else Wants a Portable Token as the First Authentication Factor? [Technorati links]

September 21, 2014 08:30 PM

When delivering digital public services, there is a great deal of interest in leveraging a strong token across multiple relying parties, ideally one that has already been obtained by or issued to an individual. This blog post identifies some of the challenges to overcome to enable a true bring-your-own-token experience.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Julian Bond: When we marched against the Iraq war, the demands were fairly simple even if they were ignored. [Technorati links]

September 21, 2014 07:19 AM
When we marched against the Iraq war, the demands were fairly simple even if they were ignored.

But when we march against "Climate Change" what are we asking for? What do we want to happen next?

http://peoplesclimate.org/global/#map

I suspect marching about climate change is like dancing about architecture. But if it makes us feel better, maybe it's still worthwhile.

http://www.theguardian.com/environment/live/2014/sep/21/peoples-climate-march-live
 Global »

[from: Google+ Posts]
September 20, 2014

Julian Bond: New Taboos [Technorati links]

September 20, 2014 11:55 AM
New Taboos

From an essay by John Shirley.
http://www.amazon.com/Taboos-Outspoken-Authors-John-Shirley/dp/1604867612

From http://www.dailykos.com/story/2013/07/21/1225129/-Sci-Fi-Fantasy-Club-New-Taboos-by-John-Shirley
What if the phrase "Obscene Profits" were not just a figure of speech?  What if the practice of amassing huge profits while exploiting one's employees, or while contaminating the environment, or while lying to the public, was actually regarded as revolting, and the people who engaged in such practices were shunned as pariahs?

We need some new taboos that we reject utterly. That make us sick. That we will not allow under any circumstances. That are never acceptable. No end ever justifies these means. In the words of Rorschach from Watchmen, "Never compromise. Not even in the face of Armageddon".

Here's a short and incomplete list of possible new taboos:
1) Polluting or toxifying the environment. Particularly by corporate action for profit but also by individual action.
2) Lying or deceiving for profit. Especially to manipulate children for profit.
3) Using political influence for personal gain.
4) Hiding someone else's theft, fraud, dishonesty or pollution to protect one's own part in the system.
5) Discriminating on the basis of race, gender or sexual orientation.
6) Making unreasonably large profits, e.g. by taking advantage of a monopoly position to price gouge, by avoiding tax, or by paying absurd top salaries.
7) Exploiting workers via uneconomic wages or contracts, e.g. zero-hour contracts or paying minimum wage as opposed to a living wage.
8) Exploiting workers via unsafe workplaces and practices for profit.
9) Torture under any circumstances.
10) Engaging in warfare except in the most dire necessity.

These aren't hard. They're just the basic kindergarten rules of behaviour we teach kids.
- Don't poison other children
- Don't lie
- Don't steal
- Don't hurt other kids just to get what you want
- Don't take more than your share of the pudding

So now apply them to adults.
 New Taboos (Outspoken Authors): John Shirley: 9781604867619: Amazon.com: Books »
New Taboos (Outspoken Authors) [John Shirley] on Amazon.com. *FREE* shipping on qualifying offers.

Mixing outlaw humor, sci-fi adventure, and cutting social criticism


[from: Google+ Posts]
September 19, 2014

Ludovic Poitou - ForgeRock: Some OpenIG related articles… [Technorati links]

September 19, 2014 07:25 AM

My coworkers have been better than me at writing blog articles about OpenIG (or at least faster).

Here are a few links:

Simon Moffat describes the benefits of OAuth 2.0 and OpenID Connect, and how to start using them with OpenIG 3.0.

Warren Strange went a little bit further: along with a short introduction to OpenIG, he made sample configuration files available on GitHub for getting started with OpenID Connect on OpenIG 3.0.

Mark, who runs the ForgeRock documentation team, describes the improvements we're making to the Introduction section of the OpenIG docs based on feedback received since the release of OpenIG 3.0.


Filed under: Identity Gateway Tagged: ForgeRock, identity, identity gateway, openig, opensource

Anil John: Please Take My 2014 Reader Survey [Technorati links]

September 19, 2014 03:30 AM

I want to make the content on my blog more relevant to your needs and interests. To do that, would you please take a few minutes to fill out my reader survey?

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

September 18, 2014

Ludovic Poitou - ForgeRock: New ForgeRock product available: OpenIG 3.0 [Technorati links]

September 18, 2014 04:11 PM

Since the beginning of the year, I’ve taken an additional responsibility at ForgeRock: Product Management for a new product finally named ForgeRock Open Identity Gateway (built from the OpenIG open source project).

OpenIG is not really a new project, as it’s been an optional module of OpenAM for the last 2 years. But with a new engineering team based in Grenoble, we’ve taken the project on a new trajectory and made a full product out of it.

OpenIG 3.0.0 was publicly released on August 11th and announced here and there. But as I was on holiday with the family, I had not yet written a blog post about it.

So what is OpenIG, and what's new in the 3.0 release?

OpenIG is a web and API access management solution that allows you to protect enterprise applications and APIs using open standards such as OAuth 2.0, OpenID Connect and SAMLv2.

The Password Capture and Replay and SAMLv2 federation support features have been enhanced since the previous version. But OpenIG 3.0 also brings several new features:

I’ve presented publicly the new product and features this week through a Webinar. The recording is now available, and so is the deck of slides that I’ve used.

You can download OpenIG 3.0 from ForgeRock.com, or if you would like to preview the enhancements that we’ve already started for the 3.1 release, get a nightly build from ForgeRock.org.

Play with it and let us know how it is working for you, whether by email, in a blog post or in an article on our wiki. I will be reviewing them, relaying and advertising your work. And I'm also preparing a surprise for the authors of the most outstanding use cases!

I’m looking forward to hear from you.


Filed under: Identity, Identity Gateway Tagged: authentication, authorization, ForgeRock, gateway, identity, oauth2, openidconnect, openig, opensource, product, release, samlv2, security

Nat Sakimura: Do We Have a Round Wheel Yet? The Maturity and Trends of Identity Standards [Technorati links]

September 18, 2014 12:58 PM

The ID&IT site still shows the placeholder title, but tomorrow at the ANA Hotel I will be giving a roughly 30-minute talk entitled "Do We Have a Round Wheel Yet? The Maturity and Trends of Identity Standards". The session number is GE-05. I will be appearing as the "foreign talent", Nat Sakimura.

Registration is here → http://nosurrender.jp/idit2014/registration.html

The talk builds on the Cloud Identity Summit keynote slides I obtained directly from Ian Glazer, who until March was a Gartner analyst covering identity. Weaving together his thinking, my own, and things that came up over breakfast with Howard Schmidt, the former Special Assistant to the U.S. President for Cybersecurity, I will survey the state of international standards for authentication, authorization, attributes, and provisioning, including the question of whether they are ready to use today.

Since I am appearing as "foreign talent", I asked for simultaneous interpretation, but in these times of tight budgets it was not approved, so I will deliver the talk in Japanese, acting as my own one-man simultaneous interpreter. Yes, I know: that makes me less "foreign talent" and more "faint-hearted". Even so, I will bravely open with the first slide in English, so please offer a lukewarm laugh (_o_).

Do we have a round wheel yet?

September 17, 2014

Ian Glazer: Finding your identity (content) at Dreamforce [Technorati links]

September 17, 2014 10:07 PM

Dreamforce is simply a force of nature (excuse the pun). There are more sessions (1,400+) than you could possibly attend even if you cloned yourself a few times over. And that's not even including some amazing keynotes. Needless to say, there's a ton to occupy your time when you come join us.

The Salesforce Identity team has been putting together some awesome sessions. Interested in topics such as single sign-on for mobile applications, stronger authentication, or getting more out of Active Directory? You need to check out our sessions!

I’ve put together a handy list of all of the identity and access management content at Dreamforce 14. Hope you find it helpful and I cannot wait to meet all of the Salesforce community grappling with identity management issues.

Monday, Oct. 13th

Implementing Single Sign On (SSO) to Improve User Experience and Drive Adoption

Single Sign On is a fairly simple concept. Yet implementing it within your organization can have its challenges and complexities. Join us as we discuss common business challenges and the solutions that leverage Salesforce capabilities, such as SAML, OAuth, and Authentication Providers. You’ll hear about the best practices and common solution approaches that customers utilized in order to implement SSO within their own environments.

InterContinental Ballroom C: 1:00 p.m. – 1:40 p.m.

Open in Agenda Builder

Integrating Active Directory with Salesforce: Keep User Identities in Sync

With Active Directory being the system of record for user identities at many organizations, keeping users in sync with Salesforce is a challenge. Salesforce Identity Connect allows you to sync users from Active Directory into Salesforce, authenticate users into Salesforce using Active Directory credentials, and provide seamless Single Sign-on using Integrated Windows Authentication (IWA). Join us to learn about the product, and case studies on how customers are using it to simplify their architecture.

Grand Ballroom AB: 4:00 p.m. – 4:40 p.m.

Open in Agenda Builder

Customizing User Authentication with Login Flows

The Winter 15 release introduces a powerful new feature called Login Flows. Join us to learn how to use Login Flows to completely customize your login experience, integrate with different two-factor authentication methods, leverage identity verification services and more. We'll get hands-on, building a custom process from scratch, demonstrate YubiKey and Twilio integrations, and you'll walk away ready to bring this power to your own deployment.

Room 2007, Moscone West: 4:00 p.m. – 4:40 p.m.

Open in Agenda Builder

Tuesday, Oct. 14th

Implementing Single Sign-On in Mobile Applications with Salesforce Identity

Want to learn how to integrate single sign-on and policy management into mobile applications? Custom applications that use secure SAML single sign-on and OAuth allow you to seamlessly deliver an integrated experience for your users. Come learn how Salesforce Identity makes this possible directly from your Salesforce org. We’ll get hands-on and build an app!

Room 2008, Moscone West: 11:00 a.m. – 11:40 a.m.

Open in Agenda Builder

Delivering Single Sign-On and Identity for Employees, Customers, and Partners

With the proliferation of cloud applications, mobile devices, and industry trends like Bring Your Own Device (BYOD), IT organizations are increasingly challenged with how to manage and gain transparency into user access to systems and applications. Salesforce Identity helps address these issues by providing identity and access management (IAM) for web and mobile applications, built on the trusted Salesforce1 Platform. Join us to make sense of things like federation, OAuth, and social sign-on in order to connect you with your employees, customers, and partners.

Grand Ballroom AB: 12:00 p.m. – 12:40 p.m.

Open in Agenda Builder

Wednesday, Oct. 15th

Deploying Single Sign-on and Provisioning for Active Directory

Join us to learn about Salesforce Identity Connect, and how it integrates Salesforce and Force.com apps with Active Directory for data synchronization and seamless single sign-on using Integrated Windows Authentication (IWA). We’ll walk through a full deployment of Identity Connect, customizing it using JavaScript, and you’ll learn how salesforce.com IT is implementing it to solve their use case.

Room 2011, Moscone West: 9:00 a.m. – 9:40 a.m.

Open in Agenda Builder

Deploying Single Sign-On and Identity for Employees, Customers, and Partners

Salesforce Identity helps address challenges around identity and access management for employees, customers, and partners using identity capabilities built directly into the Salesforce1 Platform. In this session, you’ll learn how to deploy Salesforce Identity to solve major use-cases like single sign-on and provisioning with hands-on demonstrations of setting this all up from scratch. You’ll leave empowered to put this to use in your own org.

Room 2008, Moscone West: 3:15 p.m. – 3:55 p.m.

Open in Agenda Builder

OpenID Connect and Single Sign-On for Beginners

Websites and applications are implementing social single sign-on to allow users to login using trusted authentication providers such as Google, Facebook, and even Salesforce. Join us to learn how to configure the OpenID Connect authentication provider to allow users to authenticate at Google to access a Salesforce environment. We’ll also look at how you can relieve yourself of the burden of password management by having your web app login users via Salesforce.

Room 2009, Moscone West: 4:15 p.m. – 4:55 p.m.

Open in Agenda Builder

Thursday, Oct. 16th

Social Single Sign-On with OpenID Connect

Get hands-on with the new Salesforce Identity feature OpenID Connect to link your Salesforce or Community Identity with a Social Identity such as Google+. Once connected, you can enable single sign-on and even share data between services. See how you can update a Chatter Profile based on a Social Identity and share actions in your Community on social networks. Enable employees to access and synchronize their Google Apps hosted data from within Salesforce. By the end of this session you will have the confidence to go and use OpenID Connect in your own projects.

Room 2008, Moscone West: 9:30 a.m. – 10:10 a.m.

Open in Agenda Builder

Seamless Authentication with Force.com Canvas

Join us to learn how to leverage SSO technologies (such as SAML) with Force.com Canvas. We’ll show examples of using Canvas with your existing SSO application to provide a seamless user experience, how you can use Canvas and Salesforce Identity to demo cross-org Visualforce pages, and we’ll show this behaving in Salesforce.

Room 2007, Moscone West: 9:30 a.m. – 10:10 a.m.

Open in Agenda Builder

 See you in October!


Nishant Kaushik - Oracle: My Relationship with Metadata: It's Complicated! [Technorati links]

September 17, 2014 09:27 PM

Ever since the Snowden revelations broke, there has been a lot of interest in metadata, with a lot of ink (or should that be bytes?) devoted to defining exactly what it is, where it can be gathered from, who is capable (and how) of doing said gathering, and most importantly of all, if it is even important enough to warrant all the discussion. Official statements of “We’re only collecting metadata” have attempted to downplay the significance and privacy implications of the metadata collection. Organizations like the EFF have tried to counter that with simple to understand examples (like the ones below) that show how a conclusion could be drawn by having access to just the metadata and not the data (the content).

[EFF infographic: examples of conclusions drawn from metadata alone]

Debunking the Myth of “It’s Just Metadata”. With Data

And today, I read the most easy-to-understand account of just how much can be gleaned from metadata. A group of researchers were given access to “the same type of metadata that intelligence agencies would collect, including phone and email header information” for just one person, Ton Siedsma, for the period of just one week. They gathered this metadata by installing a data-collecting app on his phone. Here’s what they were able to do with it:

As the article points out, the intelligence agencies have access to a lot more metadata (in volume and over time), and much more sophisticated ways to analyze said metadata. So you can see why all the privacy advocates are raising alarms about this.
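To make the point concrete, here is a small sketch of our own (not from the article): using only Python's standard library, it extracts nothing but the headers of a message, the "metadata", and still recovers who talked to whom and when. The addresses are invented for illustration:

```python
from email import message_from_string

raw = """\
From: ton@example.org
To: lawyer@firm.example
Date: Mon, 15 Sep 2014 02:13:00 +0200
Subject: (irrelevant)

The body is the 'data'; everything above it is the 'metadata'.
"""

msg = message_from_string(raw)
# Who talked to whom, and when, without reading a word of the content.
metadata = {k: msg[k] for k in ("From", "To", "Date")}
print(metadata["From"])  # ton@example.org
```

A late-night email to a lawyer is just one header triple, yet it invites exactly the kind of inference the researchers demonstrated, which is the whole problem with "it's just metadata".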


All this, and we haven't even touched on all the other organizations that are able to gather this metadata, and whose business models depend on selling data and user dossiers to advertisers and other data brokers.

And Yet, I Can Haz Metadata?

With all this, you'd think that I, with all the privacy-related advocacy that I do on Twitter, would hate metadata. But the fact is that it's a complicated relationship. In looking at the future of Security, I've talked recently about how we can make it possible for us to have good security that does not negatively impact usability. But that model relies on doing more work in the background using environmental, transactional and behavioral information – aka metadata. Bob Blakley long ago talked about the move from Authentication to Recognition, which relies on continuous data gathering through different sensors to help in identifying the person or device interacting with the service. Most multi-factor authentication and risk-analysis services are already there, and going deeper.
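The recognition model can be caricatured in a few lines: fold weak environmental and behavioral signals into a risk score, and only interrupt the user with an explicit challenge when the score crosses a threshold. The signals, weights, and threshold below are invented purely for illustration:

```python
def risk_score(signals: dict) -> float:
    # Invented weights: each unfamiliar signal adds risk.
    weights = {"new_device": 0.4, "new_location": 0.3, "odd_hour": 0.2, "new_network": 0.1}
    return sum(w for name, w in weights.items() if signals.get(name))

def decide(signals: dict, threshold: float = 0.5) -> str:
    # Below the threshold the user is silently "recognized"; above it, we challenge.
    return "challenge" if risk_score(signals) >= threshold else "allow"

print(decide({"new_device": True}))                        # allow
print(decide({"new_device": True, "new_location": True}))  # challenge
```

The point of the sketch is the trade: every signal that feeds the score is exactly the kind of metadata whose collection raises the privacy questions discussed above.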

All of this means that the security frameworks enterprises rely on will need to be able to gather and have access to all this metadata. This was much easier in the days of employer-issued laptops and phones. BYOD and IoT completely change the landscape by creating new concerns regarding the what, when and how of metadata gathering by enterprises. Commercial entities also need to make their offerings more secure, which is to the benefit of their customers. But how does that mesh with the resulting need to gather metadata about their customers, a need that would ordinarily get a viscerally negative reaction if disclosed? The individual me is constantly having vigorous debates on this topic with the security practitioner me, leading to many amused (and some alarmed) glances from my fellow subway riders. At my core, I'm driven by the belief that we can find a way to balance the metadata gathering necessary to support the security models we're advocating while giving individuals the necessary controls to manage and preserve their privacy in an informed way.

One thing is clear. Because one person’s metadata is another person’s data, enterprises need to start dealing with the collection, disclosure, usage and protection requirements of this PII (yes, I just classified Metadata as PII. Let the flame wars begin). As are laws. And engineers. It is likely going to get hidden inside those interminable ToS documents nobody ever reads. And employment contracts.

It’s going to be interesting for a while. And complicated.

OpenID.net: General Availability of Microsoft OpenID Connect Identity Provider [Technorati links]

September 17, 2014 02:45 PM

Microsoft has announced the general availability of the Azure Active Directory OpenID Connect Identity Provider.  It supports the discovery of provider information as well as session management (logout).  On this occasion, the OpenID Foundation wants to recognize Microsoft for its contributions to the development of the OpenID Connect specifications and congratulate them on the general availability of their OpenID Provider.

Don Thibeau
OpenID Foundation Executive Director

OpenID.net: Review of Proposed Errata to OpenID Connect Specifications [Technorati links]

September 17, 2014 01:05 AM

The OpenID Connect Working Group recommends the approval of Errata to the following specifications:

An Errata version of a specification incorporates corrections identified after the Final Specification was published. This note starts the 45 day public review period for the specification drafts in accordance with the OpenID Foundation IPR policies and procedures. This review period will end on Friday, October 31, 2014. Unless issues are identified during the review that the working group believes must be addressed by revising the drafts, this review period will be followed by a seven day voting period during which OpenID Foundation members will vote on whether to approve these drafts as OpenID Errata Drafts. For the convenience of members, voting may begin up to two weeks before October 31st, with the voting period still ending on Friday, November 7, 2014.

These specifications incorporating Errata are available at:

The corresponding approved Final Specifications are available at:

A description of OpenID Connect can be found at http://openid.net/connect/. The working group page is http://openid.net/wg/connect/. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specifications in a way that enables the working group to act upon your feedback by (1) signing the contribution agreement at http://openid.net/intellectual-property/ to join the working group (please specify that you are joining the “AB+Connect” working group on your contribution agreement), (2) joining the working group mailing list at http://lists.openid.net/mailman/listinfo/openid-specs-ab, and (3) sending your feedback to the list.

A summary of the errata corrections applied is:

— Michael B. Jones – OpenID Foundation Board Secretary

OpenID.net: Review of Proposed Implementer’s Draft of OpenID 2.0 to OpenID Connect Migration Specification [Technorati links]

September 17, 2014 12:59 AM

The OpenID Connect Working Group recommends approval of the following specification as an OpenID Implementer’s Draft:

An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification. This note starts the 45 day public review period for the specification drafts in accordance with the OpenID Foundation IPR policies and procedures. This review period will end on Friday, October 31, 2014. Unless issues are identified during the review that the working group believes must be addressed by revising the drafts, this review period will be followed by a seven day voting period during which OpenID Foundation members will vote on whether to approve these drafts as OpenID Implementer’s Drafts. For the convenience of members, voting may begin up to two weeks before October 31st, with the voting period still ending on Friday, November 7, 2014.

This specification is available at:

A description of OpenID Connect can be found at http://openid.net/connect/. The working group page is http://openid.net/wg/connect/. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specifications in a way that enables the working group to act upon your feedback by (1) signing the contribution agreement at http://openid.net/intellectual-property/ to join the working group (please specify that you are joining the “AB+Connect” working group on your contribution agreement), (2) joining the working group mailing list at http://lists.openid.net/mailman/listinfo/openid-specs-ab, and (3) sending your feedback to the list.

— Michael B. Jones – OpenID Foundation Board Secretary

September 16, 2014

Gluu: The Gluu Server: the WordPress of IAM [Technorati links]

September 16, 2014 05:20 PM


One of our goals for the Gluu Server is to replicate the success of WordPress, the popular open source content management system (CMS) used by more than 72 million domains on the Internet (including this one!).

Along the way, we’ve identified many similarities between the two platforms.

Just as it needs a CMS, every domain on the Internet needs a solution for authenticating people and controlling access to resources. Moreover, a CMS and an identity and access management (IAM) system are invariably intertwined: if you think of a CMS as the house, the IAM system is the lock that restricts or enables access to the appropriate resources and people.

Due to the inherently custom nature of both a CMS and an IAM system, only an extremely flexible, scalable and open solution backed by a large community of developers can broadly serve market needs.

WordPress provides the foundation for Fortune 500 organizational websites all the way down to individual blogs. While the developers of WordPress at Automattic provide enterprise support, training and more–just as Gluu does for the Gluu Server–many small and medium businesses are able to utilize the platform thanks in part to the large community of independent and affordable service providers.

As modern security needs and access to third-party apps continue to make IAM a similarly universal requirement, a utility open source platform supported by a strong community of developers is needed.

Currently the market for identity and access management is widely distributed with no one solution able to meet the needs of the majority of organizations. SaaS solutions can be quick and affordable, but not flexible or secure enough for many organizations. Proprietary enterprise software, on the other hand, may be good for the Fortune 500 but is too expensive for general market adoption.

The few open source solutions that are available today are either not comprehensive enough to provide a unified solution, forcing developers to mix and match several existing open source projects to meet their needs, or have restrictive licenses that make the software enterprise priced when used in production. Both limitations reduce usability and community development.

Like WordPress, the Gluu Server is free to use in production, provides a large enough feature set for most organizations out of the box, and is extensible enough to support custom features as needed. By enabling people to use and build upon the Gluu Server for free, we envision a worldwide community of security professionals that are able to help organizations with access management challenges using the Gluu Server.

Two businesses are never exactly alike. Core software like a CMS or an IAM system needs to be flexible, affordable and open to serve the hundreds of millions of new websites being created each year.

As the pain and frustration of dealing with insufficient access management solutions continues to grow, we see the Gluu Server, with its industry leading feature set and free open source license, securing a dominant place in the IAM market just as WordPress did in the CMS market.

Katasoft: New Pricing: More API Calls, More Options! [Technorati links]

September 16, 2014 03:00 PM

New Stormpath Pricing

We’re excited to announce new pricing tiers for Stormpath!

Since launching last year, we have kept our pricing stable and simple as we watched early customers use the API. Our goal: learn what our customers care about and then tailor pricing to people’s actual usage and concerns. Now we know!

Raised Included API Calls

The biggest piece of feedback we received: it’s hard to know how many API calls you need before building your app. To that end, we raised the API calls included at each tier, to 1M, 5M, 10M or more, respectively. So, no matter what you’re building, you should have more than enough API calls to get started and we can affordably scale with you. If you get a big signup surge, you will have more important things to deal with.

An API call to Stormpath just got a lot cheaper.

Flat Rate for Additional API Calls

We flattened the rate for additional API calls. Any calls over what’s included in your tier will be billed at $0.20/1000 API calls, a huge decrease relative to our prior plans. A flat rate is just easier for everyone.
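The flat overage rate makes billing easy to estimate. A quick sketch using the figures from this post (the helper function and example usage are ours, not Stormpath's):

```python
def monthly_bill(base_fee, included_calls, calls_used, overage_per_1000=0.20):
    """Estimate a monthly bill: base fee plus $0.20 per 1,000 calls over the tier's allowance."""
    overage_calls = max(0, calls_used - included_calls)
    return base_fee + (overage_calls / 1000) * overage_per_1000

# The $149 tier includes 5M calls; using 6.2M means 1.2M calls of overage.
print(monthly_bill(149, 5_000_000, 6_200_000))  # 149 + 1200 * 0.20 = 389.0
```

Under the allowance, the bill is just the base fee, so a signup surge only costs $0.20 per extra thousand calls, whatever the tier.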

Cheaper Access to the Full API

We lowered the cost of the tier that gets you full access to the API – features like Hosted Login screens and LDAP – to $149 per month (instead of $295), per app. It comes with 5M API calls, so fire away. This is a great option for straightforward applications, with 5x the API calls of the old Premium Tier and access to the same features.

One Big Change

The lowest paid tier is increasing from $19 to $49, and is now limited to one application, with the number of included API calls raised to 1M. Why? Most customers at this tier are building proof-of-concept apps or one small app, and we need to balance affordability with access. The higher allowance gives customers a lot more room to load test, and at $49/month we can afford to give them great service.

Free Developer Tier Is Still Free. Forever.

The developer tier still gets 100,000 API calls each month, unlimited users and groups, and social login features for one application.

Current customers keep their old plans, but most will save money on the new model. Contact support@stormpath.com to find out how.

Customers with Sprint pricing for Startups can keep those perks on their current pricing plan or port the discount to a new tier.

The “Stormpath Admin” Application that is created by default will not count toward per-app pricing.

As ever, we welcome your feedback in the comments below. Our goal at Stormpath is to make developers’ lives easier with a “no brainer” service. We hope this new pricing plan means everyone developing against Stormpath can spend less time worrying about API limits, and more time building awesome products!

Julian BondNew Taboos (Outspoken Authors) by John Shirley [Technorati links]

September 16, 2014 07:34 AM
PM Press (2013), Edition: First Edition, Paperback, 128 pages
[from: Librarything]

Julian BondI'll just leave this here. [Technorati links]

September 16, 2014 07:29 AM
I'll just leave this here.

"If there is no centre, we're all on the edge."

From this review.
http://www.residentadvisor.net/review-view.aspx?id=15628
 Review: Various - Worth The Weight Vol. 2 »
Punch Drunk's second compilation reflects the dubstep diaspora a couple of generations deep.

[from: Google+ Posts]

Julian BondI was looking for something else and just found this from 2004. Dig those forgotten cultural references... [Technorati links]

September 16, 2014 07:08 AM
I was looking for something else and just found this from 2004. Dig those forgotten cultural references! Not entirely sure where it came from so needs citation.

The revolution will be blogged.

You will be able to stay home, brother.
You will be able to plug in, turn on and get your own IP.
You will be able to lose yourself on generic V.i.a.g.r.a and Prozac,
Skip out for a Frapuccino during the free pr0n download,
Because the revolution will be blogged.

The revolution will be blogged.
The revolution will not be brought to you by the NY Times
in 4 parts with commercial popups after a one time registration.
The revolution will not show you pictures of Bush
landing on a carrier and leading a charge by John
Ashcroft, Douglas Rumsfeld and Dick Chaney to eat
the profits stolen on the way to Iraq.
The revolution will be blogged.

The revolution will not be brought to you by
the RIAA and will not star Britney
Spears and Paris Hilton or Eminem and Madonna.
The revolution will not give your iPod a new battery.
The revolution will not lock you in with DRM.
The Atkins diet will not make you look five pounds
thinner, because the revolution will be blogged, Brother.

There will be a webcam of you and Winona Ryder
pushing that shopping cart down the block on the dead run,
and Marsha Stewart trying to sneak the drug money past the SEC.
You will be able to predict the winner at 8:32
and get reports from 563 districts because,
The revolution will be blogged.

There will be phonecam pictures of pigs shooting down
brothers on TextAmerica.
There will be phonecam pictures of pigs shooting down
brothers on TextAmerica.
There will be pictures of Rush Limbaugh being
run out of ABC on a rail for "Addiction to prescription drugs".
There will be slow motion and 360 degree QTVR of John
Kerry strolling through Watts in a doubleknit leisure suit
that he had been saving
For just the proper occasion.

Gap, Starbucks, and Hooters will no longer be so damned relevant,
and women will not care if Aleks finally gets down with
Carrie on Sex in the City because people of colour, or even
no colour at all, will be online looking for a brighter day.
The revolution will be blogged.

There will be no highlights on the eleven o'clock
news and no pictures of heavily pierced women
activists or Condoleeza Rice blowing her nose.
The theme song will not be written by Moby,
the Red Hot Chili Peppers, nor sung by Beyonce, Justin
Timberlake, Sheryl Crow, Alicia Keys, or R.E.M.
The revolution will be blogged.

The revolution will not be right back after a message from our leader about weapons of mass destruction, homeland security, or the axis of evil.
You will not have to worry about anthrax in your
post, armed sky marshalls, or biometric ID cards.
The revolution will not give you a transportable TV entertainment center.
The revolution will not help you to be all you want to be.
The revolution will put you in control of the keyboard.

The revolution will be blogged, will be blogged,
will be blogged, will be blogged.
The revolution will be no re-run brothers;
The revolution will be live.
[from: Google+ Posts]

Ludovic Poitou - ForgeRock4 years ! [Technorati links]

September 16, 2014 07:00 AM

Four years ago exactly, I was free from all obligations to my previous employer and started to work for ForgeRock.

My first goal was to set up the French subsidiary and start thinking about building a team to take on development of what would later be named OpenDJ.

Four years later, I look at where we are with ForgeRock and I feel amazed and really proud of what we’ve built. ForgeRock is now a well-established global business with several hundred customers across the globe, and plenty of opportunities for growth. The company has grown to more than 200 employees worldwide and is still expanding. The ForgeRock Grenoble Engineering Center moved to new offices at the end of May and counts 13, soon 14, employees – and we’re still hiring.

Thanks to the ForgeRock founders for the opportunity, and let’s keep rocking!!!


Filed under: General Tagged: ForgeRock, france, grenoble, identity, opendj, startup
September 15, 2014

Nat SakimuraSpeaking at IT&ID 2014 [Technorati links]

September 15, 2014 11:38 PM

IT & ID 2014

I’ll be appearing at “IT&ID 2014”, held in Osaka on 9/17 (Wed) and Tokyo on 9/19 (Fri), in the “foreign talent” slot.

It’s General Session [GE-05]. So, will the customary “** is DEAD” make an appearance?!

[GE-05]

Standardization Trends in Digital Identity and Where They Are Headed

Authentication and authorization protocols and provisioning APIs are steadily being standardized. Which of these standardized technologies are mature enough today to be put to full use in real systems and services? And where does the seemingly endless standardization work end?

Nat Sakimura, Chairman of the OpenID Foundation, will explain in plain terms.

Speaker:

OpenID Foundation

Chairman

Mr. Nat Sakimura

 Osaka  9/17 15:40–16:10 : ROOM A & B
 Tokyo  9/19 15:50–16:20 : ROOM A & B

It’s the “foreign talent” slot, but don’t worry: I’ll be presenting in Japanese.

Also, as in past years, I’ll be on the closing panel, General Session [GE-07].

[GE-07]

The Peculiarities of the Japanese IT Industry and How to Respond

Not only with smartphones but also with cloud and BYOD, the peculiarities of the Japanese IT market have become conspicuous. The usual annual panel members will discuss the causes of these peculiarities and, with them in mind, how IT departments and SIers should respond.

Panelists:

Mr. Tatsuya Kurosaka
CEO, Kuwadate Inc. / TNC Inc.

Mr. Nat Sakimura
Chairman, OpenID Foundation
Board Member, Kantara Initiative
Senior Researcher, Open Source Solution Promotion Office, Nomura Research Institute, Ltd.

Mr. Masanori Kusunoki
Visiting Researcher, GLOCOM, International University of Japan

Moderator:

Mr. Shingo Yamanaka
Community Lead, OpenID Foundation Japan

 Osaka  9/17 16:40–17:40 : ROOM A & B
 Tokyo  9/19 16:50–17:40 : ROOM A & B

Admission is free, so please register and come along.

[Register here]

Julian BondEchopraxia by Peter Watts [Technorati links]

September 15, 2014 07:27 PM
Tor Books (2014), Edition: First Edition, Hardcover, 384 pages
[from: Librarything]