August 22, 2014

Kuppinger Cole – Executive View: CA ControlMinder - 71059

August 22, 2014 07:54 AM
In KuppingerCole

CA Technologies is a multinational publicly held software company headquartered in New York, USA. Founded in 1976 to develop and sell mainframe software, over the decades CA Technologies has grown significantly via a series of strategic acquisitions. Although it used to produce consumer software, currently CA Technologies is a major player in the B2B segment, offering a wide range of products and services for mainframe, cloud and mobile platforms in such areas as security, infrastructure...

August 21, 2014

Mike Jones - Microsoft – Working Group Draft for OAuth 2.0 Act-As and On-Behalf-Of

August 21, 2014 11:06 PM

There's now an OAuth working group draft of the OAuth 2.0 Token Exchange specification, which provides Act-As and On-Behalf-Of functionality for OAuth 2.0. This functionality is deliberately modelled on the same functionality present in WS-Trust.

In a nutshell: Act-As indicates that the requestor wants a token that contains claims about two distinct entities: the requestor and an external entity represented by the token in the act_as parameter. On-Behalf-Of indicates that the requestor wants a token that contains claims about only one entity: the external entity represented by the token in the on_behalf_of parameter.
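The distinction can be made concrete with a small sketch. Only the parameter names act_as and on_behalf_of come from the draft; the grant-type placeholder, helper name, and token values below are illustrative assumptions, not the spec's wire format:

```python
def build_exchange_request(requestor_token, subject_token, delegation="act_as"):
    """Return form parameters for a hypothetical token-exchange request.

    act_as       -> the issued token carries claims about BOTH the requestor
                    and the entity in the act_as token (delegation).
    on_behalf_of -> the issued token carries claims ONLY about the entity in
                    the on_behalf_of token (impersonation).
    """
    if delegation not in ("act_as", "on_behalf_of"):
        raise ValueError("delegation must be 'act_as' or 'on_behalf_of'")
    params = {"grant_type": "token-exchange"}  # placeholder, not the real URN
    if delegation == "act_as":
        # The requestor stays visible in the issued token alongside the subject.
        params["act_as"] = subject_token
        params["requestor_token"] = requestor_token
    else:
        # Only the subject is represented in the issued token.
        params["on_behalf_of"] = subject_token
    return params

req = build_exchange_request("tok-requestor", "tok-subject", "on_behalf_of")
print(sorted(req))
```

Either way, the parameters would be posted to the authorization server's token endpoint, which responds with the newly issued token.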

This draft is identical to the previously announced token exchange draft, other than that it is a working group document rather than an individual submission.

This specification is available at:

An HTML formatted version is also available at:

Nat Sakimura – US HHS Office of the National Coordinator for Health IT (ONC) Joins the OpenID Foundation Board

August 21, 2014 11:00 PM

On August 21 (local time), the Office of the National Coordinator for Health Information Technology (ONC) of the U.S. Department of Health and Human Services (HHS) joined the OpenID Foundation (OIDF; Chairman: Nat Sakimura) as a board-member organization. ONC is the principal agency responsible for coordinating the implementation and use of the most advanced health information technology (HIT)[1] for the nationwide electronic exchange of health information across the U.S. federal government.

ONC is at the forefront of the current Administration's health IT efforts and is a key standards-development resource for the national health system, supporting the adoption of health information technology and advancing NwHIN (the Nationwide Health Information Network)[2], the vision of a nationwide health-information-sharing infrastructure. Ms. Debbie Bucci will join the OpenID Foundation as ONC's representative.

ONC plans to pursue two initiatives within the OIDF: first, to lead a Healthcare Information Exchange (HIE) working group that will define a profile of OpenID Connect; second, to promote pilot projects that use it. Ms. Bucci, an IT Architect in the Implementation and Testing Division who leads technical profiling and interoperability testing at ONC, will head the HIE WG's activities.


[1] Health Information Technology, HIT.


[3] OpenID Foundation: “US Government Office of the National Coordinator for Health Information Technology (ONC) Joins the OpenID Foundation”,

Kantara Initiative – Kantara and IEEE IoT Leaders Gather in Mountain View, CA

August 21, 2014 06:45 PM

Welcome to the Kantara “IoT and Harmonization Workshop”

In a world of increasing network connectivity, interacting with more and more active and passive sensors, data is generated, managed, and consumed en masse. Industry experts will discuss findings regarding standardization of the IoT space and where possible gaps exist. Focus will include a review of use cases and demos, as well as the implications of identity and personally identifiable information within the IoT space.

There are many initiatives in the IoT space, and knowing where to go can be a challenge. Our goal for this event is to connect broad IoT experts with Identity & IoT experts. Kantara Initiative's Identities of Things (IDoT) group is leading the way at the intersection of IoT and Identity. With this opportunity we will connect IEEE communities with Identity communities through our Kantara workshop. We are proud to partner with the IEEE-SA, one of the leaders in the standardization of IoT. If you're already attending the IEEE-SA event, consider this your warm-up.

Space for the Kantara IoT Harmonization workshop is limited; to register for the workshop, please click here.

Why attend:

Who should attend:

September 17, Half-Day Workshop - This event will begin at 12:00pm and conclude at 5:00pm. This will be an informative and interactive discussion to kick off the IEEE Standards Association's 2-day Internet of Things (IoT) Workshop, with the goal of connecting the Identity and IoT communities with the IEEE IoT community.

Agenda coming soon.

OpenID.net – US Government Office of the National Coordinator for Health Information Technology (ONC) Joins the OpenID Foundation

August 21, 2014 03:07 PM

The Office of the National Coordinator for Health Information Technology (ONC) located within the Office of the Secretary for the U.S. Department of Health and Human Services (HHS) has joined the OpenID Foundation (OIDF). ONC is the principal federal entity charged with coordination of nationwide efforts to implement and utilize the most advanced health information technology for the electronic exchange of health information.

ONC is at the forefront of the Administration’s Health IT efforts and is a key standards development resource to the national health system to support the adoption of health information technology and the promotion of nationwide health information exchanges. Ms. Debbie Bucci will join the Board of Directors of the OpenID Foundation as the ONC representative.

Two key initiatives ONC plans to undertake within the OIDF are to lead a Healthcare Information Exchange (HIE) working group to create a profile of OpenID Connect and to run follow-on associated pilot projects. Ms. Bucci, an IT Architect in the Implementation and Testing Division, is helping lead a profiling and interoperability testing effort at ONC and will be one of the leaders of the HIE working group activities.

Don Thibeau, Executive Director of the OIDF, pointed out that this public sector effort parallels the increasing global adoption among large commercial enterprises. Google, Microsoft, Ping Identity, Salesforce, ForgeRock and others have embraced OpenID Connect as fundamental to their identity initiatives. Thibeau noted, "After the launch of OpenID Connect early this year, the OIDF finds itself working on one of the hardest use cases in identity, patient medical records, at the same time as working on the platform of choice, the mobile device. Working with OIDF member organizations like the ONC, GSMA and others brings important domain expertise and a user-centric focus to these OIDF working groups. These standards development activities are loosely coupled with pilots in the US, UK and Canada."

If you are interested in the HIE working group, please consider attending the OpenID Day on RESTful Services in Healthcare at MIT on September 19th in Cambridge, MA. This event will focus on emerging Web-scale technologies as applied to health information sharing. The focus will be on group discussion among MIT’s expert participants. The OIDF will follow its standards development process while MIT leads outreach and industry engagement. This day is part of the 2-day annual MIT KIT Conference at MIT on September 18-19. For more information on this event and to register, please visit

Mike Jones - Microsoft – Microsoft JWT and OpenID Connect RP libraries updated

August 21, 2014 12:02 AM

This morning Microsoft released updated versions of its JSON Web Token (JWT) library and its OpenID Connect RP library as part of today’s Katana project release. See the Microsoft.Owin.Security.Jwt and Microsoft.Owin.Security.OpenIdConnect packages in the Katana project’s package list. These are .NET 4.5 code under an Apache 2.0 license.

For more background on Katana, you can see this post on Katana design principles and this post on using claims in Web applications. For more on the JWT code, see this post on the previous JWT handler release.
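The Microsoft handlers above are .NET, but the JWT format they process is language-neutral: three base64url-encoded segments (header, payload, signature) joined by dots. Here's a minimal Python sketch of that structure; it deliberately skips signature verification, which a real JWT library must always perform before trusting any claim:

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    # base64url omits padding; restore it before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def peek_jwt(token: str):
    """Decode a JWT's header and payload WITHOUT verifying the signature.
    A production JWT library must verify the signature first."""
    header_b64, payload_b64, _signature_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    payload = json.loads(b64url_decode(payload_b64))
    return header, payload

# Build a toy unsigned token just to demonstrate the segment layout.
toy = ".".join([
    b64url_encode(json.dumps({"alg": "none"}).encode()),
    b64url_encode(json.dumps({"iss": "https://example.com", "sub": "alice"}).encode()),
    "",  # empty signature segment for alg "none"
])
header, payload = peek_jwt(toy)
print(header["alg"], payload["sub"])
```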

Thanks to Brian Campbell of Ping Identity for performing OpenID Connect interop testing with us prior to the release.

August 20, 2014

Julian Bond – I'm pleased to see that the 1000 minute Longplayer choral project has reached its funding target

August 20, 2014 07:54 PM
I'm pleased to see that the 1000 minute Longplayer choral project has reached its funding target.

I still need to make the pilgrimage to the Longplayer installation at Trinity Buoy Wharf. Open at the weekends, 11am to 4/5pm.

Longplayer is a one thousand year long composition that's been running so far for 14 years 232 days 07 hours 52 minutes and 05 seconds and counting.
 Longplayer for Voices - the next step »
Help us to create Longplayer for 240 Voices, the next step in an incredible 1000-year-long musical journey.

[from: Google+ Posts]

Gluu – Gluu's Business Model

August 20, 2014 03:45 PM

After listening to a session at SXSWV2V by Patrick van der Pijl, I was encouraged to read Business Model Generation, and to develop the business model diagram below for Gluu.


Nishant Kaushik - Oracle – What Ended Up On The Cutting Room Floor

August 20, 2014 02:00 PM

If you managed to catch my talk at this year's Cloud Identity Summit, either in person or via the video recording I posted (and if you haven't, what are you waiting for?), then you know that I relied on humor to engage my audience while presenting a serious vision of how IAM needs to evolve for the better. That humor relied in large part on me visually lampooning some members of the Identerati. Now, it's not an easy thing to do (especially when you have a subject like Jonathan), and such visuals don't always fit seamlessly into a narrative, so some of the visuals I spent a lot of time creating ended up not making it into the talk for one reason or another. I just finished watching the 'Deleted & Extended Scenes' in the iTunes Extras of the excellent 'Captain America: The Winter Soldier' digital release, and it inspired me to share them with all of you instead of hoarding them for a future talk. So, without further ado, I present:

Pope Bob the Percipient

Was going to use in a slide about the move from authentication to recognition, but Pam was covering a lot of that in her talk before me.

Janitor Brian

This was going to be part of a different version of the Paul Madsen slide, where Brian was cleaning up the debris of buzzwords Paul had discarded. But I couldn’t get the slide to look right.

Jona-Than Sander (alternative version)

Given how my Sith incarnation of Sander got misconstrued as being a nun version of Sander instead, maybe I should have stuck with this one.

Bonus bonus: Saint Patrick and the dragon P@$$w0rd

This wasn’t actually for my talk. I made this afterwards using a CISmcc photo of Patrick in response to this twitter conversation. But I kinda wish I’d had it for the talk. Would have been fun to use.


Nat Sakimura – JIPDEC, with Six Companies Including Yahoo!, Begins Providing an Anti-Spoofing Email Solution to Banks

August 20, 2014 01:38 AM


Figure 1: The "Anshin" (peace-of-mind) Mark
※ Disclosure: As of 2014, the author is a member of JIPDEC's advisory committee.

[1] JIPDEC news release: "Efforts Toward an Email Environment That Can Be Used with Peace of Mind: Introduction of the anti-spoofing 'Anshin Mark' to banks begins."

[2] Infomania, Synergy Marketing, Tricorn, NIFTY, Pipedbits, and Yahoo! JAPAN

[3] For a detailed explanation of how DKIM works, see this article: "What is DKIM, the latest digital-signature technology?"

[4] "Banks are first to adopt the 'Anshin Mark,' using sender domain authentication to prevent spoofed email" (2014/8/11)


August 19, 2014

Radiant Logic – Diversity Training: Dealing with SQL and non-MS LDAP in a WAAD World

August 19, 2014 10:20 PM

Welcome to my third post about the recently announced Windows Azure Active Directory (AKA the hilariously-acronymed "WAAD"), and how to make WAAD work with your infrastructure. In the first post, we looked at Microsoft's entry into the IDaaS market, and in the second post we explored the issues around deploying WAAD in a Microsoft-only environment—chiefly, the fact that in order to create a flat view of a single forest to send to WAAD, you must first normalize the data contained within all those domains. (And let's be honest, who among us has followed Microsoft's direction to centralize this data in a global enterprise domain?)

It should come as no surprise that I proposed a solution to this scenario: using a federated identity service to build a global, normalized list of all your users. Such a service integrates all those often overlapping identities into a clean list with no duplicates, packaging them up along with all the attributes that WAAD expects (usually a subset of all the attributes within your domains). Once done, you can use DirSync to upload this carefully cleaned and crafted identity to the cloud—and whenever there’s a change to any of those underlying identities, the update is synchronized across all relevant sources and handed off to DirSync for propagation to WAAD. Such an infrastructure is flexible, extensible, and fully cloud-enabled (more on that later…). Sounds great, right? But what about environments where there are multiple forests—or even diverse data types, such as SQL and LDAP?

Bless this Mess: A Federated Solution for Cleaning Up ALL Your Identity

So far, we’ve talked about normalizing identities coming from different domains in a given forest, but the same virtualization layer that allow us to easily query and reverse-engineer existing data, then remap it to meet the needs of a new target, such as WAAD, is not limited to a single forest and its domains. This same process also allows you to reorganize many domains belonging to many different forests. In fact, this approach would be a great way to meet that elusive target of creating a global enterprise domain out of your current fragmentation.

But while you’re federating and normalizing your AD layer, why stop there? Why not extend SaaS access via WAAD to the parts of your identity that are not stored within AD? What about all those contractors, consultants, and partners stored in your aging Sun/Oracle directories? Or those identities trapped in legacy Novell or mainframe systems? And what about essential user attributes that might be captured in one of these non-AD sources?

As you can see below, all these identities and their attributes can be virtualized, transformed, integrated, then shipped off to the cloud, giving every user easy and secure access to the web and SaaS apps they need.
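As a toy illustration of that normalization step, here's a sketch that merges overlapping records from two sources into one de-duplicated list. The attribute names, the email-based matching rule, and the "first source wins" policy are all assumptions for the sketch, not how any particular product correlates identities:

```python
def normalize(*sources):
    """Merge identity records from several sources into one de-duplicated
    list, correlated on a common attribute (here: the mail attribute)."""
    merged = {}
    for source in sources:
        for record in source:
            key = record["mail"].lower()      # correlation attribute
            entry = merged.setdefault(key, {})
            # Later sources fill gaps but never overwrite existing values,
            # so the first source seen is authoritative per attribute.
            for attr, value in record.items():
                entry.setdefault(attr, value)
    return list(merged.values())

domain_a = [{"mail": "JDOE@corp.example", "cn": "Jane Doe"}]
ldap_dir = [{"mail": "jdoe@corp.example", "title": "Engineer"},
            {"mail": "bob@corp.example", "cn": "Bob"}]

users = normalize(domain_a, ldap_dir)
print(len(users))  # 2 unique identities
```

The two "jdoe" records collapse into a single identity carrying attributes from both sources, which is the clean, duplicate-free list the sync step needs.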

Creating a Global Image of All Your Identities

Credentials: Sometimes, What Happens On-Premises Should Stay On-Premises

So we’ve seen how we can get to the attributes related to identities from many different silos and turn them into a cloud-ready image. But there’s still one very important piece that we’ve left out of the picture. What about credentials? They’re always the hardest part—should we sync all those &#@$ passwords, along with every &%!?# password change, over the Internet? If you’re a sizable enterprise integrating an array of SaaS applications, that’s a recipe for security breaches and hack attacks.

But fortunately, within Microsoft’s hybrid computing strategy, we can now manage our identities on-premises, while WAAD interfaces with cloud apps and delegates the credential-checking back to the right domain in the right forest via our good friend ADFS. Plus, ADFS even automatically converts the Kerberos ticket to a SAML token (well, it’s a bit more complex than that, but that’s all you need to know for today’s story).

The bottom line here is that you’ve already given WAAD the clean list of users, as well as the information it needs to route the credential-checking back to your enterprise AD infrastructure, using ADFS. So WAAD acts as a global federated identity service, while delegating the low-level authentication back to where it can be managed best: securely inside your domains and forests. (And I’m happy to say that we’ve been preaching the gospel of on-premises credential checks for years now, so it’s great to see mighty Microsoft join the choir. ;) )

While this is very exciting, we still face the issue of all those identities not managed by Microsoft ADFS. While I explained above how a federated identity layer based on virtualization can help you normalize all your identities for use by WAAD, there’s still one missing link in the chain: how does WAAD send those identities back to their database or Sun/Oracle directory for the credential checking phase? After all, ADFS is built to talk to AD—not SQL or LDAP. Luckily, federation standards allow you to securely extend this delegation to any other trusted identity source. So if you have a non-MS source of identities in your enterprise and you can wrap them through a federation layer so they work as an IdP/secure token service, you’re in business. Extend the trust from ADFS to your non-AD subsystem through an STS and—bingo—WAAD now covers all your identity, giving your entire infrastructure secure access to the cloud.

How WAAD, ADFS, and RadiantOne CFS Work Together

We call this component “CFS” within our RadiantOne architecture, and with CFS and our VDS, you have a complete solution for living a happy, tidy, and secure life in the hybrid world newly ordained by Microsoft…(cue the choir of angels, then give us a call if you’d like to discuss how we can make this happen within your infrastructure…). :)

Thanks, as always, for reading my thoughts on these matters. And feel free to share yours in the comments below.


The post Diversity Training: Dealing with SQL and non-MS LDAP in a WAAD World appeared first on Radiant Logic, Inc.

Kantara Initiative – Kantara IoT Leaders Gather in Utrecht

August 19, 2014 09:20 PM

Kantara Initiative leaders and innovators are set to gather in Utrecht, Netherlands, September 4th-5th. In an event kindly hosted by SURFnet and sponsored by ForgeRock, leaders from the Kantara Identities of Things (IDoT), User Managed Access (UMA), and Consent and Information Sharing (CIS) Open Notice groups will present 1.5 days of innovation harmonization. Areas of coverage include use cases and demos that focus on the identity layer of IoT. Specifically, the event will address access control, notice, and consent with regard to contextual identity systems. Leaders will discuss these topics ranging from user-centric to enterprise and industrial access. Don't miss this opportunity to connect with peers, partners, and competitors.

Find the draft agendas below. Note: the agenda is subject to change in this dynamic event.

Space is Limited. Register Now: Identity and Access Control – Context, Choice, and Control in the age of IoT                


In a world of increasing network connectivity, interacting with more and more active and passive sensors, data is generated, managed, and consumed en masse. Industry experts will discuss findings regarding standardization of the IoT space and where possible gaps exist. Focus will include a review of use cases and demos, as well as the implications of identity and personally identifiable information within the IoT space.

Why attend:

Who should attend:

Day 1: Thursday September 4th

Time | Topic | Lead
13:00 | Welcome – Setting the Stage | Allan Foster, ForgeRock, President, Kantara Initiative; Joni Brennan, Executive Dir., Kantara Initiative
13:15 | UMA Use Cases and Flows (technical and non-technical) | Maciej Machulak, Cloud Identity; Mark Dobrinic
14:15 | IDoT Use Cases | Ingo Friese, Deutsche Telekom
14:45 | Break |
15:00 | Open Notice Use Cases and Flows | Mark Lizar, Smart Species
15:30 | Collection of Breakout Topics & Working Sessions | Joni Brennan, Executive Dir., Kantara Initiative; Group Participation
16:30 | Calls to Action & Thanks (Dankuwel!) | Joni Brennan, Executive Dir., Kantara Initiative; Allan Foster, ForgeRock, President, Kantara Initiative

Day 2: Friday September 5th

Time | Topic | Lead
10:00 | Welcome – Setting the Stage | Allan Foster, ForgeRock, President, Kantara Initiative; Joni Brennan, Executive Dir., Kantara Initiative
10:15 | Kantara Mission Overview – Opportunities and Trust in the age of IoT | Joni Brennan, Executive Dir., Kantara Initiative
10:30 | UMA Presentation & Demo | Maciej Machulak, Cloud Identity; Mark Dobrinic
11:30 | UMA as an authorization mechanism for IoT | Maciej Machulak, Cloud Identity; Ingo Friese, Deutsche Telekom
12:30 | Lunch |
13:30 | Open Notice - Minimum Viable Consent Receipt | Mark Lizar, Smart Species
14:30 | Privacy in the age of IDentities of Things | Ingo Friese, Deutsche Telekom; Maciej Machulak, Cloud Identity
15:30 | Break |
15:45 | Collection of Breakout Topics & Breakout Sessions | Joni Brennan, Executive Dir., Kantara Initiative; Group Participation
16:15 | Calls to Action & Thanks (Dankuwel!) | Joni Brennan, Executive Dir., Kantara Initiative; Allan Foster, ForgeRock, President, Kantara Initiative

August 18, 2014

Julian Bond – Here's the next foodie quest. Who makes the best Chai Tea Bags?

August 18, 2014 03:12 PM
Here's the next foodie quest. Who makes the best Chai Tea Bags?

Teapigs. Both the Chai and Chilli Chai are excellent. But I seriously baulk at £4 for 15 bags. I mean, WTF?

Natco Masala. A good spicy tea with a bit of bite. But there's a lot of pepper in there and the bags are quite low quality so you get a lot of dust. Hard to get except in the two big supermarkets at the bottom of Brick Lane. Luckily they do some big packs so you don't need to buy them too often.

Palanquin spiced tea. ISTR these are ok, although I haven't had any for a while. Seem to be quite widely available in Asian corner shops.

Twinings, Tesco, Sainsbury's. These are all just a bit tasteless. Not nearly enough cardamom, clove, coriander and so on. Chai really should be at least as strong as Yorkshire builder's tea with the added flavours of the spices.

Anyone tried Wagh Bakri Masala Chai?
[from: Google+ Posts]

Julian Bond – Things I've learned about my Aeropress

August 18, 2014 10:18 AM
Things I've learned about my Aeropress

Ignore all the obsessing about using it upside down, pre-wetting the filter and so on. Only 3 things matter: the quality of the coffee, the temperature of the water, and emptying it as soon after use as possible so the rubber bung doesn't harden and lose its seal.

I like a good strong Italian style taste without it being too aggressive.

- Mainstream. Tesco's Italian Blend, Lavazza Black, Carte Noire. These are all perfectly serviceable, easily available, everyday, fine filter or espresso grinds that just work and are predictable.

- Algerian Coffee Shop, Soho, London at

"Formula Rossa" their main blend that they use for the take away coffee they serve in the shop. Straight forwards and recommended. Ideal for an Americano 
"Cafe Torino" For a stronger Expresso/Ristretto cup, try this one. It's a bit more aggressive than the Formula Rossa.
"Velluto Nero" After Dinner Expresso. Gorgeous but too much for every day drinking.

A note about grinds. I find a straight espresso grind works best. In the Algerian Coffee shop that's a "4" on their machine. Finer than filter or french press, but not so fine that you get fines and dust in the bottom of the cup.

Water Temperature.
After the choice of coffee, this is the single biggest factor in the quality of the end product. You need to aim for 80-85°C. This is tricky without spending huge amounts on clever kettles or messing around with thermometers. Any higher than that and you'll "burn" the grounds and make the coffee more bitter. The simple trick is to boil about 750ml of water (half a kettle?) and then wait 30-60 seconds after the kettle turns itself off. So don't start assembling the Aeropress, coffee, filter, mug and so on until the kettle has boiled; by the time you're ready to pour in the water, 60 seconds will have gone by and you'll be about right.

Rubber bungs, filter caps, filters, stainless filter disks, tote bags, etc, etc. The stainless filters didn't really work for me. The paper filters are cheap and easy and just work. There's a rubber travel cap, but it's a bit inconvenient and only really works for storing a few days' supply of filters in the plunger.

Just empty the Aeropress immediately into the bin and wipe the base of the rubber bung under the tap. Then store it either in two pieces or with the piston all the way through, so the bung isn't under pressure. Otherwise the bung will eventually take a set and won't seal any more. It's pretty much self-cleaning, so just a quick rinse is all that's needed.

Don't bother with all the complication. Don't worry about pressing air through the grounds. Don't bother with the upside down method. If your cup is too small to fit the Aeropress in the top, use the hexagonal funnel.

White Americano or filter coffee.
This is the typical everyday mug of coffee. Put your 750ml (ish) of water in the kettle. When it boils, get the mug, Aeropress and stuff out of the cupboard. Assemble the paper filter and cap and set it on the mug. Add a 15ml scoop of grounds. Fill slowly with hot water to the 3 mark. Give it a quick swirl with a spoon to settle the grounds. Wait till it drips so the surface is down to the 2 mark, say 20 seconds. Insert the plunger and press gently down till the grounds are squashed. Add a splash of milk. Empty the Aeropress and wipe. Done! Enjoy!

Double espresso.
As above, but 30ml of coffee grounds, which is the scoop that comes with the Aeropress. Fill with water to the 2 mark. Let it drop to the 1 mark and press.

I have a stubby 15fl oz, 400ml thermos which holds about 2 mugs worth. 30ml or 45ml of grounds, fill to the 4 mark. Press when it drops to 3. Add milk till it's the right colour. Top up with boiling water.

Downsides? I struggle to think of any! I think there's potentially a redesign that makes it easier to travel with the kit and a week's supply of filters and coffee. Perhaps the cap could screw onto the other end of the plunger.

Just occasionally the seal doesn't quite work between the main cylinder and the cap. I'm not quite sure where it leaks from but it can lead to dribbles down the side of the mug.

Anyway. If you haven't tried one and you like coffee, then get an Aeropress. For making one or two cups of coffee it's way better than cafetieres, Moka stove pots, drip filters and so on. And it's considerably cheaper and easier than espresso machines. And even if the pod machines are convenient, they're just WRONG. The old school filter coffee machines still work best for 4 mugs and upwards.

So I really don't think there's anything better for small quantities.
[from: Google+ Posts]

Kuppinger Cole – Executive View: CyberArk Privileged Threat Analytics - 70859

August 18, 2014 08:51 AM
In KuppingerCole

In some form, Privilege Management (PxM) already existed in early mainframe environments: those early multi-user systems included some means to audit and control administrative and shared accounts. Still, until relatively recently, those technologies were mostly unknown outside of IT departments. However, the ongoing trends in the IT industry have gradually shifted the focus of information security from perimeter protection towards defense against...

Kuppinger Cole – Executive View: Oracle Audit Vault and Database Firewall - 70890

August 18, 2014 08:36 AM
In KuppingerCole

Oracle Audit Vault and Database Firewall monitors Oracle databases and databases from other vendors. It can detect and block threats to databases while consolidating audit data from the database firewall component and the databases themselves. It also collects audit data from other sources such as operating system log files, application logs, etc...


Kaliya Hamlin - Identity Woman – BC Identity Citizen Consultation Results!!!!

August 18, 2014 04:22 AM

As many of you know, I (along with many other industry leaders from different industry/civil society segments) was proactively invited to be part of the NSTIC process, including submitting a response to the notice of inquiry about how the IDESG and the Identity Ecosystem should be governed.

I advocated, and continue to advocate, that citizen involvement and broad engagement from a variety of citizen groups and perspectives would be essential for it to work. The process itself needed its own legitimacy: even if "experts" would have come to "the same decisions," if citizens were not involved, the broad rainbow that is America might not accept the results.

I have co-led the Internet Identity Workshop since 2005, held every 6 months in Mountain View, California at the Computer History Museum. It is an international event, and folks from Canada working on similar challenges have been attending for several years; this includes Aran Hamilton from the nationally oriented Digital ID and Authentication Council (DIAC) and several of the leaders of the British Columbia Citizen Services Card effort.

I worked with Aran Hamilton to help him put on the first Identity North conference, bringing key leaders together from a range of industries to build shared understanding about what identity is and how systems around the world are working, along with exploring what to do in Canada.

The British Columbia Government (a province of Canada, where I grew up) worked on a citizen services card for many years. They developed an amazing system that is triple blind. An article about the system was recently run in RE:ID. The system launched with two services: the driver's license and the health services card. The designers of the system knew it could be used for more than just these two services, but they also knew that citizen input into those policy decisions was essential to build citizen confidence, or trust, in the system. The other article in the RE:ID magazine was by me, about the citizen engagement process they developed.

They developed extensive system diagrams to help explain to regular citizens how it works. (My hope is that the IDESG and the NSTIC effort broadly can make diagrams this clear.)


The government created a citizen engagement plan with three parts:

The first was convening experts. They did this in partnership with Aran Hamilton and Mike Monteith from Identity North; I, as the co-designer and primary facilitator of the first Identity North, was brought in to work on this. They had an extensive note-taking team and reported on all the sessions in a book of proceedings. They spell my name 3 different ways in the report.

The most important part was a citizen panel of randomly selected citizens, convened to engage deeply with citizens on key policy decisions moving forward. It also helped the government understand how to explain key aspects of how the system actually works. I wrote an article for RE:ID about the process; you can see that here.
The results were not released when I wrote that. Now they are! The report is worth reading, because it shows that regular citizens who are given the task of considering critical issues can come up with answers that make sense and help government work better.



They also ran an online survey, open for a month, for any citizen of the province to give their opinion. You can see that here.

Together, all of these results were woven into a collective report.


Bonus material: This is a presentation that I just found covering many of the different Canadian province initiatives.


PS: I'm away in BC this coming week, sans computer. I am at Hollyhock, the conference center where I am the poster child (yes, literally). If you want to be in touch this week, please connect with William Dyson, my partner at The Leola Group.

August 17, 2014

Anil John – The Missing Link Between Tokens and Identity

August 17, 2014 06:20 PM

Component identity services, where specialists deliver services based on their expertise, are a reality in the current marketplace. At the same time, current conversations on this topic seem to focus on the technical bits-n-bytes and not on responsibilities. This blog post is an attempt to take a step back and look at this topic through the lens of accountability.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

August 16, 2014

Nat SakimuraGovernment to Set Up a My Number Call Center, Targeting October [Technorati links]

August 16, 2014 09:36 PM


Source: "Government to Set Up a Call Center for My Number, Targeting October" (Nihon Keizai Shimbun).



[1] Cabinet Secretariat: "The Social Security and Tax Number System"



Julian BondA map of the introvert's heart. [Technorati links]

August 16, 2014 05:31 PM
A map of the introvert's heart.

It's missing a ship that visits the island occasionally, but doesn't stay for long; "The Valley of Longing for Company".
 A Map of the Introvert’s Heart By an Introvert »

We missed this wonderful illustration when it hit the internet last month, but how timeless is Gemma Correll's map of an introvert's heart?

More cool stuff in Medium's "I Love Charts" archives.

[from: Google+ Posts]
August 15, 2014

Mike Jones - MicrosoftThe Increasing Importance of Proof-of-Possession to the Web [Technorati links]

August 15, 2014 12:40 AM

W3C  logoMy submission to the W3C Workshop on Authentication, Hardware Tokens and Beyond was accepted for presentation. I’ll be discussing The Increasing Importance of Proof-of-Possession to the Web. The abstract of my position paper is:

A number of different initiatives and organizations are now defining new ways to use proof-of-possession in several kinds of Web protocols. These range from cookies that can’t be stolen and reused, identity assertions only usable by a particular party, and password-less login, to proof of eligibility to participate. While each of these developments is important in isolation, the pattern of all of them emerging concurrently demonstrates the increasing importance of proof-of-possession to the Web.

It should be a quick and hopefully worthwhile read. I’m looking forward to discussing it with many of you at the workshop!

August 14, 2014

KatasoftBuild a Node API Client - Part 2: Encapsulation, Resources, & Architecture [Technorati links]

August 14, 2014 03:00 PM

Build a Node API Client – Part 2: Encapsulation, Resources, & Architecture... oh my!

Welcome to Part Two of our series on Node.js Client Libraries. This post serves as our guide to REST client design and architecture. Be sure to check out Part One on Need-To-Know RESTful Concepts before reading on.

API Encapsulation

Before sinking our teeth into resources and architecture, let’s talk about encapsulation. At Stormpath, we like to clearly separate the public and private portions of our API client libraries, aka ‘SDKs’ (Software Development Kit).

All private functionality is intentionally encapsulated, i.e. hidden from the library user. This allows the project maintainers to make frequent changes, like bug fixes and design and performance enhancements, without impacting users. The result is a much greater level of maintainability, allowing the team to deliver better-quality software to our user community faster. And of course, an easier-to-maintain client means less friction during software upgrades, and users stay happy.

To achieve this, your Node.js client should expose only the public version of your API, never the private, internal implementation. If you’re coming from a more traditional Object Oriented world, you can think of the public API as behavior interfaces, with concrete implementations of those interfaces encapsulated in the private API. In Node.js too, public functions and their inputs and outputs should rarely change; otherwise you risk breaking backwards compatibility.

Encapsulation creates a lot of flexibility to make changes in the underlying implementation. That being said, semantic versioning is still required to keep your users informed of how updates to the public API will affect their own code. Most developers will already be familiar with semantic versioning, so it’s an easy usability win.

Encapsulation In Practice

We ensure encapsulation primarily with two techniques: Node.js module.exports and the ‘underscore prefix’ convention.


Node.js gives you the ability to expose only what you want via its module.exports capability: any object or function in a module’s module.exports object will be available to anyone that calls require() on that module.

This is a big benefit of the Node.js ecosystem and helps achieve encapsulation goals better than traditional JavaScript environments.
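As a self-contained sketch of this idea (hypothetical names, not Stormpath's actual code; an IIFE stands in for a module file so the example runs on its own):

```javascript
// Sketch of module-level encapsulation. In a real module, the returned object
// would be assigned to module.exports in its own file.
var accountModule = (function () {
  function normalizeEmail(email) {
    // private helper: not exported, so callers cannot depend on it
    return String(email).trim().toLowerCase();
  }

  function createAccount(data) {
    // public function: the supported API surface
    return { email: normalizeEmail(data.email) };
  }

  // Equivalent to `module.exports = { createAccount: createAccount };`
  return { createAccount: createAccount };
})();

var account = accountModule.createAccount({ email: '  Tony@Example.COM ' });
console.log(account.email);                 // 'tony@example.com'
console.log(accountModule.normalizeEmail);  // undefined – the private helper stays hidden
```

Because `normalizeEmail` never appears on the exported object, the maintainers can rename or rewrite it in any release without breaking callers.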

Underscore Names

Additionally, we use the ‘underscore prefix’ convention for objects or functions that are considered private by the development team but are still accessible at runtime because of JavaScript’s weak encapsulation. That is, any object or function whose name starts with the underscore _ character is considered private, and its state or behavior can change, without warning or documentation, in any given release.

The takeaway is that external developers should never explicitly code against anything that has a name that starts with an underscore. If they see a name that starts with an underscore, it’s simply ‘hands off’.
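A small sketch of the convention (the names here are illustrative, not Stormpath's): underscore-prefixed members are reachable at runtime, but by convention only the un-prefixed ones are supported.

```javascript
// Hypothetical client demonstrating the underscore-prefix convention.
function ApiClient(apiKey) {
  this.apiKey = apiKey; // public: safe to read
  this._cache = {};     // private by convention: may change in any release
}

// Public method: part of the supported contract.
ApiClient.prototype.getResource = function (href) {
  return this._lookup(href) || { href: href };
};

// Private method: the leading underscore says 'hands off'.
ApiClient.prototype._lookup = function (href) {
  return this._cache[href];
};

var client = new ApiClient('key123');
console.log(client.getResource('/accounts/1').href); // '/accounts/1'
```

Nothing stops a caller from invoking `_lookup` directly, which is exactly why the naming signal matters: it documents intent where the language cannot enforce it.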

Alternatively, other libraries use @public and @private annotations in their JSDoc as a way of indicating what is public/allowed vs. private/disallowed. However, we strongly prefer the underscore convention because anyone reading or writing code without immediate access to the documentation can still see what is public vs. private. For example, when browsing code on GitHub or in Gists, annotations in documentation are often not easily available; you can still always tell that underscore-prefixed methods are to be considered private.

Either way, you need to consistently convey which functions to use and which to leave alone. You may want to omit the private API from publicly hosted docs to prevent confusion.

Public API

The public API consists of all non-private functions, variables, classes, and builder/factory functions.

This may be surprising to some, but object literals used as part of configuration are also part of the public API. Think of it like this: if you tell people to use a function that requires an object literal, you are making a contract with them about what you support. It’s better to just maintain backwards and forwards compatibility with any changes to these object literals whenever possible.
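One hedged way to honor that contract is to treat the options literal tolerantly: default missing keys and ignore unknown ones, so older and newer caller code both keep working. The option names below are hypothetical, not part of any real API.

```javascript
// Tolerant handling of a configuration object literal.
function buildRequestOptions(options) {
  options = options || {}; // callers may omit the literal entirely
  return {
    // explicit undefined checks so falsy-but-valid values (e.g. 0) survive
    timeout: options.timeout !== undefined ? options.timeout : 30000, // default ms
    retries: options.retries !== undefined ? options.retries : 3      // default count
  };
}

console.log(buildRequestOptions().timeout);                  // 30000
console.log(buildRequestOptions({ timeout: 5000 }).timeout); // 5000
```

Unknown keys passed by a caller are simply ignored, which keeps forward compatibility when a future release starts honoring them.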

Prototypical OO Classes

We use prototypical inheritance and constructor functions throughout the client, but the design reflects a more traditional OO style. We’ve found this makes sense to most of our customers of all skill/experience levels.

Stormpath is a User Management API, so our classes represent common user objects like Account, in addition to more generic classes like ApiKey; a few of these classes serve as examples throughout this post.

Builder Functions

Node.js and other APIs often use method chaining syntax to produce a more readable experience. You may have also heard of this referred to as a Fluent Interface.

In our client, it’s possible to perform any API operation using a client instance. For example, getApplications obtains all Applications by using the client and method chaining:

client.getApplications()
  .where('name')
  .startsWith('foo')
  .orderBy('name')
  .execute(function (err, apps) {
    // handle err, then work with the apps collection
  });

There are two important things to note from this getApplications example:

  1. Query construction with where, startsWith and orderBy functions is synchronous. These are extremely lightweight functions that merely set a variable, so there is no I/O overhead and as such, do not need to be asynchronous.
  2. The execute function at the end is asynchronous and actually does the work and real I/O behavior. This is always asynchronous to comply with Node.js performance best practices.

Did you notice getApplications does not actually return an applications list but instead returns a builder object?

A consistent convention we’ve added to our client library is that get* methods will either make an asynchronous call or they will return a builder that is used to make an asynchronous call.

But we also support direct field access via normal dot-notation property lookup, in which case a server request will not be made.

So a getter function does something more substantial, while direct access is a simple property read; both retain familiar dot notation. This convention creates a clear distinction between asynchronous behavior and simple property access, and the library user knows clearly what to expect in all cases.

Writing code this way helps with readability too; code becomes simpler and more succinct, and you always know what is going on.
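The pattern above can be sketched as a tiny builder. This is a minimal illustration, not Stormpath's actual implementation: the QueryBuilder name and the injected runQuery function are hypothetical, and the demo stub invokes its callback synchronously only to keep the example short (a real execute would perform asynchronous I/O).

```javascript
// Minimal builder: query-construction calls are synchronous and only record
// state; execute() is the single step that triggers the real work.
function QueryBuilder(runQuery) {
  this._criteria = {};
  this._runQuery = runQuery; // function doing the actual I/O
}

QueryBuilder.prototype.where = function (field) {
  this._criteria.field = field; // just sets a variable – no I/O
  return this;                  // returning `this` enables chaining
};

QueryBuilder.prototype.startsWith = function (value) {
  this._criteria.prefix = value;
  return this;
};

QueryBuilder.prototype.orderBy = function (field) {
  this._criteria.orderBy = field;
  return this;
};

QueryBuilder.prototype.execute = function (callback) {
  this._runQuery(this._criteria, callback); // the only step that does work
};

// Usage with a stubbed query function (synchronous for demo purposes only):
var result;
new QueryBuilder(function (criteria, cb) {
  cb(null, [criteria.prefix + 'App']);
})
  .where('name')
  .startsWith('My')
  .orderBy('name')
  .execute(function (err, apps) {
    result = apps; // ['MyApp']
  });
```

Each chained call is cheap because it only mutates `_criteria`; nothing touches the network until `execute` runs.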

Base Resource Implementation

The base resource class has four primary responsibilities:

  1. Property manipulation methods – Methods (functions) with complicated interactions
  2. Dirty Checking – Determines whether properties have changed or not
  3. Reference to DataStore – All our resource implementations hold a reference to an internal DataStore object (we’ll cover this soon)
  4. Lazy Loading – Loads linked resources

Resource and all of its subclasses are actually lightweight proxies around a DataStore instance, which is why the constructor function below takes two inputs:

  1. data (an object of name/value pairs)
  2. A DataStore object.

     var utils = require('utils');

     function Resource(data, dataStore) {
       var DataStore = require('../ds/DataStore');
       if (!dataStore && data instanceof DataStore) {
         dataStore = data;
         data = null;
       }
       data = data || {};
       for (var key in data) {
         if (data.hasOwnProperty(key)) {
           this[key] = data[key];
         }
       }
       var ds = null; //private var, not enumerable
       Object.defineProperty(this, 'dataStore', {
         get: function getDataStore() {
           return ds;
         },
         set: function setDataStore(dataStore) {
           ds = dataStore;
         }
       });
       if (dataStore) {
         this.dataStore = dataStore;
       }
     }
     utils.inherits(Resource, Object);
     module.exports = Resource;

When CRUD operations are performed against these resource classes, they just delegate work to the backend DataStore. As the DataStore is a crucial component of the private API, we keep it hidden using object-defined private property semantics. You can see this in practice with the public getters and setters around the private attribute above. This is one of the few ways to implement proper encapsulation in JavaScript.

If you remember to do just two things when implementing base resource classes, let them be:

  1. Copy properties over one-to-one
  2. Create a reference to a DataStore object to use later

Base Instance Resource Implementation

InstanceResource is a subclass of Resource. The base instance resource class prototypically defines functions such as save and delete, making them available to every concrete instance resource.

Note that the saveResource and deleteResource functions delegate work to the DataStore.

var utils = require('utils');
var Resource = require('./Resource');

function InstanceResource() {
  InstanceResource.super_.apply(this, arguments);
}
utils.inherits(InstanceResource, Resource);

InstanceResource.prototype.save = function saveResource(callback) {
  this.dataStore.saveResource(this, callback);
};

InstanceResource.prototype.delete = function deleteResource(callback) {
  this.dataStore.deleteResource(this, callback);
};

In traditional object oriented programming, the base instance resource class would be an abstract class. It isn’t meant to be instantiated directly; instead it should be used to create concrete instance resources like Application:

var utils = require('utils');
var InstanceResource = require('./InstanceResource');

function Application() {
  Application.super_.apply(this, arguments);
}
utils.inherits(Application, InstanceResource);

Application.prototype.getAccounts = function getApplicationAccounts(/* [options,] callback */) {
  var self = this;
  var args = Array.prototype.slice.call(arguments);
  var callback = args.pop();
  var options = (args.length > 0) ? args.shift() : null;
  return self.dataStore.getResource(self.accounts.href, options,
                                    require('./Account'), callback);
};

How do you support variable arguments in a language with no native support for function overloading? If you look at the getAccounts function on Application, you’ll see we’re inspecting the argument stack as it comes into the function.

The comment notation indicates what the signature could be, and brackets represent optional arguments. These signal to the client’s maintainers (the dev team) what the arguments are supposed to represent. It’s a handy documentation syntax that makes things clearer.

Application.prototype.getAccounts = function getApplicationAccounts(/* [options,] callback */) {

options is an object literal of name/value pairs and callback is the function to be invoked. The client ultimately directs the work to the DataStore by passing in an href. The DataStore uses the href to know which resource it’s interacting with server-side.
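The optional-argument inspection can be exercised in isolation with this hedged sketch (getThings is a hypothetical function; the pattern of popping the trailing callback and shifting the optional options literal mirrors the getAccounts code above):

```javascript
// Support both call styles: fn(callback) and fn(options, callback).
function getThings(/* [options,] callback */) {
  var args = Array.prototype.slice.call(arguments);
  var callback = args.pop();                         // callback is always last
  var options = (args.length > 0) ? args.shift() : null; // options is optional
  callback(null, { options: options });              // echo back what we parsed
}

var seen = [];
getThings(function (err, res) { seen.push(res.options); });
getThings({ expand: 'directory' }, function (err, res) { seen.push(res.options); });
console.log(seen[0]); // null – no options were supplied
console.log(seen[1]); // { expand: 'directory' }
```

Popping from the end of the argument list is what makes the trailing callback unambiguous regardless of how many leading optional arguments are present.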

Usage Paradigm

Let’s take a quick look at an example JSON resource returned by Stormpath:

  {
    "href": "",
    "givenName": "Tony",
    "surname": "Stark",
    "directory": {
      "href": ""
    }
  }
Every JSON document has an href field; it exists in all resources, everywhere. The JSON is exposed as data via the resource and can be referenced via standard dot notation like any other JavaScript object.

Note: Check out this blog post on linking and resource expansion if you’re wondering how we handle linking in JSON.

Proxy Pattern

Applications using a client will often have an href for one concrete resource and need access to many others. In this case, the client should support a method (e.g. getAccount) that takes in the href they have, to obtain the ones they need.

var href = '';

client.getAccount(href, function (err, account) {
  if (err) throw err;

  account.getDirectory(function (err, dir) {
    if (err) throw err;
    // dir is the account's parent Directory resource
  });
});

In the above code sample, getAccount returns the corresponding Account instance, and the account can then be immediately used to obtain its parent Directory object. Notice that you did not have to use the client again!

The reason this works is that the Account instance is not a simple object literal. It is instead a proxy, that wraps a set of data and the underlying DataStore instance. Whenever it needs to do something more complicated than direct property access, it can automatically delegate work to the datastore to do the heavy lifting.

This proxy pattern is popular because it allows for many benefits, such as programmatic interaction between resources and their linked references. In fact, you can traverse the entire object graph with just the initial href! That’s awfully close to HATEOAS! And it dramatically reduces boilerplate in your code by alleviating the need to repeat client interaction all the time.

SDK architecture diagram

So how does this work? When your code calls account.getDirectory, the underlying (wrapped) DataStore performs a series of operations under the hood:

  1. Create the HTTP request
  2. Execute the request
  3. Receive a response
  4. Marshal the data into an object
  5. Instantiate the resource
  6. Return it to the caller
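The six steps above can be sketched as a single getResource function. This is a hedged illustration, not Stormpath's actual internals: the constructor shapes and the stubbed request executor are assumptions, and the stub calls back synchronously so the demo stays self-contained.

```javascript
// Sketch of a DataStore that executes a request and wraps the result.
function DataStore(requestExecutor, ResourceCtor) {
  this.requestExecutor = requestExecutor; // performs the HTTP round-trip
  this.ResourceCtor = ResourceCtor;       // constructor used to wrap responses
}

DataStore.prototype.getResource = function (href, callback) {
  var self = this;
  // 1–3: create the HTTP request, execute it, receive a response
  this.requestExecutor({ method: 'GET', href: href }, function (err, body) {
    if (err) { return callback(err); }
    // 4: marshal the raw body into an object
    var data = JSON.parse(body);
    // 5: instantiate the resource proxy, handing it the DataStore itself
    var resource = new self.ResourceCtor(data, self);
    // 6: return it to the caller
    callback(null, resource);
  });
};

// Demo with a stubbed executor – no real network I/O.
function Account(data, dataStore) {
  this.href = data.href;
  this.dataStore = dataStore;
}

var ds = new DataStore(function (req, cb) {
  cb(null, JSON.stringify({ href: req.href })); // stub: echoes the href back
}, Account);

var got;
ds.getResource('/accounts/1', function (err, acct) { got = acct; });
console.log(got.href); // '/accounts/1'
```

The returned Account carries a reference to the DataStore, which is what lets it later proxy calls like getDirectory without the caller touching the client again.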

Client Component Architecture

Clearly, the DataStore does most of the heavy lifting for the client. There’s actually a really good reason for this model: future enhancements.

Your client will potentially handle a lot of complexity that is simpler, in the long run, to decouple from the resource implementations. Because the DataStore is part of the private API, we can leverage it to plug in new functionality and add new features without changing the public API at all. Client users just immediately see the benefits.


Here is a really good example of this point. The first release of our SDK Client did not have caching built in. Any time a Stormpath-backed app called getAccount, getDirectory, or any number of other methods, the client always had to execute an HTTP request to our servers. This obviously introduced latency to the application and incurred an unnecessary bandwidth hit.

However our DataStore-centric component architecture allowed us to go back in and plug in a cache manager. The instant this was enabled, caching became a new feature available to everyone and no one had to change their source code. That’s huge.

Anyway, let’s walk through the sequence of steps in a request, to see how the pieces work together.

Cache Manager Diagram

First, the DataStore looks up the cache manager, finds a particular region in that cache, and checks if the requested resource is in cache. If it is, the client returns the object from the cache immediately.

If the object is not in cache, the DataStore interacts with the RequestExecutor. The RequestExecutor is another DataStore component that in turn delegates to two other components: an AuthenticationStrategy and the RequestAuthenticator.
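A minimal cache-first sketch of that flow (the CacheManager shape and the function names are assumptions for illustration, not the SDK's real API; the stubbed executor calls back synchronously for brevity):

```javascript
// A tiny region-based cache manager.
function CacheManager() { this._regions = {}; }
CacheManager.prototype.get = function (region, href) {
  return (this._regions[region] || {})[href];
};
CacheManager.prototype.put = function (region, href, value) {
  (this._regions[region] = this._regions[region] || {})[href] = value;
};

// Check the cache before ever touching the request executor.
function getCachedResource(cacheManager, requestExecutor, region, href, callback) {
  var cached = cacheManager.get(region, href);
  if (cached) { return callback(null, cached); }   // cache hit: no HTTP request
  requestExecutor(href, function (err, resource) { // cache miss: go to the server
    if (err) { return callback(err); }
    cacheManager.put(region, href, resource);      // populate for next time
    callback(null, resource);
  });
}

var calls = 0;
var cm = new CacheManager();
var exec = function (href, cb) { calls++; cb(null, { href: href }); }; // stub

var first, second;
getCachedResource(cm, exec, 'accounts', '/accounts/1', function (e, r) { first = r; });
getCachedResource(cm, exec, 'accounts', '/accounts/1', function (e, r) { second = r; });
console.log(calls); // 1 – the second lookup was served from the cache
```

Because this logic lives behind the DataStore, enabling it changes nothing in caller code: the second request simply never reaches the network.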


REST clients generally authenticate by setting values in the authorization header. This approach is incredibly convenient because it means swapping authentication strategies is a simple matter of changing out the header. All that is required is to change out the AuthenticationStrategy implementation and that’s it – no internal code changes required!

Many clients additionally support multiple/optional authentication schemes. More on this topic in part 3.
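As a sketch of why header-based authentication makes strategies swappable, consider two hypothetical strategy functions (names and header values are illustrative, not Stormpath's): the only thing that differs between schemes is the Authorization header value.

```javascript
// Two interchangeable authentication strategies.
function basicAuthStrategy(apiKey) {
  // HTTP Basic: base64 of "id:secret"
  return 'Basic ' + Buffer.from(apiKey.id + ':' + apiKey.secret).toString('base64');
}

function bearerTokenStrategy(token) {
  return 'Bearer ' + token;
}

// The request pipeline only knows it must set one header.
function authenticate(request, strategy) {
  request.headers = request.headers || {};
  request.headers.Authorization = strategy(); // the single point of variation
  return request;
}

var req1 = authenticate({}, function () {
  return basicAuthStrategy({ id: 'id', secret: 'secret' });
});
var req2 = authenticate({}, function () {
  return bearerTokenStrategy('abc123');
});
console.log(req1.headers.Authorization); // 'Basic aWQ6c2VjcmV0'
console.log(req2.headers.Authorization); // 'Bearer abc123'
```

Swapping schemes means passing a different strategy function; none of the surrounding request-execution code changes.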

After authentication, the RequestExecutor communicates the outgoing request to the API server.

RequestExecutor to API Server

Finally, the ResourceFactory takes the raw JSON returned by the API server and invokes a constructor function to create the instance resource that wraps (proxies) this data, and again, the DataStore.


All of the client components represented in this diagram should be pluggable and swappable based on your particular implementation. To make this a reality as you architect the client, try to adhere to the Single Responsibility Principle: ensure that your functions and classes do one and only one thing so you can swap them out or remove them without impacting other parts of your library. If you have too many branching statements in your code, you might be breaking SRP and this could cause you pain in the future.

And there you have it! Our approach to designing a user-friendly and extremely maintainable client to your REST API. Check back for Part Three and a look at querying, authentication, and plugins!

API Management with Stormpath

Stormpath makes it easy to manage your API keys and authenticate developers to your API service. Learn more in our API Key Management Guide and try it for free!

CourionPurdue Pharma Selects Courion to Fulfill Identity and Access Management Requirements [Technorati links]

August 14, 2014 02:56 PM

Access Risk Management Blog | Courion

David DiGangi: Purdue Pharma L.P., a privately held pharmaceutical company based in Stamford, Connecticut, has selected the Courion Access Assurance Suite after an evaluation of several competing offerings. The pharmaceutical company will leverage the intelligence capabilities of the Access Assurance Suite to maintain regulatory compliance and mitigate risk.

Purdue Pharma, together with its network of independent associated US companies, has administrative, research and manufacturing facilities in Connecticut, New Jersey and North Carolina.

With the implementation of the intelligence capabilities within the Courion IAM Suite, Purdue will be able to automate routine IAM tasks and maintain compliance with US Food & Drug Administration requirements.

Kuppinger ColeExecutive View: WSO2 Identity Server - 71129 [Technorati links]

August 14, 2014 10:11 AM
In KuppingerCole

In contrast to common application servers, WSO2 provides a more comprehensive platform, adding on the one hand features such as event processing and business rule management, but on the other hand also providing strong support for security features. The latter includes WSO2 API Manager, which manages API (Application Programming Interface) traffic and thus supports organizations in managing and protecting the APIs they are exposing, for instance to business partners....

Kuppinger ColeExecutive View: Druva inSync - 71131 [Technorati links]

August 14, 2014 09:53 AM
In KuppingerCole

Druva’s approach to information protection is quite unique among traditional solutions, since instead of maintaining a centralized data storage and enabling secure access to it from outside, inSync maintains a centralized snapshot of data backed up from all endpoints and operates on this snapshot only, leaving the original data on endpoints completely intact.
Having its roots in a multiplatform cloud backup and file sharing platform, inSync has evolved into an integrated service...

August 13, 2014

Pamela Dingle - Ping IdentityThe next conversation to be had [Technorati links]

August 13, 2014 05:01 PM

Ok, now that CIS and Catalyst conferences are (almost) out of the way, we need to rally the identity geeks and start talking about OAuth and OpenID Connect design patterns.   We need to get some public discourse going about token architectures for various real world business access scenarios.

The value proposition needs to be made more concrete.  So let’s try to push on that rope in the next few months.


August 12, 2014

Nat SakimuraGovernment Publishes Materials on Identity Verification Measures under the My Number System [Technorati links]

August 12, 2014 11:00 PM


This document explains the number verification and identity verification methods required by law when the number used to identify specific individuals in administrative procedures (My Number) is provided. It divides the cases into (I) receiving the number from the person themselves and (II) receiving it from an agent, and explains each for (1) in-person/postal mail, (2) online, and (3) telephone.

It is interesting that in-person and postal mail are grouped together. International standards divide cases into (1) in-person and (2) remote, so postal mail would, if anything, be paired with online; the government's grouping this time seems instead to be (1) verification on paper, (2) electronic verification, and (3) verification by voice.


① The Individual Number Card (reading the IC chip) [Regulation 4(i)]
② An electronic signature via the public personal authentication service (JPKI) [Regulation 4(ii)(c)]
③ A method the individual-number-using administrative entity deems appropriate [Regulation 4(ii)(d)]

Here, [Regulation 4(ii)(d)] means "see item (d) of Article 4, item (ii) of the Enforcement Regulations." The relevant passage reads: "In addition to what is listed in (c), confirming, by a method the individual-number-using administrative entity deems appropriate, that the person using a computer connected to the electronic data processing system via telecommunication lines is the person making the provision."

What caught my attention personally in this document is the note attached to ③ above, the method the individual-number-using administrative entity deems appropriate: "* Envisioned examples include privately issued electronic signatures and IDs/passwords issued by the individual-number-using administrative entity." This suggests room for future expansion.

[1] Cabinet Secretariat: "Materials on Identity Verification Measures"

[2] Cabinet Secretariat: "Enforcement Regulations of the Act on the Use of Numbers to Identify a Specific Individual in Administrative Procedures (My Number Act Enforcement Regulations)"

[3] The rabbit illustration came from here →, since it was pointed out that using the official My Number rabbit mascot would run afoul of its terms of use.

Kantara InitiativeRoad to SXSW 2015 [Technorati links]

August 12, 2014 10:43 PM

Care and Feeding of Human & Device Relationships

It’s that time again to choose your sessions for SXSW Interactive.  Here’s a summary from our experience last year as well as just a few of our suggested picks. 

SXSW Interactive provides a unique and innovative platform to share experiences and connect with a diverse set of stakeholders that can only be found in springtime in Austin. We love to regularly connect with best-in-class identity services professionals, but SXSW stands out as an event where we connect with people and organizations of all types. The opportunity for unmatched diversity in one place is something that comes only once a year.

Last year Kantara Initiative presented Tips and Tools for Protected Connection as part of the broader IEEE technology for humanity series. Our panel included privacy technology innovations, practices, solutions and research from ISOC, TOR Project and UMA. We’re focusing on IoT and identity this year with 2 panel submissions. We’ve submitted the Care and Feeding of Human & Device Relationships with panelists from ForgeRock, CA, and others. We’ve also worked with our Board Member IEEE-SA to submit a proposal from the Identities of Things WG with panelists from Deutsche Telekom, Cisco, Perey Research and Consulting and ForgeRock as part of the IEEE 2015 series.

The road to SXSW is a long one but with your support we hope to get on the schedule again!  Have a look at our highlighted session for your voting pleasure. There are MANY quality proposals this year so this is just a taste.  Please vote for our submissions and let us know about your favourites!

Our Picks for SXSW 2015

1. Care and Feeding of Human & Device Relationships

Relationships are formed of connections and interactions. We have relationships with humans and entities like our employers, Twitter, Facebook, and our families. We also have relationships with objects like our phones, cars, and gaming consoles. Our connections, roles, and relationships are multiplying with each innovation. People, entities, and things all have identities. Who is paying attention to the relationships between each? Who has the authority to confirm if a relationship is valid, current, and should be trusted? With more and more interactions and automation, how can we understand the associated relationships and manage billions of them? This session discusses the developing laws of relationships between people, entities, and things, and provides an innovative view of the landscape from the Identity Relationship Management Open Work Group. Find out what you need to know about the management of human and device relationships. Discover how you can participate.

2. Identities of Things Group: Paving the Way for IoT

There’s a ton of promise in “smart everything.” However, the convergence of technology and sheer proliferation of data being gathered by sensors, cameras and other networked devices are making the road to the Internet of Things (IoT) a bumpy one. Today, there are no overarching frameworks that support broad authentication or data management, fueling serious data privacy and security concerns. Further, there’s no “DNS-like” framework that maps object identities, so things can effectively communicate and work with each other. In order for IoT to realize its promise, we must differentiate between people and objects, putting standards and structures in place that maximize the use of networked data, while guarding against abuse. Learn how the Identities of Things Group is working with industry to assess the IoT Landscape and develop harmonized frameworks that will help enable the Internet of Things. Find out how to get involved in defining an IoT future where PEOPLE matter most!

3. A framework for Privacy by Design

We live in an era where the pace of technological advancement is speeding along faster than the world can comprehend or respond. As we try to keep up, we are merging our limited understanding of emerging technology with our own antiquated views, policies and concepts related to personal identity, privacy and data governance. As a result, the world is playing an awkward and inefficient global game of “catch-up” that may do more harm to privacy than good. It is time for a more proactive stance; a vision, framework and standards that can help the world incorporate “Privacy by Design.” Join these two incredible thought leaders for a conversation around a new, global Privacy by Design concept that incorporates standards for privacy as an integral part and practice of development. We’ll explore what we own, how we store it, and who’s responsible for keeping it secure and what’s at stake for the future.

4. Biometrics & Identity: Beyond Wearable

From mobile devices to wearable gear, the increasingly ergonomic, small, lightweight, body conscious, attachable, controllable and comfortable devices we use are becoming physical extensions of ourselves. From phone to fitbit, as we become more dependent on these devices, our comfort level with the capture and use of our intimate personal data increases. However, will we become comfortable using our biometric and genomic data to digitally unlock our every day lives — from car to communications, home security to banking, healthcare to services? We are moving beyond wearables, to an age where products like biyo, which connects physical payment to a scan of the unique veins in the human palm, are becoming present market realities. What are the implications of using personal biometric data as the virtual keys that unlock our very real lives? How should we feel about using such sensitive, personal data as a means of self-identification?

We look forward to SXSW 2015.  Happy voting!!


Netweaver Identity Management Weblog - SAPOverlooked Risk in Middle Tier M&A? [Technorati links]

August 12, 2014 08:02 PM

If you have ever been part of a big public company merger then most likely the merger included an audit and review of the IT assets, principally those that provide the accounting and reporting.  Post merger and before the two merged companies are interconnected there is also a review of the security policies in order to determine risks and gaps that could lead to compromise.  If there is a large difference in policy, the interconnectivity can be delayed until the security differences are corrected and verified.  This behavior is prudent.  Data compromise can damage a company’s carefully guarded reputation and lead to significant losses.  Beyond loss in sales it can also drive the stock price down.

Private Equity firms that buy and sell companies in the middle tier are strongly focused on the financial health of the company they are purchasing. Certainly, financial health indicates a well-run company. Hours are spent structuring the deal and ensuring they know what they are acquiring. No one wants to be defrauded. From the seller's perspective, they want a high asking price and zero encumbrances.

From what I have seen, both the buy side and the sell side pay little attention to either information security or physical security risks. This, even though middle-tier companies tend to have fewer resources and are more likely to have major security gaps, whether within their facilities or their network infrastructure. Consider a scenario where you are either buying or selling a company that has been compromised, and the hackers are quietly lying in wait, collecting additional access credentials and elevating privileges. Over time they will be able to exfiltrate all intellectual property. Where the hacking is being done by a state actor, it will be shared with domestic competitors. If this is a platform company that has been built up over several years, this amounts to a staggering loss of value. The buyer is accumulating exposure the same way someone who sells naked options without holding the underlying asset accumulates exposure. The same can be said for the supply chain, where downstream providers of services connected into the network increase the size and diversity of the threat landscape. A compromise within this system, if not properly secured, could bring down years of work and destroy any equity built. Any time one is purchasing or selling a company, one should take security exposure seriously and hire the teams necessary to do a thorough review.

I frequently hear people say that a business-ending compromise is a rare event. How rare or improbable an event is matters less than the consequences of its occurring. You can’t zero out risks, of course, but you should follow what works. If you are not already doing this, I recommend the list below. It applies to domestic acquisitions within first-world countries. Cross-border buys add additional challenges (e.g. FCPA exposure), but this list still applies at the macro level.

  1. Thorough review and harmonization of security policies.
  2. Reciprocal audit agreements with 3rd party suppliers in place.
  3. Thorough review of security controls.
  4. Conduct a network vulnerability assessment covering both internal networks and boundaries.
  5. Perform a penetration test (physical and digital).
  6. Look at patch management processes.
  7. Review identity management practices and access control.
  8. Code audit of custom mission critical applications.
  9. An up to date threat model.
  10. Physical security audit.

MythicsThe Power of ZS3 Amplified by DTrace [Technorati links]

August 12, 2014 06:12 PM

The infinite appetite for data from our applications is a never ending challenge to the IT staff. Not only do we need to keep feeding…

GluuSXSW 2015: How API access control = monetization + freedom [Technorati links]

August 12, 2014 04:25 PM


Control access to your APIs, and you can charge for them. Large companies see API access management at scale as a competitive advantage and a way to lock in customers. Think about Google docs: it only works if both parties have an account at Google.

But the greatness of the Internet was not achieved by the offerings of a single domain. If each device and cloud service has proprietary security controls, people will have no way to effectively manage their personal digital infrastructure. Luckily, standards have emerged, thanks to a simple but flexible JSON/REST framework called OAuth2 and the “OpenID Connect” and “User Managed Access” profiles of it.

This talk will provide a history of access management and a deep dive into the concepts, patterns, and tools to enable mobile and API developers to put new OAuth2 standards to use today. It will provide specific examples and workflows to bring OAuth2 to life to help organizations understand how they can hook into the API economy.



Vote here


Mark Dixon - OracleYou’re Home at Last, my iPad, You’re Home at Last! [Technorati links]

August 12, 2014 03:16 PM

Last Wednesday, a dreaded First World fear was realized.  During a tight connection between flights at the Dallas–Fort Worth airport, I left my iPad in the seat pocket on my first flight.  I didn’t realize what I had done until I reached into my briefcase for it on my next flight. My heart sank. I use the iPad for so many things. To lose it was a huge disruption in my day-to-day life, not to mention the cost and hassle of replacement.

A call to the DFW lost and found department was not reassuring. I was instructed by the telephone robot to leave a message with contact information and lost item description, and wait.  I dutifully complied, but had real doubts about whether I’d ever see my iPad again.  A conversation with an American Airlines gate agent gave a little bit of hope.  She assured me that every lost item was investigated, and that I should be patient for the process to take its course.

By Monday morning, I had about given up hope.  But then – the phone call – my iPad had been found!  I had activated the “Find My iPhone” feature, which caused my phone number to be displayed whenever the device was turned on.  The lost and found agent called me, verified that the device was indeed mine, and arranged for it to be returned to me by FedEx. Then things got interesting …

Soon after I received the happy phone call, I received an email, also informing me that the iPad had been found – another nice feature of Find my iPhone.  


Apparently, when a device is in the “lost” mode, it will continue to wake up periodically and attempt to send its location via email.  I have received 18 emails to that effect since the iPad was first found yesterday morning, each with a little map pinpointing its current location.

I really enjoyed tracking the iPad’s progress as it found its way back to me via my iPhone’s Find My iPhone app.  In the photos below, you can see my iPad’s circuitous journey around DFW yesterday, its flight to the FedEx hub and back to Phoenix overnight, and the fairly direct route to my home by 7:33 this morning!


So, in addition to getting my treasured iPad back, I received an object lesson in the value of mobile location services!  We live in wonderful times!

KatasoftBuild a Node API Client – Part 1: REST Principles [Technorati links]

August 12, 2014 03:00 PM

Build a Node API Client – Part 1: REST Principles FTW

If you want developers to love your API, focus the bulk of your efforts on designing a beautiful one. If you want to boost adoption of your API, consider a tool to make life easier on your users: client libraries. Specifically, Node.js client libraries.

This series will cover our playbook for building a stable, useful Node.js client in detail, in three parts:

Part 1: Need-to-know RESTful Concepts

Part 2: REST client design in Node.js

Part 3: Functionality

If you would like to learn general REST+JSON API design best practices, check out this video. In this article series, we are going to focus exclusively on client-side concepts.

Lastly, keep in mind that while these articles use Node.js, the same concepts apply to any other language client with only syntax differences.

OK. Let’s begin with the RESTful concepts that make for a killer client. After all, if we don’t nail these down, no amount of JavaScript will save us later.


HATEOAS, usually pronounced ‘Haiti-ohs’, is an acronym for “Hypermedia As The Engine of Application State”. Aside from being an unfortunate acronym, HATEOAS dictates that a REST client should not need prior knowledge of the REST API it talks to: the client issues an initial request, and everything it needs from that point on can be discovered from the responses.
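To make the idea concrete, here is a minimal sketch of what a HATEOAS-style interaction looks like from the client's side. The response body and hrefs below are hypothetical, not from any real API: the point is that the client only knows the entry point and link names, and discovers every URL from the response itself.

```javascript
// Hypothetical hypermedia response from an initial request: every
// capability is advertised as a link, so the client discovers URLs
// instead of hard-coding them.
var rootResponse = {
  href: 'https://api.example.com/v1',
  links: {
    accounts: { href: 'https://api.example.com/v1/accounts' },
    groups:   { href: 'https://api.example.com/v1/groups' }
  }
};

// A HATEOAS client resolves the next URL by link name ("rel"),
// never by assuming the server's URL layout.
function follow(resource, rel) {
  var link = resource.links && resource.links[rel];
  if (!link) {
    throw new Error('Link not advertised by the server: ' + rel);
  }
  return link.href;
}

console.log(follow(rootResponse, 'accounts'));
// 'https://api.example.com/v1/accounts'
```

If the server later moves its accounts collection, only the advertised link changes; a strict HATEOAS client keeps working without modification.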

HATEOAS is still considered the ideal target for REST API design, but at times, HATEOAS will present a challenge to our REST client design efforts. Think of it this way:

  1. Your end-users will want to use your shiny new client to do the things they know are possible in your API. They will want to invoke known / defined behavior.
  2. You will want to provide them convenience functions to make it super easy for them to interact with your API – things that may not be as easy or possible with standard HTTP requests.
  3. Reconciling 1 and 2 with HATEOAS simply won’t always be possible.

Our philosophy takes the pragmatic view: while HATEOAS is ideal for automated software agents (like browsers), it is often not as nice for humans who want library functions that address specific needs, so we will diverge from HATEOAS when it makes sense.

REST Resources

Resources transferred between the client and the API server represent things (nouns), not behaviors (verbs). Stormpath is a User Management API, so for us, resources are records like user accounts, groups, and applications.

No matter what your API’s resources are, each resource should always have its own canonical URL. This globally unique href identifies that resource and only that resource. This point really can’t be stressed enough; canonical URLs are the backbone of RESTful architecture.

In addition to having a canonical URL, resources should be coarse-grained. In practice, this means we return all resource properties in their entirety in the REST payload instead of as partial chunks of data. Assumptions about who will use the resource and how they will use it only make it more difficult to expand your use cases in the future. Besides, coarse-grained resources translate to fewer overall endpoints!
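A short, hypothetical example may help: the account payload below (all field names and hrefs invented for illustration) shows a coarse-grained resource returning its full property set in one payload, with the canonical href doubling as the resource's globally unique identifier.

```javascript
// Hypothetical coarse-grained account resource: the complete set of
// properties comes back in one payload, never as partial chunks.
var account = {
  href: 'https://api.example.com/v1/accounts/a1b2c3', // canonical URL
  username: 'jlpicard',
  email: 'jlpicard@example.com',
  status: 'ENABLED',
  // linked resources are referenced by their own canonical hrefs
  directory: { href: 'https://api.example.com/v1/directories/d4e5f6' }
};

// Because the href is canonical, it can serve as the resource's
// globally unique identifier anywhere in client code.
function canonicalId(resource) {
  return resource.href;
}
```

Note how the linked directory is itself referenced only by href, so the client can fetch it in full whenever it actually needs it.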

Collection Resources

Collection resources are first-class citizens with their own first-class properties. These properties contain data that in turn describe the collection itself, such as limit and offset.

Collection resources should have a canonical URL and follow a plural naming convention, like /applications (and not /application) to make identification easy. A collection resource always represents potentially many other resources, so plural naming conventions are the most intuitive.

Collections usually support create requests as well as query requests, for example, ‘find all children resources that match criteria X’.
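Putting those properties together, here is a hypothetical collection payload and a small helper showing how a query against the collection might be expressed as query parameters on the collection's canonical URL (the hrefs and field names are invented for illustration, not any specific API's format):

```javascript
// Hypothetical collection resource: a first-class object with its own
// properties (offset, limit) describing the collection, plus the items.
var applications = {
  href: 'https://api.example.com/v1/applications', // canonical, plural URL
  offset: 0,
  limit: 2,
  items: [
    { href: 'https://api.example.com/v1/applications/abc1' },
    { href: 'https://api.example.com/v1/applications/abc2' }
  ]
};

// A query request ('find all children matching criteria X') targets the
// collection URL with URL-encoded query parameters.
function queryUrl(collection, params) {
  var qs = Object.keys(params).map(function (k) {
    return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
  }).join('&');
  return collection.href + '?' + qs;
}

var url = queryUrl(applications, { name: 'My App' });
// 'https://api.example.com/v1/applications?name=My%20App'
```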

Instance Resources

An instance resource is usually represented as a child of some parent collection. For example, a URI like /applications/anAppId (the identifier here is illustrative) references a particular application within the /applications collection. The implication is that if you interact with this endpoint, you interact with a single application.


For the most part, instance resources only need to support read, update, and delete operations. While not a hard and fast rule, reserving create for collections is a common convention in REST API design, especially if you want to generate unique identifiers for each newly-created resource.

Resource Code Examples

Ok, now the fun stuff – code!

If we are to translate these REST resource concepts to working code, it would make sense to have code artifacts that represent both collection and instance resources.

Here’s an example of what it might look like to define a very general resource concept in a Node.js library:

var util = require('util');

function Resource(...) { ... }
util.inherits(Resource, Object);


We’re using JavaScript’s prototypal inheritance here to simulate classical object-oriented inheritance. We’ve found this to be the easiest abstraction to understand for most developers using code libraries, so we went with this paradigm.

As you can see, the Resource ‘class’ above takes advantage of the standard Node util library to create a resource constructor function. If you want to simulate a classical OO hierarchy, util is a great way to do it.

Next, we’ll extend this general resource class to create more specific Instance and Collection resource classes.

function InstanceResource(...) {...}
util.inherits(InstanceResource, Resource);

anInstanceResource.save(function (err, saved) {
  // invoked asynchronously with an error or the successfully saved resource
});

anInstanceResource.delete(function (err) {
  // invoked asynchronously; only an error may be passed
});
As mentioned, you’ll notice save and delete methods on InstanceResource, but no create. The callback on save returns either an error or the successfully saved object; delete has no object to return, so only an error might be provided to the callback. Both methods are called asynchronously after the operation is complete.

So what’s the takeaway? You can save or delete individual things, but not necessarily entire collections. Which leads us to our next resource class:

function CollectionResource(...) {...}
util.inherits(CollectionResource, Resource);

aCollResource.each(function (item, callback) {
  // process each instance, then invoke callback to continue
}, function onCompletion(err) {
  // invoked when iteration finishes or an error occurs
});

... other async.js methods ...

CollectionResource can support a number of helper functions, but the most common is each. each takes an iterator function which is invoked asynchronously for every instance in the collection.

applications.each(function (app, callback) {
  console.log(app);
  callback(); // tell async.js this iteration is finished
}, function finished(err) {
  if (err) console.log('Error: ' + err);
});

This example uses each to simply log all the instance resources.

As a great convenience, we made the decision early on to assimilate all of async.js’ collection utility functions into all Collection resources. This allows developers to call Stormpath methods using the semantics of async.js and allows the client to delegate those methods to the corresponding async.js functions behind the scenes. Eliminating even just one package to import has proven to be really convenient, as we’ll discuss more in part 3. We’re big fans of async.js.
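For readers curious how such an each helper works under the hood, here is a minimal sketch of the async.js-style contract (this is an illustrative reimplementation, not Stormpath's or async.js's actual code): the iterator receives a callback it must invoke when its possibly asynchronous work is done, and the completion function runs after every item succeeds or on the first error.

```javascript
// Minimal async.js-style each: iterator(item, done) is called for every
// item; onComplete(err) fires once, after all items finish or on the
// first error reported by an iterator.
function each(items, iterator, onComplete) {
  var remaining = items.length;
  var failed = false;

  if (remaining === 0) return onComplete(null);

  items.forEach(function (item) {
    iterator(item, function done(err) {
      if (failed) return;          // an error was already reported
      if (err) {
        failed = true;
        return onComplete(err);
      }
      remaining -= 1;
      if (remaining === 0) onComplete(null);
    });
  });
}
```

Counting outstanding callbacks this way is what lets the helper work whether the iterator completes synchronously or after a network round trip.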

(Note that async.js requires you to invoke a callback method when you’re done with a given iteration step.)

That’s it for the RESTful groundwork! We have a number of other articles and videos pertaining to REST security, linking, POST vs PUT, and more on the blog if you’re interested.

Part two of this series will be all about the nitty-gritty details of coding the client. Expect sections on encapsulation, public vs. private API implementation, and our component architecture. Stay tuned!

API Management with Stormpath

Stormpath makes it easy to manage your API keys and authenticate developers to your API service. Learn more in our API Key Management Guide and try it for free!

August 11, 2014

GluuGluu SXSW 2015 Interactive Picks [Technorati links]

August 11, 2014 07:19 PM

SXSW 2015 Panel Picker Picks

Voting for SXSW 2015 interactive sessions is NOW OPEN!

Approximately every day until voting ends we’ll highlight a new proposal that seems worthy of inclusion in SXSWi 2015, with a bias towards security related topics.

Click on any session title below to access the voting page.

A Walk Through the Identity Ecosystem in 3D

  1. NEW TODAY! A Walk Through the Identity Ecosystem in 3D Take a 3D tour of the modern digital identity ecosystem and learn how persons, organizations, and devices provide the new foundation for defining and mitigating identity threats. Glasses included.
    By: Suzanne Barber, UT Center for Identity
  2. How API access control = monetization + freedom Gluu CEO Mike Schwartz will provide a history of access management and a deep dive into the concepts, patterns, and tools needed to enable mobile and API developers to put new OAuth2 standards to use today.
    By: Mike Schwartz, CEO Gluu
  3. Prototyping Tools and Techniques for UX Designers UX design prototyping has come a long way in recent years. Learn about cutting edge tools, techniques, and various ways to incorporate interactive design prototyping along with user testing into your overall process.
    By: John Goff, Ebay
  4. Digital Identity and the New Consumer The evolution of consumer digital identity, how it changes the brand conversation, empowers consumers and most importantly, creates real opportunities for authentic brand engagement.
    By: Reggie Wideman, Janrain
  5. Care and Feeding of Human & Device Relationships This session discusses the developing laws of relationships between people, entities, and things and provides an innovative view of the landscape from the Identity Relationship Management Open Work Group.
    By: Eve Maler, ForgeRock, Joni Brennan, Kantara, Ian Glazer, SalesForce & Michelle Waugh, CA
  6. OAuth2 – The Swiss-Army Framework This session will focus on the myriad of ways OAuth2 can be used to protect APIs, and how OpenID Connect is replacing SAML as the developer friendly way to handle SSO and federated logins.
    By: Brent Shaffer, Adobe
  7. Death to passwords – mobile security done right What techniques exist to offer a more mobile-friendly person-identification flow? Highlighting authorization and authentication techniques like OAuth and OpenID Connect, and even hardware features like Bluetooth Low Energy, this talk will be interesting for anyone facing a situation where creating and storing user accounts matters.
    By: Tim Messerschmidt, PayPal
  8. Secrets to Powerful APIs What’s new in API development from some of today’s most popular APIs including GitHub, SoundCloud, Stripe, and Dropbox. Topics will include designing RESTful APIs, user authentication, APIs for media, developing SDKs, and APIs for mobile.
    By: Leah Culver, Developer Advocate at Dropbox; Greg Brockman, CTO Stripe; Erik Michaels-Ober, Developer at SoundCloud; Wynn Netherland, Developer at GitHub
  9. Fingerprints are Usernames, not Passwords What are the implications of biometric sensors in consumer devices, and how we might want to change our thinking and approach to protect privacy and increase security.
    By: Dustin Kirkland, Canonical
  10. Passwords are broken. Time for alternatives! What is the current state of passwords, and what are the options for web developers to avoid the dreaded username/password SQL table.
    By: David Ochel, Secuilibrium
  11. Rethinking Privacy in the Internet of Things With the mass boom in online profiles, companies will very soon be dealing with millions to billions of potential ID challenges. This brings security issues and usage problems to the forefront that nobody is talking about quite yet.
    By: Steve Shoaff, CEO at UnboundID


If you plan to be in Austin for SXSW, please let us know!

KatasoftManage your API Keys with Java, Jersey, and Stormpath [Technorati links]

August 11, 2014 03:00 PM

If you are a Java developer, then you are undoubtedly familiar with frameworks such as Spring, Play!, and Struts. While all three provide everything a web developer wants, I decided to write a RESTful web application using the Jersey framework. This sample app uses Java + Jersey on the back end and AngularJS on the front end.

Jersey’s annotation service makes it easy to do routing, injection, and other functions important to a RESTful web application. My goal was to demonstrate the use of the Stormpath Java SDK for user management and the protection of a REST endpoint using API Keys and OAuth tokens, all while relying on Jersey.

You can check out the Stormpath Jersey sample app on GitHub, and follow along here for the implementation details and concepts I found most important while building this application. I will explain Stormpath SDK calls, Jersey annotations, and the general flow of the application, so the codebase is easy to decipher.

Let’s code!


Stormpath provides username/password authentication in three lines of Java SDK method calls. That makes it very simple to launch a basic login form, securely.

As soon as a user enters their credentials and clicks the “Sign In” button, an AJAX request is made to the /login endpoint. Let’s take a look at the server side login code:

@Path("/login")
public class Login {

  @Context
  private HttpServletResponse servletResponse;

  @POST
  public void getDashboard(UserCredentials userCredentials)
      throws Exception {

    Application application = StormpathUtils.myClient.getResource(
        StormpathUtils.applicationHref, Application.class);

    AuthenticationRequest request = new UsernamePasswordRequest(
        userCredentials.getUsername(), userCredentials.getPassword());

    Account authenticated;

    // Try to authenticate the account
    try {
      authenticated = application.authenticateAccount(request).getAccount();
    } catch (ResourceException e) {
      System.out.println("Failed to authenticate user");
      return;
    }

    // Remember the authenticated account's href in a one-hour session cookie
    Cookie myCookie = new Cookie("accountHref", authenticated.getHref());
    myCookie.setMaxAge(60 * 60);
    servletResponse.addCookie(myCookie);
  }
}

Here we see three examples of Jersey’s annotation feature. The @Path annotation acts as our router. The @Context annotation injects the HTTP Request object into our class. Finally, the @POST specifies the CRUD operation.

User authentication is done by first creating an Application object, then creating an AuthenticationRequest object, and finally calling application.authenticateAccount(request) to ask Stormpath to authenticate this account.

Create Account

Creating an account is just as simple as logging in to the service:

public class StormpathAccount {

  @POST
  public void createAccount(UserAccount userAccount) throws Exception {

    Application application = StormpathUtils.myClient.getResource(
        StormpathUtils.applicationHref, Application.class);
    Account account = StormpathUtils.myClient.instantiate(Account.class);

    // Set account info and create the account
    account.setUsername(userAccount.getUsername());
    account.setEmail(userAccount.getEmail());
    account.setPassword(userAccount.getPassword());
    application.createAccount(account);
  }
}

All we had to do here was create an Application and an Account, set the Account’s attributes, and call createAccount.

Generating an API Key ID/Secret

Once a user logs in, they will be given API Key credentials. In this application, generation of the Keys is a simple AJAX call to /getApiKey:

@Path("/getApiKey")
public class Keys {

  @Context
  private HttpServletRequest servletRequest;

  @Context
  private HttpServletResponse servletResponse;

  @GET
  public Map<String, String> getApiKey(
      @CookieParam("accountHref") String accountHref) throws Exception {

    Account account = StormpathUtils.myClient.getResource(accountHref,
        Account.class);

    ApiKeyList apiKeyList = account.getApiKeys();

    boolean hasApiKey = false;
    String apiKeyId = "";
    String apiSecret = "";

    // If the account already has an API Key, reuse it
    for (Iterator<ApiKey> iter = apiKeyList.iterator(); iter.hasNext();) {
      hasApiKey = true;
      ApiKey element = iter.next();
      apiKeyId = element.getId();
      apiSecret = element.getSecret();
    }

    // If the account doesn't have an API Key, generate one
    if (!hasApiKey) {
      ApiKey newApiKey = account.createApiKey();
      apiKeyId = newApiKey.getId();
      apiSecret = newApiKey.getSecret();
    }

    // Get the username of the account
    String username = account.getUsername();

    // Make a JSON object with the key and secret to send back to the client
    Map<String, String> response = new HashMap<>();
    response.put("api_key", apiKeyId);
    response.put("api_secret", apiSecret);
    response.put("username", username);

    return response;
  }
}
We use Jersey’s @CookieParam annotation to grab the account href from the cookie that was created at login. We fetch the Account and its ApiKeyList. We then check whether this account already has an API Key. If so, our job is simply to request it from Stormpath; if not, we tell Stormpath to make a new one for this account and return it to the client. By Base64-encoding the API Key ID:Secret pair, a developer can now target our endpoint using Basic authentication:
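As a sketch of what that looks like from the client's side (in Node, to match the rest of this series), the Authorization header is just the Base64-encoded "keyId:secret" pair prefixed with "Basic". The key pair below is a made-up placeholder, not a real credential:

```javascript
// Hypothetical API Key credentials returned by the /getApiKey endpoint.
var apiKeyId = 'FAKE_KEY_ID';
var apiSecret = 'FAKE_KEY_SECRET';

// Build the HTTP Basic Authorization header value:
// 'Basic ' + base64("keyId:secret")
function basicAuthHeader(id, secret) {
  var encoded = Buffer.from(id + ':' + secret).toString('base64');
  return 'Basic ' + encoded;
}

// Send this as the Authorization header on requests to the protected API.
var header = basicAuthHeader(apiKeyId, apiSecret);
```

Remember that Base64 is an encoding, not encryption, which is why Basic credentials should only ever travel over HTTPS.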

Using a Jersey Filter

A cool feature of the Jersey framework is its request filter. By implementing ContainerRequestFilter we can intercept an HTTP request even before it reaches our endpoint. To demonstrate, I added an additional level of security around API Key generation: before a user is allowed to target the /getApiKey endpoint, they must pass through the Jersey request filter, which checks whether the client is actually logged in (i.e., has a valid session in the form of a cookie).

@Provider
public class JerseyFilter implements ContainerRequestFilter {

  @Context
  private HttpServletResponse servletResponse;

  @Override
  public void filter(ContainerRequestContext requestContext)
      throws IOException {

    URI myURI = requestContext.getUriInfo().getAbsolutePath();
    String myPath = myURI.getPath();

    if (myPath.equals("/rest/getApiKey")) {
      Iterator it = requestContext.getCookies().entrySet().iterator();
      String accountHref = "";

      while (it.hasNext()) {
        Map.Entry pairs = (Map.Entry) it.next();

        if (pairs.getKey().equals("accountHref")) {
          String hrefCookie = pairs.getValue().toString();
          accountHref = hrefCookie; // the account href stored at login
        }
      }

      if (!accountHref.equals("")) {
        // Cookie exists, continue.
      } else {
        System.out.println("Not logged in");
        requestContext.abortWith(Response.status(403).build());
      }
    }
  }
}
If a client is trying to get an API Key without being logged in, they will get a 403 before even reaching the actual endpoint.

Exchanging your API Keys for an OAuth Token

Want even more security? How about trading your API Key for an OAuth token? Using OAuth also brings the functionality of scope, which we can use to allow users to get weather only from specified cities.
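From the client's perspective, the exchange is a POST of a form-encoded body carrying a grant_type and the requested scopes, authenticated with the same Basic header as before. A small sketch of building that body (the client_credentials grant value and the city scope names are illustrative assumptions, not taken from the server code below):

```javascript
// Build the form-encoded body for an OAuth2 token request.
// grant_type 'client_credentials' is the standard grant for exchanging
// API key credentials for a token; scopes are space-delimited.
function tokenRequestBody(scopes) {
  var params = {
    grant_type: 'client_credentials',
    scope: scopes.join(' ')
  };
  return Object.keys(params).map(function (k) {
    return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
  }).join('&');
}

var body = tokenRequestBody(['London', 'Berlin', 'SanFrancisco']);
// 'grant_type=client_credentials&scope=London%20Berlin%20SanFrancisco'
```

POSTing this body (with Content-Type application/x-www-form-urlencoded) is what populates the @FormParam("grant_type") and @FormParam("scope") parameters on the server side.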

Let’s take a look at the code:

public class OauthToken {

  @POST
  public String getToken(@Context HttpHeaders httpHeaders,
                         @Context HttpServletRequest myRequest,
                         @Context final HttpServletResponse servletResponse,
                         @FormParam("grant_type") String grantType,
                         @FormParam("scope") String scope) throws Exception {

    /* Jersey's request.getParameter() always returns null, so we have to
       reconstruct the entire request ourselves in order to keep the data. */

    Map<String, String[]> headers = new HashMap<String, String[]>();

    for (String httpHeaderName : httpHeaders.getRequestHeaders().keySet()) {
      List<String> values = httpHeaders.getRequestHeader(httpHeaderName);
      String[] valueArray = values.toArray(new String[values.size()]);
      headers.put(httpHeaderName, valueArray);
    }

    Map<String, String[]> body = new HashMap<String, String[]>();
    String[] grantTypeArray = {grantType};
    String[] scopeArray = {scope};

    body.put("grant_type", grantTypeArray);
    body.put("scope", scopeArray);

    HttpRequest request = HttpRequests.method(HttpMethod.POST)
        .headers(headers)
        .parameters(body)
        .build();

    Application application = StormpathUtils.myClient.getResource(
        StormpathUtils.applicationHref, Application.class);

    // Build a scope factory
    ScopeFactory scopeFactory = new ScopeFactory() {
      public Set createScope(AuthenticationResult result,
          Set requestedScopes) {

        // Initialize an empty set, and get the account
        HashSet returnedScopes = new HashSet();
        Account account = result.getAccount();

        /*
          In this simple web application, the scopes that were sent in the
          body of the request are exactly the ones we want to return. If,
          however, we were building something more complex, and only wanted
          to allow a scope to be added if it was verified on the server
          side, then we would do something like the loop below. The
          'allowScopeForAccount()' method would contain the logic that
          checks whether the scope is truly allowed for the given account.

          for (String scope : requestedScopes) {
            if (allowScopeForAccount(account, scope)) {
              returnedScopes.add(scope);
            }
          }
        */

        return requestedScopes;
      }
    };

    AccessTokenResult oauthResult = application.authenticateOauthRequest(
        request).using(scopeFactory).execute();

    TokenResponse tokenResponse = oauthResult.getTokenResponse();

    String json = tokenResponse.toJson();

    return json;
  }
}

Notice the lines of code right after the getToken() declaration. This is a workaround for Jersey not providing us with a complete request object. Calling request.getParameter() or request.getParameterMap() will always return null, and since creating an AccessTokenResult requires the request object with the body still intact, we must recreate the entire request ourselves.

Finally: Securing your REST endpoint

Ahh, the moment we’ve all been waiting for. Now that we have given our users the ability to target this weather endpoint using Basic and OAuth authentication, it is up to us to figure out which protocol they chose to use.

public class WeatherApi {

  @Context
  private HttpServletRequest servletRequest;
  @Context
  private HttpServletResponse servletResponse;

  private String weatherResult;

  @GET
  public String getWeatherApi(@PathParam("city") final String myCity)
      throws Exception {

    Application application = StormpathUtils.myClient.getResource(
        StormpathUtils.applicationHref, Application.class);

    // Make sure this user is allowed to target this endpoint
    try {
      ApiAuthenticationResult authenticationResult =
          application.authenticateApiRequest(servletRequest);

      authenticationResult.accept(new AuthenticationResultVisitorAdapter() {

        public void visit(ApiAuthenticationResult result) {
          System.out.println("Basic request");

          URL weatherURL = getURL(myCity);

          // Parse weather data into our POJO
          ObjectMapper mapper = new ObjectMapper();
          mapper.configure(
              DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);

          City city = null;

          try {
            InputStream in = weatherURL.openStream();
            city = mapper.readValue(in, City.class);
          } catch (IOException e) {
            e.printStackTrace();
          }

          weatherResult = city.toString() + " °F";
        }

        public void visit(OauthAuthenticationResult result) {

          // Check scopes
          if (result.getScope().contains("London") && myCity.equals("London")) {
            weatherResult = getWeather(myCity) + " °F";
          } else if (result.getScope().contains("Berlin") && myCity.equals("Berlin")) {
            weatherResult = getWeather(myCity) + " °F";
          } else if (result.getScope().contains("SanMateo") && myCity.equals("San Mateo")) {
            weatherResult = getWeather(myCity) + " °F";
          } else if (result.getScope().contains("SanFrancisco") && myCity.equals("San Francisco")) {
            weatherResult = getWeather(myCity) + " °F";
          } else {
            try {
              // Requested city is not in the token's scope: reject
              servletResponse.sendError(403);
            } catch (IOException e) {
              e.printStackTrace();
            }
          }
        }
      });

      return weatherResult;

    } catch (ResourceException e) {
      return "Cannot authenticate user.";
    }
  }
}
To do this we use a visitor. We create a visitor for each type of authentication protocol we expect our clients to use (in our case, Basic and OAuth). Based on the type of the ApiAuthenticationResult object, the appropriate visitor will be invoked. Notice how inside the OauthAuthenticationResult visitor we check the scope of the OAuth token we received, and grant or forbid access to the requested cities accordingly.

When we generated our OAuth token in the screenshot above, we gave access to view weather in London, Berlin, and San Francisco. Thus we can view London’s weather using OAuth:

However, since San Mateo was not included in the scope of the OAuth token, we cannot see its weather:


Jersey is yet another Java framework that seamlessly integrates with the Stormpath SDK to offer user management, API Key management, OAuth, and more. If you’d like to see more code, or even run this application yourself, please visit:

ForgeRockOpenIG 3.0 Released! [Technorati links]

August 11, 2014 02:00 PM

As the world moves from traditional IAM to Identity Relationship Management (IRM), focusing more on consumer-facing services, we need tools to increase velocity, facilitate the development of mobile and cloud services, and make life easier for end users.

We’re proud to announce a new major release of the ForgeRock Open Identity Gateway (OpenIG 3.0) that addresses the needs of IRM and helps protect access to APIs and applications more quickly and consistently. Taking a pure standards-based approach and built on the 100% open source code of the OpenIG open source project, the new OpenIG excels at three main use cases.

The first one is about ensuring everything is integrated into one platform so you can create that single view of the customer. Say an existing application has its own authentication mechanism and needs to integrate into the Enterprise SSO and/or Federation service. The built-in support for SAML 2.0 and OpenID Connect combined with a very flexible password capture and replay allows for the integration of any application without requiring any modifications. 

The second use case addresses a typical IRM scenario. Leveraging the new OpenID Connect standard, OpenIG 3.0 can be configured to let consumers select their Identity Provider of choice to securely access services, and removes the burden of storing and managing user passwords from application developers. This also speeds time-to-market for deploying those applications, while offering a consistent and uniform authentication and authorisation layer.

The final use case is about machine-to-machine communication and API access. By integrating the OAuth 2.0 standard and acting as a resource provider, OpenIG controls access to APIs and services through secure tokens. Those tokens can be granted to applications or developers by OpenAM administrators or other OAuth 2.0 Identity Providers.

ForgeRock Open Identity Gateway 3.0 is available today as a standalone product for deployments that rely solely on well-known social networks. It continues to be available as an optional module of ForgeRock OpenAM, the all-in-one access management solution.

For more information, check out the OpenIG product page, or head to our downloads page to get started.

The post OpenIG 3.0 Released! appeared first on ForgeRock.

ForgeRockTwo Product Releases: OpenIDM 3.0 & OpenIG 3.0! [Technorati links]

August 11, 2014 02:00 PM

The most effective organizations have the identity infrastructure in place to generate a single, unified view of their customer for better customer engagement.  However, as more applications, devices and things come online and interact on the customer’s behalf, this picture gets more complex. To keep pace with the constant change, it is essential that organizations have the right identity administration tools in place to easily identity-enable any service online.

With that in mind, we at ForgeRock are extremely excited to announce the launch of two new fantastic product updates to our Open Identity Stack: OpenIDM 3.0 and OpenIG 3.0. New versions of the products are available immediately and can be downloaded at

OpenIDM 3.0:

Legacy identity management offerings were designed for internal identity governance and compliance; we have always viewed identity administration through a very different lens. We want to arm organizations with a massively scalable identity administration solution that can integrate with anything, so you can offer more dynamic services to your customers. In short, we view identity administration as the key to enabling seamless customer experiences across any application, device or thing, helping you provide a more engaging and integrated customer experience.

Key release highlights include:

For more information, go to

And check out our PM’s blog here:

OpenIG 3.0:

As organizations roll out new applications and APIs, they need extremely agile ways of identity-enabling these services so they can expose them to the customer. Whether it be to offer more value to end users or to monetize a new service, the ability to tie identity to these offerings is critical. With OpenIG, organizations can quickly identity-enable applications and APIs for roll-out and monetization.

Key release highlights include:

For more information, go to

And check out our PM’s blog here:

Enjoy the new innovations and as the great Curtis Mayfield once said “keep on keepin’ on!”

The post Two Product Releases: OpenIDM 3.0 & OpenIG 3.0! appeared first on ForgeRock.

ForgeRockOpenIDM 3.0 Released! [Technorati links]

August 11, 2014 01:00 PM

Today is a big day! After many months of hard work by ForgeRock Engineering, the brand new OpenIDM 3.0 is released to the general public. This release is important not only because it includes rich new features and completes the core OpenIDM architecture; the new OpenIDM is also a key component of our Identity Relationship Management stack and core to our overarching digital transformation vision. It also marks a significant milestone in the product’s history – a history spanning the time I have been working for ForgeRock, in various capacities, but mainly as the Product Manager for OpenIDM.

Successful businesses now have to transition to a digital marketplace where each customer is uniquely identified, understood, and engaged with based on their data. Interactions are context-driven, device-agnostic, and immediate. To achieve a single, overarching view of each of their customers, businesses must invest in an Identity platform designed from the ground up to handle millions of external, customer-facing relationships flexibly, efficiently, and effectively. OpenIDM 3.0 rounds out ForgeRock’s identity relationship management stack and is prepared to do exactly that.

OpenIDM started off as an idea to provide the market with a new identity management system. This new system breaks away from the old, monolithic notion of deploying a highly complex piece of software with proprietary scripting languages for business logic and workflow, designed strictly to deal with enterprise use cases. It moves to a modern, modular architecture for the hybrid world, bridging the gap between the enterprise and applications in the Cloud. It leverages powerful, standardized mechanisms to expose identity services and implement business logic and workflows, while retaining the lightweight principles that have always been the motto of OpenIDM.

Why is OpenIDM so unique in the market space and how does it impact our customers in a positive way?

First of all, OpenIDM is the first product in the ForgeRock Identity Relationship Management (IRM) stack that fully implements the Common REST API. The Common REST API is a ForgeRock-unique RESTful API with a set of easy-to-remember REST calls to Create, Read, Update, Delete, Patch, Action and Query (CRUDPAQ) identity objects and services. The simplicity of this API makes it easy for implementors and deployers of OpenIDM to solve business-critical identity management problems quickly. There is no need to assign a horde of developers to learn tricky Java APIs – and the cool thing is, it works the same way across ForgeRock products.
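To give a feel for CRUDPAQ, here is a sketch of how the seven verbs might map onto HTTP requests. The resource paths and parameter names below are illustrative assumptions for this post, not OpenIDM’s documented endpoints – consult the product docs for the real API.

```python
# Hypothetical mapping of CRUDPAQ operations to HTTP verb + path.
# Paths, _action names and _queryFilter are assumptions for illustration.
CRUDPAQ = {
    "create": ("POST",   "/openidm/managed/user?_action=create"),
    "read":   ("GET",    "/openidm/managed/user/{id}"),
    "update": ("PUT",    "/openidm/managed/user/{id}"),
    "delete": ("DELETE", "/openidm/managed/user/{id}"),
    "patch":  ("PATCH",  "/openidm/managed/user/{id}"),
    "action": ("POST",   "/openidm/managed/user?_action={name}"),
    "query":  ("GET",    "/openidm/managed/user?_queryFilter={filter}"),
}

def describe(op: str, **params) -> str:
    """Return 'VERB path' for a CRUDPAQ operation, filling in placeholders."""
    verb, path = CRUDPAQ[op]
    return f"{verb} {path.format(**params)}"
```

The point is the uniformity: once you know the seven verbs, e.g. `describe("read", id="bjensen")`, the same pattern applies to every identity object and service.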

Second, the OpenIDM architecture has matured to the point at which all the expected components are available: role-based provisioning, a high-availability architecture, provisioning guarantees with rollbacks should something go wrong, product-wide Groovy scripting support, and a matured OpenICF framework that finally has a thriving community – making it the de facto standard integration layer among open source provisioning vendors.

And third, to make it even better for our customers and system integrators, a set of default, out-of-the-box sample workflows are shipped with the product, solving a large array of Identity Management business problems with little or no customization.

Of course, this release of OpenIDM also includes numerous bug fixes, enhancements and additional features. Most importantly, this release sets the standard for how modern provisioning systems deal with target resources, both within the firewall and externally in the cloud, with additions such as a scriptable PowerShell connector and a generic scriptable connector that can leverage technologies such as JDBC, REST and SOAP.

Be sure to read all about the new features and enhancements in the release notes.

Getting started with OpenIDM is easy, as always, because it ships as a single zip-file that you simply download, unzip and start. A wide range of sample configurations is available at your fingertips, in the samples directory. Because OpenIDM includes all the necessary components, ready to run, there is no need to install any additional software components such as application servers or databases. If you decide to try it out right away, the installation instructions will get you started quickly.

Looking forward, ForgeRock will continue its firm commitment to delivering OpenIDM with an improved user interface, making the already decoupled UI a better, more powerful and easier experience for configuring and administering the product and the identities it supports. Today, with OpenIDM 3.0, you have a super flexible, lightweight, high-performing and modern identity management system that exposes all its identity services via an easy-to-use REST API. OpenIDM 3.0 ships with a scalable, performant end-user dashboard to manage typical external-facing identity use cases such as onboarding, self-service password management and profile management.

Try it out and give us your feedback, learn more, or contact us to see how we can help you solve your identity management challenges with OpenIDM 3.0.

The post OpenIDM 3.0 Released! appeared first on ForgeRock.

Kuppinger Cole23.09.2014: So your business is moving to the Cloud – Will it be Azure or Naked Cloud? [Technorati links]

August 11, 2014 12:58 PM
In KuppingerCole

Most companies do not plan their migration to the cloud. They suddenly find that there are multiple users of cloud services in their organisation, each of which seemed a good idea at the time but which now form a disparate approach to cloud services with no strategic vision, a significant training burden and little governance over their cloud-based applications and infrastructure.
August 10, 2014

Anil JohnBackpacking the Glacier National Park Gunsight Pass Trail [Technorati links]

August 10, 2014 02:40 AM

Glacier National Park is called by many the Crown of the Continent. After spending the last week hiking and backpacking the 20+ miles of the Gunsight Pass Trail, I can see why. It was amazing, wild and spectacular!

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

August 08, 2014

KatasoftSSO Vs. Centralized Authentication [Technorati links]

August 08, 2014 03:00 PM

Single Sign-On (SSO) has become an over-loaded term for many developers. It’s used, sometimes inaccurately, to refer to any tool that simplifies login for the end-user. But SSO has a very specific technical definition, and the confusion increasingly makes it difficult for developers to find the right tool for the job.

In most cases, what a developer is really looking for is one (or a combination) of three different tools: Centralized Authentication, Social Login (e.g. Facebook Login), or actual Single Sign-On.

Social Login is pretty well defined at this point, and most developers know what it is: the user logs into your application by clicking a Facebook button and using their Facebook credentials (usually through a custom Facebook OAuth flow). You can learn all about it in our Facebook Login Guide.

Less clear and often more misunderstood is the difference between Single Sign-On and Centralized Authentication. At Stormpath we see this confusion often and wrote this article to help developers find the right solution for their application.

Single Sign-On

With single sign-on (SSO), users are authenticated only once, regardless of how many other applications they attempt to access after the initial login. In general, this is achieved when the SSO Identity Provider (IDP) sends the target applications an assertion that the user has been authenticated and should be trusted by that application.

For example, say a user logs on to Application 1, then decides to access Application 2. Typically, Application 2 would require another username and password for authentication. But in an SSO environment, Application 2 simply determines whether it can authenticate the user based on information the SSO IDP provides. The assertions are typically based on a common protocol like SAML, OpenID Connect, or JWT.
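To make the assertion idea concrete, here is a minimal JWT-like token sketch using an HMAC shared secret. This is a toy under stated assumptions (a secret shared between the IdP and the application, made-up claim names), not a SAML or OpenID Connect implementation – real deployments should use a standard protocol library.

```python
import base64, hashlib, hmac, json, time

# Shared secret between the IdP and the service provider (an assumption
# for this sketch; real SSO uses a standard protocol and key management).
SECRET = b"shared-idp-secret"

def _b64(data: bytes) -> str:
    """URL-safe base64 without padding, as JWTs use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_assertion(subject: str, ttl: int = 300) -> str:
    """IdP side: sign a claim that `subject` authenticated, valid for `ttl` seconds."""
    payload = _b64(json.dumps({"sub": subject, "exp": time.time() + ttl}).encode())
    sig = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    return payload + "." + sig

def verify_assertion(token: str):
    """Application side: return the claims if signature and expiry check out,
    else None. Note the app still looks up/creates its own local user."""
    payload, sig = token.split(".")
    expected = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    return claims if claims["exp"] > time.time() else None
```

Application 2 never sees a password: it only checks that the IdP’s signature is valid and the assertion has not expired.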

SSO solves two major problems for users:

  1. They don’t need to enter authentication information multiple times
  2. …or remember multiple sets of login credentials

It also requires a user repository in order to authenticate a user for the first time. Typical user repositories include Active Directory, LDAP, a custom database, or Stormpath. In turn, these same repositories are often centralized authentication and user management systems.

On its own, SSO is a poor solution for sharing user data across applications, as SSO generally expects the application (or “Service Provider” in SSO speak) to maintain its own user management system. More specifically, when an SSO provider sends an authentication assertion to an application, the application still needs to create or look up the user in its own local repository, even though it can trust that the user is authenticated.

Centralized Authentication

With centralized authentication, the authentication process is different. Once a user has logged into Application 1, logging into Application 2 isn’t automatic: even though the required credentials are identical, the user still needs to enter her authentication information again.

Like SSO, Centralized authentication solves two problems, but the problems are different:

  1. Users don’t need to remember multiple sets of authentication credentials
  2. The applications they are logging into can share user data.

Typically, centralized authentication solutions completely offload user management from an application. They provide powerful APIs or query languages to connect the user system to one or many applications. Moreover, centralized authentication is often the first step toward a true SSO environment.

SSO vs Centralized Authentication? Why not both?!

So, should you use SSO or Centralized Authentication in your application? Of course the answer is: it depends.

However, SSO and Centralized Authentication are not direct competitors. In fact, many developers implement both to give customers a seamless user experience across applications or web properties. At the same time, an SSO/Centralized Authentication combo allows development teams to share user infrastructure across applications, so they aren’t reinventing the identity system with each new application.

Stormpath combines both SSO and Centralized Authentication in one clean and elegant user management system. With an elegant API, powerful SDKs, and easy-to-use framework integrations, your applications have full access to user and group data.

In addition, we now offer SSO across your custom applications with little to no coding on your end through our new ID Site feature, so you can offer customers a seamless user experience.

Kuppinger ColeFrom preventive to detective and corrective IAM [Technorati links]

August 08, 2014 09:27 AM
In Martin Kuppinger

Controls in security and GRC (Governance, Risk Management, and Compliance) systems are commonly structured in preventive, detective, and reactive controls. When we look at IAM/IAG (Identity and Access Management/Governance), we can observe a journey from the initial focus on preventive controls towards increasingly advanced detective and corrective controls.

Initially, IAM had a preventive focus, achieved by managing users and access controls in target systems. Setting these entitlement rights prevents users from performing activities they should not perform. Unfortunately, this rarely works perfectly. A common example is access entitlements that are granted but never revoked.

With the introduction of Access Governance capabilities, some forms of detective controls were introduced. Access recertification focuses on detecting incorrect entitlements. The initial “access warehouse” concept as well as various reports also provided insight into these details. Today’s more advanced Access Intelligence and Access Risk Management solutions also focus on detecting issues.

Some vendors have already added integration with User Activity Monitoring (e.g. CA Technologies), SIEM (e.g. NetIQ), or Threat Detection Systems (e.g. IBM, CyberArk). These approaches move detection from a deferred approach towards near-time or real-time detection. If unusual activity is detected, alerts can be raised.

The next logical step will be corrective IAM – an IAM that automatically reacts by changing the settings of preventive controls. Once unusual activity is detected, actions are put in place automatically. The challenge therein is obvious: how to avoid interrupting the business in the case of “false positives”? And how to react adequately to them, without over-reacting?

In fact, corrective IAM will require moving action plans that today are in drawers (best case) or just in the mind of some experts (worst case) into defined actions, configured in IAM systems.
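One way to encode such defined actions – a hedged sketch, not any vendor’s product – is to graduate the automatic response by detection confidence, so that probable false positives trigger a cheap step-up check rather than a business-interrupting lockout. The thresholds and action names below are illustrative assumptions.

```python
# Illustrative corrective-IAM policy: map an anomaly score to a graduated
# response. Thresholds and action names are assumptions, not a real API.

def corrective_action(anomaly_score: float) -> str:
    """Map an anomaly score in [0, 1] to a corrective control."""
    if anomaly_score >= 0.9:
        return "suspend-account"   # near-certain compromise: interrupt
    if anomaly_score >= 0.7:
        return "revoke-sessions"   # force fresh authentication
    if anomaly_score >= 0.4:
        return "require-mfa"       # step-up: cheap if it's a false positive
    return "log-only"              # record for later review
```

The design point is that only the highest-confidence detections carry business-interrupting consequences; everything else degrades gracefully.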

However, with the tightening threat landscape, with the knowledge that the attacker might already be inside the system, and with IAM covering not only employees and internal systems but also business partners, customers, and the Cloud, IAM has to become far more responsive. It needs to become not only “real-time detective”, but also needs corrective controls put in place. This will be the next logical step in the evolution of IAM, which started way back with preventive controls.

August 07, 2014

Julian BondIt's pronounced "gnod", actually. [Technorati links]

August 07, 2014 05:11 PM
It's pronounced "gnod", actually.

My turn now. Also quite looking forward to Hacker Farm, Death Shanties, Anji Cheung and John Doran's DJ set.
 Supernormal Festival | 3 day, experimental arts and music festival at Braziers Park in Oxfordshire »
Supernormal is a festival like no other, providing a powerful antidote to the current malaise of festivals-as-big-business. Blurring the boundaries between art and music, performer and audience, it champions the iconoclastic and the experimental, allowing risks to be taken and leaps of ...

[from: Google+ Posts]

Kuppinger ColeDid someone just steal my password? [Technorati links]

August 07, 2014 09:27 AM
In Alexei Balaganski

Large-scale security breaches are nothing new. Last December we heard about the hack of the American retail chain Target’s network, in which over 40 million credit cards and 70 million addresses were stolen. This May, eBay announced that hackers got away with records on more than 145 million of their customers. And the trend doesn’t stop: despite all the efforts of security researchers and government institutions, data breaches occur more frequently and get bigger and more costly. The average total cost of a data breach for a company is currently estimated at $3.5 million. The public has heard about these breaches so often that it has become a bit desensitized to them. However, the latest announcement from the American company Hold Security should definitely make even the laziest people sit up and take notice.

Apparently, a gang of cybercriminals from Russia, which the company dubbed CyberVor (“cyber thief” in Russian), has managed to amass the largest known collection of stolen credentials: over 1.2 billion passwords and more than 500 million email addresses! The company hasn’t revealed many details, but these were not, of course, the spoils of a single breach – the gang has allegedly compromised over 420 thousand websites over the course of several years. Still, the numbers are overwhelming: the whole collection contains over 4.5 billion records. Surely, I could be somewhere in that huge list, too? What can I do to prevent hackers from stealing my precious passwords? Can someone help me with that?

In a sense, we still live in the era of the Internet Wild West. No matter how often passwords are proclaimed dead and how hard security vendors try to sell their alternative, more secure authentication solutions, no matter how long government commissions discuss stricter regulations and larger fines for data breaches – way too many companies around the world are still storing their customers’ credentials in clear text, and way too many users are still using the same password, “password”, for all their accounts. Maybe in twenty years or so we will remember these good old days of “Internet Freedom” with romantic nostalgia, but for now we have to face the harsh reality of a world where nobody is going to protect our personal information for us.

This, by the way, reminds me of another phenomenon of the Wild West era: snake oil peddlers. Unfortunately, quite a few security companies now attempt to capitalize on the fear of data breaches in a similar way. Instead of providing customers with the means to protect their credentials, they instead offer services such as “pay to see whether your account has been stolen”. And these services aren’t cheap.

Surely, these companies need to earn money just like everyone else, but charging people for such useless information is dubious at best. I’m not even going to mention the fact that there might even be services out there that are essentially good old phishing sites, which would collect your credentials and use them for malicious purposes.

As the famous Russian novel “The Twelve Chairs” states, mocking a common propaganda slogan of the early Soviet period: “Assistance to drowning persons is in the hands of those persons themselves.” I published a blog post some time ago outlining a list of simple rules one should follow to protect oneself from the consequences of a data breach: create long and complex passwords, do not reuse the same password across sites, invest in a good secure password manager, look for sites that support two-factor authentication, and so on. Of course, this won’t prevent future breaches from happening (apparently, nothing can), but it will help minimize the consequences: in the worst case, only one of your accounts will be compromised, not all of them.

Whenever you hear that a website you’re using has been hacked, you no longer have to wonder whether your credentials have been stolen or not: you simply assume the worst, spend a minute changing your password, and rest assured that the hackers have no use for your old credentials anymore. This way, you’re not only avoiding exposure to “CyberVors”, but also not letting “CyberZhuliks” (cyber fraudsters) make money by selling you their useless services.

Julian BondA coffee puzzle. [Technorati links]

August 07, 2014 09:01 AM
A coffee puzzle.

Go into a cafe in any provincial French town before about 11am and you'll be able to get "un petit café crème avec un croissant" and consume them standing at the bar. If you're greedy you can have "un grand café crème" and a pain au chocolat. But look on the internet, even on Wikipedia, and all discussion of coffee is by Americans; trying to find recipes for the authentic café crème is impossible. The petit is typically in a large/double espresso cup. The grand is often served in something more like a small soup bowl. They both involve espresso and hot milk. But they are both emphatically NOT a latte, cappuccino, flat white, cortado or any of the other hundreds of white coffees. And they would never involve cream or that horrible American concoction, half-and-half. It's quite likely that the milk is skimmed and may even be UHT.

So how do you make them? I think, as follows.

Un petit café crème: one shot of espresso, slow-pulled into a large espresso cup, usually brown outside and white inside, with a saucer. Add an approximately equal quantity of warmed semi-skimmed milk that's just been hit by the steam pipe to get it hot, but before it starts frothing.

Un grand café crème: two shots of espresso, slow-pulled into a giant coffee cup or small soup bowl. Roughly two-to-one hot milk to coffee, brought to a point just before it boils and froths.
[from: Google+ Posts]
August 06, 2014

CourionMaking Traditional IAM More Intelligent: Deterrence & Detection [Technorati links]

August 06, 2014 04:30 PM

Access Risk Management Blog | Courion

Brian Milas: Now that Cloud Identity Summit is over, I’m taking some time to reflect on the Intelligence workshop. In the workshop we looked at some of the IAM approaches used today and some of their limitations. Given that the bad guys are motivated and creative, we need to look to new techniques to detect and deter them. Applying analytics and intelligence fundamentally changes the game from the traditional approaches.

Reports on data breaches illustrate how much larger a contribution hackers make to breaches than other causes such as lost laptops or lost media. As examples, check out:


– Ponemon’s 2014 report on the cost of data breaches, which states, “In most countries, the primary root cause of the data breach is a malicious insider or criminal attack.”

– Verizon Data Breach Investigations Report, which states,“ . . . 92% of the 100,000 incidents we’ve analyzed from the last ten years can be described in just nine basic patterns.”

– New York State Attorney General Data Breach Report, “Hacking attacks accounted for over 40 percent of data security breaches, between 2006 and 2013.”


 So just how prevalent are data breaches? Consider these statistics:

– 20M

– 7.4M

– 900

– $1.3B

These numbers come from the aforementioned New York State Attorney General report on data breaches:

– 20M: the approximate population of New York State in July 2013

– 7.4M: the number of residents whose records were breached in 2013 – about 37% of the population

– 900: the number of breaches in 2013 – about 2.5 per day, or roughly 8,000 records per breach. By the way, the number has tripled since 2006

– $1.3B: the cost of these breaches to public and private citizens
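For readers who like to check the arithmetic, the per-day and per-breach figures follow directly from the totals:

```python
# Back-of-the-envelope check of the per-breach and per-day figures
# derived from the New York State Attorney General report totals.
residents_breached = 7_400_000   # records breached in 2013
breaches = 900                   # reported breaches in 2013

records_per_breach = residents_breached / breaches  # ~8,200 records per breach
breaches_per_day = breaches / 365                   # ~2.5 breaches per day

print(round(records_per_breach), round(breaches_per_day, 1))  # prints: 8222 2.5
```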

So what’s missing from today’s techniques? We see two major challenges.

Deterrence: What can you do NOW in IAM to reduce the likelihood of a breach? Clean house and reduce the attack surface: get rid of abandoned accounts, make sure orphan accounts are properly managed, eliminate access that is not needed, keep superuser administrator accounts to a minimum, and manage to least privilege. For further confirmation of these suggestions, see the 2014 Verizon DBIR recommendations and the SANS Critical Security Controls (version 5) recommendations:

The 2014 Verizon Data Breach Incident Report recommends 4 identity and access management tactics to address insider and privilege misuse:

– Know your data and who has access to it

– Review user accounts

– Watch for data exfiltration

– Publish audit results

And the SANS Institute, a leader in computer security training, offers version 5 of the organization’s Top 20 Critical Security Controls, which recommend several identity management processes:

– Controlled Use of Administrative Privileges

– Maintenance, Monitoring, and Analysis of Audit Logs

– Account Monitoring and Control

– Data Protection
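The "clean house" checks above can be sketched as a simple account audit – a hedged illustration, assuming account records with made-up fields ("owner", "last_login", "admin"), not any real IAM product's data model.

```python
from datetime import datetime, timedelta

# Flag orphan accounts (no owner on record), abandoned accounts (no recent
# login), and superuser accounts for review. Field names are assumptions.

def audit_accounts(accounts, now, idle_limit=timedelta(days=90)):
    findings = []
    for acct in accounts:
        if acct.get("owner") is None:
            findings.append((acct["id"], "orphan"))
        if now - acct["last_login"] > idle_limit:
            findings.append((acct["id"], "abandoned"))
        if acct.get("admin", False):
            findings.append((acct["id"], "superuser"))
    return findings
```

Running such an audit regularly shrinks the attack surface before any detection machinery is needed.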

Monitoring and Detection: Cleaning your house (reducing the attack surface) is good, but you must also detect when a “spill” occurs. By monitoring and acting on anomalies, you start reducing the window available for exploit – which means keeping constant watch with identity and access intelligence and analytics.

To get the big picture of access across everything (from person to data) you’ll need to understand and analyze relationships between different objects and systems . . . but this quickly becomes millions and billions of relationships in the typical organization. As Mark Diodati of Ping Identity discussed in his “Modern Identity” presentation, the difficulty of managing identity and access increases with distance, which you can think of as “remoteness.”

The second challenge has to do with time – more specifically, reaction time. Our ability to detect and react to a breach or vulnerability moves more slowly than the adversary. Hence we have little (or no) time to act . . . we’re constantly “on our heels”.

Let’s look at a typical lifecycle with IAM. The frequency between “Assign” and “Review” may be months, quarters, or even longer:

Assign Access >> Time passes >> Things Change >> Review Access & Remediate

How do we increase the frequency of our detect/react cycle to better combat the adversary? By improving our capabilities around:

– Complexity

– Speed

We need to continually analyze and understand the complexity, and monitor. “Monitor” can be done on the order of hours or minutes . . . allowing the “Review” steps to happen much more quickly.

Assign Access >> Monitor as Time Passes & Things Change >> Review Access & Remediate
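The tightened loop above can be made concrete with a small snapshot diff – a hedged sketch, since real access-intelligence products analyze far richer relationship graphs. The snapshot shape (user mapped to a set of entitlements) is an assumption for illustration.

```python
# Rather than waiting a quarter to review, diff entitlement snapshots
# frequently and surface only what changed since the baseline.

def entitlement_drift(baseline, current):
    """Return, per user, the entitlements gained or lost since baseline."""
    drift = {}
    for user in baseline.keys() | current.keys():
        gained = current.get(user, set()) - baseline.get(user, set())
        lost = baseline.get(user, set()) - current.get(user, set())
        if gained or lost:
            drift[user] = {"gained": gained, "lost": lost}
    return drift
```

Because only the drift is surfaced, reviewers inspect a handful of changes per cycle instead of re-certifying every entitlement from scratch.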

The Insider and Privilege Misuse section of the Verizon DBIR summarizes the discovery timeline (figure 38). Detection within days (34%) is good, but many breaches took months (11%) or years (2%) to discover. (Figure: Discovery Timeline, 2014 Verizon DBIR.)

By applying intelligence and analytics, we can continually update and understand complexity, and then detect and act on things that we have been proactively looking for . . . increasing our speed and frequency. In addition, with all of the complex relationships analyzed and at hand, we’re free to slice, dice, drill down and apply forensics to identify the next set of things to monitor . . . adding those into the category of complex items that we can understand and monitor.


Traditional approaches are an important part of providing security, speed, and value to the business . . . but we can do better. As CIOs and CISOs, we are in an arms race with the bad guys, and in some ways it’s an arms race to keep up with the complexity of the business’s environment. Through the application of analytics and intelligence along with other approaches, we can understand and manage complexity and act on it more quickly, mitigating breaches rapidly – or, even better, reducing risk and avoiding some of them altogether.

Matthew Gertner - AllPeersA Quick Reference List Of Homeschooling Tips [Technorati links]

August 06, 2014 02:31 AM

Money is sometimes an issue when it comes to selecting a private school, but all parents want the best education for their child. For them, homeschooling is a great option. Using the current best practices in homeschooling, you will know that your children are getting a good education. Assistance and information can be found in the following article.

Laws and Requirements

Every state has a set of homeschooling laws in place. There are varied rules and regulations in different areas, and you have to follow them to create a successful school. You frequently won’t be handed a curriculum, instead you’ll have to draft your own. Keep the school district’s school day set up in mind when planning your own schedule. While you may not want your kids to hang out with the kids in public school, they have to have some external social interaction. Schedule playtimes with family and neighbors. Take your kids to the park and let them play with the children there. Look into sports teams, clubs or other organizations.

Utilize art in all your subjects, not just art class. Draw pictures about things you are learning, or make a quilt with fabric. Sculpting, singing and acting are only a few options; the sky is the limit. Teaching the material through many different mediums is known to improve educational outcomes, so get creative. Make sure that your kids do their share of chores, and you can even hire help if needed. Taking care of everything will prove to be impossible. Consider that cooking, cleaning, childcare, and shopping are simply added on to the stress of homeschooling and will wear you out much more quickly. Say yes to some help whenever it is offered, and do not be afraid to hire someone if your budget allows.

It is important to fully understand what homeschooling is all about. The Internet has an abundance of helpful information to help you make this huge decision. Remember that there are prerequisites that have to be met before a homeschooling program can succeed, including sufficient money, time and a good relationship with your child. Network with others who are homeschooling their children. There are any number of reasons that people decide to homeschool, so find people who have similar objectives as you do. You can likely find or create a group of others with similar goals as yours. This can help you form a group or community with others who are going through the same thing.

Family Tips

Allow your kids to have break times so that they can run around and use up extra energy stores. This helps to improve concentration and focus for you and your child. Schedule some breaks and tell them when it’s about time for their break. Come up with ways that your kids can socialize with others. This calls for a bit of creativity. A field trip with other families that homeschool is an excellent idea. Your community will also have sports groups to join. Always remember, Boy Scout and Girl Scout troops are available as well.

It won’t always be a barrel of laughs. At some times, you will have to use your authority to get your children to study. Some activities that are required for learning are just not fun, but must be done. To motivate your children in learning topics that may not be exciting, use a reward system.

When homeschooling, meals should be planned out. There are a few ways to attack this problem, including preparing a bunch of meals and freezing them, or just preparing a single meal the night before. If you cook and freeze your meals in advance, you will be less stressed throughout the week. Try out new cooking plans to determine what is appropriate for your schedule.

Make sure you keep up your relationships with your family during homeschooling. Your significant other will appreciate you setting aside some free time that might otherwise be lost while you are busy teaching your kids. Remind them of how important they are by spending quality time together on date nights, catching a movie, or in some other special way. You can really help your relationship by doing a little something together.

After learning more about teaching your kids at home, you should know that you can do it. With enough knowledge, you can do it. Use the information from this article to give your kids a great education.

Matthew Gertner - AllPeersNo More Sugar Loaded Juice, Juice Your Own! [Technorati links]

August 06, 2014 02:29 AM

Most people, at some point, didn’t want to finish all of their vegetables. However, if you don’t fancy the idea of eating fruit the way nature provides it, juicing is definitely going to appeal to you.

Green vegetables such as spinach, kale and broccoli have many, wonderful health benefits. Shoot for making your juices contain around 50-75% greens, and then throw in some other vegetables and fruits for flavoring. When you make juices primarily of fruit, they tend to be less healthy as they have much more sugar than those juices made with mostly greens.

The best kind of juicer is a masticating juicer, which is a lot gentler than ordinary juicers. This means it extracts the juice gently, preserving more vital nutrients. Juice from masticating juicers also lasts longer in storage. If you are creating juice to optimize health benefits, pick a dark green veggie as the foundation of your juice. To maximize health benefits, you should aim for the juice to contain between fifty and seventy-five percent chard, spinach, broccoli, or a similar vegetable. Fill the rest with your choice of fruits to give it a great taste.

A hearty glass of juice can serve as a meal replacement. After you get used to juicing, you will begin to understand what you need to include in the juice to make it substantial and nutritious. Drink the entire glass as if it is a meal, so the nutrients and vitamins reach your bloodstream more quickly.

Nutritional Information

Use the color of a fruit or vegetable to determine its nutritional content. From bright reds to vibrant greens, all the different colored fruits and vegetables have different nutrients and minerals. Employ a diversity of colors for a complete culinary experience. If you are having a hard time getting your kids to eat vegetables, juice them instead. A lot of kids do not like vegetables. You can make a great tasting vegetable and fruit juice, and the kids won’t know they’re eating vegetables.

Before you start juicing, research your produce. There are a number of different minerals and vitamins found in fruits and vegetables. After you know which produce offers what, you can create a blend of juices that meets a variety of your nutritional needs. Not only will your body benefit from all the healthy nutrients you take in, but your palate might also enjoy some of the blends you’ll be tasting.

When juice sits in the refrigerator for a few days it changes into unappetizing colors. Brown or off-colored juice is less than appetizing. Try juicing half a lemon into the juice you plan to store. Since it is only a small amount, the lemon flavor will not overpower your juice, but it will help keep it fresh looking.

Tips for Beginners

If you’re diabetic or hypoglycemic, juice only veggies until you speak with your doctor. Fruit juice can cause a rapid spike in your blood sugar. You need to monitor the amount of fruit you juice so you can watch your medical needs. Vegetable juicing carries less risk for diabetics, as long as you consider the sugar content of items such as carrots.

Listen to your body if it reacts negatively to any of the juice that you drink. For some people, certain ingredients simply do not sit well in the body. If you experience nausea or another stomach upset, take the time to identify the ingredient that might have caused it. Often this will be something you rarely consume. You could use small amounts to let your body adjust to it.

We all know that we should eat a certain amount of fruits and vegetables every day to maintain a healthy body and a well-functioning system. If you use the tips you have read here to add healthy juices to your diet, you can expect to see significant benefits for your mental and physical health.

Matthew Gertner - AllPeersCamping Advice That Really Helps You Enjoy The Trip [Technorati links]

August 06, 2014 01:52 AM

An image of a camp siteWith popular reality programming focused on the outdoors, it is no wonder that we are seeing a resurgence of interest in camping. Continue reading if you want to have a camping trip that is enjoyable and memorable.

If you have a new tent to take on your camping trip, you should set it up at home before you leave. You can be sure there are no missing pieces and learn ahead of time the correct way to set your tent up. This can eliminate the frustration of trying to set up your tent in a hurry.

Take a class on how to do first aid. This is especially important if you are taking kids with you. You will have all of the medical knowledge you will need in case of an accident. Be sure to research beforehand. Knowing about the native species in your camp area, such as snakes or other dangerous animals, is essential.

Review the medical coverage that you have. If you are going camping out of state, you may have to add an additional policy for full coverage. This can be especially important if you leave the country on your trip, such as camping across the border in Canada. Make sure that you are prepared, just in case!

Try to fit swimming into your schedule in some way. You might long for a good shower when you are camping. Spending some time in cool water can help you stay clean and refresh your spirits, so a little swimming can soothe the part of your heart longing for a bath.

New Camper Tips

If you are a camping amateur, keep your camping adventure near your home. You might have gear problems, or you might figure that you want to cut your camping trip short. Also, you can easily get home if you don’t have enough food or clothing. Many issues can occur for new campers, so you should camp near home your first time.

Are you new to camping and now have a brand new tent in your possession? Spend time practicing pitching a tent prior to leaving for your trip. This will also allow you to take an inventory of all the equipment needed to set up the tent. This will help you quickly pitch a tent before darkness falls at your campsite.

Come prepared before you go camping. Make sure you bring the right things when you go camping. Just neglecting to include a thing or two can completely ruin the outing. It is best to create a list a few weeks before your trip and to use it while packing. A few things that you probably should pack include a sleeping bag, tent, knife, food, soap, and plenty of water.

It is important to be prepared for certain situations. However, your plans never unfold exactly how you want them to. Weather conditions may abruptly change for the worse, someone could get sick or injured, along with a number of other possible mishaps. It’s important not to be careless, not to take any unnecessary risks, and to think before taking any actions.

Child Safety

If your kids are going camping with you, have a photo of them on you. While it is a worst case scenario, a child can easily be separated from the rest of the party, and a picture will make it easier to locate them. Bring one to use for emergencies, particularly if you are a long way from home.

There are a few important things to keep in mind if you are going camping. Now you should have the information you need to handle the basics. Being forewarned, you can now enjoy your trip away and have a lot of fun on your next camping vacation!

Matthew Gertner - AllPeersHow To Decide When It Has Come Time To Bug Out [Technorati links]

August 06, 2014 01:51 AM

When to bugoutThere are a variety of circumstances and situations which will require you to bug out of your home or temporary shelter. For those of you who are unfamiliar, the term bug out refers to when your position has been compromised and you need to move on. Although this term found its origins in army and soldier slang, it still rings true for emergency situations. If you and your family are holed up after a natural disaster and the damage has been catastrophic, it may be time for you to move on and find a better place to stay. Considering a variety of factors, you will have to make the tough decision to leave your possessions and other valuables behind in favour of a safer environment. Because you can’t take everything with you, having a bug out bag (along with a first-aid kit and food stockpile) is essential for every family. You will need to consider the unique circumstances that surround your disaster to assess when the proper time to bug out is.

Has The Natural Disaster Caused Significant Damage To Your Home And Community

After any disaster, it’s your responsibility to step outside and assess the damage. In some places, tornadoes and hurricanes can level entire city blocks in a matter of two to five minutes. If your community and home have been destroyed and you’re able to recognize this from the safety of your hideaway, it may be time to bug out and find emergency shelters set up somewhere else. Connecting with other neighbors who are contemplating the same situation can give you a network of people to move with, thereby protecting yourselves from people who are in panic mode and looking to steal from families. By keeping your cool and calmly letting your family know of the situation up top, you can all come to a collective agreement about what your next steps should be.

Do You Feel Unsafe In Your Home Or Shelter

There will come times when you’ll need to bug out from shelters as well, depending on a variety of different circumstances. If you’ve been holed up at home for a while and more damage starts appearing, then it may be time for you to grab the kit and family and move everyone to a safer location. A home that has been destroyed above can still pose some serious threats, especially if your safe zone is located underground. Additional collapse of your home could trap you inside, and rescue crews may not reach you for months on end. This is why it is crucial to make a decision early on, for the best possible reasons. Leaving a home you are attached to is not the end of the world. Material possessions, just like homes, can be replaced. What matters is whether your family is in immediate danger by staying in the same spot. Always consider bugging out a viable option.

For more information about bugout bags, visit


August 05, 2014

Mike Jones - MicrosoftOAuth Dynamic Client Registration specs addressing remaining WGLC comments [Technorati links]

August 05, 2014 10:36 PM

OAuth logoAn updated OAuth Dynamic Client Registration spec has been published that finishes applying the clarifications requested during working group last call (WGLC). The proposed changes were discussed during the OAuth working group meeting at IETF 90 in Toronto. See the History section for details on the changes made.

The OAuth Dynamic Client Registration Management specification was also updated to change it from Standards Track to Experimental.

The updated specifications are available at:

HTML formatted versions are also available at:
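To make the registration flow concrete, here is a minimal sketch of the kind of JSON request body a client sends to an authorization server's registration endpoint under the Dynamic Client Registration draft. The metadata field names (redirect_uris, client_name, grant_types, and so on) come from the spec; the actual values and the client being registered are hypothetical placeholders, and this is illustrative rather than a complete implementation.

```python
import json

# Hypothetical client metadata for a registration request, using field
# names defined by the OAuth Dynamic Client Registration draft.
registration_request = {
    "redirect_uris": ["https://client.example.org/callback"],
    "client_name": "Example Web App",
    "token_endpoint_auth_method": "client_secret_basic",
    "grant_types": ["authorization_code", "refresh_token"],
    "response_types": ["code"],
    "scope": "openid profile",
}

# The client POSTs this JSON to the server's registration endpoint.
# A successful response returns at least a client_id; under the
# companion Management spec, it also returns a registration_client_uri
# and registration_access_token for later reads and updates of the
# registered client's configuration.
body = json.dumps(registration_request)
print(body)
```

The separation into two documents mirrors the distinction above: the base spec covers the initial registration request, while the (now Experimental) Management spec covers the lifecycle operations on an already-registered client.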

Kuppinger ColeCloud Provider Assurance [Technorati links]

August 05, 2014 10:35 AM
In Mike Small

Using the cloud involves an element of trust between the consumer and the provider of a cloud service; however, it is vital to verify that this trust is well founded. Assurance is the process that provides this verification. This article summarizes the steps a cloud customer needs to take to assure that a cloud service provides what is needed and what was agreed.

The first step towards assuring a cloud service is to understand the business requirements for it. The needs for cost, compliance and security follow directly from these requirements. There is no absolute assurance level for a cloud service – it needs to be just as secure, compliant and cost effective as dictated by the business needs – no more and no less.

The needs for security and compliance depend upon the kind of data and applications being moved into the cloud. It is important to classify this data and any applications in terms of their sensitivity and regulatory requirements. This helps the procurement process by setting many of the major parameters for the cloud service, as well as the needs for monitoring and assurance. Look at Advisory Note: From Data Leakage Prevention (DLP) to Information Stewardship – 70587.
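The idea that classification drives the service parameters can be sketched as a simple lookup: each classification level maps to the minimum controls a candidate cloud service must meet. The levels and control names below are hypothetical illustrations, not taken from the advisory notes referenced in this article.

```python
# Hypothetical mapping from data classification to minimum assurance
# requirements a cloud service must satisfy before procurement.
REQUIREMENTS = {
    "public":    {"encryption_at_rest": False, "eu_residency": False, "audit": "annual"},
    "internal":  {"encryption_at_rest": True,  "eu_residency": False, "audit": "annual"},
    "personal":  {"encryption_at_rest": True,  "eu_residency": True,  "audit": "quarterly"},
    "regulated": {"encryption_at_rest": True,  "eu_residency": True,  "audit": "continuous"},
}

def assurance_profile(classification: str) -> dict:
    """Return the minimum control set for a given data classification."""
    try:
        return REQUIREMENTS[classification]
    except KeyError:
        raise ValueError(f"unknown classification: {classification!r}")

print(assurance_profile("personal"))
```

The point of such a table is the one made above: there is no absolute assurance level, only the level the business classification dictates.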

Use a standard process for selecting cloud services that is fast, simple, reliable, standardized, risk-oriented and comprehensive. Without this, there will be a temptation for lines of business to acquire cloud services directly without fully considering the needs for security, compliance and assurance. For more information on this aspect see Advisory Note: Selecting your cloud provider – 70742.

Take care to manage the contract with the cloud service provider. An article on negotiating cloud contracts from Queen Mary University of London provides a comprehensive list of the concerns of organizations adopting the cloud and a detailed analysis of cloud contract terms. According to this article, many of the contracts studied provided very limited liability, inappropriate SLAs (Service Level Agreements), and a risk of contractual lock in. See also – Advisory Note: Avoiding Lock-in and Availability Risks in the Cloud – 70171.

Look for compliance with standards; a cloud service may have significant proprietary content and this can also make the costs of changing provider high. Executive View: Cloud Standards Cross Reference – 71124 provides advice on this.

You can outsource the processing, but you can’t outsource responsibility – make sure that you understand how responsibilities are divided between your organization and the CSP. For example, under EU Data Protection laws, the cloud provider is usually the “data processor” and the cloud customer is the “data controller”. Remember that the “data controller” can be held responsible for breaches of privacy by a “data processor”.

Independent certification is the best way to verify the claims made by a CSP. Certification of the service to ISO/IEC 27001 is a mandatory requirement. However, it is important to verify that what is certified is relevant to your needs. For a complete description of how to assure cloud services in your organization see Advisory Note: Cloud Provider Assurance – 70586.

This article was originally published in the KuppingerCole Analysts’ View Newsletter.

Kuppinger ColeCan EU customers rely on US Cloud Providers? [Technorati links]

August 05, 2014 10:30 AM
In Martin Kuppinger

A recent US court decision has added to the concerns of EU customers (and of customers in other regions such as APAC) regarding the use of Cloud services from US-based providers. The decision orders Microsoft to turn over a customer’s emails stored in Ireland to the US government, requiring the company to hand over any data it controls, regardless of where that data is stored.

While the judge has temporarily suspended the order from taking effect to allow Microsoft time to appeal to the 2nd US Circuit Court of Appeals, it remains hanging, like the sword of Damocles, over the US Cloud Service Providers (CSPs).

The decision further increases the uncertainty many customers have felt regarding the Cloud since the Snowden revelations. So let’s look at the facts behind the FUD (fear, uncertainty, doubt).

In fact, the most important issue in the Cloud is control, not location. Many current regulations have been criticized for focusing on location instead of control. When appropriate security controls are in place, why should it make a difference whether data is stored in an EU datacenter or a US datacenter? The location argument is somewhat weak anyway, given that data might be routed through other locations as a consequence of how the IP protocol stack works. This is what triggered the recent discussion about an EU Cloud.

However, if control is the better concept in these days of the Internet and the Cloud, the court decision has a certain logic. The alternative – making it about location rather than control – would in effect mean that a US criminal could hide data simply by storing it outside the US in the Cloud.

Notably, the recent US court decision (still subject to appeal) does not provide blanket access to the data held. In this case it appears that the data is related to criminal activity. It is common in virtually all legislations that data can be seized by law enforcement if they have suspicion that a crime has been committed.

However, there is a risk that your data could legally be seized by law enforcement in a non-EU country (e.g. the US, Russia, etc.) on suspicion of an act that is not a crime in your country and which may not have been committed in the country wishing to seize it. There have been a number of contentious examples of UK citizens being extradited to the US for these kinds of reasons.

The differences in laws and legal systems between countries, and court decisions such as this recent one, do not make it easier for EU customers to trust non-EU Cloud Providers. In fact, uncertainty seems to be increasing, not decreasing. Waiting for harmonization of legislation, or for trade agreements such as the Transatlantic Trade and Investment Partnership (TTIP), is not an answer.

Organizations today are in a situation where, on one hand, the business wants new types of IT services, some only available from the Cloud. On the other hand, there is this uncertainty about what can and cannot be done.

The only thing organizations can (and must) do is to manage this uncertainty in the same way as for other kinds of risks. Businesses are experienced in deciding which risks to take. This starts with a structured approach to Cloud Service Provider selection, involving not only IT but also procurement and legal. It includes legal advice to understand the concrete legal risks. It also includes analyzing the information sensitivity and information protection requirements. In this way, the specific risk of using individual Cloud Service Providers and different deployment models such as public or private Clouds can be analyzed. It transforms uncertainty into a good understanding of the risk being taken.

KuppingerCole’s research around Cloud Assurance and Information Stewardship and our Advisory Services, for instance, can help you with this.

Notably, the frequently quoted answer “let’s just rely on EU CSPs” oversimplifies the challenge. It requires real alternatives and pure-play EU offerings, and both are rare. Many EU offerings are not feature-equal or are far more expensive; others are not pure-play EU. The same applies to other regions, for sure. Yes, these services must be taken into consideration. But “EU is good, US is bad” is too simple when looking at all aspects. It is better to understand the real risks of both and choose the best way based on that understanding – which might include on-premise IT. The basic answer to the question in the title is simply: “It depends.” The better answer is: “Understand the real risk.”

This article was originally published in the KuppingerCole Analysts’ View Newsletter.

August 04, 2014

Julian BondHigh-Rise: A Novel by J. G. Ballard [Technorati links]

August 04, 2014 03:02 PM
Liveright (2012), Paperback, 208 pages
[from: Librarything]

Julian BondHistoria Discordia by Adam Gorightly [Technorati links]

August 04, 2014 03:02 PM
RVP Press (2014), Paperback, 296 pages
[from: Librarything]