September 02, 2014

Courion: Baking In Intelligence at the Beginning [Technorati links]

September 02, 2014 01:13 PM

Access Risk Management Blog | Courion

Nick Berents: Recently we announced the latest version of the Access Assurance Suite. The 8.4 release brings Courion’s market-leading intelligence capabilities to where it all begins: provisioning. Business policy validation is now fully baked into the access definition and user provisioning process in real time. As a result, inappropriate access assignments can be flagged from the start and prevented.

Here’s how it works: when an access request is submitted, the embedded intelligence engine alerts the user with a list of defined business policy violations.

For example, an alert could be triggered automatically if a user requested access to both create purchase orders and approve orders, a Segregation of Duty (SoD) business policy violation.
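
To make the mechanism concrete, here is a minimal sketch of the kind of SoD check described above; the policy structure and entitlement names are invented for illustration and are not Courion’s actual API.

using System.Collections.Generic;
using System.Linq;

class SodPolicy
{
    public string Name;          // e.g. "Purchase order creation vs. approval"
    public string EntitlementA;  // e.g. "Create Purchase Order"
    public string EntitlementB;  // e.g. "Approve Purchase Order"
}

static class SodChecker
{
    // Returns every policy whose two conflicting entitlements would both be held
    // if the requested entitlement were granted on top of the user's current ones.
    public static IEnumerable<SodPolicy> FindViolations(
        IEnumerable<string> currentEntitlements, string requestedEntitlement, IEnumerable<SodPolicy> policies)
    {
        var combined = new HashSet<string>(currentEntitlements) { requestedEntitlement };
        return policies.Where(p => combined.Contains(p.EntitlementA) && combined.Contains(p.EntitlementB));
    }
}

An access request that comes back with a non-empty list of violations would then be flagged for remediation or routed into the exemption workflow described below.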

You are then able to remedy the violation or request a policy exemption. All of your approvers can easily view the history of the request along with any follow-on exemption requests, providing a more intuitive approval process and eliminating bottlenecks.

This is a great complement to the suite’s existing continuous monitoring capabilities, which detect business policy violations whenever they occur, enabling provisioning remediation without the need for human intervention and further automating the governance process. Now your organization can both start compliant and stay compliant on an ongoing basis. A nice one-two punch!

Watch for future posts about additional new features in 8.4.

blog.courion.com

Kuppinger Cole: Real-time Security Intelligence: history, challenges, trends [Technorati links]

September 02, 2014 10:31 AM
In Alexei Balaganski

Information security is just as old as Information Technology itself. As soon as organizations began to depend on IT systems to run their business processes and to store and process business information, it became necessary to protect these systems from malicious attacks. The first concepts of tools for detecting and fighting off intrusions into computer networks were developed in the early 1980s, and in the following three decades security analytics has evolved through several different approaches, reflecting the evolution of the IT landscape as well as changing business requirements.

First-generation security tools – firewalls and intrusion detection and prevention systems (IDS/IPS) – have essentially been solutions for perimeter protection. Firewalls were traditionally deployed on the edge of a trusted internal network and were meant to prevent attacks from the outside world. The first firewalls were simple packet filters that were effective at blocking known types of malicious traffic or protecting against known weaknesses in network services. Later generations of application firewalls can understand certain application-layer protocols and thus provide additional protection for specific applications: mitigating cross-site scripting attacks on websites, protecting databases from SQL injection, performing DLP functions, and so on. Intrusion detection systems can be deployed within networks, but old signature-based systems were only capable of reliably detecting known threats, while later statistical anomaly-based solutions were known to generate an overwhelming number of false alerts. In general, tuning an IDS for a specific network has always been a difficult and time-consuming process.

These traditional tools are still widely deployed by many organizations and in certain scenarios serve as a useful part of enterprise security infrastructures, but recent trends in the IT industry have largely made them obsolete. The continued deperimeterization of corporate networks, driven by the adoption of cloud and mobile services, as well as the emergence of many new legitimate communication channels with external partners, has made the task of protecting sensitive corporate information more and more difficult. The focus of information security has gradually shifted from perimeter protection towards detection of and defense against threats within corporate networks.

The so-called Advanced Persistent Threats usually involve multiple attack vectors and consist of several covert stages. These attacks may go on undetected for months and cause significant damage to unsuspecting organizations. Often they are first uncovered by external parties, adding reputation damage to financial losses. A well-planned APT may exploit several different vulnerabilities within the organization: an unprotected gateway, a bug in an outdated application, a Zero-Day attack exploiting a previously unknown vulnerability, and even social engineering, targeting the human factor so often neglected by IT security.

By the mid-2000s, it was obvious that efficient detection of and defense against these attacks requires a completely new approach to network security. The need to analyze and correlate security incidents from multiple sources, to manage a large number of alerts and to be able to perform forensic analysis has led to the development of a new organizational concept, the Security Operations Center (SOC). A SOC is a single location where a team of experts monitors security-related events across the entire enterprise information system and takes action against detected threats. Many large enterprises have established their own SOCs; for smaller organizations that cannot afford the considerable investment or maintain a skilled security staff on their own, such services are usually offered as a Managed Security Service.

The underlying technological platform of a security operations center is SIEM: Security Information and Event Management – a set of software and services for gathering, analyzing and presenting information from various sources, such as network devices, applications, logging systems, or external intelligence sources. The term was coined in 2005 and the concept was quickly adopted by the market: currently there are over 60 vendors offering SIEM solutions in various forms. There was a lot of initial hype around the SIEM concept, as it was offered as a turnkey solution for all the security-related problems mentioned above. The reality, however, has shown that, although SIEM solutions are very capable sets of tools for data aggregation, retention and correlation, as well as for monitoring, alerting and reporting of security incidents, they are still just tools, requiring one team of experts to deploy and customize them and another team to run them on a daily basis.

Although SIEM solutions are currently widely adopted by most large enterprises, there are several major challenges that, according to many information security officers, prevent organizations from using them efficiently:

Another common shortcoming of current SIEM solutions is a lack of flexibility when dealing with unstructured data. Since many of the products are based on relational databases, they enforce rigid schemas on collected information and do not scale well when dealing with large amounts of data. This obviously prevents them from efficiently detecting threats in real time.

Over the last couple of years, these challenges have led to the emergence of the “next-generation SIEM” or rather a completely new technology called Real-time Security Intelligence (RTSI). Although the market is still in its early stage, it is already possible to summarize the key differentiators of RTSI offerings from previous-generation SIEM tools:

The biggest technological breakthrough that made these solutions possible is Big Data analytics. The industry has finally reached the point where business intelligence algorithms for large-scale data processing, previously affordable only to large corporations, have become commoditized. Utilizing readily available frameworks such as Apache Hadoop and inexpensive hardware, vendors are now able to build solutions for collecting, storing and analyzing huge amounts of unstructured data in real time.

This makes it possible to combine real-time and historical analysis and identify new incidents as being related to others that occurred in the past. Combined with external security intelligence sources that provide current information about the newest vulnerabilities, this can greatly facilitate identification of ongoing APT attacks on the network. Having a large amount of historical data at hand also significantly simplifies initial calibration to the normal patterns of activity of a given network, which are then used to identify anomalies. Existing RTSI solutions are already capable of automated calibration with very little input required from administrators.
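
As a toy illustration of the baselining idea described above (and nothing more than that – real RTSI products use far richer statistical and machine-learning models), a simple check might flag an event count that deviates too far from the historical norm:

using System;
using System.Linq;

static class BaselineDetector
{
    // Flags the current value as anomalous if it lies more than 'threshold'
    // standard deviations away from the mean of the historical baseline.
    public static bool IsAnomalous(double[] baseline, double current, double threshold = 3.0)
    {
        double mean = baseline.Average();
        double stdDev = Math.Sqrt(baseline.Average(v => (v - mean) * (v - mean)));
        if (stdDev == 0) return current != mean;
        return Math.Abs(current - mean) / stdDev > threshold;
    }
}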

Alerting and reporting capabilities of RTSI solutions are also significantly improved. Big Data analytics technology can generate a small number of concise and clearly categorized alerts that allow even an inexperienced person to make a relevant decision, yet provide a forensic expert with much more detail about the incident and its relations to other historical anomalies.

As mentioned above, the RTSI market is still in its early stage. There are many new offerings with various scopes of functionality, from both established IT security vendors and startups, available today or planned for release in the near future. It is still difficult to predict in which direction the market will evolve and which features should be expected from an innovation leader. However, it is already clear that only vendors that offer complete solutions, and not just sets of tools, will win the market. It is important to understand that Real-time Security Intelligence is more than just SIEM 2.0.

This article was originally published in the KuppingerCole Analysts’ View Newsletter. Also check out video statements of my colleagues Mike Small and Rob Newby on this topic.

Vittorio Bertocci - Microsoft: Azure AD Records User Consent for Native Apps in the Refresh Token [Technorati links]

September 02, 2014 05:45 AM


An alternative title for this post could have been “Why are users of my native app prompted by Azure AD for consent every time they authenticate?”.
In brief: for native apps, the consent granted by the user is recorded by Azure Active Directory in the refresh token issued on the first successful authentication.
As long as the same refresh token is used (and is not expired, which BTW can take weeks or months), the user won’t be prompted for consent or credentials again.
To stress the point: the consent for native apps/public clients is not persisted anywhere in the cloud (unlike what happens when consenting for a web app/confidential client). If the refresh token is lost or it expires, the user will be prompted again for both credentials and consent.

That said: if you use ADAL, you can happily forget that a refresh token plays a role here – the library takes care of everything for you. More specifically: ADAL caches and uses refresh tokens automatically and transparently every time you call AcquireToken*. As long as you keep calling AcquireToken* every time you need a token, the right behavior should take place.
In Windows Store apps (tablet and phone) the token cache is automatically persisted for your app, so that it is available across multiple run & shutdown cycles.
In .NET, however, the default is to keep cached tokens in memory – as soon as you close your process, they are gone. If you want to keep those tokens available across different runs and avoid re-prompting, you have to initialize ADAL with a persistent cache as shown here.
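
For reference, here is a minimal sketch of such a persistent cache, modeled on the TokenCache extensibility points (BeforeAccess/AfterAccess) that ADAL .NET exposes; the file path is arbitrary and the blob is written unencrypted purely for brevity, so treat this as an illustration rather than production code.

using System.IO;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public class FileTokenCache : TokenCache
{
    private readonly string path;

    public FileTokenCache(string path)
    {
        this.path = path;
        // load the persisted blob (if any) right before ADAL reads the cache
        this.BeforeAccess = args =>
        {
            if (File.Exists(this.path))
                this.Deserialize(File.ReadAllBytes(this.path));
        };
        // write the blob back whenever ADAL has changed the cache content
        this.AfterAccess = args =>
        {
            if (this.HasStateChanged)
            {
                File.WriteAllBytes(this.path, this.Serialize());
                this.HasStateChanged = false;
            }
        };
    }
}

// usage: pass the cache when constructing the AuthenticationContext
// var ac = new AuthenticationContext("https://login.windows.net/common", new FileTokenCache("tokens.dat"));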

Another frequent issue that leads to extra prompts is the misuse of the common endpoint; see the ADAL section of this post.

I know this post is uncharacteristically brief – but hopefully it will be enough to help you through the obstacle if you landed here while searching for a solution to this issue. :)

September 01, 2014

Kuppinger Cole: What does Real-time really mean? [Technorati links]

September 01, 2014 02:03 PM
In KuppingerCole Podcasts

What does it actually mean to be in real time? It's really the convergence of three areas: SIEM (Security Incident and Event Management), forensics and Big Data. Big Data itself is still an area with a lack of clarity around it, but put simply, it's the ability to process large amounts of data very quickly...





Watch online

Kuppinger Cole: Real-time Security Intelligence - a solution for all security problems? [Technorati links]

September 01, 2014 02:02 PM
In KuppingerCole Podcasts

Organizations depend upon their IT to run their business and to grow their profits into the future. Yet these IT systems are under constant attack. Unfortunately, many of these attacks are only detected by outsiders rather than internally. Real-time Security Intelligence is a solution that's intended to rectify that.





Watch online

Kuppinger Cole: Microsoft OneDrive file sync problems [Technorati links]

September 01, 2014 08:19 AM
In Mike Small

A number of users of Microsoft’s OneDrive cloud storage system have reported problems on the Microsoft community relating to synchronizing files between devices. So far I have not seen an official response from Microsoft. This can be very disconcerting so, in the absence of a response from Microsoft, here are some suggestions to affected users. These worked for me but – in the absence of a formal response from Microsoft – I can offer no cast iron guarantees.

What is the problem? It appears that files created on one device are synced to another device in a corrupt state. This only seems to affect Microsoft Office files (Word, Excel, PowerPoint etc.) which have been created or updated since around August 27th. It does not appear to affect other types of files such as .pdf, .jpg and .zip, for example. When the user tries to access the corrupt file, they get a message of the form “We’re sorry, we can’t open the <file> because we found a problem with its contents”.

This problem does not affect every device but it can be very disconcerting when it happens to you! The good news is that the data appears to be correct on the OneDrive cloud and – if you are careful – you can retrieve it.

Have I got the problem? Here is a simple test that will allow you to see if you have the problem on your device:

  1. Create a simple Microsoft Office file and save it on the local files store of the device. Do not save it on the OneDrive system.
  2. Log onto OneDrive https://onedrive.live.com/ using a browser and upload the file to a folder on your OneDrive.
  3. Check the synced copy of the file downloaded by the OneDrive App onto your device. If the synced file is corrupted you have the problem!

What can I do? Do not panic – the data seems to be OK on the OneDrive cloud. Here is how I was able to get the data back onto my device:

  1. Log onto OneDrive https://onedrive.live.com/ using a browser and download the file to your device, replacing the corrupt copy.
  2. Do NOT delete the corrupt file on your device – this will send the corrupt version to the recycle bin. It will also cause the deletion of the good version on other devices.
  3. It is always a good idea to run a complete malware scan on your devices. If you have not done so recently now is a very good time. I did that but no threats were detected.
  4. Several people, including me, have followed the advice on how to troubleshoot sync problems published by Microsoft – but this did not work for me or them.
  5. I did a complete factory reset on my Surface RT – this did not help. Many other people have tried this also to no avail.

Is there a work around? I have not yet seen a formal response from Microsoft so here are some things that all worked for me:

  1. Accept the problem and whenever you find a corrupt file perform a manual download as described above.
  2. Use WinZip to zip files that are being changed. It seems that .zip files are not being corrupted.
  3. Protect your Office files using a password – it appears that password protected files are not corrupted. In any case KuppingerCole recommends that information held in cloud storage should be encrypted.
  4. Use some other cloud storage system or a USB to share these files.

This example illustrates some of the downsides of using a cloud service. Cloud services are very convenient when they work, but when they don’t work you may have very little control over the process to fix the problem. You are completely in the hands of the CSP (Cloud Service Provider). If you are using a service for business, access to the data you are entrusting to the CSP may be critical to your business operations. One of the contributors to the Microsoft support community described how, since he was unable to work, he was getting no pay; this is a graphic illustration of the problem.

KuppingerCole can offer research, advice and services relating to securely using the cloud. In London on October 7th, KuppingerCole will hold a Leadership Seminar on Risk and Reward from the Cloud and the Internet of Things. Attend this seminar to find out how to manage these kinds of problems for your organization.

Mike Small August 31st, 2014.


August 30, 2014

Anil John: Attributes are the New Money [Technorati links]

August 30, 2014 05:30 PM

Yes, I said it. Since then I've been asked often enough what I meant by it that I thought I would provide an explanation.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Nat Sakimura: My Number Mascot's Name Decided: "Maina-chan" (マイナちゃん) [Technorati links]

August 30, 2014 02:29 AM

The name of the mascot character for the Social Security and Tax Number system (the “My Number” system), whose public naming contest we announced here earlier, was revealed on Friday the 29th [1]:

“Maina-chan” (マイナちゃん)

The name was chosen from 723 entries submitted in a public contest held from Friday, June 20 to Monday, July 21, on the grounds that it “is a name that readily evokes My Number and expresses the friendliness of the rabbit in the logo mark” [2]. Because several people submitted the winning name, the prize recipient will be selected by a strict lottery and contacted at a later date.

My Number is a system that assigns each individual a number that (in principle) never changes for life, with the aim of making pension and tax administration more efficient and accurate and of simplifying procedures. It is also expected to prevent problems like the “missing pension records” scandal. Cards will start being distributed in the second half of 2015, and use begins in January 2016. Anyone who receives a salary will have to report their own and their family members’ My Numbers to their employer, so the system affects almost everyone. Maina-chan is expected to play a big part in making the public aware of the system. It would be quite something if the character became as popular as Funassyi!


[1] Cabinet Secretariat, “Nickname for the My Number publicity logo mark decided”, http://www.cas.go.jp/jp/seisaku/bangoseido/logo/aisyou.html


[2] Another important criterion, apparently, was that the name must not infringe any existing trademarks. It seems that quite a few rabbit-related names are already trademarked, which made the selection a real struggle; Pokémon’s “Minun®” is one example (and that one, incidentally, looks remarkably rabbit-like). Well done to everyone involved!

August 29, 2014

Andreas Åkre Solberg - Feide/UNINETT: HTTPjs – a new API debugging, prototyping and test tool [Technorati links]

August 29, 2014 07:27 AM

Today, we released a new API debugging, prototyping and test tool that is available at:

When you arrive at the site, you’ll immediately be delegated a separate subdomain, such as http://f12.http.net. This subdomain is ready to receive any kind of HTTP request. At the site, you get a JavaScript editor window where you can prototype the server side of the API.

All requests sent to your new domain will be processed in your browser by your custom JavaScript implementation. The web site will display the full HTTP log for you to inspect.

This tool is very useful for rapid development and testing of API clients. For example, you may select a template OAuth server implementation to start from, then return variations, invalid responses and the like to inspect how your client behaves.

The tool was made possible with Node.js, Websockets with Socket.io, Expressjs, requirejs, Grunt, Bower.io, nconf, select2, ace, bootstrap, momentjs, highlightjs.

Ian Yip: How to spot a meaningless contributed article [Technorati links]

August 29, 2014 04:55 AM
What is a contributed article? They're the ones where the author works for a vendor or solution provider and not the publication. In other words, their day job is not as a journalist. I'm speaking from first-hand experience, as I've written a number for various publications and understand the process.

Contributed articles do not typically involve any form of payment. When they do, reputable publications will disclose this fact. More commonly, they are freely given to a publication based on a brief that was provided. For example, a publication may say they are interested in a contributed article about a new smartphone's features and the implications on digital security. A vendor's marketing and public relations team will then work with a subject matter expert (SME) on crafting such an article for submission. Of course, if the SME isn't really one, then nothing will save the article.

Naturally, the process results in content of varying quality. The worst ones are typically not written by the individual, but ghost-written by someone else (usually without sufficient domain expertise). The vendor spokesperson/SME simply gets the byline. These end up sounding generic and the reader learns nothing.

More commonly, the resulting article is an equal and collaborative effort between everyone involved. While this is marginally better, it still sounds inauthentic, somewhat generic, and provides little value. Why? The keyword here is "equal". The SME needs to be the main contributor instead of simply providing their equal share of input.

The best contributed articles are the ones written by someone:
  1. With the necessary domain expertise.
  2. That knows how to write.
  3. That has the time to do it.
  4. Willing to allow an editor/reviewer to run their virtual red pens through it without getting offended.
  5. That is not blatantly trying to sell something.
Unfortunately, contributed articles tend to be mediocre or just terrible, and that is a real shame, because there are lots of really smart people who could produce great content (with some help and editing) if they weren't under corporate pressure to be 100% "on message". The art, of course, is to be "on message" subtly while still being able to contribute to the conversation in a meaningful way.

So how do you spot a meaningless contributed article? They usually look like this...

Meaningless headline that was put here for click-baiting purposes

You know that issue that's been in the news this week? And that other bit of similar news from last week? Oh, and those other countless ones from the past few months? They're only going to get worse because of buzzword 1, buzzword 2 and buzzword 3. Oh, don't forget about buzzword 4.

That large analyst firm, their biggest competitor and that other one that tries really hard to be heard all agree. Here's some meaningless statistic and a bunch of percentages from these analyst firms that prove what I'm saying in the previous paragraph is right. I'm adding some independent viewpoints here people, so it's not just about what I'm saying, even though it is.

So what to do about all this? You should be really worried about solving the problem you may or may not have had but now that I've pointed it out, you definitely have it. You aren't sure? Well, then listen to this.

Here's an anecdote I may or may not have made up about some organisation that shall remain nameless but is in a relevant industry relating to what I'm trying to sell you, oh wait, that I'm providing advice on because you've got this really big issue that you're trying to solve but just don't know you need to solve it yet but will do once you've read this.

So how do you solve your problem? Well, the company I work for happens to have a solution for this problem that you've now got. I won't be so blatant as to tell you this, but you will no doubt look me or my company up that search engine thing and see what we do and put it all together and then contact our sales team who will then sell it to you so I can get paid.

Here is another anecdote I may or may not have made up about how an organisation has solved the issues I've so clearly laid out for you that can so easily be solved, as shown by this very real (or fictitious, nameless) organisation.

My word-limit is almost up so I'll tell you what I've already told you but just in a slightly different way. In conclusion, you're screwed unless you solve this really generic issue with the silver bullet that organisation x used. So, buy my stuff.
I'm not saying every article with these characteristics is terrible. But very often, the "I have a hammer to sell, so everything is a nail" articles are structured this way. They are generic and leave the reader with the feeling that they just read a bunch of random words. I, for one, stop reading an article when it starts to smell like this.

Note:
For the record, I NEVER allowed my articles to be ghost-written, much to the frustration of the people managing the whole process. The problem this introduced was that content could not be churned out as quickly because I became the bottleneck. I wouldn't even agree to have someone else start the article for me. I had to start it from scratch and have final approval on it (once my drafts were run past a set of editors and reviewers of course). This made for more authentic, balanced content while still maintaining some level of being "on message", which kept marketing happy.
August 28, 2014

Vittorio Bertocci - Microsoft: Use ADAL to Connect Your Universal Apps to Azure AD or ADFS [Technorati links]

August 28, 2014 07:49 AM

In short: using ADAL from a Universal App is easy, but not obvious.
From what we hear, your experience is that you try to add a reference to the ADAL NuGet in the shared project – and it fails.
There are a number of reasons for that. This post will give a bit of background, then give you instructions on how to successfully use ADAL in a Universal App project. Hint: it does not entail adding ADAL to the shared project. :) Curious? Read on!

Background

Universal Apps are a new app project type introduced in Visual Studio 2013 Update 2, which allows you to use a single solution to target Windows 8.1 and Windows Phone 8.1 and share a lot of code between the two, while still being able to tailor the experience to the features of each platform. Wow, I should not try to summarize too much in a single sentence. :) If you want a less crumpled introduction to Universal Apps, head to this page. From now on I’ll just assume that you have some familiarity with the concept of Universal Apps, and go straight to my point.

Although the percentage of code that is common across the tablet/PC and the phone platforms is astoundingly high, the WebAuthenticationBroker (WAB, the main authentication API in the Windows Runtime and a key requirement for our own ADAL libraries) is one of the exceptions that confirm the rule. Back in April I wrote a post that went into the details of the calling pattern that the WAB requires on the phone, but it boils down to the fact that the WAB acts as a separate app – calling the WAB means leaving your app (which could be deactivated) and handling reactivation and parameter retrieval when the WAB is done. It’s the famous continuation model.

The continuation model has a pretty significant impact on the structure of the code that needs its services. As a result, we simply could not reuse the ADAL component we wrote for tablet/PC Windows Store apps – we had to create a brand new one for the phone that accommodates the continuation model, as explored here.

Another reason that made it difficult to share the same component across tablet/PC and phone was the choice we made to package ADAL for Windows Store as a Windows Runtime Component. That gives you the ability to use any of the Windows Runtime languages (C#, WinJS, C++) when using ADAL in your app, but is pretty much antithetical to packaging in the portable class library (PCL) format, which would have afforded us more sharing options. For future versions we are considering changing approach, but for this generation this is what we went for.

What to do, then? How do you use ADAL in a Universal App? Pretty easy, once you know the trick: add the ADAL NuGet package separately to the Windows 8.1 project and to the Windows Phone 8.1 project, so that each gets the ADAL component built for its own platform, and keep the code that calls ADAL in the platform-specific projects (or behind platform #if directives) rather than referencing ADAL from the shared project.

That’s all there is to it. I suspect there’s some cleverer sharing technique, and if some XAML expert chimes in with a proposal I am happy to revise the above; but in the meanwhile, the above works and is simple.

Use ADAL in a Universal App Project

Let’s make this a tad more practical. Fire up VS2013 (update 2 and above) and create a new blank Universal App.

The Windows 8.1 Part

Let’s start with adding some ADAL code to the Windows 8.1 part.

Right click on the Windows 8.1 project in solution explorer, choose Manage NuGet Packages, ensure that the first dropdown says “include prerelease”, type adal in the search box, and pick the entry for the library:

image

We are going to keep things real simple. Let’s add some basic logic in the app page to get a token for the Graph. I will not even use the token, I just want to demonstrate that ADAL works in this context.

Here’s my super sophisticated page:

<Page
    x:Class="ADALUniversalApp1.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:ADALUniversalApp1"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">

    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">        
            <Button HorizontalAlignment="Center" Click="Button_Click">aaa</Button>
    </Grid>
</Page>

…and the code behind it:

using Microsoft.IdentityModel.Clients.ActiveDirectory;
using System;
using Windows.UI.Popups;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

namespace ADALUniversalApp1
{
    public sealed partial class MainPage : Page
    {
        public MainPage()
        {
            this.InitializeComponent();
        }

        private async void Button_Click(object sender, RoutedEventArgs e)
        {
            AuthenticationContext ac = 
                new AuthenticationContext("https://login.windows.net/common");
            AuthenticationResult ar = 
                await ac.AcquireTokenAsync("https://graph.windows.net",
                "e11a0451-ac9d-4c89-afd8-d2fa3322ef68", new Uri("http://li"));
            MessageDialog mg = new MessageDialog("hello, Mr/Ms " + ar.UserInfo.FamilyName);
            await mg.ShowAsync();
        }
    }
}

All this does is request from a generic AAD tenant an access token scoped for the Graph, then show one of the values found in the id_token that AAD sends down along with the access and refresh tokens. I know, I knooow… no error checking. Lazy hippie.
This is code you can use directly with your own tenants without even modifying the values I pass in, given that I am using the common endpoint (see here) and the native clients are automagically available everywhere.
You do have to modify the values of authority/resource/clientid/returnURI if you want to go against ADFS – you’d need the values of what you provisioned in ADFS. However, the code itself is exactly the same. Apart from having to turn off authority validation, that is.
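
For reference, the ADFS variant would look roughly like the sketch below; the ADFS address, resource and client id are placeholders, and the two-argument constructor that switches off authority validation is the overload ADAL provides for authorities that cannot be validated against the known Azure AD list (check the exact signature in the ADAL flavor you are using).

// hypothetical ADFS values – substitute what you provisioned in your own ADFS instance
AuthenticationContext ac =
    new AuthenticationContext("https://adfs.contoso.com/adfs", false); // false = skip authority validation
AuthenticationResult ar =
    await ac.AcquireTokenAsync("https://myservice.contoso.com",
                               "your-adfs-client-id", new Uri("http://my-redirect-uri"));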

Let’s run this bad boy, shall we? Right click on the Windows 8.1 project, debug/start new instance. Click on the button.

image

Authenticate as any user from any tenant you like. If everything goes well, you’ll get the following:

image

The Windows 8.1 project in the Universal App solution works fine with ADAL. Check.

The Windows Phone 8.1 Part

Now the fun begins! :) No panic though, this will be a slightly modified version of the project structure you can observe in this sample.

Stop the debugger, right click on the Windows Phone 8.1 project in solution explorer, choose Manage NuGet Packages, type adal in the search box, and pick the entry for the library:

image

Déjà vu? Wow, they must be changing something. :)
That’s right, this is the exact same step you performed for the Windows 8.1 project, and the same ADAL NuGet package. However, the project type is different, hence NuGet will add a different component to the project – the one for Windows Phone 8.1.

The XAML source for the phone page is gloriously the same as the one I pasted earlier: same lonely button in the center of the screen.
The code behind, however, is far more colorful this time:

using Microsoft.IdentityModel.Clients.ActiveDirectory;
using System;
using Windows.ApplicationModel.Activation;
using Windows.UI.Popups;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Navigation;

namespace ADALUniversalApp1
{
    public sealed partial class MainPage : Page, IWebAuthenticationContinuable
    {
        AuthenticationContext ac = null;
        public MainPage()
        {
            this.InitializeComponent();
            this.NavigationCacheMode = NavigationCacheMode.Required;            
        }

        protected override async void OnNavigatedTo(NavigationEventArgs e)
        {
            ac = await AuthenticationContext.CreateAsync("https://login.windows.net/common");
        }

        private async void Button_Click(object sender, RoutedEventArgs e)
        {
             AuthenticationResult result = 
                 await ac.AcquireTokenSilentAsync("https://graph.windows.net", 
                                                  "e11a0451-ac9d-4c89-afd8-d2fa3322ef68"); 
             if (result != null && result.Status == AuthenticationStatus.Success) 
             {                  
                 ShowGreeting(result); 
             } 
             else 
             {                
                 ac.AcquireTokenAndContinue("https://graph.windows.net", 
                                            "e11a0451-ac9d-4c89-afd8-d2fa3322ef68", 
                                            new Uri("http://li"), ShowGreeting); 
             } 
        }
        public async void ShowGreeting(AuthenticationResult ar)
        {
            MessageDialog mg = new MessageDialog("hello, Mr/Ms " + ar.UserInfo.FamilyName);
            await mg.ShowAsync();
        }
        public async void ContinueWebAuthentication(WebAuthenticationBrokerContinuationEventArgs args) 
         {              
             await ac.ContinueAcquireTokenAsync(args); 
         } 
    }
}

Well, there is decidedly more stuff here. Allow me to briefly walk you through the various moving parts:

  1. The page implements IWebAuthenticationContinuable, the interface (defined alongside the ContinuationManager class we’ll add in a moment) through which execution comes back to the page once the WAB is done.
  2. The AuthenticationContext is created via CreateAsync in OnNavigatedTo, the asynchronous factory used by ADAL on Windows Phone 8.1.
  3. Button_Click first tries AcquireTokenSilentAsync, which succeeds if a suitable token is already in the cache; if not, it calls AcquireTokenAndContinue, which launches the WAB and registers ShowGreeting as the callback to invoke once a token has been obtained.
  4. ContinueWebAuthentication is called at reactivation time and hands the WAB results back to ADAL via ContinueAcquireTokenAsync.

That takes care of the Windows Phone 8.1 project. However, if you were to run this now, nothing would work! We still need to hook up the logic that, at reactivation time, routes execution where we want it to be. That’s where things get tricky: we need to inject into the App (which lives in the shared project) a lot of logic that is specific to the phone platform.

First, let’s add to the shared project a class that encapsulates the details of the continuation model. You can take the corresponding class from our GitHub sample; you can find it here. Together with the continuation model, that class contains the definition of IWebAuthenticationContinuable.

If, after adding that class, you try to compile, you’ll get lots of errors. That’s because it uses many phone-specific types. To solve the issue in a somewhat brisk fashion, simply add a #if WINDOWS_PHONE_APP on the line above the class definition and place a #endif at the very end of the file. That will take care of the compilation errors.
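
Schematically, the wrapped file ends up looking like this (the type names are the ones from the sample; everything between the directives is compiled only in the Windows Phone 8.1 project):

#if WINDOWS_PHONE_APP
// ContinuationManager class and IWebAuthenticationContinuable interface,
// taken verbatim from the GitHub sample referenced above
#endif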

Next, open App.xaml.cs. Here we need to add the logic that will handle the OnActivated event and use the continuation manager class.

Search for the first #if WINDOWS_PHONE_APP block, and declare a ContinuationManager in it:

#if WINDOWS_PHONE_APP
        private TransitionCollection transitions;
        
        ContinuationManager continuationManager;
#endif

That done, scroll to the end of the class. Right below OnSuspending, insert the following block:

#if WINDOWS_PHONE_APP
        private Frame CreateRootFrame() 
         { 
             Frame rootFrame = Window.Current.Content as Frame; 
             if (rootFrame == null) 
             { 
                 rootFrame = new Frame(); 
                 rootFrame.Language = Windows.Globalization.ApplicationLanguages.Languages[0]; 
                 Window.Current.Content = rootFrame; 
             } 
             return rootFrame; 
         } 

         protected override async void OnActivated(IActivatedEventArgs e) 
         { 
             base.OnActivated(e); 
             continuationManager = new ContinuationManager(); 
             Frame rootFrame = CreateRootFrame(); 
             if (rootFrame.Content == null) 
             { 
                 rootFrame.Navigate(typeof(MainPage)); 
             } 
             var continuationEventArgs = e as IContinuationActivatedEventArgs; 
             if (continuationEventArgs != null) 
             { 
                 Frame scenarioFrame = Window.Current.Content as Frame; 
                 if (scenarioFrame != null) 
                 { 
                     continuationManager.Continue(continuationEventArgs, scenarioFrame); 
                 } 
             } 
             Window.Current.Activate(); 
         }
#endif

I could expand on what that code does, but for the goals of this post this could very well be the equivalent of Harry Potter waving his wand toward the phone emulator and uttering “Patronus Continuatio!” – which is a rather dorky way of saying “this is Windows Phone specific logic that impacts authentication only indirectly, hence see MSDN if you want more details”.

Let’s give the phone app a spin! Right click on the Windows Phone 8.1 project in solution explorer, then Debug/Start new instance.

Once the button appears, hit it. You’ll get the following:

phone1

That’s the WAB in its mobile attire. Sign in and…

phone2

Voila’. Token successfully acquired. Also the Windows Phone 8.1 portion of the Universal App works. Q.E.D.

Wrapup

Making ADAL work in a Universal App is not very intuitive, but it is also not very difficult. The code required to get a token on the two platforms does differ, which is somewhat un-Universal – however, modulo the need to add a NuGet reference twice, this is pretty much the same structural difference you’d have to handle if you were to use the WebAuthenticationBroker directly instead of via ADAL. As mentioned here, there are good reasons for the mobile WAB to adopt the continuation model – I think the extra steps are well worth the extended reach to low-powered devices that this model affords you.

I hope this post was useful to unblock you; if you have issues, hit me via the Contact link in the top menu. Happy coding!

August 27, 2014

Nat Sakimura: C.P.E. Bach, the Father of Classical Music (300th Anniversary of His Birth) [Technorati links]

August 27, 2014 05:57 PM

Ah, I’ve gone and given this a sensational title again…

Still, C.P.E. Bach (Carl Philipp Emanuel Bach), whose 300th birthday falls this year and who was the second son of the great J.S. Bach, can fairly be called the father of classical music in the narrow sense, that is, of Classical-period music. Even Mozart said of him, “He is the father, we are the children” [1].

Let me list the features that distinguish Classical-period music from the music that came before it:

  1. Melody plus harmonic accompaniment (homophony)
  2. Multiple themes of contrasting character
  3. Themes that are shorter than Baroque ones, broken down into motifs that are then worked out and developed
  4. → Sonata form

These features were carried over into the Romantic era and beyond; they form the backbone of “classical music” from the Classical period onward.

The characteristics of C.P.E. Bach’s music, on the other hand, are usually described as follows:

  1. The galant style (melody plus harmony)
  2. The “sensitive style” (empfindsamer Stil) [12] (multiple themes of contrasting character)
  3. Short themes and their breakdown into motifs, which are then worked out and developed (he is said to have been the first to do this [2])
  4. Three-movement sonatas, with movements built as intro + (first theme [tonic] + bridge + second theme [dominant/relative key]) x2 + development + recapitulation, i.e. sonata form

Wait, these are exactly the defining features of Classical-period music. No wonder Mozart called him “father”.

Indeed, his influence on Haydn, Mozart and Beethoven is unmistakable, and Beethoven apparently admired him as well [3]. Mendelssohn’s oratorio “Elijah” shows the influence of C.P.E. Bach’s “The Israelites in the Desert”, and Brahms even edited some of his works [4].

Within the Bach family it was the younger brother, J.C. Bach, who was close to Mozart, and it is sometimes said that his influence on Mozart was the stronger one. Fair enough, I suppose; but where the late Mozart is concerned, I have the feeling that C.P.E. Bach’s influence was also considerable. In his final years Mozart abandoned the “pleasant, simple homophonic music” he had written until then and turned to highly elaborate polyphonic music, in a sense difficult music [6]; his popularity collapsed and, with his wife’s illness on top of it, he struggled under debt. The trigger for that change is said to have been the Bach music and scores that Baron van Swieten showed him [7].

The C.P.E. Bach that Mozart studied…

[1] Wikipedia (Japanese): Carl Philipp Emanuel Bach (カール・フィリップ・エマニュエル・バッハ), retrieved 2014/8/27

[2] Wikipedia: “Sonata Form”, http://en.wikipedia.org/wiki/History_of_sonata_form (retrieved 2014/8/27)

[3] Citation needed

[4] Wikipedia (Japanese): Carl Philipp Emanuel Bach, section on his influence on later generations, retrieved 2014/8/27

[5] Citation needed

[6] This can also be seen in the fact that Mozart, who until then had hardly ever made corrections in his manuscripts, apparently did a great deal of rewriting in the “Haydn” quartets (1782–1785). Is that really true, though? I only remember reading it more than 30 years ago in a book by Hidekazu Yoshida or someone…

[7] Sinfonias Wq. 182/1–6, H. 657–662 (composed 1773)

[12] The “sensitive style” (German: empfindsamer Stil) is a musical style that arose in 18th-century Germany. It aims at the expression of “plain and natural feelings” and is characterized by sudden changes of mood. It developed in contrast to the Baroque doctrine of the Affektenlehre, which held that a single affect should govern an entire piece (or movement).

 

August 26, 2014

Courion: Same “Stuff”, Different Day [Technorati links]

August 26, 2014 07:24 PM

Access Risk Management Blog | Courion

Chris Sullivan: On August 20th, UPS Stores announced that they had hired a private security company to perform a review of their Point of Sale (PoS) systems after receiving Alert (TA14-212A), Backoff Point-of-Sale Malware, about a new form of PoS attack – and, surprise, they found out that they had a problem. They released some information about which stores were affected and the type of information that was exposed, but little else. Freedom of Information Act requests have already been filed.

What followed was the predictable media buzz, where it was postulated that this was yet another PoS breach similar to those that affected Neiman Marcus and Target. While there is some truth in this, there are interesting bits that make this case very different.

What’s different?

What’s the same?

What can you do to deter a breach that takes advantage of vulnerabilities in your identity and access equation? Begin by practicing good hygiene: follow the identity and access controls recommended in Alert (TA14-212A), the 2014 Verizon Data Breach Report and the SANS Security Controls Version 5, as outlined by my colleague Brian Milas in this blog post.

What can you do to detect a breach as soon as possible? Brian points out in the same post that by using an intelligent IAM solution you will not only be better equipped to minimize the type of access risk that leads to a breach by provisioning users effectively from the start, but will also be better able to detect access risk issues as they happen and remediate them on an ongoing basis by leveraging continuous monitoring capabilities.

The point is, regardless of the exact details and mechanisms employed in an attack, you can and should do what is under your control to minimize risk and equip yourself for early detection. Identity and access intelligence is a good place to start.

blog.courion.com

Vittorio Bertocci - Microsoft: The Common Endpoint: Walks Like a Tenant, Talks Like a Tenant… But Is Not a Tenant [Technorati links]

August 26, 2014 07:34 AM

The common endpoint is one of the most powerful development features of AAD – unfortunately, it is also one of the least intuitive ones. In this post I will give you a brief taste of what it does, what it is useful for, and how ADAL surfaces its strange properties.

Azure AD Tenant Endpoints

You probably know all this already, but a quick refresher is never a bad thing.

Every Azure AD tenant provides a bunch of endpoints that you can use to secure your applications, choosing between the various protocols AAD supports.
Although those endpoints all refer to different protocols, they follow the same basic pattern. In the immortal BNF notation, you could define that pattern as follows:

<protocol-endpoint> ::= <instance><tenant><protocol-specific-path>

<protocol-specific-path> ::= “/oauth2/authorize?api-version=1.0” | “/oauth2/token?api-version=1.0” | “/federationmetadata/2007-06/federationmetadata.xml” | …

<tenant> ::=  <tenant-id> | <domain>

<tenant-id> ::= <GUID>

<domain> :: = <hostname>

<instance> ::= “https://login.windows.net/” | …

If you prefer things a tad less formal, we can think of some examples. For instance, if I want to obtain the OAuth2 authorization endpoint of my test tenant developertenant I can simply write:

https://login.windows.net/developertenant.onmicrosoft.com/oauth2/authorize?api-version=1.0

Another way I can refer to the exact same endpoint is the following:

https://login.windows.net/6c3d51dd-f0e5-4959-b4ea-a80c4e36fe5e/oauth2/authorize?api-version=1.0

This is the exact same endpoint – I am simply choosing a different way to identify the corresponding tenant. The advantage of using the domain is mostly that it’s human readable and easier to remember than a GUID. The tenantID, on the other hand, has good properties: it is immutable (a domain can be discarded, a tenantID is forever), it is not reassignable (discarded domains might get bought by other orgs, messing with your endpoints), and it provides a single identifier for constructing a stable endpoint no matter how many domains you have registered.

Where do I find the tenantID? There are various places to pick it up from. The easy one is via the Azure portal: all endpoints are listed via the view endpoints button on the bottom common bar in the AAD/tenant/applications page. Those endpoints are expressed via the tenantID.

image

Another way of retrieving it, which I favor because it does not require me to sign in, is to hit the tenant’s WS-Federation metadata endpoint, which is public. You can build that URL using the domain (which I usually remember), but the content of the metadata will always refer to the tenantID. For example, if I follow

https://login.windows.net/developertenant.onmicrosoft.com/federationmetadata/2007-06/federationmetadata.xml

I’ll land on the following document:

image

Et voila’, the highlighted text is the tenantID. Also note the entityID, which is itself a URI parametric with respect to the tenantID: it will come in handy later.

Anyway: once you have your endpoints, you can plug them into your favorite development stack and use them to your heart’s content (usually to request tokens).

Late Binding a Tenant

The above is all fine and dandy if you are writing a line of business app, where the organization you want to authenticate with is known at development time.

However, that does leave out a very large class of important applications: SaaS and multitenant applications. My recent favorite example is Org Navigator, the little app I wrote a few weeks ago that allows you to search for users in an Azure AD tenant.

While it sleeps its dreamless sleep in its cell up in the Windows Store cloud, Org Navigator does not know which AAD tenant it will need to search. When you download and install it, Org Navigator still does not know which tenant it should work with. However, when you first launch the app and try your first query – BAM: it presents you with the usual AAD credential experience. You sign in with one user from the tenant you want to target, and you’re in: from that moment on, Org Navigator knows which AAD tenant to query (and knows how to get the tokens needed to do so).

How did I achieve this behavior when developing Org Navigator? What endpoint rendered the AAD credential experience, given that at that point the app did not know which domain or tenantID to use?
If your short term memory is not as bad as mine these days, you already guessed what made this possible: the common endpoint.

The BNF I provided earlier wasn’t really complete: the <tenant> entry should have been

<tenant> ::=  <tenant-id> | <domain> | “common”

In a nutshell, common is a convention used to tell AAD “I don’t yet know which tenant should be used. Please render a generic credential gathering experience, and we’ll figure out the tenant depending on what account the user enters”.

That is precisely what I did in Org Navigator. I arranged for the very first authentication operation to go against the authority (in ADAL parlance) https://login.windows.net/common, and that afforded me the behavior described above (for a more detailed walkthrough see Org Navigator’s help page – or download the app and try it on your own tenant or the graph test tenant!).
Other canonical uses can be observed in our multitenant Web samples – in particular, the sign in and sign up links. More details later.

That is pretty handy! The user does the work for you, effectively late binding the tenant. However, there are a couple of things to keep an eye on.

That said, let’s take a look at some practicalities of using common with web apps (via our OWIN middleware) and via ADAL.

OWIN Middleware and the Common Endpoint

As the post title says, common is NOT a tenant: rather, it is a convention that is used in place of a tenant for driving the real tenant identification process.
Given its placement in the endpoints URI template, however, it is very hard not to think about it as a tenant and just use it everywhere one would use a real AAD tenant. In fact, that’s largely the way in which we use it; however, we do need to perform some extra steps to accommodate common’s peculiar behavior.

Let’s look at a practical example. Let’s say that we want to write a multi-tenant web app, secured via OpenID Connect, which can sign in users from any tenant. In order to late bind the tenant, we could write the following OWIN auth middleware initialization:

public void ConfigureAuth(IAppBuilder app)
{
    string clientId = ConfigurationManager.AppSettings["ida:ClientID"];
    string authority = "https://login.windows.net/common/";
    // ...

    app.UseOpenIdConnectAuthentication(
        new OpenIdConnectAuthenticationOptions
        {
            ClientId = clientId,
            Authority = authority,
        });
    // ...
}

This would indeed cause the desired behavior… almost. If you ran the app you’d be able to sign in with any user from any tenant, but upon authentication with AAD, the app would reject the call. Why?
Recall how we managed to keep the object model of the OWIN authentication middleware so compact? That’s right, we use the tenant coordinates to read from metadata documents the extra info we need to validate tokens. Now, the common endpoint does expose metadata as well – but it is somewhat incomplete. For example, if I open the OpenID Connect discovery document I see the following:

image

Whereas in a real tenant you’d find actual tenantid values, the common endpoint cannot offer that, given that the actual tenant that will be used is undefined. The “{tenantid}” string is clearly a placeholder rather than a real value.
The “issuer” value in that doc is the string that the OWIN middleware will use in its default validation logic to validate the “issuer” value of incoming tokens. Clearly that cannot work, given that once the user enters an account from a specific tenant the issuer value will be something like “https://sts.windows.net/6c3d51dd-f0e5-4959-b4ea-a80c4e36fe5e/”, which does not match “https://sts.windows.net/{tenantid}/”.

If we were working with WS-Federation, the story would be the same. This is common’s WS-Federation metadata:

image

 

What to do? The only way around this is to override the default issuer validation, which is meant to work with fixed-tenant line of business apps, with your own validation logic.

The OWIN middleware makes it exceedingly easy. For example, in our multitenant web app sample we have a sign up experience which dynamically adds new tenants (or even individuals from arbitrary tenants). There, we turn off the default validation (via TokenValidationParameters/ValidateIssuer=false) and we inject custom logic in the SecurityTokenValidated notification, late in the pipeline because in this case we need to have the user info available. Code here.
If in your heuristic you are only interested in the issuer, you can directly inject your own validation logic in the IssuerValidator notification of the TokenValidationParameters.
Or, going even further, if for some reason you are not interested in restricting access to your app per tenant, you can simply turn off issuer validation and not provide any extra validation logic (if you think you’re in this case, think long and hard about it to ensure that skipping that validation is truly OK for your scenario!).
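
As a rough sketch of the first approach described above (default issuer validation off, custom check in SecurityTokenValidated), the options could be extended along these lines; the tenant lookup against a hypothetical store of onboarded tenant IDs is invented for illustration, so adapt the names to your own data layer and middleware version:

app.UseOpenIdConnectAuthentication(
    new OpenIdConnectAuthenticationOptions
    {
        ClientId = clientId,
        Authority = "https://login.windows.net/common/",
        TokenValidationParameters = new System.IdentityModel.Tokens.TokenValidationParameters
        {
            // common cannot pin a single issuer, so switch the default check off...
            ValidateIssuer = false
        },
        Notifications = new OpenIdConnectAuthenticationNotifications
        {
            // ...and perform our own check once the token is validated and the user info is available
            SecurityTokenValidated = notification =>
            {
                string tenantId = notification.AuthenticationTicket.Identity
                    .FindFirst("http://schemas.microsoft.com/identity/claims/tenantid").Value;
                if (!onboardedTenantIds.Contains(tenantId)) // hypothetical list of signed-up tenants
                {
                    throw new System.IdentityModel.Tokens.SecurityTokenValidationException(
                        "Tenant " + tenantId + " has not signed up for this application.");
                }
                return System.Threading.Tasks.Task.FromResult(0);
            }
        }
    });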

That’s pretty much what there is to know about the common endpoint and web apps.

ADAL and the Common Endpoint

The common endpoint is easy to use with ADAL: you just pass it to your AuthenticationContext as the authority, and the authentication experience will follow the behavior described. However, common is not a real tenant, and ADAL needs to perform some extra steps here. Namely: once the authentication takes place and a token from a real tenant is returned, the AuthenticationContext’s authority is automatically reassigned to that real tenant.

Below you can see an example in which the authority starts as common, but as I sign in with a user from developertenant the authority changes accordingly.

image

Leaving the authority as “common” would cause all sorts of problems, given that the token so obtained is cached under the real authority: caching under common would not make much sense, given that common can be used multiple times against multiple tenants – which would all end up under the same “authority” and trigger all sorts of weirdness and security issues at cache retrieval time.

Note: this is particularly important for apps that persist their cache across multiple runs. Your AuthenticationContext should use common exclusively when you truly don’t know which tenant to use. That means that on the very first run you can use common to let the user select their tenant of choice; after that, you should track the tenant you find in AuthenticationResult and use it whenever you create new AuthenticationContext instances afterwards, which is typically the case when you close and re-open an app. Remember: if you don’t do that, you will never hit the cache and your user will be prompted every time.
You can choose to save the tenant ID in your own store and use it at AuthenticationContext init time. An alternative is to always init AuthenticationContext with common but, once you have the instance in memory, check whether the cache already contains a token for a given tenant (via TokenCache.ReadAllItems + LINQ). If it does, you can dispose of the common-inited AuthenticationContext and create a new one that uses the tenant from the cached token; otherwise, you keep the common one around so that it can do its late-binding magic on the first call. A bit more cumbersome than saving the tenant ID in your own custom location, but it works as well.

Wrap

The common endpoint is a great feature – I daresay indispensable in multitenant scenarios. The way in which it is represented (a parametric tenant) allows you to take advantage of its capabilities using a familiar approach, i.e. treating common as a tenant. There are some differences that you eventually have to handle, but I hope this post showed they really aren’t rocket science. Let us know if you have feedback or if you encounter specific issues; otherwise… happy coding!

August 25, 2014

ForgeRockCreating an Uber Customer Experience Multiverse [Technorati links]

August 25, 2014 11:07 PM

Last week Uber and Expensify inked a fascinating deal. The two services will seamlessly integrate so Expensify customers can now order Uber cars based on travel reservations submitted to Expensify. According to the joint announcement: “upon landing, an Uber can be automatically ordered to take the traveler straight to their destination, completing the “last mile” of a long journey in style.” The offering provides a worry-free travel experience for employees while simultaneously making it brain-dead simple to submit travel expenses. Oracle, IBM, Concur—you have been warned!

This online partnership is the “digital shot heard round the world.” It takes online partnerships beyond the traditional single sign-on integrations and introduces beautifully integrated offerings focused on one thing — transforming the customer experience for the better. Expensify and Uber understand that the future of their business is based on their ability to deliver transformative services that bridge the digital and physical worlds, a problem every CEO worldwide is facing. 

In fact, according to Gartner Research*, 64% of CEOs view growth as their number one priority and see digital as their route to achieving this goal. On the surface, this seems like the classic CIO challenge of delivering more online assets with less money. A “geocentric” model in which customers, prospects, and partners revolve around a relatively stationary digital strategy. However, this new approach to growth is different: it’s about transforming the customer experience rather than the technology. A “heliocentric” model in which digital and physical assets revolve around the customer.

To repeatedly deliver innovations like the Uber-Expensify partnership requires a transformation of identity technology. Organizations must instantaneously use customer data to deliver new digital and physical services based on constantly changing customer characteristics, like location, device, time of day, familiarity, etc. The ability to rapidly manage identity relationships across an ever-changing digital multi-verse is what we call Identity Relationship Management (IRM).

Organizations need to roll out offerings unencumbered by monolithic identity platforms designed for outdated employee-centric use cases.  Instead, CIOs must invest in IRM platforms that empower organizations to construct a single view of their customer and deliver useful information about that customer to any application, device, or thing. These platforms deliver identity services at unprecedented speed (months not years), and seamlessly integrate disparate business units. The organizations that place the customer at the center of the universe will achieve the most growth because, like Uber and Expensify, they will be primed to continuously transform a customer’s life experiences.

*2014 Gartner CEO and Senior Executive Survey

The post Creating an Uber Customer Experience Multiverse appeared first on ForgeRock.

Nat SakimuraGovernment plans to introduce an income-contingent student loan repayment scheme using My Number [Technorati links]

August 25, 2014 01:33 PM

The Ministry of Education, Culture, Sports, Science and Technology (MEXT) has decided to introduce an "income-contingent repayment" option into the student loan system for university students starting in fiscal 2018. Income-contingent repayment is used in the UK, Australia, and the US; the monthly repayment amount varies with the borrower's post-graduation income. Because repayments are determined by changes in the economy and in annual income, the burden is lighter for low earners, and collection rates are said to improve.

Source: EconomicNews.

I was looking into this topic to talk about at a study group tomorrow, and it turns out it had already been announced.

Income-contingent student loans are offered in Australia, New Zealand, the UK, and elsewhere. As of 2013, Australia's scheme (HELP) had 450,314 users. A similar effort has also begun in the US for Federal Student Loans, although that one covers only low-income borrowers.

Because income-contingent loans do not have to be repaid until the borrower reaches a certain income, they are low-risk for the individual and easy to take on. They can therefore mitigate equity problems, such as students giving up on higher education because they cannot afford it. In Australia repayment starts at an income of AU$53,345 or more; in the UK it starts at £21,000. In Australia, average graduate income exceeds this threshold in all fields by the fifth year after graduation, and borrowers finish repaying on average 8.1 years after their first repayment. In each country the tax authority tracks income and collects repayments together with taxes once the income condition is met. Interest rates vary, for example CPI plus a margin that rises with income.

In these countries most universities are national, so the loan funds are provided by the state. Japan has many private universities, so how to provide such a fund may need separate consideration; one option might be for each school to provide its own. In that case, delivering a good education that produces high earners would improve returns, which could also be expected to contribute to improving the quality of university education.

Incidentally, the outstanding balance of Australia's HELP over time is shown in Figure 1.

(Figure 1) Trend in Australia's outstanding HELP balance. (Source) Group of Eight, "HELP: Understanding Australia's system of income-contingent student loans"

There is also the question of the loan amount. According to the article above, the current average loan is about 800,000 yen per year.

As education becomes more advanced, education costs are expected to rise sharply [1], and this amount is nowhere near enough. To keep students from giving up on higher education for lack of funds, the amounts will also need to be increased.

In any case, I think this is a step in the right direction, and I intend to keep following it.

[1] At elite US universities, tuition alone is said to cost around 5 million yen per year.

Nat SakimuraClass action filed against Facebook in Vienna over alleged privacy law violations [Technorati links]

August 25, 2014 12:42 PM

A class action against Facebook has begun to move forward after a Vienna court ruled that Facebook must respond to the privacy-related complaints against the company.

In early August, a group of users led by privacy activist and lawyer Max Schrems filed suit in a Vienna court against Facebook's Irish subsidiary, alleging that Facebook violates multiple privacy laws.

The complaints include the following: Facebook's data use policy violates European Union (EU) law; Facebook can reuse data without obtaining users' valid consent; and data is handed over to third-party applications without users' permission.

Source: "Class action against Facebook filed in Vienna over alleged privacy law violations" – CNET Japan.

It appears that a class action against Facebook has gotten underway in Europe.

This group has previously brought actions before the Irish courts, and reportedly won concessions such as limiting the retention period for ad-click data to two years and warning users about the facial recognition feature.

The current suit was filed in the Vienna Regional Court, with more than 25,000 plaintiffs each seeking 500 euros in damages.

One occasionally hears that class actions in Europe are not very effective, but I will be watching how this develops.

Julian BondYet another warning [Technorati links]

August 25, 2014 07:39 AM
Yet another warning
http://www.theguardian.com/environment/2013/jun/30/stephen-emmott-ten-billion

"If we discovered tomorrow that there was an asteroid on a collision course with Earth and – because physics is a fairly simple science – we were able to calculate that it was going to hit Earth on 3 June 2072, and we knew that its impact was going to wipe out 70% of all life on Earth, governments worldwide would marshal the entire planet into unprecedented action. Every scientist, engineer, university and business would be enlisted: half to find a way of stopping it, the other half to find a way for our species to survive and rebuild if the first option proved unsuccessful. We are in almost precisely that situation now, except that there isn't a specific date and there isn't an asteroid."

Then there would be a large number of people who didn't expect to be around in 2072 and didn't want to give up what they currently have in the mean time. There'd be the people who denied the asteroid existed. And then there would be the 5 Bn people who didn't even know about the asteroid and were mostly focussed on getting enough to eat and drink to survive another day.

Of course a world of 4B or 2B or 1B people in 100 years might well be a more pleasant place. But nobody will talk about the process of getting from the current 7B to the peak of 10B to a sustainable 1B. Because it ain't pretty.
 Humans: the real threat to life on Earth »
If population levels continue to rise at the current rate, our grandchildren will see the Earth plunged into an unprecedented environmental crisis, argues computational scientist Stephen Emmott in this extract from his book Ten Billion

[from: Google+ Posts]

Kuppinger ColeExecutive View: IBM Security Policy Management - 70953 [Technorati links]

August 25, 2014 06:09 AM
In KuppingerCole

Some years ago IBM brought out a brilliant product in the Tivoli Security Policy Manager (TSPM), a tool to centralize policy administration for access control solutions. It was IBM’s first foray into attribute-based access control and provided a “discrete” externalized authorization tool to service multiple “relying” applications. It was released under the very successful Tivoli branding because it was part of IBM’s identity management product line...
more
August 23, 2014

Anil JohnNear Real-Time Anomaly Detection and Remediation [Technorati links]

August 23, 2014 11:30 PM

Real-time or near real-time anomaly detection and applying appropriate remediation is becoming more and more a necessity when delivering online services at scale. This blog post looks at some of the potential components associated with this type of compensating control.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

August 22, 2014

Kuppinger ColeExecutive View: CA ControlMinder - 71059 [Technorati links]

August 22, 2014 07:54 AM
In KuppingerCole

CA Technologies is a multinational publicly held software company headquartered in New York, USA. Founded in 1976 to develop and sell mainframe software, over the decades CA Technologies has grown significantly via a series of strategic acquisitions. Although it used to produce consumer software, currently CA Technologies is a major player in the B2B segment, offering a wide range of products and services for mainframe, cloud and mobile platforms in such areas as security, infrastructure...
more

August 21, 2014

Mike Jones - MicrosoftWorking Group Draft for OAuth 2.0 Act-As and On-Behalf-Of [Technorati links]

August 21, 2014 11:06 PM

There’s now an OAuth working group draft of the OAuth 2.0 Token Exchange specification, which provides Act-As and On-Behalf-Of functionality for OAuth 2.0. This functionality is deliberately modelled on the same functionality present in WS-Trust.

Here’s a summary of the two concepts in a nutshell: Act-As indicates that the requestor wants a token that contains claims about two distinct entities: the requestor and an external entity represented by the token in the act_as parameter. On-Behalf-Of indicates that the requestor wants a token that contains claims only about one entity: the external entity represented by the token in the on_behalf_of parameter.

This draft is identical to the previously announced token exchange draft, other than that it is now a working group document, rather than an individual submission.

This specification is available at:

An HTML formatted version is also available at:

Nat SakimuraThe US HHS Office of the National Coordinator for Health IT (ONC) joins the OpenID Foundation as a board member [Technorati links]

August 21, 2014 11:00 PM

The Office of the National Coordinator for Health Information Technology (ONC) of the US Department of Health and Human Services (HHS) joined the OpenID Foundation (OIDF; Chairman: Nat Sakimura) as a board-level member on August 21, local time. ONC is the principal agency in the US federal government charged with coordinating the use and implementation of the most advanced health information technology (HIT) [1] for the nationwide electronic exchange of health information.

ONC is at the forefront of the current administration's health IT efforts and is a key standards development resource for the national health system, promoting the adoption of health information technology and the NwHIN (Nationwide Health Information Network) [2], a nationwide health information sharing infrastructure initiative. Ms. Debbie Bucci will join the OpenID Foundation as ONC's representative.

ONC plans to undertake two initiatives within the OIDF: first, to lead a Healthcare Information Exchange (HIE) working group to define a profile of OpenID Connect, and second, to promote pilot projects that use it. Ms. Bucci, an IT architect in the Implementation and Testing Division who leads profiling and interoperability testing efforts at ONC, will lead the HIE WG activities.

For details, see the OIDF's English-language release [3].


[1] Health Information Technology, HIT.

[2] http://www.healthit.gov/policy-researchers-implementers/nationwide-health-information-network-nwhin

[3] OpenID Foundation: “US Government Office of the National Coordinator for Health Information Technology (ONC) Joins the OpenID Foundation”, http://openid.net/2014/08/21/us-government-office-of-the-national-coordinator-for-health-information-technology-onc-joins-the-openid-foundation/

Kantara InitiativeKantara and IEEE IoT Leaders Gather in Mountain View, CA [Technorati links]

August 21, 2014 06:45 PM

Welcome to the Kantara “IoT and Harmonization Workshop”

In a world of increasing network connectivity that interacts with more and more active and passive sensors, data is generated, managed, and consumed en masse. Industry experts will discuss findings regarding standardization of the IoT space and where possible gaps exist. Focus will include review of use cases and demos as well as implications of identity and personally identifiable information within the IoT space.

There are many initiatives in the IoT space and knowing where to go can be a challenge.   Our goal for this event is to connect broad IoT experts with Identity & IoT experts.  Kantara Initiative’s Identities of Things (IDoT) group is leading the way for the intersection of IoT and Identity. With this opportunity we will connect IEEE communities with Identity communities through our Kantara workshop. We are proud to partner with the IEEE-SA as one of the leaders in standardization of IoT.  If you’re already attending the IEEE-SA event consider this your warm up.

Space for the Kantara IoT Harmonization workshop is limited. To register for the workshop, please click here.

Why attend:

Who should attend:

September 17, Half-Day Workshop - This event will begin at 12:00pm and conclude at 5:00pm. This will be an informative and interactive discussion to kick off the IEEE Standards Association 2-day Internet of Things (IoT) Workshop, with the goal of beginning to connect the Identity and IoT communities with the IEEE IoT community.

Agenda coming soon.

OpenID.netUS Government Office of the National Coordinator for Health Information Technology (ONC) Joins the OpenID Foundation [Technorati links]

August 21, 2014 03:07 PM

The Office of the National Coordinator for Health Information Technology (ONC) located within the Office of the Secretary for the U.S. Department of Health and Human Services (HHS) has joined the OpenID Foundation (OIDF). ONC is the principal federal entity charged with coordination of nationwide efforts to implement and utilize the most advanced health information technology for the electronic exchange of health information.

ONC is at the forefront of the Administration’s Health IT efforts and is a key standards development resource to the national health system to support the adoption of health information technology and the promotion of nationwide health information exchanges. Ms. Debbie Bucci will join the Board of Directors of the OpenID Foundation as the ONC representative.

Two key initiatives the ONC plans to undertake within the OIDF are to lead a Healthcare Information Exchange (HIE) working group to create a profile of OpenID Connect, and to follow on with associated pilot projects. Ms. Bucci, an IT Architect in the Implementation and Testing Division, is helping lead a profiling and interoperability testing effort at ONC and will be one of the leaders of the HIE working group activities.

Don Thibeau, Executive Director of the OIDF, pointed out that this public sector effort parallels the increasing global adoption among large commercial enterprises. Google, Microsoft, Ping Identity, Salesforce, ForgeRock and others have embraced OpenID Connect as fundamental to their identity initiatives. Thibeau noted, “After the launch of OpenID Connect early this year, the OIDF finds itself working on one of the hardest use cases in identity – patient medical records – at the same time as working on the platform of choice: the mobile device. Working with OIDF member organizations like the ONC, GSMA and others brings important domain expertise and a user-centric focus to these OIDF working groups. These standards development activities are loosely coupled with pilots in the US, UK and Canada.”

If you are interested in the HIE working group, please consider attending the OpenID Day on RESTful Services in Healthcare at MIT on September 19th in Cambridge, MA. This event will focus on emerging Web-scale technologies as applied to health information sharing, with an emphasis on group discussion among MIT’s expert participants. The OIDF will follow its standards development process while MIT leads outreach and industry engagement. This day is part of the 2-day annual MIT KIT Conference at MIT on September 18-19. For more information on this event and to register, please visit http://kit.mit.edu/events.

Mike Jones - MicrosoftMicrosoft JWT and OpenID Connect RP libraries updated [Technorati links]

August 21, 2014 12:02 AM

This morning Microsoft released updated versions of its JSON Web Token (JWT) library and its OpenID Connect RP library as part of today’s Katana project release. See the Microsoft.Owin.Security.Jwt and Microsoft.Owin.Security.OpenIdConnect packages in the Katana project’s package list. These are .NET 4.5 code under an Apache 2.0 license.

For more background on Katana, you can see this post on Katana design principles and this post on using claims in Web applications. For more on the JWT code, see this post on the previous JWT handler release.

Thanks to Brian Campbell of Ping Identity for performing OpenID Connect interop testing with us prior to the release.

August 20, 2014

Julian BondI'm pleased to see that the 1000 minute Longplayer choral project has reached its funding target. [Technorati links]

August 20, 2014 07:54 PM
I'm pleased to see that the 1000 minute Longplayer choral project has reached its funding target.
https://www.kickstarter.com/projects/333361486/longplayer-for-voices-the-next-step

I still need to make the pilgrimage to the Longplayer installation at Trinity Buoy Wharf. Open at the weekends, 11am to 4/5pm. 
http://longplayer.org/visit/

Longplayer is a one thousand year long composition that's been running so far for 14 years 232 days 07 hours 52 minutes and 05 seconds and counting.
 Longplayer for Voices - the next step »
Help us to create Longplayer for 240 Voices, the next step in an incredible 1000-year-long musical journey.

[from: Google+ Posts]

GluuGluu’s Business Model [Technorati links]

August 20, 2014 03:45 PM

After listening to a session at SXSWV2V by Patrick van der Pijl, I was encouraged to read Business Model Generation, and to develop the business model diagram below for Gluu.

[Diagram: Gluu business model]

Nishant Kaushik - OracleWhat Ended Up On The Cutting Room Floor [Technorati links]

August 20, 2014 02:00 PM

If you managed to catch my talk at this year’s Cloud Identity Summit, either in person or via the video recording I posted (and if you haven’t, what are you waiting for?), then you know that I relied on humor to engage my audience while presenting a serious vision of how IAM needs to evolve for the better. That humor relied in large part on me visually lampooning some members of the Identerati. Now, it’s not an easy thing to do (especially when you have a subject like Jonathan), nor do such visuals always fit seamlessly into a narrative, so some of the visuals I spent a lot of time creating ended up not making it into the talk for one reason or another. I just got finished watching the ‘Deleted & Extended Scenes’ in the iTunes Extras of the excellent ‘Captain America: The Winter Soldier’ digital release, and it inspired me to share them with all of you instead of hoarding them for a future talk. So, without further ado, I present:

Pope Bob the Percipient

I was going to use this in a slide about the move from authentication to recognition, but Pam was covering a lot of that in her talk before me.

Janitor Brian

This was going to be part of a different version of the Paul Madsen slide, where Brian was cleaning up the debris of buzzwords Paul had discarded. But I couldn’t get the slide to look right.

Jona-Than Sander (alternative version)

Given how my Sith incarnation of Sander got misconstrued as being a nun version of Sander instead, maybe I should have stuck with this one.

Bonus bonus: Saint Patrick and the dragon P@$$w0rd

This wasn’t actually for my talk. I made this afterwards using a CISmcc photo of Patrick in response to this twitter conversation. But I kinda wish I’d had it for the talk. Would have been fun to use.

Nat SakimuraJIPDEC, together with six companies including Yahoo! Japan, begins offering an anti-spoofing email solution to banks [Technorati links]

August 20, 2014 01:38 AM

According to a press release [1] from the Japan Institute for Promotion of Digital Economy and Community (JIPDEC), JIPDEC has begun rolling out its "Anshin Mark" (Figure 1), designed to prevent spoofed email, to banks, jointly with six companies including Yahoo! Japan [2]. The first adopter is Joyo Bank, which has decided to adopt the Anshin Mark as a security measure for webmail use.

Figure 1: The Anshin Mark ("peace-of-mind mark")

The scheme lets recipients easily verify that a message is not spoofed, by combining DKIM [3], a digital signature technology for email, with ROBINS, the cyber corporate registry provided by JIPDEC; when the message is viewed in webmail, the Anshin Mark is displayed to indicate this [4].

The Anshin Mark service was launched at the time of the House of Councillors election last July, and has now been extended to financial institutions. As of this writing, the Anshin Mark appears on mail from the Liberal Democratic Party, the Democratic Party of Japan, JIPDEC, and Joyo Bank.

The one flaw is that, at present, it can only be checked from webmail, but it is still a first step in a reassuring direction. It would be even better if it were also offered for major mail clients, for example as plugins.

Getting a bit technical, one could also view this as ROBINS functioning as a trust framework, with this DKIM-based mechanism acting as a "metadata service" that verifies those registrations.

On the other hand, in a borderless society it would be even better if similar schemes from other countries could be integrated as well, rather than relying only on ROBINS, which is presumably limited to corporations in Japan.

In any case, this is one to watch.

* Disclosure: As of 2014, the author is an advisory committee member of JIPDEC.

[1] JIPDEC news release: "Efforts toward an email environment that can be used with confidence: rollout of the anti-spoofing Anshin Mark to banks begins"

[2] Infomania, Synergy Marketing, Tricorn, NIFTY, Piped Bits, and Yahoo! Japan

[3] For a detailed explanation of how DKIM works, see this article: "What is DKIM, the latest digital signature technology?"

[4] "Bank is first to adopt the 'Anshin Mark': preventing spoofed email with sender domain authentication" (2014/8/11)

 

August 19, 2014

Radiant LogicDiversity Training: Dealing with SQL and non-MS LDAP in a WAAD World [Technorati links]

August 19, 2014 10:20 PM

Welcome to my third post about the recently announced Windows Azure Active Directory (AKA the hilariously-acronymed “WAAD”), and how to make WAAD work with your infrastructure. In the first post, we looked at Microsoft’s entry into the IDaaS market, and in the second post we explored the issues around deploying WAAD in a Microsoft-only environment—chiefly, the fact that in order to create a flat view of a single forest to send to WAAD, you must first normalize the data contained within all those domains. (And let’s be honest, who among us has followed Microsoft’s direction to centralize this data in a global enterprise domain???)

It should come as no surprise that I proposed a solution to this scenario: using a federated identity service to build a global, normalized list of all your users. Such a service integrates all those often overlapping identities into a clean list with no duplicates, packaging them up along with all the attributes that WAAD expects (usually a subset of all the attributes within your domains). Once done, you can use DirSync to upload this carefully cleaned and crafted identity to the cloud—and whenever there’s a change to any of those underlying identities, the update is synchronized across all relevant sources and handed off to DirSync for propagation to WAAD. Such an infrastructure is flexible, extensible, and fully cloud-enabled (more on that later…). Sounds great, right? But what about environments where there are multiple forests—or even diverse data types, such as SQL and LDAP?

Bless this Mess: A Federated Solution for Cleaning Up ALL Your Identity

So far, we’ve talked about normalizing identities coming from different domains in a given forest, but the same virtualization layer that allows us to easily query and reverse-engineer existing data, then remap it to meet the needs of a new target, such as WAAD, is not limited to a single forest and its domains. This same process also allows you to reorganize many domains belonging to many different forests. In fact, this approach would be a great way to meet that elusive target of creating a global enterprise domain out of your current fragmentation.

But while you’re federating and normalizing your AD layer, why stop there? Why not extend SaaS access via WAAD to the parts of your identity that are not stored within AD? What about all those contractors, consultants, and partners stored in your aging Sun/Oracle directories? Or those identities trapped in legacy Novell or mainframe systems? And what about essential user attributes that might be captured in one of these non-AD sources?

As you can see below, all these identities and their attributes can be virtualized, transformed, integrated, then shipped off to the cloud, giving every user easy and secure access to the web and SaaS apps they need.

Creating a Global Image of all Your Identities

Credentials: Sometimes, What Happens On-Premises Should Stay On-Premises

So we’ve seen how we can get to the attributes related to identities from many different silos and turn them into a cloud-ready image. But there’s still one very important piece that we’ve left out of the picture. What about credentials? They’re always the hardest part—should we sync all those &#@$ passwords, along with every &%!?# password change, over the Internet? If you’re a sizable enterprise integrating an array of SaaS applications, that’s a recipe for security breaches and hack attacks.

But fortunately, within Microsoft’s hybrid computing strategy, we can now manage our identities on-premises, while WAAD interfaces with cloud apps and delegates the credential-checking back to the right domain in the right forest via our good friend ADFS. Plus, ADFS even automatically converts the Kerberos ticket to a SAML token (well, it’s a bit more complex than that, but that’s all you need to know for today’s story).

The bottom line here is that you’ve already given WAAD the clean list of users, as well as the information it needs to route the credential-checking back to your enterprise AD infrastructure, using ADFS. So WAAD acts as a global federated identity service, while delegating the low-level authentication back to where it can be managed best: securely inside your domains and forests. (And I’m happy to say that we’ve been preaching the gospel of on-premises credential checks for years now, so it’s great to see mighty Microsoft join the choir. ;) )

While this is very exciting, we still face the issue of all those identities not managed by Microsoft ADFS. While I explained above how a federated identity layer based on virtualization can help you normalize all your identities for use by WAAD, there’s still one missing link in the chain: how does WAAD send those identities back to their database or Sun/Oracle directory for the credential checking phase? After all, ADFS is built to talk to AD—not SQL or LDAP. Luckily, federation standards allow you to securely extend this delegation to any other trusted identity source. So if you have a non-MS source of identities in your enterprise and you can wrap them through a federation layer so they work as an IdP/secure token service, you’re in business. Extend the trust from ADFS to your non-AD subsystem through an STS and—bingo—WAAD now covers all your identity, giving your entire infrastructure secure access to the cloud.

How WAAD, ADFS, and RadiantOne CFS Work Together

We call this component “CFS” within our RadiantOne architecture, and with CFS and our VDS, you have a complete solution for living a happy, tidy, and secure life in the hybrid world newly ordained by Microsoft…(cue the choir of angels, then give us a call if you’d like to discuss how we can make this happen within your infrastructure…). :)

Thanks, as always, for reading my thoughts on these matters. And feel free to share yours in the comments below.


The post Diversity Training: Dealing with SQL and non-MS LDAP in a WAAD World appeared first on Radiant Logic, Inc

Kantara InitiativeKantara IoT Leaders Gather in Utrecht [Technorati links]

August 19, 2014 09:20 PM

Kantara Initiative leaders and innovators are set to gather in Utrecht, Netherlands, September 4th-5th.  In an event kindly hosted by SURFnet and sponsored by ForgeRock, leaders from the Kantara IDentities of Things (IDoT), User Managed Access (UMA), and Consent and Information Sharing (CIS) Open Notice groups are set to present 1.5 days of innovation harmonization.  Areas of coverage include use cases and demos that focus on the Identity layer of IoT.  Specifically, the event will address access control, notice, and consent with regard to contextual Identity systems.  Leaders will discuss these topics ranging from user-centric to enterprise and industrial access. Don’t miss this opportunity to connect with peers, partners, and competitors.

Find the draft agendas below.  Note: Agenda subject to change in this dynamic event.

Space is Limited. Register Now: Identity and Access Control – Context, Choice, and Control in the age of IoT                

 

In a world of increasing network connectivity that interacts with more and more active and passive sensors, data is generated, managed, and consumed en masse.  Industry experts will discuss findings regarding standardization of the IoT space and where possible gaps exist.  Focus will include review of use cases and demos as well as implications of identity and personally identifiable information within the IoT space.

Why attend:

Who should attend:

Day 1: Thursday September 4th

Time | Topic | Lead
13:00 | Welcome – Setting the Stage | Allan Foster, ForgeRock (President, Kantara Initiative); Joni Brennan, Executive Dir., Kantara Initiative
13:15 | UMA Use Cases and Flows (technical and non-technical) | Maciej Machulak, Cloud Identity; Mark Dobrinic
14:15 | IDoT Use Cases | Ingo Friese, Deutsche Telekom
14:45 | Break |
15:00 | Open Notice Use Cases and Flows | Mark Lizar, Smart Species
15:30 | Collection of Breakout Topics & Working Sessions | Joni Brennan, Executive Dir., Kantara Initiative; Group Participation
16:30 | Calls to Action & Thanks (Dankuwel!) | Joni Brennan, Executive Dir., Kantara Initiative; Allan Foster, ForgeRock (President, Kantara Initiative)

Day 2: Friday September 5th

Time | Topic | Lead
10:00 | Welcome – Setting the Stage | Allan Foster, ForgeRock (President, Kantara Initiative); Joni Brennan, Executive Dir., Kantara Initiative
10:15 | Kantara Mission Overview – Opportunities and Trust in the age of IoT | Joni Brennan, Executive Dir., Kantara Initiative
10:30 | UMA Presentation & Demo | Maciej Machulak, Cloud Identity; Mark Dobrinic
11:30 | UMA as an authorization mechanism for IoT | Maciej Machulak, Cloud Identity; Ingo Friese, Deutsche Telekom
12:30 | Lunch |
13:30 | Open Notice – Minimum Viable Consent Receipt | Mark Lizar, Smart Species
14:30 | Privacy in the age of IDentities of Things | Ingo Friese, Deutsche Telekom; Maciej Machulak, Cloud Identity
15:30 | Break |
15:45 | Collection of Breakout Topics & Breakout Sessions | Joni Brennan, Executive Dir., Kantara Initiative; Group Participation
16:15 | Calls to Action & Thanks (Dankuwel!) | Joni Brennan, Executive Dir., Kantara Initiative; Allan Foster, ForgeRock (President, Kantara Initiative)

KatasoftBuild a Node API Client – Part 3: Queries, Caching and Authentication [Technorati links]

August 19, 2014 03:00 PM

Build a Node API Client – Part 3: Queries, Caching and Authentication

Welcome to Part Three of our guide to Node.js REST clients. This third and final blogpost will wrap up the series with a look at topics like querying, caching, API authentication, and lessons learned the hard way.

If you haven’t already, please start with Part One, the RESTful principles important for REST clients. Or skip to Part Two on building the client’s Public API and designing a component-based architecture.

Queries

Robust querying support reduces the effort required to use your API, especially if you implemented it with familiar conventions.

For instance, say your client user needs to interact with a particular collection resource…hopefully a plausible scenario! They would probably write something along these lines:

account.getGroups(function(err,groups) {
  ... callback logic ...
});

Assuming the groups are not in cache, a request needs to be automatically sent to the server to obtain the account’s groups, for example:

GET https://api.stormpath.com/v1/accounts/a1b2c3/groups

This is a great start, but ultimately a limited one. What happens when the client user wants to specify search parameters? We need more powerful request query capabilities.

Query Parameters with Object Literals

An improvement is to accept an object literal to populate query parameters. This is a common technique in Node.js: the caller either passes an object that specifies the query parameters or skips the optional object and passes the callback function directly.

account.getGroups({
  name: 'foo*',
  description: '*test*',
  orderBy: 'name desc',
  limit: 100
}, function onResult(err, groups) {
  ...
});

This would result in the following HTTP request:

GET https://api.stormpath.com/v1/accounts/a1b2c3/groups?name=foo*&description=*test*&orderBy=name%20desc&limit=100

The client simply takes the name/value pairs on the object and translates them to URL-encoded query parameters. We accept an object literal rather than query strings because no one wants to mess with URL encoding. This is a perfect place to offload work from your client user to the client.
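
To make that translation concrete, here is a minimal sketch of how a client might turn the object literal into a URL-encoded query string. The buildQueryHref helper is a hypothetical name, and the use of Node’s built-in querystring module is an illustrative assumption, not the Stormpath SDK’s actual internals.

var querystring = require('querystring');

// Hypothetical helper: append URL-encoded query parameters to a resource href.
function buildQueryHref(href, query) {
  if (!query || Object.keys(query).length === 0) {
    return href;
  }
  // querystring.stringify handles the URL encoding (e.g. spaces become %20)
  return href + '?' + querystring.stringify(query);
}

var href = buildQueryHref('https://api.stormpath.com/v1/accounts/a1b2c3/groups', {
  name: 'foo*',
  description: '*test*',
  orderBy: 'name desc',
  limit: 100
});
// => .../groups?name=foo*&description=*test*&orderBy=name%20desc&limit=100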

Fluent API Queries

While the above approach is nice and convenient, it still requires some knowledge of the API’s specific query syntax. In the above example, to find any group whose name starts with foo, you need to know to use a wildcard-matching asterisk at the end of the search term, i.e.

account.getGroups({
  name: 'foo*', // for comparison, in SQL, this looks like
                // 'where name like foo%'
  ... etc ...

While it is easy enough to learn these syntax additions, it is even easier for client users if they have a Fluent API to help them construct queries in a fully explicit and self-documenting way. IDEs with intelligent auto-completion can even help tell you what methods are available while writing your query, helping you write queries much faster!

Consider the following example. The resulting query to the server is no different than the one above that used an object literal, but the query author now does not need to know any syntax-specific modifiers:

account.getGroups().where()
.name().startsWith("foo")
.description().contains("test")
.orderBy("name").desc()
.limitTo(100)
.execute(function onResult(err, result) {
  ... handle result ...
});

As expected, this results in the same HTTP request:

GET https://api.stormpath.com/v1/accounts/a1b2c3/groups?name=foo*&description=*test*&orderBy=name%20desc&limit=100

As you can see, the query author just chains method calls and then the client implementation constructs the relevant query parameter map and executes the request.

In addition to being easier to read and perhaps better self-documenting, this approach has some other very compelling benefits:

There is, of course, a downside to supporting a fluent API for querying: implementation effort. It definitely requires a little more time and care to develop a builder/chaining API that client users can interact with easily. However, because of the self-documenting and syntactic-checking nature, we feel fluent APIs are one of those features that really take your library to the next level and can only make users happier.

Even with this downside though, there is one side benefit: when you’re ready to add a fluent query API to your library, your implementation can build directly on top of the object literal query capability described above. This means you can build and release your library in an iterative fashion: build in the object literal query support first, ensure that works, and then release your library.

When you’re ready, you can create a fluent query implementation that just generates the same exact object literals that a client user could have specified. This means you’re building on top of something that already works – there is no need to re-write another query mechanism from scratch, saving you time.
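
As a rough illustration of that layering, here is a minimal sketch of a builder that simply accumulates the same object literal shown earlier and hands it to the existing getGroups(params, callback) support. The GroupQuery name and its simplified method set are assumptions for illustration, not the SDK’s actual fluent implementation.

// Hypothetical fluent builder that compiles down to an object literal query.
function GroupQuery(account) {
  this._account = account;
  this._params = {};
}

GroupQuery.prototype.name = function (pattern) {
  this._params.name = pattern;
  return this; // returning 'this' is what enables chaining
};

GroupQuery.prototype.description = function (pattern) {
  this._params.description = pattern;
  return this;
};

GroupQuery.prototype.orderBy = function (field, direction) {
  this._params.orderBy = field + ' ' + (direction || 'asc');
  return this;
};

GroupQuery.prototype.limitTo = function (n) {
  this._params.limit = n;
  return this;
};

GroupQuery.prototype.execute = function (callback) {
  // Delegate to the object literal query support that already works.
  this._account.getGroups(this._params, callback);
};

// Usage – produces the same request as the object literal example above:
new GroupQuery(account)
  .name('foo*')
  .description('*test*')
  .orderBy('name', 'desc')
  .limitTo(100)
  .execute(function (err, groups) {
    // handle result
  });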

Caching

Our SDK utilizes a CacheManager, a component used by the DataStore to cache results that come back from the server. While the cache manager itself is a very simple concept, caching is extremely important for performance and efficiency. This is especially true for clients that communicate with REST servers over the public Internet. Cache if you can!

The CacheManager reflects a pluggable design. Users can configure the client to use the default in-memory implementation, or they can configure (plug in) out-of-the-box support for Memcache or Redis. This is a big benefit to many production-grade apps that run on more than one web server. Additionally, the CacheManager API itself is very simple; you can also implement and plug in new ones easily if the existing three implementations do not suit your needs.

Regardless of the specific CacheManager implementation selected, the client instance can access one or more Cache objects managed by the CacheManager. Typically, each Cache object represents a single region in the overall cache memory space (or cache cluster) where data can be stored. Each cache region typically has a specific Time-To-Live and Time-To-Idle setting that applies to all data records in that cache region.

Because of this per-region capability, the Stormpath SDK stores resource instances in a region by type. All Accounts are in one region, all Groups in another, etc. This allows the client user to configure caching policies for each data type as they prefer, based on their application’s data consistency needs.
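
For example, a client could be configured with a pluggable store and per-region TTL/TTI values along these lines. The option names used here (cacheOptions, store, connection, regions) are hypothetical and shown only to illustrate the idea; check the SDK documentation for the real configuration keys.

// Hypothetical configuration shape for a pluggable, region-aware cache.
var client = new Stormpath.Client({
  cacheOptions: {
    store: 'redis',            // or 'memory' / 'memcached' – swappable without code changes
    connection: { host: '127.0.0.1', port: 6379 },
    ttl: 300,                  // default time-to-live, in seconds
    tti: 300,                  // default time-to-idle, in seconds
    regions: {
      accounts: { ttl: 60,  tti: 60 },  // tighter consistency for account data
      groups:   { ttl: 600, tti: 600 }  // group data can tolerate being staler
    }
  }
});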

So, how does this work internally in the Client?

Because of RESTful philosophies (covered in Part One of this blog series), every resource should have a globally unique canonical HREF that identifies it among all others. Because of this canonical and unique nature, a resource’s HREF is a perfect candidate for a cache key. Cache entries under an HREF key will never collide, so we’ll use the HREF as the cache key.

This allows a client DataStore to obtain a cached version of a resource before sending an HTTP request to the REST API server. More importantly, the DataStore should be able to get an element out of the cache by passing in a resource HREF. In the example below, the callback function receives either an error or the raw object stored in the cache.

var cache = cacheManager.getCache(regionName);

cache.ttl //time to live
cache.tti //time to idle
cache.get(href, function(err, obj) {
  ...
});

At Stormpath, we only store name/value pairs in the cache and nothing else. The data in our cache is the same stuff sent to and from the server.

client.getAccount(href, function(err, acct) {...});

// in the DataStore:
var cache = cacheManager.getCache('accounts');

cache.get(href, function(err, entry) {
  if (err) return callback(err);
  if (entry) {
    ... omitted for brevity ...
    return callback(entry.value);
  }

  //otherwise, cache miss – execute a request:
  requestExecutor.get(href, function(err, body) {
    //1. cache body
    //2. convert to Resource instance
    //3. invoke callback w/ instance
  });
});

When getAccount is called, the client first interacts with cacheManager to get the cache region and then requests the cached object. If the object is found, it executes the callback. If the object isn’t found, it executes a request to the server using requestExecutor.

The fallback logic is fairly straightforward: Check the cache before issuing a request to the server. The beautiful thing is that your client users don’t have to change any of their code – they can just navigate the object tree regardless of whether the data is held in cache.

Recursive Caching

Recursive caching is, in a word, important.

When you request a resource from the server, you can use something called reference expansion or link expansion to not only obtain the desired resource, but any of its linked/referenced resources as well.

This means any expanded response JSON representation can be an object graph: the requested resource plus any other nested JSON objects for any linked resources. Each one of these resources needs to be cached independently (again, keyed based on each resource’s href).

Recursive caching is, then, basically walking this resource object graph, and for each resource found, caching that resource by its href key. Our implementation of this ‘graph walk’ utilizes recursion because it was simpler for us to implement and it’s fairly elegant. This is why it is called recursive caching. If we implemented it via iteration instead, I suppose we would have called it iterative caching.
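
A minimal sketch of that graph walk follows. The shape of the expanded JSON (nested objects carrying their own href) comes from the RESTful conventions covered earlier, but the function itself, the region.put method, and the regionNameFor helper are illustrative assumptions rather than the SDK’s actual internals.

// Hypothetical helper: derive a cache region name from a canonical href,
// e.g. '.../v1/accounts/a1b2c3' -> 'accounts'.
function regionNameFor(resource) {
  var parts = resource.href.split('/');
  return parts[parts.length - 2];
}

// Hypothetical recursive caching of an expanded resource graph, keyed by href.
function cacheResourceGraph(cacheManager, resource, callback) {
  if (!resource || !resource.href) {
    return callback();
  }

  var flat = {};   // the name/value pairs to store for this resource
  var nested = []; // expanded linked resources found during the walk

  Object.keys(resource).forEach(function (key) {
    var value = resource[key];
    if (value && typeof value === 'object' && value.href) {
      nested.push(value);               // linked resource: cache it separately
      flat[key] = { href: value.href }; // store only the link, not the whole object
    } else {
      flat[key] = value;
    }
  });

  var region = cacheManager.getCache(regionNameFor(resource));
  region.put(resource.href, flat, function (err) {
    if (err) return callback(err);

    // Walk each nested resource in turn (simple sequential recursion).
    (function next(i) {
      if (i >= nested.length) return callback();
      cacheResourceGraph(cacheManager, nested[i], function (err) {
        if (err) return callback(err);
        next(i + 1);
      });
    })(0);
  });
}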

Authentication

Secure authentication is near and dear to our hearts at Stormpath, which is why we recommend defaulting API authentication in your client to a digest-based scheme instead of HTTP basic. Although basic authentication is the simplest form of HTTP authentication, it is probably the one with the most security risks:

Because of these perils, we advocate supporting HTTP Basic authentication only if you have no other choice; otherwise, use what is known as a digest-based authentication scheme. While digest authentication schemes are out of scope for this blogpost, OAuth 1.0a and Stormpath’s sauthc algorithm are good, very secure examples.

That being said, clients should offer basic authentication as an optional strategy for environments where digest schemes are incompatible. For instance, Google App Engine manipulates HTTP request object headers before sending requests out on the wire – the exact behavior that digest algorithms protect against. Our original clients didn’t work on GAE for this reason until we implemented optional basic auth.

Note: Basic authentication should only ever be used over TLS. It’s never okay to use basic without TLS; it’s simply too easy to recover the raw password value.

Because of the need for this occasional customization, clients can be configured to specify an alternative authentication scheme as a constant. For example, if a client user needed to use basic authentication:

var client = new Stormpath.Client({
  authcScheme: 'basic' //defaults to 'sauthc1' otherwise
});

If they don’t specify a config value, default to the most secure option your API server supports.

Plugins

A well-defined API permits the client to support plugins/extensions even without type safety or interfaces. Duck typing helps too.

For example, Stormpath’s RequestExecutor relies on the Request.js module (as do many other Node.js applications). However, if anyone wanted to modify our client to use a different HTTP request library, they could implement an object with the same functions and signatures as the RequestExecutor API and just plug it into the client configuration.

This flexibility becomes important as your client supports a broader variety of environments and applications.
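
As a sketch of what such a duck-typed swap could look like, the object below exposes a get function with the same (href, callback) shape used in the caching example above, built on Node’s core https module instead of the request library. The requestExecutor configuration key and the exact signature are assumptions for illustration only.

var https = require('https');

// Hypothetical drop-in replacement for the default request executor.
var customRequestExecutor = {
  get: function (href, callback) {
    https.get(href, function (res) {
      var chunks = [];
      res.on('data', function (chunk) { chunks.push(chunk); });
      res.on('end', function () {
        callback(null, Buffer.concat(chunks).toString('utf8'));
      });
    }).on('error', callback);
  }
};

// Plug it into the client configuration (key name assumed for illustration):
var client = new Stormpath.Client({
  requestExecutor: customRequestExecutor
});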

Promises and Async.js

Callback Hell: the bane of Node.js library maintainers and end users alike. Luckily, there are a couple of good options to keep your Node.js code readable. For instance, Promises promote traceability even in async environments executing concurrently.

var promise = account.getGroups();

promise.then(function() {
  //called on success
}, function() {
  //called on error
}, function() {
  //called during progress
});

However, excessive use of Promises has been linked to degraded application performance – use them in a calculated fashion. Since Promises are a fairly new design approach in Node.js, most Node.js applications today use callbacks instead. If you want to stick with the dominant approach, and still avoid highly-nested functions, take a look at the fantastic async.js module.

Check out this waterfall-style control flow!

async.waterfall([
  function(callback) {
    callback(null, 'one', 'two');
  },
  function(arg1, arg2, callback) {
    // arg1 now equals 'one' and arg2 now equals 'two'
    callback(null, 'three');
  },
  function(arg1, callback) {
    // arg1 now equals 'three'
    callback(null, 'done');
  }], 
  function (err, result) {
  // result now equals 'done'
  }
);

This code is readable – it looks like imperative-style programming code, but because async.js is managing invocation, the code is still asynchronous and conforms to Node.js performance best practices.

As we mentioned previously in Part One, all of the Stormpath SDK Collections inherit async.js iterator functions so you can use collections in the same way. Convenient!
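
For instance, iterating a collection with an async.js-style each might look like the sketch below; the exact iterator method name and callback signature are assumptions here, so consult the SDK documentation for the real API.

// Hypothetical async.js-style iteration over a collection resource.
account.getGroups(function (err, groups) {
  if (err) return console.error(err);

  groups.each(function (group, next) {
    console.log('group:', group.name);
    next(); // signal that this item has been processed
  }, function (err) {
    if (err) return console.error(err);
    console.log('finished iterating all groups');
  });
});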

Stormpath Node.js Client

The Stormpath Node.js client is Apache licensed and 100% open source. Now that you’ve gotten an idea for how we built ours, try cloning it for more real-world examples.

$ git clone https://github.com/stormpath/stormpath-sdk-node.git

$ cd stormpath-sdk-node
$ npm install
$ grunt

API Management With Stormpath

Stormpath makes it easy to manage your API keys and authenticate developers to your API service. Learn more in our API Key Management Guide and try it for free!

Happy coding!

August 18, 2014

Julian BondHere's the next foodie quest. Who makes the best Chai Tea Bags? [Technorati links]

August 18, 2014 03:12 PM
Here's the next foodie quest. Who makes the best Chai Tea Bags?

Teapigs. Both the Chai and Chilli Chai are excellent. But I seriously baulk at £4 for 15 bags. I mean, WTF?

Natco Masala. A good spicy tea with a bit of bite. But there's a lot of pepper in there and the bags are quite low quality so you get a lot of dust. Hard to get except in the two big supermarkets at the bottom of Brick Lane. Luckily they do some big packs so you don't need to buy them too often.

Palanquin spiced tea. ISTR these are ok, although I haven't had any for a while. Seem to be quite widely available in Asian corner shops.

Twinings, Tesco, Sainsburys. These are all just a bit tasteless. Not nearly enough cardamom, clove, coriander and so on. Chai really should be at least as strong as Yorkshire builder's tea with the added flavours of the spices.

Anyone tried Wagh Bakri Masala Chai?
[from: Google+ Posts]

Julian BondThings I've learned about my Aeropress [Technorati links]

August 18, 2014 10:18 AM
Things I've learned about my Aeropress
http://aerobie.com/products/aeropress.htm

Ignore all the obsessing about using it upside down, pre-watering the filter and so on. Only 3 things matter: the quality of the coffee, the temperature of the water, and emptying it as soon after use as possible so the rubber bung doesn't harden and lose its seal.

Coffee.
I like a good strong Italian style taste without it being too aggressive.

- Mainstream. Tescos Italian Blend, Lavazza Black, Carte Noire. These are all perfectly serviceable, easily available, every day, fine filter or expresso grinds that just work and are predictable.

- Algerian Coffee Shop, Soho, London at http://www.algcoffee.co.uk/

"Formula Rossa" their main blend that they use for the take away coffee they serve in the shop. Straight forwards and recommended. Ideal for an Americano 
"Cafe Torino" For a stronger Expresso/Ristretto cup, try this one. It's a bit more aggressive than the Formula Rossa.
"Velluto Nero" After Dinner Expresso. Gorgeous but too much for every day drinking.

A note about grinds. I find a straight expresso grind works best. In the Algerian Coffee shop that's a "4" on their machine. Finer than filter or french press, but not so fine that you get finings and dust in the bottom of the cup.

Water Temperature.
After the choice of coffee this is the single biggest factor in the quality of the end product. You need to aim for 80-85C. This is tricky without spending huge amounts on clever kettles or messing around with thermometers. Any higher than that and you'll "burn" the grounds and make the coffee more bitter. The simple trick is to boil about 750ml of water (1/2 a kettle?) and then wait 30-60 seconds after the kettle turns itself off. So don't start assembling the Aeropress, coffee, filter, mug and so on until the kettle has boiled and by the time you're ready to pour in the water, 60 secs will have gone by and you'll be about right. 

Spares.
http://www.hasbean.co.uk
http://ablebrewing.com
Rubber bungs, filter caps, filters, stainless filter disks, tote bags, etc, etc. The stainless filters didn't really work for me. The paper filters are cheap and easier and just work. There's a rubber travel cap but it's a bit inconvenient and only really works for storing a few days supplies of filters in the plunger. 

Cleaning.
Just empty the Aeropress immediately in the bin and wipe the base of the rubber bung under the tap. Then store it either in two pieces or with the piston all the way through so the bung isn't under pressure. Otherwise the bung will eventually take a set and won't seal any more. It's pretty much self cleaning so just a quick rinse is all that's needed.

Recipes.
Don't bother with all the complication. Don't worry about pressing air though the grounds. Don't bother with the upside down method. If your cup is too small to fit the aeropress in the top, use the hexagonal funnel.

White Americano or filter coffee.
This is the typical every day mug of coffee.  Put on your 750ml (ish) of water in the kettle. When it boils get the mug, aeropress and stuff out of the cupboard. Assemble the paper filter and cap and set it on the mug. Add a 15ml scoop of grounds. Fill slowly with hot water to the 3 mark. Give it a quick swirl with a spoon to settle the grounds. Wait till it drips so the surface is down to the 2 mark, say 20 seconds. Insert the plunger and press gently down till the grounds are squashed. Add a splash of milk. Empty the aeropress and wipe. Done! Enjoy! 

Double expresso.
As above but 30ml of coffee grounds which is the scoop that comes with the Aeropress. Fill with water to the 2 mark. Let it drop to the 1 mark and press.

Thermos.
I have a stubby 15fl oz, 400ml thermos which holds about 2 mugs worth. 30ml or 45ml of grounds, fill to the 4 mark. Press when it drops to 3. Add milk till it's the right colour. Top up with boiling water.

Improvements.
I struggle to think of any! I think there's potentially a redesign that makes it easier to travel with the kit and a week's supply of filters and coffee. Perhaps the cap could screw onto the other end of the plunger.

Just occasionally the seal doesn't quite work between the main cylinder and the cap. I'm not quite sure where it leaks from but it can lead to dribbles down the side of the mug.

Anyway. If you haven't tried one and you like coffee then get an Aeropress. for making one or two cups of coffee it's way better than Cafetieres, Mocha stove machines, drip filters and so on. And it's considerably cheaper and easier than expresso machines. And even if the pod machines are convenient, they're just WRONG. The old school filter coffee machines still work best for 4 mugs and upwards.

So I really don't think there's anything better for small quantities.
[from: Google+ Posts]

Kuppinger ColeExecutive View: CyberArk Privileged Threat Analytics - 70859 [Technorati links]

August 18, 2014 08:51 AM
In KuppingerCole

In some form, Privilege Management (PxM) already existed in early mainframe environments: those early multi-user systems included some means to audit and control administrative and shared accounts. Still, until relatively recently, those technologies were mostly unknown outside of IT departments. However, the ongoing trends in the IT industry have gradually shifted the focus of information security from perimeter protection towards defense against...
more

Kuppinger ColeExecutive View: Oracle Audit Vault and Database Firewall - 70890 [Technorati links]

August 18, 2014 08:36 AM
In KuppingerCole

Oracle Audit Vault and Database Firewall monitors Oracle databases and databases from other vendors. It can detect and block threats to databases while consolidating audit data from the database firewall component and the databases themselves. It also collects audit data from other sources such as operating system log files, application logs, etc...


more

Kaliya Hamlin - Identity WomanBC Identity Citizen Consultation Results!!!! [Technorati links]

August 18, 2014 04:22 AM

As many of you know, I (along with many other industry leaders from different industry/civil society segments) was proactively invited to be part of the NSTIC process, including submitting a response to the notice of inquiry about how the IDESG and Identity Ecosystem should be governed.

I advocated, and continue to advocate, that citizen involvement and broad engagement from a wide variety of citizen groups and perspectives is essential for it to work. The process itself needs its own legitimacy: even if “experts” would have come to “the same decisions”, if citizens are not involved, the broad rainbow that is America might not accept the results.

I have co-led the Internet Identity Workshop since 2005, held every 6 months in Mountain View, California at the Computer History Museum. It is an international event, and folks from Canada working on similar challenges have been attending for several years; these include Aran Hamilton from the nationally oriented Digital ID and Authentication Council (DIAC) and several of the leaders of the British Columbia Citizen Services Card effort.

I worked with Aran Hamilton, helping him put on the first Identity North conference to bring key leaders together from a range of industries to build shared understanding about what identity is and how systems around the world are working, along with exploring what to do in Canada.

The government of British Columbia (the Canadian province where I grew up) worked on a citizen services card for many years. They developed an amazing system that is triple blind. An article about the system was recently run in RE:ID. The system launched with 2 services – driver's license and health services card. The designers of the system knew it could be used for more than just these two services, but they also knew that citizen input into those policy decisions was essential to build citizen confidence or trust in the system. The other article in the RE:ID magazine was by me, about the citizen engagement process they developed.

They developed extensive system diagrams to help explain to regular citizens how it works. (My hope is that the IDESG and the NSTIC effort broadly can make diagrams this clear.)

 

The government created a citizen engagement plan with three parts:

The first was convening experts. They did this in partnership with Aran Hamilton and Mike Monteith from Identity North – as the co-designer and primary facilitator of the first Identity North, I was brought in to work on this. They had an extensive note-taking team and they reported on all the sessions in a book of proceedings. They spell my name 3 different ways in the report.

The most important part was a citizen panel of randomly selected citizens, convened to engage deeply with the key policy decisions moving forward. It also helped the government understand how to explain key aspects of how the system actually works. I wrote an article for RE:ID about the process; you can see that here.
The results had not been released when I wrote that. Now they are – yay! The report is worth reading because it shows that regular citizens, given the task of considering critical issues, can come up with answers that make sense and help government work better.

 

 

They also did an online survey, open for a month, so that any citizen of the province could give their opinion. You can see that here.

All of these results were then woven together into a collective report.

 

Bonus material: This is a presentation that I just found covering many of the different Canadian provincial initiatives.

 

PS: I’m away in BC this coming week – sans computer.  I am at Hollyhock…the conference center where I am the poster child (yes literally). If you want to be in touch this week please connect with William Dyson my partner at The Leola Group.

August 17, 2014

Anil JohnThe Missing Link Between Tokens and Identity [Technorati links]

August 17, 2014 06:20 PM

Component identity services, where specialists deliver services based on their expertise, are a reality in the current marketplace. At the same time, the current conversations on this topic seem to focus on the technical bits-n-bytes and not on responsibilities. This blog post is an attempt to take a step back and look at this topic through the lens of accountability.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

August 16, 2014

Nat SakimuraGovernment to Set Up a My Number Call Center, Targeting October [Technorati links]

August 16, 2014 09:36 PM

The government will, targeting October, set up within the Cabinet Office a call center for the “My Number” system, which manages social security benefits and tax payments under a single personal number. Ahead of the system’s start in January 2016, the call center will handle inquiries from companies and individuals and work to raise awareness of the system.

Source: マイナンバーでコールセンター 政府、10月メド : Nihon Keizai Shimbun (日本経済新聞).

This appears to be part of the public-awareness campaign starting in October, prompted by the view that general understanding of the My Number system is still insufficient. Awareness efforts have so far been made through documents, seminars, and a website [1], but by also accepting inquiries by voice, the government presumably intends to reach audiences it has not been able to reach until now.

The website does have an FAQ, but it is easy to imagine wanting to ask a question when the FAQ does not make things clear. In such cases, being able to inquire by phone, email, or a contact form is very helpful. Overseas websites these days often use their call centers to answer questions via chat as well; it would be even better if that kind of support became available too. I have high hopes for further expansion of the service.

[1] Cabinet Secretariat: “Social Security and Tax Number System” http://www.cas.go.jp/jp/seisaku/bangoseido/

Rabbit

The illustration is unrelated to the article.

Julian BondA map of the introvert's heart. [Technorati links]

August 16, 2014 05:31 PM
A map of the introvert's heart.
http://boingboing.net/2014/08/15/a-map-of-the-introverts-hea.html

It's missing a ship that visits the island occasionally, but doesn't stay for long; "The Valley of Longing for Company".
 A Map of the Introvert’s Heart By an Introvert »

We missed this wonderful illustration when it hit the internet last month, but how timeless is Gemma Correll's map of an introvert's heart?

More cool stuff in Medium's "I Love Charts" archives.

[from: Google+ Posts]
August 15, 2014

Mike Jones - MicrosoftThe Increasing Importance of Proof-of-Possession to the Web [Technorati links]

August 15, 2014 12:40 AM

W3C  logoMy submission to the W3C Workshop on Authentication, Hardware Tokens and Beyond was accepted for presentation. I’ll be discussing The Increasing Importance of Proof-of-Possession to the Web. The abstract of my position paper is:

A number of different initiatives and organizations are now defining new ways to use proof-of-possession in several kinds of Web protocols. These range from cookies that can’t be stolen and reused, identity assertions only usable by a particular party, password-less login, to proof of eligibility to participate. While each of these developments is important in isolation, the pattern of all of them concurrently emerging now demonstrates the increasing importance of proof-of-possession to the Web.

It should be a quick and hopefully worthwhile read. I’m looking forward to discussing it with many of you at the workshop!

August 14, 2014

KatasoftBuild a Node API Client - Part 2: Encapsulation, Resources, & Architecture [Technorati links]

August 14, 2014 03:00 PM

Build a Node API Client – Part 2: Encapsulation, Resources, & Architecture... oh my!

Welcome to Part Two of our series on Node.js Client Libraries. This post serves as our guide to REST client design and architecture. Be sure to check out Part One on Need-To-Know RESTful Concepts before reading on.

API Encapsulation

Before sinking our teeth into resources and architecture, let’s talk about encapsulation. At Stormpath, we like to clearly separate the public and private portions of our API client libraries, aka ‘SDKs’ (Software Development Kits).

All private functionality is intentionally encapsulated, or hidden from the library user. This allows the project maintainers to make frequent changes, like bug fixes and design and performance enhancements, without impacting users. This leads to a much greater level of maintainability, allowing the team to deliver better quality software, faster, to our user community. And of course, an easier-to-maintain client results in less friction during software upgrades, and users stay happy.

To achieve this, your Node.js client should only expose users to the public version of your API and never the private, internal implementation. If you’re coming from a more traditional Object Oriented world, you can think of the public API as behavior interfaces. Concrete implementations of those interfaces are encapsulated in the private API. In Node.js too, public functions and their inputs and outputs should rarely change; otherwise you risk breaking backwards compatibility.

Encapsulation creates a lot of flexibility to make changes in the underlying implementation. That said, semantic versioning is still required to keep your users informed of how updates to the public API will affect their own code. Most developers will already be familiar with semantic versioning, so it’s an easy usability win.

Encapsulation In Practice

We ensure encapsulation primarily with two techniques: Node.js module.exports and the ‘underscore prefix’ convention.

module.exports

Node.js gives you the ability to expose only what you want via its module.exports capability: any object or function attached to a module’s module.exports object will be available to anyone that requires the module; everything else stays hidden.

This is a big benefit of the Node.js ecosystem and helps achieve encapsulation goals far better than traditional in-browser JavaScript environments.
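
As a minimal, hypothetical sketch of this idea (the module and function names below are invented for illustration, not Stormpath internals), a module can keep its helpers private and expose only the public constructor:

// account.js - hypothetical module illustrating module.exports encapsulation

// Private helper: not exported, so code that calls require('./account')
// cannot reach it directly.
function normalizeEmail(email) {
  return String(email).trim().toLowerCase();
}

// Public constructor function.
function Account(data) {
  data = data || {};
  this.email = normalizeEmail(data.email);
  this.givenName = data.givenName;
}

// Only Account is attached to module.exports, so it is the entire
// public surface of this module.
module.exports = Account;

A consumer then only ever sees the constructor via var Account = require('./account'); everything else stays internal and free to change.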

Underscore Names

Additionally we use the ‘underscore prefix’ convention for objects or functions that are considered private by the development team but still accessible at runtime because of JavaScript’s weak encapsulation behavior. That is, any object or function that starts with the underscore _ character is considered private and its state or behavior can change, without warning or documentation, on any given release.

The takeaway is that external developers should never explicitly code against anything that has a name that starts with an underscore. If they see a name that starts with an underscore, it’s simply ‘hands off’.

Alternatively, other libraries use @public and @private annotations in their JSDoc as a way of indicating what is public/allowed vs. private/disallowed. However, we strongly prefer the underscore convention, because anyone reading or writing code without immediate access to the documentation can still see what is public vs. private. For example, when browsing code on GitHub or in Gists, the documentation annotations often aren’t visible, but you can still always tell that underscore-prefixed methods are to be considered private.

Either way, you need to consistently convey which functions to use and which to leave alone. You may want to omit the private API from publicly hosted docs to prevent confusion.
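
To make the underscore convention concrete, here is a small hypothetical sketch (not Stormpath's actual code): the underscore-prefixed function is still reachable at runtime, but by convention it is off-limits and free to change in any release.

function Client(config) {
  this.config = config || {};
}

// Public API: stable, documented, safe for users to call.
Client.prototype.getAccount = function getAccount(href, callback) {
  this._request('GET', href, callback);
};

// 'Private' API: the leading underscore signals that this function's
// behavior or signature may change in any release without warning.
Client.prototype._request = function _request(method, href, callback) {
  // ... build and execute the HTTP request here ...
  callback(null, { method: method, href: href });
};

module.exports = Client;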

Public API

The public API consists of all non-private functions, variables, classes, and builder/factory functions.

This may be surprising to some, but object literals used as part of configuration are also part of the public API. Think of it like this: if you tell people to use a function that requires an object literal, you are making a contract with them about what you support. It’s better to just maintain backwards and forwards compatibility with any changes to these object literals whenever possible.
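
As a hedged illustration of that contract (the option names here are invented), reading options defensively, falling back to defaults, and tolerating unknown keys helps keep both older and newer option objects working across releases:

// Hypothetical factory whose options object literal is part of the public API.
function createClient(options) {
  options = options || {};
  return {
    // Accept both the old and the new name for the same option so a
    // rename does not break existing callers.
    apiKey: options.apiKey || options.key,
    // Unknown keys are simply ignored; supplying extra options from a
    // newer version of the docs does not cause an error.
    timeout: typeof options.timeout === 'number' ? options.timeout : 30000
  };
}

var client = createClient({ apiKey: 'abc123', timeout: 5000 });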

Prototypical OO Classes

We use prototypical inheritance and constructor functions throughout the client, but the design reflects a more traditional OO style. We’ve found this makes sense to most of our customers of all skill/experience levels.

Stormpath is a User Management API, so our classes represent common user objects like Account, in addition to more generic classes, like ApiKey. A few of these classes, such as Application, Account, and Directory, appear as examples throughout this post.

Builder Functions

Node.js and other APIs often use method chaining syntax to produce a more readable experience. You may have also heard of this referred to as a Fluent Interface.

In our client, it’s possible to perform any API operation using a client instance. For example, getApplications obtains all Applications by using the client and method chaining:

client.getApplications()
.where(name).startsWith('foo')
.orderBy(name).asc()
.limit(10)
.execute(function (err, apps){
  ...
});

There are two important things to note from this getApplications example:

  1. Query construction with where, startsWith and orderBy functions is synchronous. These are extremely lightweight functions that merely set a variable, so there is no I/O overhead and as such, do not need to be asynchronous.
  2. The execute function at the end is asynchronous and actually does the work and real I/O behavior. This is always asynchronous to comply with Node.js performance best practices.

Did you notice getApplications does not actually return an applications list but instead returns a builder object?

A consistent convention we’ve added to our client library is that get* methods will either make an asynchronous call or they will return a builder that is used to make an asynchronous call.

We also support direct field access, like client.foo; this is a normal property lookup on the object, and no server request is made.

Calling a getter function, by contrast, does something more substantial. Both approaches retain familiar dot notation to access properties, but the convention creates a clear distinction between asynchronous behavior and simple property access, so the library user knows clearly what to expect in all cases.

Writing code this way helps with readability too: code becomes simpler and more succinct, and you always know what is going on.
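
Here is a rough sketch of how such a builder could be structured (illustrative only, not Stormpath's actual implementation): each chainable method just records state synchronously and returns this, and only execute performs asynchronous I/O.

// Hypothetical query builder returned by a get* method.
function QueryBuilder(executeFn) {
  this._criteria = {};         // accumulated query parameters
  this._executeFn = executeFn; // async function(criteria, callback)
}

// Chainable, synchronous methods: they only record state.
QueryBuilder.prototype.where = function (field) {
  this._field = field; // remembered for the next startsWith() call
  return this;
};

QueryBuilder.prototype.startsWith = function (value) {
  this._criteria[this._field] = value + '*';
  return this;
};

QueryBuilder.prototype.limit = function (n) {
  this._criteria.limit = n;
  return this;
};

// The only asynchronous step: hand the accumulated criteria to the
// function that actually performs the request.
QueryBuilder.prototype.execute = function (callback) {
  this._executeFn(this._criteria, callback);
};

A call like new QueryBuilder(doRequest).where('name').startsWith('foo').limit(10).execute(cb) then builds the criteria object synchronously and performs exactly one asynchronous request.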

Base Resource Implementation

The base resource class has four primary responsibilities:

  1. Property manipulation methods – Methods (functions) with complicated interactions
  2. Dirty Checking – Determines whether properties have changed or not
  3. Reference to DataStore – All our resource implementations hold a reference to an internal DataStore object (we’ll cover this soon)
  4. Lazy Loading – Loads linked resources

Resource and all of its subclasses are actually lightweight proxies around a DataStore instance, which is why the constructor function below takes two inputs:

  1. data (an object of name/value pairs)
  2. A DataStore object.

var utils = require('utils');

function Resource(data, dataStore) {

  var DataStore = require('../ds/DataStore');

  if (!dataStore && data instanceof DataStore) {
    dataStore = data;
    data = null;
  }

  data = data || {};

  for (var key in data) {
    if (data.hasOwnProperty(key)) {
      this[key] = data[key];
    }
  }

  var ds = null; //private var, not enumerable
  Object.defineProperty(this, 'dataStore', {
    get: function getDataStore() {
      return ds;
    },
    set: function setDataStore(dataStore) {
      ds = dataStore;
    }
  });

  if (dataStore) {
    this.dataStore = dataStore;
  }
}
utils.inherits(Resource, Object);

module.exports = Resource;
    

When CRUD operations are performed against these resource classes, they just delegate the work to the backend DataStore. As the DataStore is a crucial component of the private API, we keep it hidden using Object.defineProperty-based private property semantics. You can see this in practice with the public getter and setter around the private ds variable above. This is one of the few ways to implement proper encapsulation in JavaScript.

If you remember to do just two things when implementing base resource classes, let them be:

  1. Copy properties over one-to-one
  2. Create a reference to a DataStore object to use later

Base Instance Resource Implementation

InstanceResource is a subclass of Resource. The base instance resource class prototypically defines functions such as save and delete, making them available to every concrete instance resource.

Note that the saveResource and deleteResource functions delegate work to the DataStore.

var utils = require('utils');
var Resource = require('./Resource');

function InstanceResource() {
  InstanceResource.super_.apply(this, arguments);
}
utils.inherits(InstanceResource, Resource);

InstanceResource.prototype.save = function saveResource(callback) {
  this.dataStore.saveResource(this, callback);
};

InstanceResource.prototype.delete = function deleteResource(callback) {
  this.dataStore.deleteResource(this, callback);
};

In traditional object-oriented programming, the base instance resource class would be an abstract class. It isn’t meant to be instantiated directly, but instead should be used to create concrete instance resources like Application:

var utils = require('utils');
var InstanceResource = require('./InstanceResource');

function Application() {
  Application.super_.apply(this, arguments);
}
utils.inherits(Application, InstanceResource);

Application.prototype.getAccounts = function 
getApplicationAccounts(/* [options,] callback */) {
  var self = this;
  var args = Array.prototype.slice.call(arguments);
  var callback = args.pop();
  var options = (args.length > 0) ? args.shift() : null;
  return self.dataStore.getResource(self.accounts.href, options, 
                                    require('./Account'), callback);
};

How do you support variable arguments in a language with no native support for function overloading? If you look at the getAccounts function on Applications, you’ll see we’re inspecting the argument stack as it comes into the function.

The comment notation indicates what the signature could be and brackets represent optional arguments. These signal to the client’s maintainer(s) (the dev team) what the arguments are supposed to represent. It’s a handy documentation syntax that makes things clearer.

...
Application.prototype.getAccounts = function 
getApplicationAccounts(/* [options,] callback */) {
  ...
}
...

options is an object literal of name/value pairs and callback is the function to be invoked. The client ultimately directs the work to the DataStore by passing in an href. The DataStore uses the href to know which resource it’s interacting with server-side.

Usage Paradigm

Let’s take a quick look at an example JSON resource returned by Stormpath:

{
  "href": "https://api.stormpath.com/v1/accounts/x7y8z9",
  "givenName": "Tony",
  "surname": "Stark",
  ...,
  "directory": {
    "href": "https://api.stormpath.com/v1/directories/g4h5i6"
  }
}

Every resource, everywhere, has an href field in its JSON representation. The JSON is exposed as data via the resource and can be referenced via standard dot notation like any other JavaScript object.

Note: Check out this blog post on linking and resource expansion if you’re wondering how we handle linking in JSON.

Proxy Pattern

Applications using a client will often have an href for one concrete resource and need access to many others. In this case, the client should support a method (e.g. getAccount) that takes in the href they have, to obtain the ones they need.

var href = 'https://api.stormpath.com/v1/...etc...';

client.getAccount(href, function(err, account) {
  if (err) throw err;

  account.getDirectory(function(err, dir) {
    if (err) throw err;
    console.log(dir);
  });
});

In the above code sample, getAccount returns the corresponding Account instance, and the account can then be used immediately to obtain its parent Directory object. Notice that you did not have to use the client again!

The reason this works is that the Account instance is not a simple object literal. It is instead a proxy that wraps a set of data and the underlying DataStore instance. Whenever it needs to do something more complicated than direct property access, it can automatically delegate work to the DataStore to do the heavy lifting.

This proxy pattern is popular because it enables many benefits, such as programmatic interaction between linked references and resources. In fact, you can traverse the entire object graph with just the initial href! That’s awfully close to HATEOAS! And it dramatically reduces boilerplate in your code by removing the need to go through the client for every interaction.

SDK architecture diagram

So how does this work under the hood? When your code calls account.getDirectory, the underlying (wrapped) DataStore performs a series of operations (a sketch of such a method follows the list):

  1. Create the HTTP request
  2. Execute the request
  3. Receive a response
  4. Marshal the data into an object
  5. Instantiate the resource
  6. Return it to the caller
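
A compressed, hypothetical sketch of such a method (the component names mirror the ones used in this post, but the real implementation does more, e.g. the caching described below):

function DataStore(requestExecutor) {
  this.requestExecutor = requestExecutor;
}

// Hypothetical getResource walking through the steps listed above.
DataStore.prototype.getResource = function (href, options, Ctor, callback) {
  var self = this;

  // 1. Create the HTTP request description.
  var request = { method: 'GET', url: href, query: options || {} };

  // 2-3. Execute the request and receive a response.
  self.requestExecutor.execute(request, function (err, body) {
    if (err) { return callback(err); }

    // 4-5. Marshal the raw data into an instance resource that wraps
    // both the data and this DataStore, so the new resource can
    // delegate future work back to it.
    var resource = new Ctor(body, self);

    // 6. Return it to the caller.
    callback(null, resource);
  });
};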

Client Component Architecture

Clearly, the DataStore does most of the heavy lifting for the client. There’s actually a really good reason for this model: future enhancements.

Your client will potentially handle a lot of complexity that is simpler, in the long run, to decouple from the resource implementations. Because the DataStore is part of the private API, we can use it to plug in new functionality and add new features without changing the public API at all. Users of the client immediately see the benefits.

DataStore

Here is a really good example of this point. The first release of our SDK Client did not have caching built in. Any time a Stormpath-backed app called getAccount, getDirectory, or any number of other methods, the client always had to execute an HTTP request to our servers. This obviously introduced latency to the application and incurred an unnecessary bandwidth hit.

However our DataStore-centric component architecture allowed us to go back in and plug in a cache manager. The instant this was enabled, caching became a new feature available to everyone and no one had to change their source code. That’s huge.

Anyway, let’s walk through the sequence of steps in a request, to see how the pieces work together.

Cache Manager Diagram

First, the DataStore looks up the cache manager, finds a particular region in that cache, and checks if the requested resource is in cache. If it is, the client returns the object from the cache immediately.

If the object is not in cache, the DataStore interacts with the RequestExecutor. The RequestExecutor is another DataStore component that in turn delegates to two other components: an AuthenticationStrategy and the RequestAuthenticator.
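
In code, that cache-first flow might look roughly like this (a sketch; the cacheManager and region APIs are assumptions made to mirror the diagram, not Stormpath's actual source):

// Hypothetical cache-first lookup inside the DataStore.
DataStore.prototype._getCachedOrFetch = function (href, callback) {
  var self = this;
  var region = self.cacheManager.getCache('resources'); // a cache region

  region.get(href, function (err, cached) {
    if (err) { return callback(err); }

    // Cache hit: return immediately, no network I/O.
    if (cached) { return callback(null, cached); }

    // Cache miss: delegate to the RequestExecutor, then populate the cache.
    self.requestExecutor.execute({ method: 'GET', url: href }, function (err, body) {
      if (err) { return callback(err); }
      region.put(href, body, function () {
        callback(null, body);
      });
    });
  });
};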

RequestExecutor

REST clients generally authenticate by setting values in the Authorization header. This approach is incredibly convenient because it means swapping authentication strategies is a simple matter of changing how that header is built. All that is required is to change out the AuthenticationStrategy implementation and that’s it: no other internal code changes required!

Many clients additionally support multiple/optional authentication schemes. More on this topic in part 3.
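
For example (a hypothetical sketch; the strategy objects and apiKey fields are made up for illustration), moving from HTTP Basic to a bearer-token scheme only means swapping the object that writes the Authorization header:

// Two interchangeable authentication strategies.
var basicAuthStrategy = {
  authenticate: function (request, apiKey) {
    var creds = new Buffer(apiKey.id + ':' + apiKey.secret).toString('base64');
    request.headers['Authorization'] = 'Basic ' + creds;
  }
};

var bearerAuthStrategy = {
  authenticate: function (request, apiKey) {
    request.headers['Authorization'] = 'Bearer ' + apiKey.accessToken;
  }
};

// The executor only depends on the authenticate() contract, so switching
// schemes means injecting a different strategy object.
function RequestExecutor(authStrategy, apiKey) {
  this.authStrategy = authStrategy;
  this.apiKey = apiKey;
}

RequestExecutor.prototype.execute = function (request, callback) {
  request.headers = request.headers || {};
  this.authStrategy.authenticate(request, this.apiKey);
  // ... perform the HTTP call and invoke callback(err, body) ...
  callback(null, request);
};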

After authentication, the RequestExecutor communicates the outgoing request to the API server.

RequestExecutor to API Server

Finally, the ResourceFactory takes the raw JSON returned by the API server and invokes a constructor function to create the instance resource that wraps (proxies) this data, and again, the DataStore.

ResourceFactory

All of the client components represented in this diagram should be pluggable and swappable based on your particular implementation. To make this a reality as you architect the client, try to adhere to the Single Responsibility Principle: ensure that your functions and classes do one and only one thing so you can swap them out or remove them without impacting other parts of your library. If you have too many branching statements in your code, you might be breaking SRP and this could cause you pain in the future.

And there you have it! Our approach to designing a user-friendly and extremely maintainable client to your REST API. Check back for Part Three and a look at querying, authentication, and plugins!

API Management with Stormpath

Stormpath makes it easy to manage your API keys and authenticate developers to your API service. Learn more in our API Key Management Guide and try it for free!

CourionPurdue Pharma Selects Courion to Fulfill Identity and Access Management Requirements [Technorati links]

August 14, 2014 02:56 PM

Access Risk Management Blog | Courion

David DiGangiPurdue Pharma L.P., a privately held pharmaceutical company based in Stamford, Connecticut, has selected the Courion Access Assurance Suite after an evaluation of several competing offerings. The pharmaceutical company will leverage the suite’s intelligence capabilities to maintain regulatory compliance and mitigate risk.

Purdue Pharma, together with its network of independent associated US companies, has administrative, research and manufacturing facilities in Connecticut, New Jersey and North Carolina.

By implementing the intelligence capabilities within the Courion IAM suite, Purdue will be able to automate routine IAM tasks and maintain compliance with US Food & Drug Administration requirements.

blog.courion.com

Kuppinger ColeExecutive View: WSO2 Identity Server - 71129 [Technorati links]

August 14, 2014 10:11 AM
In KuppingerCole

In contrast to common application servers, WSO2 provides a more comprehensive platform, adding on the one hand features such as event processing and business rule management, but on the other hand also providing strong support for security features. The latter includes WSO2 API Manager, which manages API (Application Programming Interface) traffic and thus supports organizations in managing and protecting the APIs they are exposing, for instance to business partners....
more

Kuppinger ColeExecutive View: Druva inSync - 71131 [Technorati links]

August 14, 2014 09:53 AM
In KuppingerCole

Druva’s approach to information protection is unique among traditional solutions: instead of maintaining centralized data storage and enabling secure access to it from outside, inSync maintains a centralized snapshot of data backed up from all endpoints and operates on this snapshot only, leaving the original data on endpoints completely intact.
Having its roots in a multiplatform cloud backup and file sharing platform, inSync has evolved into an integrated service...
more

August 13, 2014

Pamela Dingle - Ping IdentityThe next conversation to be had [Technorati links]

August 13, 2014 05:01 PM

Ok, now that CIS and Catalyst conferences are (almost) out of the way, we need to rally the identity geeks and start talking about OAuth and OpenID Connect design patterns.   We need to get some public discourse going about token architectures for various real world business access scenarios.

The value proposition needs to be made more concrete.  So let’s try to push on that rope in the next few months.

 

August 12, 2014

Nat SakimuraGovernment Publishes Material on Identity Verification Measures for the My Number System [Technorati links]

August 12, 2014 11:00 PM


On the 12th, the government published material [1] on the identity verification measures related to the My Number system.

The material explains the methods of number confirmation and identity confirmation required by law when the personal number (My Number), used to identify specific individuals in administrative procedures, is used. It is organized into (I) cases where the number is provided by the person themselves and (II) cases where it is provided by an agent, and each of these is further divided into (1) in person or by mail, (2) online, and (3) by telephone.

The way the cases are divided is interesting: in-person and mail are grouped together. International standards typically split into (1) in person and (2) remote, so mail would more naturally be paired with online, but this time the government appears instead to have chosen a split of (1) paper-based confirmation, (2) electronic confirmation, and (3) voice-based confirmation.

The material essentially takes the form of a plain-language commentary on the enforcement regulations of the My Number Act [2], and it also notes which part of the regulations to consult. For example, for (I) provision by the person themselves, case (2) online, the following three methods are listed:

① Individual Number Card (reading the IC chip) 【則4一】
② Electronic signature via the public personal authentication service 【則4二ハ】
③ A method deemed appropriate by the entity conducting administrative affairs using the Individual Number 【則4二ニ】

Here, 【則4二ニ】 means “see Article 4, item 2, (d) of the enforcement regulations.” The provision in question reads: “In addition to what is listed in (c), confirming, by a method deemed appropriate by the entity conducting administrative affairs using the Individual Number, that the person using the computer connected to the electronic data processing system via telecommunication lines is the person making the provision.”

What caught my personal attention in this material is the note attached to ③ above, the method deemed appropriate by the entity conducting administrative affairs using the Individual Number. It says: “* Envisioned examples include privately issued electronic signatures and IDs/passwords issued by the entity conducting administrative affairs using the Individual Number,” so future expansion seems quite conceivable.


[1] Cabinet Secretariat: “Material on Identity Verification Measures” http://www.cas.go.jp/jp/seisaku/bangoseido/sekoukisoku/26-4hk.pdf

[2] Cabinet Secretariat: “Enforcement Regulations for the Act on the Use of Numbers to Identify a Specific Individual in Administrative Procedures (My Number Act Enforcement Regulations)” http://www.cas.go.jp/jp/seisaku/bangoseido/sekoukisoku/26-3.pdf

[3] The rabbit illustration was taken from here → http://pic.prepics-cdn.com/munimuniwaon/35872028.jpeg (it was pointed out that using the official My Number rabbit character would violate its terms of use).

Kantara InitiativeRoad to SXSW 2015 [Technorati links]

August 12, 2014 10:43 PM

Care and Feeding of Human & Device Relationships

It’s that time again to choose your sessions for SXSW Interactive.  Here’s a summary from our experience last year as well as just a few of our suggested picks. 

SXSW Interactive provides a unique and innovative platform to share experiences and connect with a diverse set of stakeholders that can only be found at springtime in Austin. We love to regularly connect with best-in-class identity services professionals, but SXSW stands out as an event where we connect with people and organizations of all types. The opportunity for unmatched diversity in one place is something that comes only once a year.

Last year Kantara Initiative presented Tips and Tools for Protected Connection as part of the broader IEEE technology for humanity series. Our panel included privacy technology innovations, practices, solutions and research from ISOC, the Tor Project and UMA. We’re focusing on IoT and identity this year with two panel submissions. We’ve submitted the Care and Feeding of Human & Device Relationships with panelists from ForgeRock, CA, and Salesforce.com. We’ve also worked with our Board Member IEEE-SA to submit a proposal from the Identities of Things WG, with panelists from Deutsche Telekom, Cisco, Perey Research and Consulting, and ForgeRock, as part of the IEEE 2015 series.

The road to SXSW is a long one, but with your support we hope to get on the schedule again! Have a look at our highlighted sessions for your voting pleasure. There are MANY quality proposals this year, so this is just a taste. Please vote for our submissions and let us know about your favourites!

Our Picks for SXSW 2015

1. Care and Feeding of Human & Device Relationships

Relationships are formed of connections and interactions. We have relationships with humans and entities like our employers, Twitter, Facebook, and our families. We also have relationships with objects like our phones, cars, and gaming consoles. Our connections, roles, and relationships are multiplying with each innovation. People, entities, and things all have identities. Who is paying attention to the relationships between each? Who has the authority to confirm if a relationship is valid, current, and should be trusted? With more and more interactions and automation, how can we understand the associated relationships and manage billions of them? This session discusses the developing laws of relationships between people, entities, and things and provides an innovative view of the landscape from the Identity Relationship Management Open Work Group. Find out what you need to know about the management of human and device relationships. Discover how you can participate.

2. Identities of Things Group: Paving the Way for IoT

There’s a ton of promise in “smart everything.” However, the convergence of technology and sheer proliferation of data being gathered by sensors, cameras and other networked devices are making the road to the Internet of Things (IoT) a bumpy one. Today, there are no overarching frameworks that support broad authentication or data management, fueling serious data privacy and security concerns. Further, there’s no “DNS-like” framework that maps object identities, so things can effectively communicate and work with each other. In order for IoT to realize its promise, we must differentiate between people and objects, putting standards and structures in place that maximize the use of networked data, while guarding against abuse. Learn how the Identities of Things Group is working with industry to assess the IoT Landscape and develop harmonized frameworks that will help enable the Internet of Things. Find out how to get involved in defining an IoT future where PEOPLE matter most!

3. A framework for Privacy by Design

We live in an era where the pace of technological advancement is speeding along faster than the world can comprehend or respond. As we try to keep up, we are merging our limited understanding of emerging technology with our own antiquated views, policies and concepts related to personal identity, privacy and data governance. As a result, the world is playing an awkward and inefficient global game of “catch-up” that may do more harm to privacy than good. It is time for a more proactive stance; a vision, framework and standards that can help the world incorporate “Privacy by Design.” Join these two incredible thought leaders for a conversation around a new, global Privacy by Design concept that incorporates standards for privacy as an integral part and practice of development. We’ll explore what we own, how we store it, and who’s responsible for keeping it secure and what’s at stake for the future.

4. Biometrics & Identity: Beyond Wearable

From mobile devices to wearable gear, the increasingly ergonomic, small, lightweight, body conscious, attachable, controllable and comfortable devices we use are becoming physical extensions of ourselves. From phone to fitbit, as we become more dependent on these devices, our comfort level with the capture and use of our intimate personal data increases. However, will we become comfortable using our biometric and genomic data to digitally unlock our every day lives — from car to communications, home security to banking, healthcare to services? We are moving beyond wearables, to an age where products like biyo, which connects physical payment to a scan of the unique veins in the human palm, are becoming present market realities. What are the implications of using personal biometric data as the virtual keys that unlock our very real lives? How should we feel about using such sensitive, personal data as a means of self-identification?

We look forward to SXSW 2015.  Happy voting!!

 

Netweaver Identity Management Weblog - SAPOverlooked Risk in Middle Tier M&A? [Technorati links]

August 12, 2014 08:02 PM

If you have ever been part of a big public-company merger, then most likely the merger included an audit and review of the IT assets, principally those that provide accounting and reporting. Post-merger, and before the two merged companies are interconnected, there is also a review of the security policies to determine risks and gaps that could lead to compromise. If there is a large difference in policy, interconnection can be delayed until the security differences are corrected and verified. This behavior is prudent. Data compromise can damage a company’s carefully guarded reputation and lead to significant losses; beyond lost sales, it can also drive the stock price down.

Private equity firms that buy and sell companies in the middle tier are strongly focused on the financial health of the company they are purchasing. Certainly financial health indicates a well-run company. Hours are spent structuring the deal and ensuring they know what they are acquiring. No one wants to be defrauded. From a seller’s perspective, they want a high asking price and zero encumbrances.

From what I have seen, both the buy side and the sell side are paying little attention to either information security or physical security risks. This is true even though middle tier companies tend to have fewer resources and are more likely to have major security gaps, whether within their facilities or their network infrastructure. Consider a scenario where you are either buying or selling a company that has been compromised, and the hackers are quietly lying in wait, collecting additional access credentials and elevating privileges. Over time they will be able to exfiltrate all intellectual property. Where the hacking is being done by a state actor, it will be shared with domestic competitors. If this is a platform company that has been built up over several years, this amounts to a staggering loss of value. The buyer is accumulating exposure in the same way someone who sells naked options without holding the underlying asset accumulates exposure. The same can be said for the supply chain, where downstream providers of services connected into the network increase the size and diversity of the threat landscape. A compromise within this system, if not properly secured, could bring down years of work and destroy any equity built. Any time you are purchasing or selling a company, you should take security exposure seriously and hire the teams necessary to do a thorough review.

I frequently hear people say that a business-ending compromise is a rare event. How rare or improbable an event is matters less than the consequences of it occurring. You can’t zero out risks, of course, but you should follow what works. If you are not already doing this, I recommend the list below. It applies to domestic acquisitions within first-world countries. Cross-border buys add additional challenges (e.g. FCPA exposure), but this list will still apply at the macro level.

  1. Thorough review and harmonization of security policies.
  2. Reciprocal audit agreements with 3rd party suppliers in place.
  3. Thorough review of security controls.
  4. Conduct a network vulnerability assessment covering both internal networks and boundaries.
  5. Perform a penetration test (physical and digital).
  6. Look at patch management processes.
  7. Review identity management practices and access control.
  8. Code audit of custom mission critical applications.
  9. An up to date threat model.
  10. Physical security audit.

MythicsThe Power of ZS3 Amplified by DTrace [Technorati links]

August 12, 2014 06:12 PM

The infinite appetite for data from our applications is a never ending challenge to the IT staff. Not only do we need to keep feeding…

GluuSXSW 2015: How API access control = monetization + freedom [Technorati links]

August 12, 2014 04:25 PM

mike_talking

Control access to your APIs, and you can charge for them. Large companies see API access management at scale as a competitive advantage and a way to lock in customers. Think about Google Docs: it only works if both parties have an account at Google.

But the greatness of the Internet was not achieved by the offering of one domain. If each device and cloud service has proprietary security controls, people will have no way to effectively manage their personal digital infrastructure. Luckily, standards have emerged thanks to a simple but flexible JSON/REST framework called OAuth2, and the “OpenID Connect” and “User Managed Access” profiles of it.

This talk will provide a history of access management and a deep dive into the concepts, patterns, and tools to enable mobile and API developers to put new OAuth2 standards to use today. It will provide specific examples and workflows to bring OAuth2 to life to help organizations understand how they can hook into the API economy.

Questions

 

Vote here http://panelpicker.sxsw.com/vote/38690

 

Mark Dixon - OracleYou’re Home at Last, my iPad, You’re Home at Last! [Technorati links]

August 12, 2014 03:16 PM

Last Wednesday, a dreaded First World fear was realized. During a tight connection between flights at the Dallas-Fort Worth airport, I left my iPad in the seat pocket on my first flight. I didn’t realize what I had done until I reached into my briefcase for it on my next flight. My heart sank. I use the iPad for so many things. To lose it was a huge disruption in my day-to-day life, not to mention the cost and hassle of replacement.

A call to the DFW lost and found department was not reassuring. I was instructed by the telephone robot to leave a message with contact information and lost item description, and wait.  I dutifully complied, but had real doubts about whether I’d ever see my iPad again.  A conversation with an American Airlines gate agent gave a little bit of hope.  She assured me that every lost item was investigated, and that I should be patient for the process to take its course.

By Monday morning, I had about given up hope. But then came the phone call: my iPad had been found! I had activated the “Find my iPhone” feature, which caused my phone number to be displayed whenever the device was turned on. The lost and found agent called me, verified that the device was indeed mine, and arranged for it to be returned to me by FedEx. Then things got interesting …

Soon after I received the happy phone call, I received an email, also informing me that the iPad had been found – another nice feature of Find my iPhone.  

Ipaddfw

Apparently, when a device is in the “lost” mode, it will continue to wake up periodically and attempt to send its location via email.  I have received 18 emails to that effect since the iPad was first found yesterday morning, each with a little map pinpointing its current location.

I really enjoyed tracking the iPad’s progress as it found its way back to me via my iPhone’s Find My iPhone app.  In the photos below, you can see my iPad’s circuitous journey around DFW yesterday, its flight to the Fedex hub and back to Phoenix overnight, and the fairly direct route to my home by 7:33 this morning!

Ipad1Ipad2Ipad3

So, in addition to getting my treasured iPad back, I received an object lesson in the value of mobile location services!  We live in wonderful times!

KatasoftBuild a Node API Client – Part 1: REST Principles [Technorati links]

August 12, 2014 03:00 PM

Build a Node API Client – Part 1: REST Principles FTW

If you want developers to love your API, focus the bulk of your efforts on designing a beautiful one. If you want to boost adoption of your API, consider a tool to make life easier on your users: client libraries. Specifically, Node.js client libraries.

This series will cover our playbook for building a stable, useful Node.js client in detail, in three parts:

Part 1: Need-to-know RESTful Concepts

Part 2: REST client design in Node.js

Part 3: Functionality

If you would like to learn general REST+JSON API design best practices, check out this video. In this article series, we are going to focus exclusively on client-side concepts.

Lastly, keep in mind that while these articles use Node.js, the same concepts apply to any other language client with only syntax differences.

OK. Let’s begin with the RESTful concepts that make for a killer client. After all, if we don’t nail these down, no amount of JavaScript will save us later.

HATEOAS

HATEOAS, usually pronounced ‘Haiti-ohs’, is an acronym for “Hypermedia As The Engine of Application State”. Aside from being an unfortunate acronym, HATEOAS dictates that REST API clients should not need to know anything about a REST API in advance: a REST client should issue an initial request, and everything it needs from that point on can be discovered from the response.

HATEOAS is still considered the ideal target for REST API design, but at times, HATEOAS will present a challenge to our REST client design efforts. Think of it this way:

  1. Your end-users will want to use your shiny new client to do the things they know are possible in your API. They will want to invoke known / defined behavior.
  2. You will want to provide them convenience functions to make it super easy for them to interact with your API – things that may not be as easy or possible with standard HTTP requests.
  3. Reconciling 1 and 2 with HATEOAS simply won’t always be possible.

Our philosophy takes the pragmatic view: while HATEOAS is ideal for automated software agents (like browsers), it is often not as nice for humans who want library functions that address specific needs, so we will diverge from HATEOAS when it makes sense.

REST Resources

Resources transferred between the client and the API server represent things (nouns), not behaviors (verbs). Stormpath is a User Management API, so for us, resources are records like user accounts, groups, and applications.

No matter what your API’s resources are, each resource should always have its own canonical URL. This globally unique HREF will identify each resource and only that resource. This point really can’t be stressed enough; canonical URLs are the backbone of RESTful architecture.

In addition to having a canonical URL, resources should be coarse-grained. In practice, this means we return all resource properties in their entirety in the REST payload instead of as partial chunks of data. Assumptions about who will use the resource and how they will use it only make it more difficult to expand your use cases in the future. Besides, coarse-grained resources translate to fewer overall endpoints!

Collection Resources

Collection resources are first-class citizens with their own first-class properties. These properties contain data that in turn describe the collection itself, such as limit and offset.

Collection resources should have a canonical URL and follow a plural naming convention, like /applications (and not /application) to make identification easy. A collection resource always represents potentially many other resources, so plural naming conventions are the most intuitive.

Collections usually support create requests as well as query requests, for example, ‘find all children resources that match criteria X’.
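
For example, a collection resource payload might look roughly like this (the exact field names and IDs here are illustrative assumptions, not a precise wire format):

{
  "href": "https://api.stormpath.com/v1/applications",
  "offset": 0,
  "limit": 25,
  "items": [
    { "href": "https://api.stormpath.com/v1/applications/8sZxUoExA30mp74", "name": "My Application" },
    { "href": "https://api.stormpath.com/v1/applications/a1b2c3d4e5f6g7h", "name": "Another Application" }
  ]
}

Note how the collection has its own canonical href and first-class properties (offset, limit) that describe the collection itself, separate from the instance resources it contains.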

Instance Resources

An instance resource is usually represented as a child of some parent collection. In the example below, the URI references a particular application within the /applications collection. The implication is that if you interact with this endpoint, you interact with a single application.

/applications/8sZxUoExA30mp74

For the most part, instance resources only need to support read, update, and delete operations. While not a hard and fast rule, reserving create for collections is a common convention in REST API design, especially if you want to generate unique identifiers for each newly-created resource.
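
Concretely, the verb-to-URL mapping under this convention might look roughly like this (a sketch of common REST practice; some APIs use PUT or PATCH rather than POST for updates):

POST   /applications                    -> create a new application (collection)
GET    /applications                    -> list or query applications (collection)
GET    /applications/8sZxUoExA30mp74    -> read a single application (instance)
POST   /applications/8sZxUoExA30mp74    -> update that application (instance)
DELETE /applications/8sZxUoExA30mp74    -> delete that application (instance)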

Resource Code Examples

Ok, now the fun stuff – code!

If we are to translate these REST resource concepts to working code, it would make sense to have code artifacts that represent both collection and instance resources.

Here’s an example of what it might look like to define a very general resource concept in a Node.js library:

var util = require('util');

function Resource(...) { ... }
util.inherits(Resource, Object);

someResource.href

We’re using JavaScript’s prototypical inheritance here to simulate classical Object Oriented inheritance. We’ve found this to be the easiest to understand abstraction for most developers using code libraries, so we went with this paradigm.

As you can see, the Resource ‘class’ above takes advantage of the standard Node util library to create a resource constructor function. If you want to simulate a classical OO hierarchy, util is a great way to do it.

Next, we’ll extend this general resource class to create more specific Instance and Collection resource classes.

function InstanceResource(...) {...}
util.inherits(InstanceResource, Resource);

anInstanceResource.save(function (err, saved) {
    ...
}); 

anInstanceResource.delete(function (err) {
    ...
});

As mentioned, you’ll notice save and delete methods on InstanceResource, but no create. The callback on save returns either an error or the successfully saved object; delete has no object to return, so only an error might be provided to the callback. Both methods are called asynchronously after the operation is complete.

So what’s the takeaway? You can save or delete individual things, but not necessarily entire collections. Which leads us to our next resource class:

function CollectionResource(...) {...}
util.inherits(CollectionResource, Resource);

aCollResource.each(function (item, callback) {
    ...
}, function onCompletion(err) {
    ... 
}); 

aCollResource.eachSeries
aCollResource.map
aCollResource.filter
... other async.js methods ...

CollectionResource can support a number of helper functions, but the most common is each. each takes an iterator function which is invoked asynchronously for every instance in the collection.

applications.each(function(app, callback){
    console.log(app);
    callback();
}, function finished(err) {
    if (err) console.log('Error: ' + err);
});

This example uses each to simply log all the instance resources.

As a great convenience, we made the decision early on to assimilate all of async.js’ collection utility functions into all Collection resources. This allows developers to call Stormpath methods using the semantics of async.js and allows the client to delegate those methods to the corresponding async.js functions behind the scenes. Eliminating even just one package to import has proven to be really convenient, as we’ll discuss more in part 3. We’re big fans of async.js.

(Note that async.js requires you to invoke a callback method when you’re done with a given iteration step.)
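
A rough sketch of how that delegation could be wired up (assuming the async package; the items property holding the current page of instance resources is an assumption for illustration):

var async = require('async');

function CollectionResource(data, dataStore) {
  data = data || {};
  this.items = data.items || [];   // the instance resources in this collection
  this.dataStore = dataStore;      // kept for later server interaction
}

// Delegate the collection utilities straight to async.js, using the
// collection's items array as the input.
['each', 'eachSeries', 'map', 'filter'].forEach(function (name) {
  CollectionResource.prototype[name] = function (iterator, callback) {
    async[name](this.items, iterator, callback);
  };
});

module.exports = CollectionResource;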

That’s it for the RESTful groundwork! We have a number of other articles and videos pertaining to REST security, linking, POST vs PUT, and more on the blog if you’re interested.

Part two of this series will be all about the nitty-gritty details of coding the client. Expect sections on encapsulation, public vs. private API implementation, and our component architecture. Stay tuned!

API Management with Stormpath

Stormpath makes it easy to manage your API keys and authenticate developers to your API service. Learn more in our API Key Management Guide and try it for free!