
Geek corner


LDAP Authentication Provider

posted Sep 12, 2016, 9:33 AM by Unknown user   [ updated Sep 12, 2016, 2:22 PM by Laura Carrubba ]

CA Live API Creator provides a built-in authentication service for both the admin service (for example, 'sa') and the user admin. You can replace the authentication service with a JavaScript library. For more information about how to replace the authentication service, see Create Custom Authentication Providers using JavaScript. The B2B sample, included with the self-contained, single-user (Jetty-based) version of Live API Creator, is a good example of a custom authentication service.

A sample LDAP Java library and JavaScript have been posted to GitHub here. The process requires modifying the Java code to match your corporate LDAP configuration. Once completed and tested, build a JAR file and copy it to your /lib directory. The next step is to load the JavaScript user library into Live API Creator and then create a new authentication provider service using the new JavaScript library and JAR file.

Once the new authentication provider service is in place and tested, it can be used both by the internal admin service and for end-user access to the REST API endpoints.
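As a rough sketch of the JavaScript side (the class name, constructor arguments, and method below are hypothetical placeholders for whatever your modified JAR actually exposes; the callback structure that the authentication provider must return is described in Create Custom Authentication Providers using JavaScript), the user library can reach into the LDAP JAR through Java.type, much like the S3 example later in this blog:

    // Hypothetical wrapper: adjust the class name, LDAP URL, base DN, and method to match your JAR
    var LdapAuthenticator = Java.type("com.mycompany.ldap.LdapAuthenticator");

    function ldapAuthenticate(username, password) {
        var ldap = new LdapAuthenticator("ldap://ldap.mycompany.com:389", "dc=mycompany,dc=com");
        // Returns true/false (or a list of group names) depending on how you wrote the Java code
        return ldap.authenticate(username, password);
    }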

Basic Authentication using Custom Endpoint

posted Sep 12, 2016, 9:13 AM by Unknown user   [ updated Sep 19, 2016, 1:59 PM by Laura Carrubba ]

As an API developer, you can use a custom endpoint for greater flexibility. One request was to support basic authentication (passing username and password). You can extract and decode the authentication string and use these values to create a Live API Creator authentication token. You can use this token to make REST API calls to Live API Creator and return JSON responses.

For more information about how to create a custom endpoint, see Custom Endpoints.

  1. Create a new custom endpoint - check GET and POST.
  2. Enter the following code, changing the authURL to your project:
    var res = {};
    var hdrs = headers.getRequestHeader('Authorization');
    if (hdrs) {
      for (var i = 0; i < hdrs.size(); ++i) {
        var auth = hdrs.get(i);
        if (auth) {
          // Strip the "Basic " prefix and decode the base64-encoded "username:password" pair
          var Base64Util = Java.type("com.kahuna.server.util.Base64Util");
          var userpw = Base64Util.decode(auth.substring(6));
          var split = userpw.split(":");
          var username = split[0];
          var password = split[1];
          var data = { 'username': username, 'password': password };
          var authURL = "http://localhost:8080/rest/default/demo/v1";
          // Exchange the credentials for a Live API Creator auth token (API key)
          var apikey = SysUtility.restPost(authURL + "/@authentication", null, null, data);
          var authtoken = JSON.parse(apikey).apikey;
          var settings = { headers: { "Authorization": "CALiveAPICreator " + authtoken + ":1" } };
          var params = {};
          var url = authURL + "/demo:customer";
          res = SysUtility.restGet(url, params, settings);

          /*
          // For the POST verb: read the request body and forward it
          var reader = new java.io.BufferedReader(new java.io.InputStreamReader(request.inputStream));
          var json = "";
          var line = "";
          while ((line = reader.readLine()) != null) {
            json += line;
          }
          res = SysUtility.restPost(url, params, settings, json);
          */
        } // if auth
      } // for loop
    } // if hdrs - or throw an exception
    return JSON.stringify(res);
  3. Go to your favorite tool, such as cURL or Postman, and send a request with basic authentication (username: demo, password: Password1). See the cURL example below.
  4. If the request uses the GET verb, the endpoint calls SysUtility.restGet; for POST, it calls SysUtility.restPost and passes along the 'json' request body.
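
For example, a cURL call might look like this (the endpoint name and URL below are hypothetical placeholders; substitute the URL of the custom endpoint you created in step 1):

    curl -u demo:Password1 http://localhost:8080/rest/default/demo/v1/basicauth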

Add Amazon S3 Data storage to your rules

posted Sep 12, 2016, 9:02 AM by Unknown user   [ updated Sep 12, 2016, 2:27 PM by Laura Carrubba ]

There are times when you need to read or write a file - an Excel spreadsheet, a comma-separated values (CSV) file, or an employee resume - to or from Live API Creator. The API server is a Java-based engine, so you have full access to the java.io.File interface. This works fine if you are running a single-user system on your laptop. However, in a production system that has been scaled horizontally, you would need to mount and map a shared filesystem (for example, x:/foo/bar/myfile.txt). Another approach is to add the Amazon S3 JAR files to your Live API Creator environment and read the files from the cloud.

If you are running on Amazon, add the following files to your WAR file (jar uf CALiveAPICreator-3.0.war .ebextensions/JDBCDrivers.config):

Sample JDBCDrivers.config for Amazon S3

  "/usr/share/tomcat8/lib/aws-java-sdk-core-1.11.9.jar":

     mode: "000755"

     owner: tomcat

     group: tomcat

     source: http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-core/1.11.9/aws-java-sdk-core-1.11.9.jar

 

  "/usr/share/tomcat8/lib/aws-java-sdk-kms-1.11.9.jar":

     mode: "000755"

     owner: tomcat

     group: tomcat

     source: http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-kms/1.11.9/aws-java-sdk-kms-1.11.9.jar

 

  "/usr/share/tomcat8/lib/aws-java-sdk-s3-1.11.9.jar":

     mode: "000755"

     owner: tomcat

     group: tomcat

     source: http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-s3/1.11.9/aws-java-sdk-s3-1.11.9.jar

 

  "/usr/share/tomcat8/lib/joda-time-2.9.4.jar":

     mode: "000755"

     owner: tomcat

     group: tomcat

     source: http://central.maven.org/maven2/joda-time/joda-time/2.9.4/joda-time-2.9.4.jar


If you are running Live API Creator on Amazon Web Services (AWS) using Elastic Beanstalk, see Install on Amazon Web Services Elastic Beanstalk for more information about the setup. The key is to add the correct Amazon JAR files to the installation (.ebextensions/JDBCDrivers.config, shown above). The second part is to create a Java class and library (see the Amazon documentation samples) that you load as a user library (JAR) in Live API Creator. Using a rule or event, you can then call the new Java wrapper function. One trick is to pass in the accessKey and secretAccessKey (replace the credentials lookup in Amazon's sample code with new BasicAWSCredentials(accessKey, secretAccessKey)). The other trick is to make sure the access rights on each file you plan to read grant this user read privileges.


var ReadFileFromS3 = Java.type("com.mycompany.s3.ReadFileFromS3");

var myFile = new ReadFileFromS3(bucketName, key, accessKey, secretAccessKey);
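
As a rough sketch of how this might be called from a rule or event (the getContents method, the bucket and key values, and the credential variables below are hypothetical; they depend entirely on how you wrote your wrapper class):

    // Hypothetical usage inside a Live API Creator event (Nashorn JavaScript)
    var ReadFileFromS3 = Java.type("com.mycompany.s3.ReadFileFromS3");
    var myFile = new ReadFileFromS3("my-bucket", "resumes/employee42.txt", accessKey, secretAccessKey);
    var contents = myFile.getContents();   // e.g. return the file body as a string
    row.resume = contents;                 // attach it to the current row (assumes a 'resume' column)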

Why do I need a schema anyway?

posted Feb 16, 2016, 1:41 PM by Michael Holleran   [ updated Aug 26, 2016, 1:10 PM by Laura Carrubba ]

"Art is limitation; the essence of every picture is the frame." — G.K. Chesterton

Summary

This article is not yet another argument in the tiresome SQL vs. NoSQL debate. I think both technologies have their place. This is an explanation of the benefits of using a schema when the data can benefit from it.

Most NoSQL databases store data either in key/value form, or as XML/JSON documents. In almost all cases, they lack the concept of a schema. This presents certain advantages: programmers can store any data they want, they can change how they store the data over time without migrating old data, and so on. That makes sense for unstructured data, but when it comes to structured data, these advantages are offset by significant, and (I think) under-reported, downsides regarding the value of the data and its long-term viability.

In this article, I describe how a schema can be an important asset when dealing with many types of data, and how the concept of schema can be extended to make it even more useful. 

Why a schema?

When writing software, we usually think of what the system is supposed to do. We should also think about what the software is not supposed to do.

In many ways, that's what a schema does. It's a way to define how data should behave, and how it should not behave. It's a way to draw the line between the "good" space, where data is consistent, and the "bad" space, where data is not consistent.

That is the main purpose of a schema. It's not a crutch to help the database engine. It's not an arbitrary set of limits created solely for the purpose of frustrating the programmer's creativity. It's about carving out a well-defined area in an infinite space of possibilities.

Advantages of Having a Schema

The following are the advantages of having a schema:
  • As a communication tool

The first advantage of having a schema is that it brings structure. This may sound tautological but I don't think it is. Having a formally defined structure for your data means that all parts of the system will have at least that much in common. A schema diagram is a great tool for communicating in a team.

  • As an error-catching mechanism

Having a well-defined schema will catch errors that would otherwise go undetected: null values where there shouldn't be any, misspelled attribute/column names, values out of range, referential integrity violations, and so on.

Problem                  Example
Invalid data             Product price = true (meaningless -- should be a number)
Missing data             Line item does not have a price.
Extraneous data          Line item has an extra attribute named "Color" and we don't know what it means.
Referential integrity    Order does not belong to a customer.
  • Discoverability - reports, other apps, etc.
An under-appreciated benefit of having a schema is also the discoverability it brings to your data. A well-defined schema means that other systems may also be able to use your data: ELT tools, reporting tools, even app generators.
  • For performance

A schema makes indexing easier. It also informs how the database retrieves your data.

  • For migration

Perhaps most importantly, a schema makes migrating the data much easier. Data tends to outlive applications. You will have to transform your data in any number of ways over its lifetime.

As Sarah Mei recently wrote in her remarkably clear and cogent piece:

"Schema flexibility sounds like a great idea, but the only time it’s actually useful is when the structure of your data has no value." -- Sarah Mei

Disadvantages of Having a Schema

The following are the disadvantages of having a schema:

  • It takes more time up front.
  • You can't store whatever you feel like.
  • You have to learn some data modeling.

There is nothing wrong with storing schema-less data if that makes sense for your particular problem. But we should stop pretending that NoSQL is the best solution for everything.

The C in ACID

posted Feb 16, 2016, 1:41 PM by Michael Holleran

Everyone who works with databases is familiar with the acronym ACID, which lists the attributes of a proper transaction. It should be:
  • Atomic
  • Consistent
  • Isolated
  • Durable
We all know about the A and the D -- they're relatively intuitive. Far fewer people truly understand the I, but that's for another article. Today, I'd like to focus on the C. What exactly does it mean for data to be consistent?

Consistent means that the data is in a valid state; in other words, it follows the definition of the schema. For instance, if the column is defined as NOT NULL, it shouldn't ever be null. If it's defined as a foreign key, then the referred object should always exist. The list goes on.

Many databases allow you to go further and define domains. For instance, perhaps the customer's status should be one of Bronze, Silver or Gold, or the customer's age should be between 0 and 125.

These definitions are good and useful because they are easy to declare, and once they are declared, you don't have to think about them. The database is going to do whatever it needs to do to make sure that these definitions remain true, no matter what happens to the data.

For anything more complicated, you typically have to use triggers and stored procedures -- not that there's anything wrong with that, mind you. Triggers and stored procedures participate in transactions, and therefore are part of consistency. In fact, they can be considered to be part of the schema, if you use the term loosely.

But of course, triggers and stored procedures are going to be vendor-dependent, and are often difficult to write and debug. In addition, they add to the database load, which can lead to scalability issues. So the non-trivial logic is often defined in the middle tier, using a language like C#, Java, or Python.

There is a big gap between declaring a schema, and writing procedural code. Defining a constraint as part of a schema is (comparatively) easy, and you don't have to explain what it means to the database. For instance, a foreign key definition will automatically cover inserts, updates and deletes. Not only that, but it's also self-documenting: everyone will know what it means.

As soon as you start writing procedural code (whether in triggers and stored procedures, or other languages), you're leaving all that behind, and taking responsibility for a lot of things. You have to make sure that your code does the right thing at the right time, and in particular, you're responsible for dealing with the various dependencies between the various bits of code that you may have. This problem is exacerbated when the logic governing the data is expressed in more than one place. It's not unusual to have some of that logic defined in triggers and stored procedures, some in the middle tier, and (shudder) even some in the presentation layer. Getting a global view of how all this logic works is daunting. Changing any of it can be a frightening proposition, since there may be a lot of non-obvious dependencies that might be tripped by a seemingly innocent change.

Wouldn't it be nice to be able to do more than trivial definitions as part of the schema? What if we could extend schema definition to include higher-level constructs, like complex derivations, aggregates, and multi-table validations? That wouldn't solve all of our problems, but it would allow us to work at a higher level of abstraction.

That's what database reactive programming aims for. We're pushing the declarative aspect of database schemas to a whole new level. By doing so, we want to capture more of the logic as declarations, and less as code.
