
Thursday, March 12, 2015

Announcing Version 1.9 of the .NET Library for Google Data APIs

We have released version 1.9 of the .NET Library for Google Data APIs and it is available for download.

This version adds the following new features:

  • support for 3-legged OAuth
  • two new sample applications: BookThemAll (mashup of Calendar and Provisioning APIs) and UnshareProfiles (showcasing a new feature of the Google Apps Profiles API)
  • updates to the Content for Shopping API to implement 20+ new item attributes
  • support for the new yt:rating system and Access Control settings in the YouTube API

This new version also removes the client library for the deprecated Google Base API and fixes 20 bugs.

For more details, please check the Release Notes and remember to file feature requests or bugs in the project issue tracker.

Claudio Cherubino profile | twitter | blog

Claudio is a Developer Programs Engineer working on Google Apps APIs and the Google Apps Marketplace. Prior to Google, he worked as a software developer, technology evangelist, community manager, consultant, and technical translator, and has contributed to many open-source projects, including MySQL, PHP, WordPress, Songbird and Project Voldemort.


Monday, March 9, 2015

Google Data Authentication Choices

Editor's Note: Jeff Morgan is a Senior Technical Consultant at Appirio, a cloud solution provider which creates products and delivers services. He's worked with many Google Data APIs, so we're excited to publish his insights on the various authentication choices.

As developers using the Google Data APIs, one of the first challenges we tackle is learning and sorting through the Google Data authentication schemes: AuthSub, OAuth, ClientLogin, OpenID+OAuth, and so on. Perhaps you are developing a Google App Engine web application for the first time and want access to a user's Google Calendar. Maybe you are building a data processing application that requires access to a Google user's data. If you are familiar with the Google Data APIs, you likely know that there are many authentication options available depending on the API. So how do you decide? The best place to start is the Getting Started with Account Authorization guide. The Authentication in the Google Data Protocol page also provides detail on the various authentication methods. This post provides references to many existing resources, plus some additional things to consider, to help you decide which authentication method to use for your application.

Does your application need end users' permission to access their Google data?

When developing web applications, developers are sometimes faced with deciding between AuthSub and OAuth. However, the first question should be, "Who will be granting permission to the user's Google data?" If the end users are Google Apps users, then most likely an administrator will be granting access. Developers frequently ask the same question in another way: "Should I use 3-legged or 2-legged authentication?" Ideally, from a Google Apps user experience perspective, it is better to use 2-legged OAuth. This way the application is granted access to a user's Google Apps data at the administrator level. The end user can start to use the application right away, since the trust has already been established by an administrator. If you are developing a Marketplace application, it is very likely you will not need to engage in the 3-legged authentication dance.
However, if you are writing an application that is wide open to anyone with a Google account, let the 3-legged dance begin. Both AuthSub and OAuth for Web Applications are designed for this purpose, but the behavior and look and feel of each is slightly different. Both ultimately allow the user to grant your application access to their data, and knowing the differences helps you make the best choice. Here is an article that covers the topic and should help you choose; it has color-coded highlights that compare the two.

Some general rules follow when choosing an authentication method:
End User                    | Authentication
Google User (regular)       | AuthSub or 3-Legged OAuth (3LO)
Google Apps User (hosted)   | 2-Legged OAuth (2LO)
No end user                 | 2LO or ClientLogin


Gadgets

Again, there are different options for choosing Google Data authentication from a gadget container. The first option to consider is the OAuth Proxy for Gadgets. The OAuth proxy performs a 3-legged dance, asking the end user for permission to access their data. Once access has been granted, the gadget developer can use the gadgets.io.makeRequest() method to make the Google Data API calls. Most likely you will want the JSON response from the feed, so remember to add the ?alt=json URL parameter. For more information and code examples on the OAuth Proxy, see Writing OAuth Gadgets.

For Marketplace gadgets, another authentication option is available that uses OpenID. This option is used only to authenticate the user and determine identity; it does not provide authorization for API access. After the user is authenticated, server-to-server calls are used to make the requests, authorized via 2-legged OAuth and specifying the xoauth_requestor_id based on the user authenticated via OpenID. For more information on this approach see the Marketplace Best Practices page.


Secure Storage of Credentials

Adding layers of security is a common approach to making data more secure. Google provides various layers through its different authentication and authorization methods: registering web applications, supporting digital certificates, and supporting industry standards (SAML, OAuth, OpenID) all help. However, one of the most common mistakes we can make is not taking care to protect important credentials. When working with Google Data ClientLogin and 2-legged OAuth, these credentials can be keys to the kingdom (e.g. administrator credentials or a domain OAuth consumer secret) and therefore should be protected at all costs. Access to these credentials can potentially open the entire Google Apps domain's data for the authorized services. This could have tremendous impact, especially if you are maintaining 2-legged OAuth credentials for domains that have granted your application access, e.g. a Marketplace application. It is therefore risky practice to embed them in source code or even a configuration file. Consider an approach that allows you to enter credentials at runtime and keep them in memory, or use your favorite method for secure storage of these important credentials.


Google Apps Marketplace

With the March announcement of the Google Apps Marketplace, the decision-making process may have become a little easier. OpenID+OAuth and 2-Legged OAuth are the supported schemes, and one of them will likely be used if your Marketplace application needs access to Google Apps data, which is very likely. You'll notice that the Marketplace has embraced the open standards of OpenID and OAuth. AuthSub and ClientLogin, being proprietary to Google, will likely not be useful in your Marketplace application.


ClientLogin

If your application needs to create or modify user accounts (using the Provisioning API), then your only current option is ClientLogin. A common oversight is not reusing the access token. Without proper reuse of this token, your application will eventually force the account into a CAPTCHA state, which can easily be avoided by reusing the token obtained from one ClientLogin request for subsequent requests. It is best to keep this token cached in memory and renew it at some configured frequency/timeout, as in the sketch below.
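
As an illustration of that caching pattern (not Google-specific code), here is a minimal C++ sketch. The fetchClientLoginToken() helper is a hypothetical stand-in for whatever call your application or client library makes to perform the actual ClientLogin request, and the one-hour lifetime is just an assumed configuration value.

#include <ctime>
#include <string>

// Hypothetical helper that performs the real ClientLogin HTTP request and
// returns the Auth token; supplied by your own code or client library.
std::string fetchClientLoginToken();

class TokenCache {
public:
    explicit TokenCache(long lifetimeSeconds = 3600)   // assumed renewal interval
        : lifetimeSeconds_(lifetimeSeconds), fetchedAt_(0) {}

    // Return the cached token, renewing it only when the configured lifetime
    // has elapsed, so repeated API calls reuse the same token instead of
    // issuing a new ClientLogin request every time.
    const std::string& token() {
        std::time_t now = std::time(NULL);
        if (token_.empty() || now - fetchedAt_ > lifetimeSeconds_) {
            token_ = fetchClientLoginToken();
            fetchedAt_ = now;
        }
        return token_;
    }

private:
    long lifetimeSeconds_;
    std::time_t fetchedAt_;
    std::string token_;
};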

Summary

This post covered some important considerations for selecting a Google Data authentication method for your application. If you are new to Google Data authentication and want a better overall understanding, start with the Getting Started with Account Authorization guide. No matter which approach you choose, make sure that access to users' data is handled in a secure, user-friendly way.

References

Google Data authentication is a vast topic. For your convenience, here is a list of resources.

Google Data Authentication - Start Here
Getting Started with Account Authorization
Authentication in the Google Data Protocol
OAuth
OAuth in the Google Data Protocol Client Libraries
OAuth for Web Applications
OAuth for Installed Applications
OAuth API Reference
Using OAuth with the Google Data APIs
Google Data API Tips - OAuth
OAuth Playground
OAuth-enabled http test client: oacurl
Gadgets
Authentication for Gadgets
Writing OAuth Gadgets
Fetching Remote Content
Creating a Google Data Gadget
JavaScript client library API reference
Using JSON with Google Data APIs
OAuth Enabled Gadgets on OpenSocial enabled Social Networks
Registration
Registration for Web-Based Applications
OpenID
Federated Login Service for Google Apps
OpenID+OAuth
Sharing and previewing Google Docs in Socialwok: Google Data APIs
Step2 Code Project
OpenID OAuth Extension
ClientLogin
ClientLogin in the Google Data Protocol Client Libraries
ClientLogin for Installed Applications
Google Data API Blog - ClientLogin
AuthSub
AuthSub Authentication for Web Applications
Using AuthSub in the Google Data Protocol JavaScript Client Library
Google Data API Tips - AuthSub
Google I/O
OpenID-based single sign on and OAuth data access for Google Apps



Wednesday, February 18, 2015

Unrolled Linked List Data Structure

What is an Unrolled Linked List?

An unrolled linked list is a type of linked list that stores multiple elements in each node. It is also known as a cache-sensitive data structure. The term 'cache' comes from the cache memory associated with the CPU: a cache stores recently accessed data so that it can be made available to the processor immediately, without going to main or secondary memory. A typical unrolled linked list can be declared in C as follows.

#define SIZE 50              /* maximum number of elements stored in one node */

struct node
{
    int count;               /* number of elements currently stored in this node */
    int elements[SIZE];      /* the elements themselves */
    struct node *next;       /* link to the next node */
};

Each node in the unrolled linked list contains up to a certain maximum number of elements, chosen to be large enough to fill a single cache line. The position of an element in the list can be indicated either by reference or by its position in the array. An unrolled linked list can be pictured as follows.


Unrolled Linked List in Data Structure

The operations that can be performed on an unrolled linked list are insertion and deletion.

Insertion

First, check whether there is space available in the current node for the new element. If there is, the element is simply inserted into the elements array and the count variable is incremented by one. If the array has no free space left, we create a new node, place it after the current node, and move half of the elements to the newly created node; this makes room for the new element. A sketch of this logic is shown below.
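
Below is a minimal sketch of that insertion logic in C-style code (it also compiles as C++), reusing the struct node and SIZE declared earlier. The function name insert_at and the policy of moving exactly half of the elements on a split are my own illustrative choices, and error handling is omitted.

#include <stdlib.h>
#include <string.h>

/* Insert value into node n at position pos (0 <= pos <= n->count).
   Uses struct node and SIZE as declared above.
   If n is full, split it: the upper half of its elements move to a new node. */
void insert_at(struct node *n, int pos, int value)
{
    if (n->count == SIZE)
    {
        /* Node is full: create a new node after n and move half of the elements. */
        struct node *half = (struct node *)malloc(sizeof(struct node));
        half->count = SIZE / 2;
        half->next = n->next;
        memcpy(half->elements, n->elements + SIZE - SIZE / 2, (SIZE / 2) * sizeof(int));
        n->count = SIZE - SIZE / 2;
        n->next = half;
        if (pos > n->count)
        {
            /* The insertion point now lies in the newly created node. */
            insert_at(half, pos - n->count, value);
            return;
        }
    }
    /* Shift the tail of the array right by one slot, then place the new element. */
    memmove(n->elements + pos + 1, n->elements + pos, (n->count - pos) * sizeof(int));
    n->elements[pos] = value;
    n->count++;
}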

Deletion

When an element is deleted from the list, it is simply removed from its node's array. If the number of elements in the array falls below N/2, we take elements from a neighboring node to refill it. If the neighboring node also has only about N/2 elements, we merge the two nodes, as in the sketch below.
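
Here is a matching sketch of the deletion step, under the same assumptions as the insertion sketch above; delete_at is again an illustrative name, and this version borrows a single element at a time from the neighboring node before resorting to a merge.

#include <stdlib.h>
#include <string.h>

/* Remove the element at position pos from node n (0 <= pos < n->count).
   If n becomes less than half full, borrow from n->next or merge with it. */
void delete_at(struct node *n, int pos)
{
    /* Close the gap left by the deleted element. */
    memmove(n->elements + pos, n->elements + pos + 1, (n->count - pos - 1) * sizeof(int));
    n->count--;

    struct node *next = n->next;
    if (n->count >= SIZE / 2 || next == NULL)
        return;

    if (next->count > SIZE / 2)
    {
        /* Borrow one element from the front of the neighboring node. */
        n->elements[n->count++] = next->elements[0];
        memmove(next->elements, next->elements + 1, (next->count - 1) * sizeof(int));
        next->count--;
    }
    else
    {
        /* The neighbor is also at most half full: merge the two nodes. */
        memcpy(n->elements + n->count, next->elements, next->count * sizeof(int));
        n->count += next->count;
        n->next = next->next;
        free(next);
    }
}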

Advantages of Unrolled Linked List

1. Due to its cache behavior, an unrolled linked list performs sequential traversal very rapidly.
2. It requires less storage space for pointers than an ordinary linked list with one element per node.
3. It performs operations more quickly than an ordinary linked list.
4. Indexing time is reduced from O(N) to O(N/max), where max is the number of elements per node, because we can skip a whole node at a time instead of visiting individual elements (see the sketch after this list).
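
To make the indexing claim in point 4 concrete, here is a small illustrative get function over the same struct node: it skips whole nodes until it reaches the one containing the requested position, so the number of pointer hops is proportional to N/max rather than N.

/* Return the element at overall position index (0-based), or -1 if out of range.
   Uses struct node as declared above. */
int get(struct node *head, int index)
{
    struct node *n = head;
    while (n != NULL && index >= n->count)
    {
        /* Skip a whole node at a time. */
        index -= n->count;
        n = n->next;
    }
    return (n != NULL) ? n->elements[index] : -1;
}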

Disadvantages of Unrolled Linked List

The per-node overhead for the next reference and the element count is considerably higher than in an ordinary linked list node.

Tuesday, February 3, 2015

How to use Support Vector Machine classifier in OpenCV for Linearly Separable Data sets

In this tutorial I'm going to illustrate a very basic and simple coding example, targeting beginners, that uses the Support Vector Machine (SVM) implementation in OpenCV for linearly separable data sets. I'm not going to explain the complex mathematical background of finding the optimal hyperplane; however, in the first section of the post I'll give a simple introduction to support vector machines.

Among classification and machine learning methods, SVM is one of the simplest and easiest to use. In image processing there are a number of uses for SVM; image classification and handwritten character recognition are two of them. SVM can easily be used to classify feature vectors extracted from images.

SVM is a supervised learning method. It defines separating hyperplanes between labeled data sets. These hyperplanes can then be used to categorize new data whose class labels we don't know.

To make the problem easy to understand, instead of hyperplanes and vectors in a high-dimensional space I used lines and points in the Cartesian plane (see the figure below).
Here the problem is to select one line, out of all possible lines, that can be used to separate the two classes. After defining such a hyperplane we can categorize new sample data into a class using it, but it is difficult to define an optimal hyperplane. Based on a criterion we can estimate the worth of the candidate lines: the SVM algorithm finds the optimal separating hyperplane, i.e. the one with the largest minimum distance to the training examples.

Support vectors are the elements of the training set that would change the position of the dividing hyperplane if removed. Support vectors are the critical elements of the training set.


In this section I'm going to illustrate how we can use the SVM implementation in OpenCV to classify a very simple data set. The SVM implementation in OpenCV is based on LibSVM.

In this example, to train the SVM I used 10 points (x and y coordinates on a Cartesian plane). I separated these points into two classes, labeled 0 and 1 (see the following table).


 
(x, y)        Class
(100, 10)     0
(150, 10)     0
(600, 200)    1
(600, 10)     1
(10, 100)     0
(455, 10)     1
(345, 255)    1
(10, 501)     1
(401, 255)    1
(30, 150)     0

Table 1

To train the SVM we have to pass an N x M Mat of features (N rows, M columns) and an N x 1 Mat of class labels.

float trainingData[10][2] = { { 100, 10 }, { 150, 10 }, { 600, 200 }, { 600, 10}, { 10, 100 }, { 455, 10 }, { 345, 255 }, { 10, 501 }, { 401, 255 }, { 30, 150 } };

Here I create a 10 x 2 data set as the feature space.

float labels[10] = { 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0 };

Here I create a 10 x 1 data set as the class labels. These class labels are mapped to the set of features as shown in Table 1.

The SVM training function only accepts data as Mat objects, so we need to create Mat objects from the arrays defined above. After completing the training process, we can use the trained SVM to classify a given coordinate pair into one of the classes.

The following code can be used to train the SVM and predict with it. I have added comments to make the code easier to understand.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "opencv2/ml/ml.hpp"
#include <iostream>
#include <cstdlib>

int main(){
    // Class labels for the 10 training samples
    float labels[10] = { 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0 };
    cv::Mat labelsMat(10, 1, CV_32FC1, labels);

    // Training features: 10 (x, y) points on the Cartesian plane
    float trainingData[10][2] = { { 100, 10 }, { 150, 10 }, { 600, 200 }, { 600, 10 }, { 10, 100 }, { 455, 10 }, { 345, 255 }, { 10, 501 }, { 401, 255 }, { 30, 150 } };
    cv::Mat trainDataMat(10, 2, CV_32FC1, trainingData);

    // Define parameters for the SVM
    CvSVMParams params;
    // SVM type: n-class classification (n >= 2), allowing imperfect separation of classes
    params.svm_type = CvSVM::C_SVC;
    // No mapping is done; linear discrimination (or regression) is done in the original feature space
    params.kernel_type = CvSVM::LINEAR;
    // Termination criterion for the training algorithm: stop after a maximum of 100 iterations
    params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);

    CvSVM svm;
    // Call the train function
    svm.train(trainDataMat, labelsMat, cv::Mat(), cv::Mat(), params);

    // Create the test feature vector (a single 1 x 2 sample)
    float testData[2] = { 150, 15 };
    cv::Mat testDataMat(1, 2, CV_32FC1, testData);

    // Predict the class label for the test data sample
    float predictedLabel = svm.predict(testDataMat);

    std::cout << "Predicted label : " << predictedLabel << std::endl;

    system("PAUSE");
    return 0;
}

In the line that defines testData I pass the data used to predict the class label.

First I pass {150, 15}. The class label was correctly predicted as 0. The following figure shows the output.



Then I changed testData to pass {400, 200} as the test data. Here is the output.


With some modifications to the above code, as shown in the following example, we can graphically represent the decision regions given by the SVM.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "opencv2/ml/ml.hpp"

int main(){
    int width = 650, height = 650;
    // Create an image to draw the decision regions on
    cv::Mat image = cv::Mat::zeros(height, width, CV_8UC3);

    // Set up training data
    float labels[10] = { 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0 };
    cv::Mat labelsMat(10, 1, CV_32FC1, labels);
    float trainingData[10][2] = { { 100, 10 }, { 150, 10 }, { 600, 200 }, { 600, 10 }, { 10, 100 }, { 455, 10 }, { 345, 255 }, { 10, 501 }, { 401, 255 }, { 30, 150 } };
    cv::Mat trainingDataMat(10, 2, CV_32FC1, trainingData);

    // Set up the SVM's parameters
    CvSVMParams params;
    params.svm_type = CvSVM::C_SVC;
    params.kernel_type = CvSVM::LINEAR;
    params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);

    // Train the SVM
    CvSVM SVM;
    SVM.train(trainingDataMat, labelsMat, cv::Mat(), cv::Mat(), params);

    cv::Vec3b green(0, 255, 0), blue(255, 0, 0);
    // Show the decision regions given by the SVM: classify every pixel and color it by class
    for (int i = 0; i < image.rows; ++i)
    {
        for (int j = 0; j < image.cols; ++j)
        {
            cv::Mat sampleMat = (cv::Mat_<float>(1, 2) << j, i);
            float response = SVM.predict(sampleMat);

            if (response == 1)
                image.at<cv::Vec3b>(i, j) = green;
            else if (response == 0)
                image.at<cv::Vec3b>(i, j) = blue;
        }
    }

    // Show the training data as white dots
    int thickness = -1;
    int lineType = 8;
    cv::circle(image, cv::Point(100, 10), 5, cv::Scalar(255, 255, 255), thickness, lineType);
    cv::circle(image, cv::Point(150, 10), 5, cv::Scalar(255, 255, 255), thickness, lineType);
    cv::circle(image, cv::Point(600, 200), 5, cv::Scalar(255, 255, 255), thickness, lineType);
    cv::circle(image, cv::Point(600, 10), 5, cv::Scalar(255, 255, 255), thickness, lineType);
    cv::circle(image, cv::Point(10, 100), 5, cv::Scalar(255, 255, 255), thickness, lineType);
    cv::circle(image, cv::Point(455, 10), 5, cv::Scalar(255, 255, 255), thickness, lineType);
    cv::circle(image, cv::Point(345, 255), 5, cv::Scalar(255, 255, 255), thickness, lineType);
    cv::circle(image, cv::Point(10, 501), 5, cv::Scalar(255, 255, 255), thickness, lineType);
    cv::circle(image, cv::Point(401, 255), 5, cv::Scalar(255, 255, 255), thickness, lineType);
    cv::circle(image, cv::Point(30, 150), 5, cv::Scalar(255, 255, 255), thickness, lineType);

    // Show the test data point as a red dot
    cv::circle(image, cv::Point(400, 200), 5, cv::Scalar(0, 0, 255), thickness, lineType);

    cv::imwrite("result.png", image); // save the image
    cv::imshow("SVM Simple Example", image); // show it to the user
    cv::waitKey(0);

    return 0;
}

The following image shows the output of the code.


The white dots show the points of the training data set. The red dot shows the test data point, {400, 200}.

You can download the Visual Studio project from here. I have used OpenCV 2.4.9.

(To create the above code samples I used the example code provided in link 1.)
References:
1. http://docs.opencv.org/doc/tutorials/ml/introduction_to_svm/introduction_to_svm.html
2. http://docs.opencv.org/modules/ml/doc/support_vector_machines.html#cvsvmparams
3. http://web.mit.edu/6.034/wwwbob/svm-notes-long-08.pdf


Tuesday, January 27, 2015

ASUS MeMO PAD KOW: How to Hard Reset / Erase User Data

ASUS MeMO PAD KOW 
Hard Reset and Wipe User data 


How to hard reset / erase user data on the ASUS MeMO Pad

MODEL: KOW 

1. Download the flashing file from 4Shared.
2. Extract it and copy all the data onto an SD card.
3. Turn off the ASUS MEMO PAD KOW tablet.
4. Insert the SD card into the tablet and press the power button.
5. A blue screen will come up with different options.


6. Select "Clear user data" by pressing the Power button; use the Volume Up and Volume Down buttons to move up and down.

7. Remove the SD card and reboot the tablet.


Courtesy of the K-BOX TEAM.


You may also like to read:

How to hard reset Scroll Basic Plus tablets.
How to hard reset Vox Mid V91 Tablets
Hard Reset Mediacom Smartpad 715i Tablets.
Hard Reset Coby Kyros Mid 7010 WC Tablets
Hard Reset Samsung Galaxy Tab 2 P5100 Tablets
Hard Reset Acer Iconia A500 Tablets
Hard Reset Symphony w20 Tablets
Hard Reset Archos 101 Titanium Tablets