Sunday, February 21, 2021

Overview of Adobe Experience Platform Launch Server Side

Happy (belated) New Year 2021! There is no denying that the introduction of the Adobe Experience Platform Web SDK has revolutionized data collection. The main advantages of this approach include reduced page latency, fewer cookies, a consolidated tracking library and the ability to stream data directly to Adobe Experience Cloud solutions. It's no surprise that Adobe is openly promoting this approach as the recommended path for implementing all client-side solutions moving forward. I coauthored a post about the high-level migration approach to the Web SDK on Adobe's official Medium blog.

Assuming familiarity with the basics of the Adobe Experience Platform Web SDK, XDM and setting up AEP schemas and datasets (all of which apply here), this article will focus solely on Launch Server Side, a recently launched service from the Adobe Launch data collection team. I'm working with one of my clients to evaluate the feasibility of this feature and have already set it up on my sandbox, which I'll cover in this article.

Overview and Use Case


Adobe Experience Platform Launch Server Side allows customers to federate or distribute data to a 3rd party platform such as Google Analytics or any system that has an API endpoint. The data is first collected from the page via the Adobe Web SDK client-side library and then sent over to a 3rd party endpoint from the Adobe Edge network. Even though it is called Launch Server Side, data still needs to be collected at the source on the page.

The primary use case for leveraging this feature is to stream data in real-time to a Data Lake for on-site personalization, decisioning or simply for data collection to run offline queries, build dashboards or create ML models.

You can essentially leverage the implementation you already have on your website to send data directly to your Data Lake. This isn't very common today: most clients send hourly or daily data feeds to their backend platforms. With this approach, they can collect cherry-picked data streamed in near real-time.

Architecture


I had originally created this simplified architecture diagram for a blog post explaining the AEP Real-Time CDP and have slightly tweaked it to explain how data is federated server-side to 3rd party platforms (explained in 3b). Essentially, the AEP Edge Network is the system that federates data both to Adobe's own solutions and to 3rd party platforms.



Steps to Implement


In this section, I will cover how to implement this in Launch and also show how to collect data streamed from my sandbox using ngrok which is a localhost webhook development tool that allows me to make my endpoint URL public. Please note that I'll just cover the basic steps that will help you make a call to a 3rd party endpoint but a more exhaustive tutorial on this can be found here. So let's take a look.

The first step is to create a separate Launch Property under the Server Side section in Launch by clicking on the drop down as shown below. Please note that you will not have access to this by default so please contact your Adobe Customer Success Manager to request access and get details around licensing.


The next step is to create a new Launch property (Rohan's Test in my case) in this section and add the extension called "Adobe Cloud Connector". Please note that there's another extension which allows you to federate data to Google Analytics as well.


Next, we need a new API endpoint. I'm using ngrok to create a public-facing URL tied to port 3000. The public-facing domain ends with f3cc.ngrok.io.


I have a Node.js webhook (mapped to port 3000) which I ran while doing this test. I mapped the ngrok domain to my webhook/API endpoint and modified the URL to add '/ssftest' which is my final API endpoint URL.


The next step is to enable the Launch Server Side toggle and pick your newly created property ID and Environment.


Next, we need to create a new rule tied to the Adobe Cloud Connector extension and pick the action called "Make Fetch Call". Please note that you may also need to create new data elements in this property in case you want to pass any additional data to your API endpoint.


Next, I mapped the API endpoint URL within the action. In this screen, we have the option to make different types of calls but in my case, I'm going to make a POST request. Note that I am also sending some test query string parameters as part of the JSON request. We can also add the XDM data and send it in the body of the request.
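Conceptually, the action assembles a request from the endpoint URL, the query-string parameters and the (optional) XDM body. The sketch below is illustrative only; the function and parameter names are my own, not the extension's API.

```javascript
// Hypothetical sketch of the request the "Make Fetch Call" action builds
// before the Edge Network sends it to the 3rd party endpoint.
function buildFetchCall(endpoint, params, xdm) {
  const qs = new URLSearchParams(params).toString();
  return {
    url: `${endpoint}?${qs}`, // query-string parameters appended to the URL
    options: {
      method: 'POST', // the request type picked in the action
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ xdm }), // XDM data sent in the request body
    },
  };
}

const call = buildFetchCall(
  'https://f3cc.ngrok.io/ssftest',               // public ngrok endpoint from above
  { testParam: 'hello' },                        // sample query-string parameter
  { eventType: 'web.webpagedetails.pageViews' }  // illustrative XDM payload
);
console.log(call.url);
```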


Finally, we need to load a page on the website with the Web SDK deployed. My sandbox already has the AEP Web SDK deployed, so as soon as I refresh the page, a call gets made both to Adobe Experience Platform and to my webhook (API endpoint).


Here, I can see the query string parameter I sent in the request along with some default parameters which also get sent in. Had I sent the XDM data object, you would have seen it here too.


What Else Would I Like to See


Below are a few additional things I'd like to see included in Launch Server Side:

  • The ability to do the same thing on a mobile app client-side implementation. Adobe is working on a Mobile app equivalent of the Web SDK so hopefully we'll see it soon.
  • It would be great if the Launch Server Side team could collaborate with the Data/Bulk Ingestion API team and come up with shared learnings to standardize server-side data collection within Adobe's own solutions as well.
  • Being able to replicate the same data elements created in the client-side Launch property to avoid redundancy.
  • Include more extensions in addition to Google Analytics to federate data collected by the Edge Network. This will hopefully allow clients to remove additional 3rd party scripts, such as Facebook and Google conversion pixels, from the page.

Even though this is a relatively new service, I'm confident it will become more and more popular as we get closer to the impending deprecation of 3rd Party cookies and introduction of more stringent data sharing policies. Hope you found this post helpful and please don't hesitate to reach out with any questions!

Saturday, December 19, 2020

Overview of Adobe Privacy Service and Data Repair API

Imagine a world where user data and information flows freely without any regulations or guidelines, where everything from your date of birth and health records to your credit card number is accessible to anyone on the Internet. Well, given that we're still in the middle of a pandemic, let's not make things worse and get back to reality.

We know that web analytics and data management platforms (barring CDPs) are not supposed to store PII or PHI and are designed to capture anonymous behavioral user information, with the ability to store demographic data using indirectly identifiable encrypted IDs. However, it's virtually impossible to control what data gets sent into these tools, and in my personal experience working with a lot of different clients and tools, I've seen PII such as email addresses, phone numbers and even SSNs still being passed accidentally. Now, I don't think there can ever be a situation where we can completely guarantee that this won't happen again, but we can certainly put the right guardrails in place before data is captured.

GDPR and CCPA (among other regulations) put more power in the hands of individuals to know, opt out of and delete any data collected on them and understand how it's used. This article explains the impact of these two regulations on marketers, but before that it is also important to understand the difference between data subjects, data processors and data controllers, as explained here.

In this post, I will cover how Adobe Analytics customers can obfuscate or delete data using the Adobe Privacy Service (UI) and the Data Repair API, respectively. Given that my article primarily covers how these two tools can be used to execute deletion requests, please review the official documentation on both for the finer details which I won't cover. So, let's dive in!

Adobe Privacy Service

The Adobe Privacy Service provides Adobe customers with a user interface and APIs to help manage customer data requests AFTER the data has been collected, as opposed to an opt-in service such as OneTrust which blocks data from being captured in real-time. The Privacy Service allows you to selectively delete data from Adobe solutions such as Adobe Analytics, Target etc. based on user identifiers or namespaces such as ECID or encrypted user IDs. Please note that the Privacy Service should not be used to delete PII captured accidentally; it should only be used to serve delete requests from data subjects. Also, note that I will specifically cover the privacy UI in this post, but there is also a Privacy Service API which allows you to accomplish the same tasks programmatically.

Use Case

The primary use case for leveraging the Privacy Service is to either access or delete data for users who explicitly reach out to a brand to request a copy of all their personal data or ask for their personal data to be deleted. A good example of how users can do so is shown here.


Overview of Privacy Labels


In order to access or delete any data from Adobe Analytics, Adobe Experience Platform and other solutions, the first step is to add privacy labels to each attribute which contains sensitive data. The labels are classified into three primary categories, as covered here, so please review those since explaining them is beyond the scope of this article. I will use two Adobe Analytics variables, Internal Search Term and User Name, as examples to perform a data deletion request first through the Privacy Service UI and then through the Data Repair API.

The first step is to go to the Data Governance section by visiting Analytics > Admin > Data Governance within Adobe Analytics. You'll first land on a page which lets you select your report suite and will also show you the data retention policy for how long your data will be retained.

Once you select your report suite, you can see that in my case, the Internal Search Term (eVar1) variable has an I1 label, which essentially means it contains directly identifiable PII for some of the user names captured in eVar50, which carries the I2 (indirectly identifiable) label. Please note that only labelled variables will be part of the delete requests; unlabelled variables will be left as-is.


We also need to pick some data governance labels; here I'm specifying that I want to delete all data at the person level. The labels are explained in more detail for eVar50 below. Please note that in order to delete data tied to a custom variable, you will need to define a custom namespace (profile_id in my case) which will be the primary identifier for the delete request. You can name it anything or use an OOTB namespace such as ECID.


Privacy Job Request


A privacy job request can be made by visiting the privacy page (requires IMS authentication). There is an option to pick the regulation type (GDPR, CCPA etc.) and delete IDs either manually or in bulk by uploading them in a JSON file (covered below). I will cover both methods, but before I do, let's take a look at the data captured in eVar1 and eVar50. One thing to note is that only data tied to the user IDs "abc123" and "def456" will be obfuscated, as I will only be processing delete requests for these IDs; the rest of the data will be left as-is.


Here are the two methods by which you can send a delete request from the privacy UI.


I'll first process a delete request using the manual method. Please note that you can process a request for any of the solutions mentioned, but in my case, I'll be deleting (obfuscating) data from Adobe Analytics for the user ID "abc123" captured in eVar50, which is tied to the namespace "profile_id". Any value I enter manually will be obfuscated, but this is not a scalable approach if you want to delete IDs in bulk.


Once you click create, you will see that a job ID is created which contains the user ID "abc123".


I'll now process a delete request using the JSON upload method. In my case, I only have one ID to delete ("def456"), but you can upload up to 1,000 IDs per request, with a limit of 10,000 IDs per day. Below is what my JSON looks like. Note that you need to include your IMS org, specify the namespace and add more IDs in the "users" array, among other attributes.
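For reference, the shape of the uploaded payload can be sketched as below. The field names follow the Privacy Service request format, but the IMS org value and user IDs are placeholders; "unregistered" marks a custom (non-standard) namespace like profile_id.

```javascript
// Hedged sketch of a Privacy Service delete request payload.
const deleteRequest = {
  companyContexts: [
    { namespace: 'imsOrgID', value: '1234567890@AdobeOrg' }, // your IMS org
  ],
  users: [
    {
      key: 'user-1',          // a label of your choosing for this user
      action: ['delete'],     // 'access' would request a copy of the data instead
      userIDs: [
        // Custom namespace defined in Data Governance earlier.
        { namespace: 'profile_id', value: 'def456', type: 'unregistered' },
      ],
    },
  ],
  include: ['analytics'],     // solutions that should process the request
  regulation: 'gdpr',         // picked in the UI; shown here for completeness
};

console.log(JSON.stringify(deleteRequest, null, 2));
```

Additional IDs go into the "users" array, one entry per data subject, up to the per-request limit.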


I also had my network tab open while doing this so you can see the request actually makes an API call to the Privacy Service and sends over the necessary fields to process a delete request.


Impact on Adobe Analytics Data


Once the requests have been completely processed, which typically takes 5-7 days, you can see that the status is marked as "complete". I've purposely hidden the job ID and requester email address.


As far as data in Analytics is concerned, you will see that the two variables in question will contain the word "privacy-" followed by a unique identifier for every record tied to the user IDs I sent in the request.



There's a lot of information available in the official Adobe document but I've only covered information relevant to my use case.


Data Repair API

The Data Repair API gives Adobe Analytics customers access to APIs that allow them to delete any data they want to remove. The API scans all rows of data for a particular report suite and deletes all data in the custom variables defined as part of the API request.

This API is currently only available for Adobe Analytics customers and requires a $0 (free) addendum to be added to your contract. In addition, up to 1.2 billion server calls are included for free; any volume over that will be billed separately. Please note that this information may vary, so please contact your Adobe account team for more information.


Use Case

The primary use case for leveraging the Data Repair API is to completely delete data from Adobe Analytics variables. The typical scenario is when a customer may have inadvertently captured PII data in an Analytics variable.

Data Repair API Request

The official documentation covers all information around the prerequisites (admin console, console.io access token, global company id etc.) and caveats so I highly recommend that you read that. In this section, I'll cover how I went about sending the API requests using Postman.

Populate the access token, client ID (from the I/O Console) and global company ID (from Analytics) as headers in the /serverCallEstimate GET call.


Pass the start and end dates as query string parameters in the same request. The API response returns a validation token along with the total number of server calls.

The next request is the actual delete POST request, where we specify the variable that needs to be deleted. You can add multiple variables as part of the same request, but I only sent a request for eVar50 in this example. Also, take a look at the response, which provides us with a job ID and status.
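The delete request can be sketched outside of Postman as well. The endpoint path and body shape below follow Adobe's Data Repair API documentation, but the company ID, report suite ID, tokens and date range are all placeholders.

```javascript
// Hedged sketch of the Data Repair API delete (POST) request.
function buildRepairJobRequest(companyId, rsid, validationToken, accessToken, clientId) {
  return {
    url:
      `https://analytics.adobe.io/api/${companyId}/datarepair/v1/${rsid}/job` +
      `?validationToken=${validationToken}` +
      `&dateRangeStart=2020-01-01&dateRangeEnd=2020-01-31`,
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${accessToken}`,       // I/O Console access token
        'x-api-key': clientId,                        // I/O Console client ID
        'x-proxy-global-company-id': companyId,       // global company ID from Analytics
        'Content-Type': 'application/json',
      },
      // Delete every value captured in eVar50 for the date range.
      body: JSON.stringify({ variables: { evar50: { action: 'delete' } } }),
    },
  };
}

const req = buildRepairJobRequest('mycompany', 'myrsid', 'VTOKEN', 'ATOKEN', 'CID');
console.log(req.url);
```

The validationToken comes from the preceding /serverCallEstimate call, which is why the two requests must use the same date range.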

So, that was the extent of what I will cover but a lot of other useful information is covered in the official Adobe documentation.

Impact on Adobe Analytics Data

Once the request is processed, all data is deleted from Adobe Analytics as shown below.


What Else Would I Like to See Added

Even though I'm happy that we finally have a productized solution for deleting PII data, I would like to see the following enhancements added to it:

  • A user interface to make these requests in addition to APIs
  • Ability to add regex or conditional logic built into the UI to selectively delete data instead of deleting everything 
  • Make this API available for other solutions including AAM and others
  • This is not a big deal, but when I tried to send multiple requests (one for each variable), I ran into the following error. It would be nice if users were able to send multiple requests without waiting. Regardless, you can get around this by sending a delete request for multiple variables as part of the same call.

All in all, it is a great tool for customers who want to delete data using a productized solution at a much lower cost. Given that I learnt about this API recently, I wanted to share my learnings with everyone. Hope you found this post informative.

Saturday, October 24, 2020

Marketo Integration with Adobe Experience Cloud Solutions

The Experience Cloud ID service consists of many solution integrations which I've written about in the past. Its true value comes to the fore through the various integrations that help clients realize the return on their investment in Adobe technology. A relatively newer addition to the Experience Cloud ecosystem is Marketo, which Adobe acquired a few years ago for its rich cross-channel activation and marketing automation capabilities, primarily in the B2B space, though Marketo can be leveraged for B2C use cases as well.

I know that doesn't do justice to how powerful Marketo really is so here's a high-level overview of some of its capabilities especially around marketing automation:

Let's start by understanding what marketing automation is. It is technology that automates the measurement and orchestration of omnichannel marketing initiatives, simplifying lead management and nurturing, event marketing, personalization, and regular and triggered emails or SMS. I have worked extensively in the automotive vertical in the past and know how crucial it is for automotive companies, or any company, to manage their leads throughout the customer journey, from awareness to purchase. Marketo is the perfect system to do that, and if you combine it with the power of the Adobe Experience Cloud, you are truly able to orchestrate and measure the customer journey from start to finish.

In this post, I will walk through the integration of Marketo with other Adobe Experience Cloud solutions, focusing more on Audience Manager, but the general process is the same for Adobe Analytics and Target. Please note I'm specifically referring to Marketo Engage but will call it Marketo in this post.


Use Cases

Let's start with some common use cases that can be executed with this integration: 

  • Cross-device media activation of leads (B2B or B2C data from Marketo activated in Audience Manager leveraging the device graph)
  • Event-driven (signups, abandonments etc.) user messaging (in Marketo based on user behavioral data from Analytics)
  • Cross-channel and cross-device personalization (via Target and Marketo based on onboarded data, user behavioral data from Analytics leveraging the device graph in Audience Manager)


Prerequisites


Let's take a closer look at some of the prerequisites with this integration.
  • An Adobe org admin can enable this integration by mapping the IMS org in Marketo (Admin section).
  • It is recommended to set up the cookie sync between the Adobe Experience Cloud ID Service and Marketo's munchkin.js as early as possible to ensure a higher match rate. The "cookie sync" happens automatically as long as both scripts are present on the page.
  • The other important piece to keep in mind is that the munchkin cookie needs to be matched to a known lead with an email address (via a hashed ID) when users authenticate, submit a form or click on an email, given that hashed email IDs are sent to AAM.

Architecture


This is a slightly detailed architecture diagram showing the bi-directional integration between Marketo and Audience Manager; the general process is the same for Adobe Analytics and Target (currently it's only Marketo -> Target audience sharing).
  • One thing to note in this architecture is that the Marketo to AAM integration is currently manual, whereas the AAM -> Marketo integration is automated: AAM audiences are refreshed in real-time with a backfill done every 24 hours. Step #4 is explained in more detail in Marketo's official documentation.
  • Another thing to note is that in Marketo, there will be an option to either "Send to Experience Cloud" (hashed email ids) or "Sync from Experience Cloud Audience" (ECIDs) to send and receive segments respectively. For Target, it's currently a one-way sync from Marketo.


Other Facts


Finally, let's take a quick look at some other interesting facts about the integration with Marketo:
  • Any email lead data sent from Marketo to Audience Manager must be hashed (handled automatically by Marketo if these are captured).
  • There is already an integration between AEM and Marketo where Marketo can receive assets from AEM to embed in emails. More information can be found here.
  • An integration between Ad Cloud, Real-Time CDP and Marketo is also possible. I have not worked with it directly but will share once I learn more.
  • This integration is not PHI compliant but can be used by any customer not mandating PHI compliance.


Hope this gave you a general understanding of how this integration works. Feel free to share your use cases for this integration or let me know if you have any questions.

Sunday, August 23, 2020

Medium Blog: Power Personalized Experiences with Project Firefly

It's never been more important for organizations of all sizes to personalize their experiences for their customers and audiences, and AEM and Adobe Target are two stellar ways to do this. 

Here's link to a post which I coauthored to show how to use Project Firefly, our framework for building custom, cloud-native Adobe apps, to integrate AEM and Target in a separate UI to more easily achieve your personalization goals.

Monday, July 27, 2020

Comparison Between Adobe Analytics and Customer Journey Analytics

Adobe Analytics has long been the undisputed leader in the world of Web Analytics and is still a marquee product for analyzing web and mobile app data. It is the bread and butter for consultants and data analysts worldwide who work on enterprise level data. However, just like any enterprise level product, it does come with its share of challenges. I wrote some posts last year outlining some of these challenges. This post lists a challenge we face while uploading classification data in Adobe Analytics and this article talks about the implication of uploading historical data (see point #3).

So, is there a solution that can make these challenges go away? YES there is, and it is Customer Journey Analytics. Customer Journey Analytics, or CJA, is an enterprise-wide analytics product built on Adobe Experience Platform. CJA allows us to join different data sources (online and offline) to give a complete view of our customers in real-time across channels. Please note that CJA is considered an add-on to Adobe Analytics, is also available for non-Platform (AEP) customers and works natively with Adobe Experience Platform.


In this article, I'll compare Adobe Analytics with CJA based on a set of standard capabilities which are common between the two solutions and highlight some of the differences. The writeup is long but I've combined all the content in a single matrix at the end so feel free to scroll down to view it in one tabular view.



Adobe Analytics

In this section, I've listed the various capabilities of Adobe Analytics and added a high level writeup explaining each of these separately. I've done the same for Customer Journey Analytics.


1.    Data Capture 
o   Primarily takes place based on the AppMeasurement library (client-side-web), Mobile SDK (mobile app), Data insertion API and Bulk Data Insertion API (server-side).

2.   Data Usage
o   Data is stored in Report Suites usually setup to receive data globally or individually based on the requirement.
o  Virtual Report Suites (VRS) can be created to “split” data based on web/mobile, region or Business group and can be setup based on custom session timeouts, expiration and time zones. 

3.   Reporting and Analysis
o   Data is visualized in Analysis Workspace or the legacy UI.
o  Workspace panel includes Freeform, Cohort, Fallout etc. options available to visualize data.
o  Calculated metrics can be created, and marketing channels can be used for further analysis.
o  Robust data export capabilities (PDF, CSV etc. formats) as well as access to raw data feeds.
o  Ability to setup alerts in case of anomalies.

4.   Identity
o   Primarily based on cookies for client-side web tagging. 
o  Based on ECID for mobile app (tied to each installed instance of the app).
o  Customer IDs converted to ECID for server-side implementations in general.
o  Device graph data can be accessed via the People metric or leveraged via Cross-Device Analytics.

5.   Segmentation
o   Segmentation built into Analysis Workspace
o   Visitor, Visit and Hit segment containers available.
o   Sequential segmentation and exclusion capabilities available to users.

6.   Data Limitations
o   Limited to 200 eVars/props and 1000 events.
o   UI limited to 500K unique rows of data per month (Low Traffic).

7.   Data Classifications
o   Classifications are subject to the same restriction as the UI: only the top 500K rows can be classified.

8.  Historical Data Ingestion
o   Historical data can be sent in, but out-of-order hits can affect the sequence of events and the attribution of eVars and marketing channels.

9.   User Permissions
o   User permissions are granted via the Admin Console at the product profile level, with granular control over report suites etc.

10. Data Latency
o   Data can take up to 2 hours to be fully available in Adobe Analytics.






Customer Journey Analytics

In this section, I've put CJA through the same set of capabilities as I did for Adobe Analytics. Please note that there are some features that CJA lacks compared to Adobe Analytics which the product team is working to add support for. Those are explained in more detail here.

1.    Data Capture
o   Data needs to be conformed to Adobe Experience Platform’s XDM schema to bring in any type of data.
o   The Web SDK is used for real-time data streaming, and a streaming API will be available for sending data server-side.

2.   Data Usage
o   Data is stored in datasets created within Adobe Experience Platform and added to CJA as Connections.
o  Data Views are similar to VRS; they allow us to define data based on the type of datasets being analyzed, as well as set custom session timeouts, expirations and separate time zones.

3.   Reporting and Analysis
o   Data in CJA is visualized in Analysis Workspace.
o   Workspace panel includes Freeform, Cohort, Fallout etc. options available to visualize data.
o   Calculated metrics can be created for further analysis; marketing channel support is not available yet but is planned.
o   No current ability to export data from CJA (Workspace), but support is planned. However, Query Service and the Data Access API provide the ability to export data.
o   No current ability to setup alerts but support is planned.

4.   Identity
o   Tied directly to the Namespace defined within Adobe Experience Platform.
o   ID can be based on anything be it cookies, CRM id, Loyalty ID or Phone number.
o   Custom namespaces can be defined.
o   Data in the device graph is NOT available yet but support is planned.

5.   Segmentation
o   Filters built into Analysis Workspace.
o   Person, Session and Event segment containers available.
o   Leverages the same standard segmentation UI/features as Adobe Analytics.

6.   Data Limitations
o   Unlimited metrics and dimensions; data in eVars/props is available in XDM format within CJA.
o   Unlimited number of rows and unique values.

7.   Data Classifications
o   Lookup Datasets created in Platform are not subject to volume restrictions, though there is a 1 GB limit which isn't "enforced".

8.  Historical Data Ingestion
o   Any missing historical data can be uploaded into Adobe Experience Platform and then leveraged in CJA including support for out of order hits for a person.

9.   User Permissions
o   Only product admins (not all users) can now perform granular tasks such as deleting, updating and sharing Workspace dashboards with other users.

10. Data Latency
o   Data isn’t available in near real-time and can take up to 2 hours, but real-time support is being looked into.



Here's the matrix which consolidates the capabilities compared above in a tabular format. Please note that I took a stab at also calling out which solution is (currently) better for a particular capability by adding a checkmark. If there's no checkmark, then it means that the two solutions are on par with each other or support is planned to add that feature to CJA by the product team.


Hope this article provided you with more information and context to identify the similarities and differences between Adobe Analytics and Customer Journey Analytics. If you analyze large amounts of dimensional data (exceeding 500K unique rows per month), often analyze customer data across multiple channels, need to add missing historical "hit level" data after the fact, or connect offline data with online data to get a single view of the customer, then you should seriously consider CJA.

Are you in the process of considering this tool or have any further questions? Feel free to post them here.