
Backup BizTalk Server (BizTalkMgmtDb) job failed with BACKUP LOG cannot be performed because there is no current database backup


Recently, I was writing a new article, "BizTalk Server Tips and Tricks: How to Backup (other) BizTalk Custom Databases" (to be released soon as a guest post on the BizTalk360 blog), which explains how you can configure the Backup BizTalk Server (BizTalkMgmtDb) job to back up additional BizTalk custom databases (RosettaNet, ESB Toolkit, …), when I got an error message on the job saying: BACKUP LOG cannot be performed because there is no current database backup. [SQLSTATE 42000] (Error 4214). The full error was this:

Date 6/5/2017 2:30:00 PM
Log Job History (Backup BizTalk Server (BizTalkMgmtDb))

Step ID 3
Server BTS02

Job Name Backup BizTalk Server (BizTalkMgmtDb)
Step Name MarkAndBackupLog
Duration 00:00:01
Sql Severity 16
Sql Message ID 3014
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted 0

Message

Executed as user: domain/username. Processed 1 pages for database ‘BAMPrimaryImport’, file ‘BAMPrimaryImport_log’ on file 1. [SQLSTATE 01000] (Message 4035) BACKUP LOG successfully processed 1 pages in 0.037 seconds (0.118 MB/sec). [SQLSTATE 01000] (Message 3014) Processed 12 pages for database ‘BizTalkDTADb’, file ‘BizTalkDTADb_log’ on file 1. [SQLSTATE 01000] (Message 4035) BACKUP LOG successfully processed 12 pages in 0.051 seconds (1.790 MB/sec). [SQLSTATE 01000] (Message 3014) Processed 9 pages for database ‘BizTalkMgmtDb’, file ‘BizTalkMgmtDb_log’ on file 1. [SQLSTATE 01000] (Message 4035) BACKUP LOG successfully processed 9 pages in 0.054 seconds (1.283 MB/sec). [SQLSTATE 01000] (Message 3014) Processed 67 pages for database ‘BizTalkMsgBoxDb’, file ‘BizTalkMsgBoxDb_log’ on file 1. [SQLSTATE 01000] (Message 4035) BACKUP LOG successfully processed 67 pages in 0.101 seconds (5.129 MB/sec). [SQLSTATE 01000] (Message 3014) Processed 1 pages for database ‘BizTalkRuleEngineDb’, file ‘BizTalkRuleEngineDb_log’ on file 1. [SQLSTATE 01000] (Message 4035) BACKUP LOG cannot be performed because there is no current database backup. [SQLSTATE 42000] (Error 4214) BACKUP LOG is terminating abnormally. [SQLSTATE 42000] (Error 3013) BACKUP LOG successfully processed 1 pages in 0.051 seconds (0.086 MB/sec). [SQLSTATE 01000] (Error 3014). The step failed.

Backup BizTalk Server (BizTalkMgmtDb) job failed BACKUP LOG cannot be performed because there is no current database backup.

Cause

If you pay attention to the error message, you will notice that the job failed in the third step, MarkAndBackupLog, which is responsible for backing up the BizTalk Server database logs.

You need to remember that, by default, the Backup BizTalk Server (BizTalkMgmtDb) job only makes a full backup of the databases once a day, and backs up the database logs every 15 minutes. However, to be able to back up the log of a particular database, a previous full backup must have been performed and registered in the "adm_BackupHistory" table in the BizTalkMgmtDb database. You can validate this by executing the following script:

USE [BizTalkMgmtDb]

SELECT DISTINCT [DatabaseName]
FROM [BizTalkMgmtDb].[dbo].[adm_BackupHistory]

In my case, I was configuring an additional database to be backed up via the Backup BizTalk Server (BizTalkMgmtDb) job, and at that time, the full backup of the BizTalk Databases had already occurred.

Solution

To fix the "BACKUP LOG cannot be performed" problem in the Backup BizTalk Server job, you need to:

  • Execute the “sp_ForceFullBackup” stored procedure present in the BizTalkMgmtDb database.
USE [BizTalkMgmtDb]

EXEC sp_ForceFullBackup
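
If you want to confirm the flag was set before the next job run, you can query the adm_ForceFullBackup table, which is where, on a default installation, the stored procedure registers the request (an optional check; the table and column names are assumed from a default BizTalk configuration):

USE [BizTalkMgmtDb]

SELECT [ForceFull]
FROM [dbo].[adm_ForceFullBackup]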

The next time you run the Backup BizTalk Server job, it will perform a full backup of all BizTalk databases, including all your BizTalk custom databases. The third step, MarkAndBackupLog, will then execute successfully, and so will the job once again.

Fix: Backup BizTalk Server (BizTalkMgmtDb) job failed BACKUP LOG cannot be performed because there is no current database backup.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

The post Backup BizTalk Server (BizTalkMgmtDb) job failed with BACKUP LOG cannot be performed because there is no current database backup appeared first on BizTalkGurus.


BizTalk Server Tips and Tricks: How to Backup (other) BizTalk Custom Databases


During my sessions about BizTalk Server Tips and Tricks, I normally ask: what do RosettaNet, ESB and UDDI have in common? And the answer is: they are all optional BizTalk features that are not part of the primary installation process; you need to execute "secondary" installation processes to add these features. These installation processes will create BizTalk custom databases to support all of these new optional features. But the big questions here are: do you think these databases are being backed up? And if not, how do you back up (other) BizTalk custom databases?

Do you think that these databases are being backed up?

To answer the first question: No!

Because these BizTalk custom databases (we call them "custom databases" because they support optional features that are not part of the primary installation process) are not installed by default with BizTalk Server, they are not included in the default list of databases to be marked and backed up by the Backup BizTalk Server job. The default list of databases that are normally backed up by the Backup BizTalk Server job is:

  • BAMAlertsApplication
  • BAMPrimaryImport
  • BizTalkDTADb
  • BizTalkMgmtDb
  • BizTalkMsgBoxDb
  • BizTalkRuleEngineDb
  • SSODB

How to Backup (other) BizTalk Custom Databases?

If you want the Backup BizTalk Server job to back up these additional BizTalk custom databases, you must manually add the databases to the Backup BizTalk Server job.

You can achieve this by:

  • Open Windows Explorer and browse to the "Schema" directory in the BizTalk installation folder, normally:
    • C:\Program Files (x86)\Microsoft BizTalk Server <version>\Schema
  • Run "Backup_Setup_All_Tables.sql" and then "Backup_Setup_All_Procs.sql" against each of the BizTalk custom databases that you want to back up. This creates the necessary table, procedures and roles, and assigns permissions to the stored procedures.
  • After that, you need to modify the adm_OtherBackupDatabases table, in the BizTalk Management (BizTalkMgmtDb) database, to include a row for each of the new BizTalk custom databases.
    • Type the new server and database names in the corresponding columns, as shown in the following list (a sample INSERT statement follows the list):
      • DefaultDatabaseName: The friendly name of your custom database.
      • DatabaseName: The name of your custom database.
      • ServerName: The name of the computer running SQL Server.
      • BTSServerName: The name of the BizTalk Server. This value is not used, but it must contain a value nonetheless.
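
For example, a minimal sketch of such an insert, assuming a custom database named "BizTalkRosettaNetDb" hosted on a SQL Server named "BTSSQL01" and a BizTalk Server named "BTS02" (substitute your own names):

USE [BizTalkMgmtDb]

INSERT INTO [dbo].[adm_OtherBackupDatabases]
       ([DefaultDatabaseName], [DatabaseName], [ServerName], [BTSServerName])
VALUES ('BizTalkRosettaNetDb', 'BizTalkRosettaNetDb', 'BTSSQL01', 'BTS02')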

To complete the process, you must force the Backup BizTalk Server (BizTalkMgmtDb) job to perform a full backup of the databases; otherwise, you will receive the following error:

  • BACKUP LOG cannot be performed because there is no current database backup. [SQLSTATE 42000] (Error 4214) BACKUP LOG is terminating abnormally. [SQLSTATE 42000] (Error 3013)

To do that, you need to:

  • Execute the “sp_ForceFullBackup” stored procedure present in the BizTalkMgmtDb database.
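
In T-SQL, that's the same one-liner shown in the previous post:

USE [BizTalkMgmtDb]

EXEC sp_ForceFullBackup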

The next time you run the Backup BizTalk Server job, it will back up all your BizTalk custom databases.

Note: I do not recommend adding any of your application-support custom databases to the Backup BizTalk Server job, since they may interfere with the execution time of this job. If the Backup BizTalk Server job starts to take a long time to execute, it may also affect the overall performance of the BizTalk platform.

Stay tuned for new BizTalk Server Tips and Tricks!

Check out the first blog of the series BizTalk Server Tips and Tricks: Enabling BAM Add-In for Excel 2016.

Author: Sandro Pereira

Sandro Pereira is an Azure MVP and works as an Integration consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc.

The post BizTalk Server Tips and Tricks: How to Backup (other) BizTalk Custom Databases appeared first on BizTalkGurus.

The Integrate Button


Introduction

Before I read the book about Continuous Integration by Paul Duvall, Stephen M. Matyas III, and Andrew Glover, I thought that CI meant that we just create a deployment pipeline in which we can easily automate the deployment of our software. That, and the fact that developers integrate continuously with each other.

I’m not saying that it’s a wrong definition, I’m saying that it might be too narrow for what it really is.

Thank you, Paul, Stephen, and Andrew, for the inspiring book and the motivation to write this post.

Automation

Several things became clear to me when studying CI. One of these things is that everything is based on the principle of automation. The moment you start thinking "I can't automate this" is the moment you should ask yourself whether that is really the case.

CI is all about automation. We automate the Compilation (different environments, configurations), Testing (unit, component, integration, acceptance…), Inspection (coverage, complexity, technical debt, maintainability index…), Documentation (code architecture, user manual, schemas…).

We automate the build that runs all these steps, we automate the feedback we get from it, …

You can automate almost everything. Things that you can't automate are, for example, Manual Testing. The reason is that the definition of manual testing is that you let a human test your software. You let the human decide what to test. You can in fact automate the environment in which this human must test the software, but not the testing itself (otherwise it wouldn't be called "manual" testing).

That’s what most intrigued me when studying CI – the automation. It makes you think of all those manual steps you must take to get your work done. All those tiny little steps that by itself aren’t meaning much but are a big waste if you see them all together.

If you always must build your software locally before committing, couldn't we just place the commit commands at the end of our build script?

Building

It’s kind of funny when people talk about “building” software. When some people say: “I can’t build the software anymore”; don’t always mean “build”; they mean “compile”. In the context of Continuous Integration, the “compile” step is only the first step of the pipeline but it’s sometimes the most important step to people. Many think of it as:

“If it compiles == it works”

When you check out some code and the Build fails (build, not compilation), that could mean several things: failed Unit Tests, missing Code Coverage, exceeded maximum Cyclomatic Complexity, … but also a compilation failure.

In the next paragraphs, when I talk about a “build” I’m talking in the context of CI and don’t mean “compile”.

Continuous Building Software

Is your build automated?
Are your builds under 10 minutes?
Are you placing the tasks that are most likely to fail at the beginning of your build?
How often do you run your integration builds? Daily? Weekly? At every change (continuously)?

  • Every developer should have the ability to run (on demand) a Private Build on his or her machine.
  • Every project should have the ability to run (on demand, polled, event-driven) an Integration Build that includes slower tasks (integration/component tests, performance/load tests…).
  • Every project should have the ability to run (on demand, scheduled) a Release Build to create deployable software (typically at the end of the iteration), which must also include the acceptance tests.

There are tremendous build script tools available to automate these kinds of things. NAnt, Psake, FAKE, Cake… are a few (I use FAKE).

Continuous Preventing Development/Testing

Are your tests automated?
Are you writing a test for every defect?
How many asserts per test? Limit to one?
Do you categorize your tests?

“Drive to fix the defect and prevent from reoccurring”

Many other posts discuss the Test-First and Test-Driven mindset and the reasoning behind it, so I will not discuss this here. What I will discuss is the reaction people have to a failing test in your build.

A failed build should trigger a "Stop the presses" event within the team. Everyone should be concerned about the failure and should help each other to make the build succeed again as quickly as possible. Fixing a failed build should be the responsibility of the team and not (only) of the person who broke the build.

But what do you do when the build failed? What reaction should you have?

First, expose the defect by writing a test that passes. When that new test passes, you have proven the defect and can start fixing it. Note that we don't write a failing test!

There are three reasons why you should write a test that passes for a defect (we’re using Test-Driven Development, right?):

  1. It’s difficult to write a failing test that uses the assertion correctly because the assertion may not be added when the test doesn’t fail anymore which means you don’t have a test that passes but a test that’s just not failing.
  2. You’re guessing what the fix should alter in behavior == assumption.
  3. If you have to fix the code being tested, you have a failing test that works but one that doesn’t verify the behavioral change.

To end the part about testing, let me be clear on some points that many developers fail to grasp: the different kinds of software tests. I have encountered several definitions of these tests, so I've merged them here for you. I think the most important part is that you test all these kinds of aspects, not what you choose to call them (Acceptance Tests or Functional Tests):

  • Unit Tests: testing the smallest possible “units” of code with no external dependencies (including file system, database…), written by programmers – for programmers, specify the software at the lowest level…
    Michael Feathers has some Unit Test Rulz that specify whether a test can be seen as a Unit Test.
  • Component Tests encapsulate business rules (could include external dependencies), …
  • Integration Tests don’t encapsulate business rules (could include external dependencies), tests how components work together, Plumbing Tests, testing architectural structure, …
  • Acceptance Tests (or Functional Tests) written by business people, define the definition of “done”, purpose to give clarity, communication, and precision, test the software as the client expects it, (Given > When > Then structure), …
  • System Tests test the entire system, could sometimes overlap with the acceptance tests, test the system in a developer perspective…

Continuous Inspection

Can you show the current amount of code complexity?
Performing automated design reviews?
Monitoring code duplication?
Current code coverage?
Produce inspection reports?

It wouldn’t surprise you that Code Inspection is maybe not the most “sexy” part of software development (is Code Testing sexy?). But nonetheless it’s a very important part of the build.

Try asking some projects what their current Code Coverage is. Maintainability Index? Technical Debt? Duplication? Complexity? …

All those elements are so easily automated, but so few teams adopt this mindset of Continuous Inspection. These elements are a good starting point:

Continuous Deployment

Can you rollback a release?
Are you labelling your builds?
Deploy software with a single command?
Deploy with different environments (configuration)?
How do you handle fixes after deployment?

At the end of the pipeline (in a Release Build), you could trigger the deployment of the project. Yes, you should include the Acceptance Tests in here because this is the last step before the actual deployment.

The deployment itself should be done with one “Push on the Button”; as simple as that. In Agile projects, the deployment of the software is already done at the very beginning of the project. This means that the software is placed at the known deployment target as quickly as possible.

That way the team gets feedback as quickly as possible about how the software acts in "the real world".

Continuous Feedback

When you deploy, build, test, … something, wouldn’t you want to know as quickly as possible what happened? I certainly do.

One of the first things I always do when starting on a project is checking whether I (and the team) get the right notifications. As a developer, I want to know as quickly as possible when a build succeeds/fails. As an architect, you want to know what the current documentation of the code base is and what the code looks like in schemas; as a project manager, you may want to know whether the acceptance tests succeeded, so the client gets what he/she wants…

Each function has its own responsibilities and its own reason to want feedback on things. You should be able to give them this feedback!

I use Catlight for my build feedback, work item tracking, release status… This tool may in the future support pull request notifications too.

Some development teams have an actual big colorful lamp that indicates the current build status. Red = Failed, Green = Successful and Yellow = Investigating. Some lamps turn lighter/darker red if the build stays in a "failed" state for too long.

Conclusion

Don’t call this a full-CI summary because it is certainly not. See this as a quick introduction of how CI can be implemented in a software project with the high-level actions in place and what you can improve in your project automation process. My motto is that anything can be improved and so, be more automated.

I would also suggest you read the book I talked about and/or check the ThoughtWorks site for more information on the recent developments in the CI community.

Start integrating your software to develop software with less risk and higher quality. Make it so automated that you just have to "Push the Button": The Integrate Button.

The post The Integrate Button appeared first on BizTalkGurus.

Azure Logic Apps – Retry Policy (Middleware Friday)


This blog will give you a recap of the feature content that was discussed as a part of Episode 22 of Middleware Friday. In this episode, Kent Weare discussed a small, yet very interesting feature in Azure Logic Apps – the Retry Policy.

Logic App Retry Policy

To understand the Retry Policy better, let's assume we have an endpoint that is consumed by Logic Apps. If the endpoint has some intermittent issues and the initial request fails to execute, retries will be attempted based on the "retry count" default settings. By default, the retry action will execute 4 additional times over 20-second intervals. The retry policy applies to intermittent-failure HTTP codes like 408, 429 and the 5xx series. You can define the retry policy as follows:

"retryPolicy" : {
     "type": "&lt;type-of-retry-policy&gt;",
     "interval": &lt;retry-interval&gt;,
     "count": &lt;number-of-retry-attempts&gt;
}

The maximum number of retry attempts that can be made is 4. If you try tweaking the retry count in the JSON beyond that, during the Logic App execution you will notice an exception such as "The provided retry count of ‘value’ is not valid. The retry count must be a positive number no greater than ‘4’". Similarly, the maximum delay for a retry can be set to 1 hour, while the minimum delay is 5 seconds. Azure Logic Apps uses the ISO 8601 standard for the above-mentioned time durations, and you need to define the interval in one of the following formats –

PnYnMnDTnHnMnS
PnW
P<date>T<time>
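
For example, a policy that retries three times, with a one-minute delay between attempts, could look like this (a sketch using the fixed-interval policy type; "PT1M" is one minute in ISO 8601 notation):

"retryPolicy" : {
     "type": "fixed",
     "interval": "PT1M",
     "count": 3
}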

Demo – With Default Retry Mechanism

Kent demonstrated the Azure Logic App Retry Policy with the help of the following Logic App example –

Prerequisite: Create a Logic App before proceeding with the steps shown below

  1. First, let’s start with a Blank Logic App. In the Logic App designer, we will get started with creating a simple HTTP request trigger.
    http post url request
  2. Next, we will create an HTTP POST method and give a fake URL (URI) to allow the retry mechanism to kick in
    http method details
  3. Finally, an HTTP response action with the status code of 200 to complete the Logic App
    http response status code 200

When we execute the Logic App, you will notice that the default retry mechanism (4 attempts, once every 20 seconds) will kick in.


After about 70 seconds, the fourth retry is performed.


Finally, after 80 seconds, the Logic App execution will fail and the corresponding error will be displayed in the Logic App Designer.


You can alter the retry mechanism by entering the code view and modifying the code with the values as shown previously in the blog post.


Therefore, the demo clearly shows how the retry policy works out of the box in Logic Apps and how you can customize the retry policy within its limits. You can watch the video of this session here –

Feedback Survey

If you have any specific topics of interest at Middleware Friday, you can fill in this survey. Alternatively, you can tweet at @MiddlewareFri or drop an email to middlewarefriday@gmail.com with your topics of preference.

You can watch the Middleware Friday sessions here.

Author: Sriram Hariharan

Sriram Hariharan is the Senior Technical and Content Writer at BizTalk360. He has over 9 years of experience working as documentation specialist for different products and domains. Writing is his passion and he believes in the following quote – “As wings are for an aircraft, a technical document is for a product — be it a product document, user guide, or release notes”.

The post Azure Logic Apps – Retry Policy (Middleware Friday) appeared first on BizTalkGurus.

10 tips for enterprise integration with Logic Apps


Democratization of integration

Before we dive into the details, I want to provide some reasoning behind this post. With the rise of cloud technology, integration takes a more prominent role than ever before. In Microsoft’s integration vision, democratization of integration is on top of the list.

Microsoft aims to take integration out of its niche market and offer it as an intuitive and easy-to-use service to everyone. The so-called Citizen Integrators are now capable of creating lightweight integrations without the steep learning curve that, for example, BizTalk Server requires. Such integrations are typically point-to-point, user-centric, and have some accepted level of fault tolerance.

As an Integration Expert, you must be aware of this. Enterprise integration faces completely different requirements than lightweight citizen integration: loose coupling is required, no message loss is accepted because this is mission-critical interfacing, integrations must be optimized for operations personnel (monitoring and error handling), etc…

Keep this in mind when designing Logic App solutions for enterprise integration! Make sure you know your cloud and integration patterns. Ensure you understand the strengths and limits of Logic Apps. The advice below can give you a jump start in designing reliable interfaces within Logic Apps!

Design enterprise integration solutions

1. Decouple protocol and message processing

Once you created a Logic App that receives a message via a specific transport protocol, it’s extremely difficult to change the protocol afterwards. This is because the subsequent actions of your Logic App often have a hard dependency on your protocol trigger / action. The advice is to perform the protocol handling in one Logic App and hand over the message to another Logic App to perform the message processing. This decoupling will allow you to change the receiving transport protocol in a flexible way, in case the requirements change or in case a certain protocol (e.g. SFTP) is not available in your DEV / TEST environment.

2. Establish reliable messaging

You must realize that every action you execute, is performed by an underlying HTTP connection. By its nature, an HTTP request/response is not reliable: the service is not aware if the client disconnects during request processing. That’s why receiving messages must always happen in two phases: first you mark the data as returned by the service; second you label the data as received by the client (in our case the Logic App). The Service Bus Peek-Lock pattern is a great example that provides such at-least-once reliability.  Another example can be found here.

3. Design for reuse

Real enterprise integration is composed of several common integration tasks such as: receive, decode, transform, debatch, batch, enrich, send, etc… In many cases, each task is performed by a combination of several Logic App actions. To avoid reconfiguring these tasks over and over again, you need to design the solution upfront to encourage reuse of these common integration tasks. You can for example use the Process Manager pattern that orchestrates the message processing by reusing nested Logic Apps or introduce the Routing Slip pattern to build integration on top of generic Logic Apps. Reuse can also be achieved on the deployment side, by having some kind of templated deployments of reusable integration tasks.

4. Secure your Logic Apps

From a security perspective, you need to take into account both role-based access control to your Logic App resources and runtime security considerations. RBAC can be configured in the Access Control (IAM) tab of your Logic App or on a Resource Group level. The runtime security really depends on the triggers and actions you’re using. As an example: Request endpoints are secured via a Shared Access Signature that must be part of the URL, IP restrictions can be applied. Azure API Management is the way to go if you want to govern API security centrally, on a larger scale. It’s a good practice to assign the minimum required privileges (e.g. read only) to your Logic Apps.

5. Think about idempotence

Logic Apps can be considered as composite services, built on top of several API’s. API’s leverage the HTTP protocol, which can cause data consistency issues due to its nature. As described in this blog, there are multiple ways the client and server can get misaligned about the processing state. In such situations, clients will mostly retry automatically, which could result in the same data being processed twice at server side. Idempotent service endpoints are required in such scenarios, to avoid duplicate data entries. Logic Apps connectors that provide Upsert functionality are very helpful in these cases.

6. Have a clear error handling strategy

With the rise of cloud technology, exception and error handling become even more important. You need to cope with failure when connecting to multiple on-premises systems and cloud services. With Logic Apps, retry policies are your first resort to build resilient integrations. You can configure a retry count and interval at every action; there's no support for exponential retries or the circuit breaker pattern. In case the retry policy doesn't solve the issue, it's advised to return a clear error description within sync integrations and to ensure a resumable workflow within async integrations. Read here how you can design a good resume / resubmit strategy.
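
For reference, the retry policy is configured per action in the workflow definition. Here is a sketch on a hypothetical HTTP action (the name and URI are placeholders), reusing the retryPolicy format covered earlier on this page:

"HTTP_Get_Orders": {
    "type": "Http",
    "inputs": {
        "method": "GET",
        "uri": "https://example.com/api/orders",
        "retryPolicy": {
            "type": "fixed",
            "interval": "PT30S",
            "count": 2
        }
    },
    "runAfter": {}
}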

7. Ensure decent monitoring

Every IT solution benefits from good monitoring. It provides visibility and improves the operational experience for your support personnel. If you want to expose business properties within your monitoring, you can use Logic Apps custom outputs or tracked properties. These can be consumed via the Logic Apps Workflow Management API or via OMS Log Analytics. From an operational perspective, it's important to be aware that there is an out-of-the-box alerting mechanism that can send emails or trigger Logic Apps in case a run fails. Unfortunately, Logic Apps has no built-in support for Application Insights, but you can leverage extensibility (a custom API App or Azure Function) to achieve this. If your integration spans multiple Logic Apps, you must provide correlation in your monitoring / tracing! Find more details about monitoring in Logic Apps here.

8. Use async wherever possible

Solid integrations are often characterized by asynchronous messaging. Unless the business requirements really demand request/response patterns, try to implement them asynchronously. It comes with the advantage that you introduce real decoupling, both from a design and runtime perspective. Introducing a queuing system (e.g. Azure Service Bus) in fire-and-forget integrations results in highly scalable solutions that can handle an enormous amount of messages. Retry policies in Logic Apps must have different settings depending on whether you're dealing with async or sync integration. Read more about it here.

9. Don’t forget your integration patterns

Whereas BizTalk Server forces you to design and develop in specific integration patterns, Logic Apps is more intuitive and easier to use. This could come with a potential downside: that you forget about integration patterns, because they are not suggested by the service itself. As an integration expert, it's your responsibility to determine which integration patterns should be applied to your interfaces. Loose coupling is common for enterprise integration. You can for example introduce Azure Service Bus, which provides a Publish/Subscribe architecture. Its message size limitation can be worked around by leveraging the Claim Check pattern, with Azure Blob Storage. This is just one example of introducing enterprise integration patterns.

10. Apply application lifecycle management (ALM)

The move to a PaaS architecture should be done carefully and must be governed well, as described here. Developers should not have full access to the production resources within the Azure portal, because the change of one small setting can have an enormous impact. Therefore, it's very important to set up ALM, to deploy your Logic App solutions throughout the DTAP-street. This ensures uniformity and avoids human deployment errors. Check this video to get a head start on continuous integration for Logic Apps and read this blog on how to use Azure Key Vault to retrieve passwords within ARM deployments. Consider ALM an important aspect of your disaster recovery strategy!

Conclusion

Yes, we can! Logic Apps really is a fit for enterprise integration, if you know what you’re doing! Make sure you know your cloud and integration patterns. Ensure you understand the strengths and limits of Logic Apps. The Logic App framework is a truly amazing and stable platform that brings a whole range of new opportunities to organizations. The way you use it, should be depending on the type of integration you are facing!

Interested in more?  Definitely check out this session about building loosely coupled integrations with Logic Apps!

Any questions or doubts? Do not hesitate to get in touch!
Toon

The post 10 tips for enterprise integration with Logic Apps appeared first on BizTalkGurus.

Microsoft Integration Weekly Update: June 12


Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

Integration weekly update can be your solution. It’s a weekly update on the topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities empowered by Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

On-Premise Integration:

Cloud and Hybrid Integration:

Feedback

Hope this is helpful. Please feel free to let me know your feedback on the Integration weekly series.


The post Microsoft Integration Weekly Update: June 12 appeared first on BizTalkGurus.

Run BizTalk extension objects in Logic Apps


Extension objects are used to consume external .NET libraries from within XSLT maps. This is often required to perform database lookups or complex functions during a transformation. Read more about extension objects in this excellent blog.

Analysis

Requirements

We are facing two big challenges:

  1. We must execute the existing XSLTs with extension objects in Logic App maps
  2. On-premises Oracle and SQL databases must be accessed from within these maps

Analysis

It’s clear that we should extend Logic Apps with non-standard functionality. This can be done by leveraging Azure Functions or Azure API Apps. Both allow custom coding, integrate seamlessly with Logic Apps and offer the following hybrid network options (when using App Service Plans):

  • Hybrid Connections: most applicable for lightweight integrations and development / demo purposes
  • VNET Integration: if you want to access a number of on-premises resources through your Site-to-Site VPN
  • App Service Environment: if you want to access a high number of on-premises resources via ExpressRoute

As the pricing models are pretty much identical (in both cases we must use an App Service Plan), the choice was made for Azure API Apps. The main reason was the already existing WebAPI knowledge within the organization.

Design

A Site-to-Site VPN is used to connect to the on-premises SQL and Oracle databases. By using a standard App Service Plan, we can enable VNET Integration on the custom Transform API App. Behind the scenes, this creates a Point-to-Site VPN between the API App and the VNET, as described here. The Transform API App can be consumed easily from the Logic App, while being secured with Active Directory authentication.

Solution

Implementation

The following steps were needed to build the solution. More details can be found in the referenced documentation.

  1. Create a VNET in Azure. (link)
  2. Setup a Site-to-Site VPN between the VNET and your on-premises network. (link)
  3. Develop an API App that executes XSLTs with corresponding extension objects. (link)
  4. Provide Swagger documentation for the API App. (link)
  5. Deploy the API App. Expose the Swagger metadata and configure CORS policy. (link)
  6. Configure VNET Integration to add the API App to the VNET. (link)
  7. Add Active Directory authentication to the API App. (link)
  8. Consume the API App from within Logic Apps.

Transform API

The source code of the Transform API can be found here. It leverages Azure Blob Storage to retrieve the required files. The Transform API must be configured with the required app settings, which define the blob storage connection string and the containers where the artefacts will be uploaded.

The Transform API offers one Transform operation, which requires 3 parameters:

  • InputXml: the byte[] that needs to be transformed
  • MapName: the blob name of the XSLT map to be executed
  • ExtensionObjectName: the blob name of the extension object to be used
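
Assuming default Web API JSON serialization (where a byte[] is expressed as a base64 string), a request to the Transform operation could look roughly like this; the property values are placeholders and the exact contract is defined in the source on GitHub:

{
    "InputXml": "PD94bWwgdmVyc2lvbj0iMS4wIj8+PFJvb3QgLz4=",
    "MapName": "Sample_Map.xslt",
    "ExtensionObjectName": "Sample_ExtensionObjects.xml"
}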

Sample

You can run this sample to test the Transform API with custom extension objects.

Input XML

This is a sample input that can be provided as input for the Transform action.

Transformation XSLT

This XSLT must be uploaded to the right blob storage container and will be executed during the Transform action.

Extension Object XML

This extension object must be uploaded to the right blob storage container and will be used to load the required assemblies.
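
For reference, a BizTalk extension object file generally follows this shape (a sketch: the namespace must match the one referenced in your XSLT, the assembly attributes must match your own build, and the class name assumes the Common class lives in the TVH.Sample namespace):

<ExtensionObjects>
  <ExtensionObject Namespace="http://schemas.microsoft.com/BizTalk/2003/ScriptNS0"
                   AssemblyName="TVH.Sample, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
                   ClassName="TVH.Sample.Common" />
</ExtensionObjects>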

External Assembly

Create an assembly named TVH.Sample.dll that contains the class Common.cs. This class contains a simple method to generate a GUID. Upload this assembly to the right blob storage container, so it can be loaded at runtime.

Output XML

Deploy the Transform API, using the instructions on GitHub. You can easily test it using the Request / Response actions:

As a response, you should get the following output XML, that contains the generated GUID.

Important remark: Do not forget to add security to your Transform API (Step 7), as it is accessible on the public internet by default!

Conclusion

Thanks to the Logic Apps extensibility through API Apps and their VNET integration capabilities, we were able to build this solution in a very short time span. The solution offers an easy way to migrate BizTalk maps as-is towards Logic Apps, which is a big time saver! Access to resources that remain on premises is also a big plus nowadays, as many organizations have a hybrid application landscape.

Hope to see this functionality out-of-the-box in the future, as part of the Integration Account!

Thanks for reading. Sharing is caring!
Toon

The post Run BizTalk extension objects in Logic Apps appeared first on BizTalkGurus.

We’re just days away from INTEGRATE 2017!


It’s time for you to pack your bags and prepare for your trip to London for INTEGRATE 2017 — the biggest Integration focused conference of the year. We are almost there! (just a week away before the event). We decided to write this blog with some last minute information to make it easy for you to attend the event. If you still haven’t booked your tickets, we have the last 10 tickets up for grabs on a first come first serve basis. Don’t miss out the chance to be at INTEGRATE 2017!

Attendee Count

We are expecting close to 380+ attendees this year for INTEGRATE 2017. It's quite amazing to see the response year after year for this event and the amount of hope the folks in the Microsoft Integration community place in BizTalk360 to consistently and successfully organize this event. We will be able to present the exact stats to you on the first day of the event.

Event Venue

Kings Place Events
90 York Way, London, N1 9AG.

The venue is located in the heart of London, just a five-minute walk from Kings Cross and St. Pancras International stations. If you are travelling from:

  • London Heathrow Airport – Kings Place is approximately 50 mins by train
  • London Gatwick Airport – Kings Place is approximately an hour by train and underground
  • London City Airport – Approximately 45 minutes by underground and DLR

There are high-speed services from Kent; the majority of trains from the North arrive at either Kings Cross or Euston (which is only a 10-minute walk away), and most underground lines stop at Kings Cross. St Pancras is also the home of Eurostar.

Quick Link: Tube Map to reach Kings Place

Event Registration

The registration desk will be open from 0730 hrs on Day 1. To ease the registration process, there will be 4 booths, categorized alphabetically (by your first name), for you to register on the 1st day. You will be provided with your conference ID badge. Please remember to wear your badge at all times.

The easiest way to make your way through the event venue is to follow the signage or simply reach out to one of our volunteers for any assistance.

Day 1 – It’s all Microsoft, Microsoft, and Microsoft sessions….

You simply cannot miss Day 1 of INTEGRATE 2017! We have lined up 9 sessions from the Microsoft Product Group team, starting off with the keynote speech by Jim Harrer on what's happened in the Hybrid Integration Platform over the past year and how AI is changing the way Microsoft thinks about enterprise application integration. The subject matter then slowly shifts to BizTalk, Enterprise Messaging, and finally into the vast ocean of Azure-related topics like Event Hubs, Logic Apps, Azure Functions, Microsoft Flow, and API Management. And probably, this is the best day to get your questions answered by the Microsoft Product Group or the community team present at the event. As Saravana Kumar, founder/CTO of BizTalk360, says,

If you cannot find an answer to your question in this room (INTEGRATE event), you probably will not be able to find an answer elsewhere.

Evening Drinks with Networking

We have arranged for evening networking after the end of Day 1 over some drinks. Enjoy your drink after an informative Day 1 at INTEGRATE 2017 and get a chance to meet fellow integration MVPs, the Product Group and people from the Microsoft Integration space.

The first half of Day 2 (till 11:45 AM) is also covered with sessions from the Microsoft Product Group, after which the remaining 1.5 days belong to the Integration MVPs.

Quick Link: INTEGRATE 2017 Agenda

Meet our Sponsors

INTEGRATE 2017 would not be the same without our sponsors and we would like to extend our thanks to our Titanium Sponsor Microsoft, Platinum Sponsor Codit, Gold Sponsors – Bouvet, Reply Solidsoft, Active Adapter, and our Silver sponsors – QuickLearn Training, Middleway, Affinus. You can walk through the sponsor booths on the mezzanine floor during coffee/lunch breaks and engage in a conversation.

BizTalk360 & ServiceBus360 Booths – Meet the team!

That’s not all! The core team from BizTalk360 & ServiceBus360 – the think tank team, Development folks, QA people, customer support team, client relationship group (who keep our customers happy!) are all available over the 3 days of event. Come over to the BizTalk360 and ServiceBus360 booths at the event venue to meet the team who work behind the scenes on these products.

Informal Entertainment on Day 1 Evening

We have some informal entertainment planned for Day 1 evening during the drinks/networking session.

Social Media – Post, Follow, Like, Comment, Share about the event

Don't let it be just one-sided at INTEGRATE! Come and join us on social media and spread the word about the event to the world. Show us how you are enjoying INTEGRATE by sharing photographs from the event venue.

Official Event Hashtag – #Integrate2017

If you are not attending the event, don’t worry! Simply follow us on –

Twitter: https://twitter.com/BizTalk360
Facebook: https://facebook.com/BizTalk360
Instagram: https://instagram.com/BizTalk360

Packing your stuff for travel

We care about our attendees who are travelling into London for INTEGRATE. We have people travelling all the way from New Zealand, flying over 30 hours, and folks from the US crossing the pond.

Temperatures are slightly on the warmer side during this time, but it can become overcast with spells of rain, so make sure you pack the right set of clothes. The average daytime temperature is around 23°C/73.4°F.

The dress code for INTEGRATE 2017 is standard Business Casual.

Wishing you a Safe Travel! See you at INTEGRATE 2017

On behalf of the INTEGRATE 2017 Event Management Team, I would like to wish you safe travels — whether you are travelling by plane, train, bus, or any other mode. We look forward to seeing you at the INTEGRATE 2017 event on June 26th at Kings Place. For any more details about INTEGRATE 2017, you can visit the event website.

See you in the next few days! 🙂

Author: Sriram Hariharan

Sriram Hariharan is the Senior Technical and Content Writer at BizTalk360. He has over 9 years of experience working as documentation specialist for different products and domains. Writing is his passion and he believes in the following quote – “As wings are for an aircraft, a technical document is for a product — be it a product document, user guide, or release notes”.

The post We’re just days away from INTEGRATE 2017! appeared first on BizTalkGurus.


Microsoft Integration Weekly Update: June 19


Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

Integration weekly update can be your solution. It’s a weekly update on the topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities empowered by Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

On-Premise Integration:

Cloud and Hybrid Integration:

IoT, Stream Analytics and other Big Data Stuff via The Azure Podcast

Feedback

Hope this is helpful. Please feel free to let me know your feedback on the Integration weekly series.

The post Microsoft Integration Weekly Update: June 19 appeared first on BizTalkGurus.

Automating BizTalk Administration tasks using BizTalk360 : Data Monitoring Actions


Introduction

On a day-to-day basis, a BizTalk administrator must perform a few monotonous activities such as terminating instances, enabling receive locations, ensuring the status of SQL jobs, etc. BizTalk360 has a few powerful features which help you to automate such monotonous tasks. These features are hidden gems and are overlooked by many BizTalk360 users, despite the availability of good documentation. That prompted me to start my new blog series called "Automating BizTalk administration tasks using BizTalk360". In this blog series, I will explain the automation capabilities which BizTalk360 brings to its users.

To start off with in this first blog I am focusing on “Data Monitoring Actions”.

What is Data Monitoring in BizTalk360?

As we are aware, BizTalk collects a diverse set of data into the message box database, the tracking database, the BAM primary import database and the ESB databases. BizTalk360 brings all this data into a single console and, on top of that, provides a powerful capability to set alerts based on various thresholds. This feature is called data monitoring.

The list below briefly explains the various types of data items which can be monitored.

  • Process Monitoring: monitor the number of messages being processed by receive ports and send ports; this is also popularly called "non-event monitoring". For example, if you want an alert when fewer than 50 messages are received in an hourly window during business hours, process monitoring is the best fit. Refer to the assist article Process Monitoring for more information.

  • Message Box Data Monitoring: set alerts on the number of suspended, running and dehydrated messaging instances. Refer to the assist article Message Box Data Monitoring for more information.

  • Tracking Data Monitoring: set alerts on tracked messaging events and tracked service instances. Refer to the assist article Tracking Data Monitoring for more information.

  • BAM Data Monitoring: set alerts on the data stored in BAM tables. Refer to the assist article BAM Data Monitoring for more information.

  • EDI Data Monitoring: set alerts on the EDI and AS2 reporting data stored in BAM tables. Refer to the assist article EDI Data Monitoring for more information.

  • ESB Data Monitoring: set alerts on the ESB data and exceptions stored in BAM and ESB tables. Refer to the assist article ESB Data Monitoring for more information.

  • Logic Apps Metrics Monitoring: set alerts on metrics emitted by Logic Apps. Refer to the assist article Logic Apps Metrics Monitoring for more information.

Message Box Data Monitoring Actions

In Message Box Data Monitoring, the user can configure queries to monitor service instances and messages. The monitoring service will send a notification to the users whenever the service instance/message count violates the threshold condition.

A Message Box Data schedule can be configured in Data Monitoring > Message Box Data. It can be scheduled at different frequencies (Daily, Weekly, and Monthly), based on the volume and the priority of the actions to take on service instances/messages.

Query Condition

BizTalk360 provides a highly advanced query builder for selecting precisely the expected suspended-instance data. While querying the Suspended/All In-Progress Service Instances, you can apply filters like Error Code, Application, Service Class, Service Name, etc.


Context Properties

BizTalk360 also provides message-context based queries for more business-friendly scenarios. Context/promoted properties from the message payload can be selected to identify the transactional message. In a data monitoring schedule, the user can choose which context/promoted properties should be included in an email alert.


Action on Service Instances

Operational users must closely watch suspended service instances in order to act on them, which is a tedious process to keep up all the time. The Message Box Data Monitoring feature will take automatic action on service instances when such actions are configured in the schedule. The monitoring service will terminate/resume the service instances based on the error or warning condition, without requiring any manual intervention.


Archiving & Downloading the Message Instances

Message content & context are required for auditing and other reconciliation purposes. If you have not enabled the tracking option, it is not possible to get hold of the data again. Keeping this in mind, we have implemented archiving of the message content and context when an action is taken on instances. In BizTalk360 Settings > System Settings, the archive and download locations for message instances must be configured in order to archive and download them. Automatic actions take the desired backup steps to make sure all the data is preserved before any action is taken.

Note: In order to take action on suspended service instances, the monitoring service account has to be created as a superuser in BizTalk360.


In the Data Monitoring dashboard, the status of every monitoring cycle is shown. When the user clicks on the status tab, it brings up details about the query result, task action, and exception summary.


In the Task Action tab, you can download each instance separately or, by using the "Click here" button, download all the instances to the server location. Service instance messages are downloaded to the server location as a Zip file, named with the activity ID of the monitoring run cycle.


Conclusion

Data Monitoring is an auto-monitoring feature for BizTalk administration which can take corrective actions, with all backup steps, in the event of any threshold violations. With just a one-time setup, BizTalk360 makes sure all your routine tasks are addressed without manual intervention. BizTalk360 also offers many more monitoring features which enable administrators to proactively monitor their BizTalk environment(s). The next article will look at auto correction on BizTalk artifacts and Logic Apps.

Author: Senthil Palanisamy

Senthil Palanisamy is a Technical Lead at BizTalk360 with 12 years of experience in Microsoft technologies. He has worked on various products across domains like Health Care, Energy and Retail.

The post Automating BizTalk Administration tasks using BizTalk360 : Data Monitoring Actions appeared first on BizTalkGurus.

Choosing between BizTalk360 and ServiceBus360 for monitoring Azure Service Bus


Recently, we received a few support emails where people were asking about the overlap between BizTalk360 and ServiceBus360 when it comes to monitoring Azure Service Bus. Which one should they go for? The question was also extended: if they are using Azure Logic Apps and Web APIs (web endpoints), which is the better product to opt for?

Given that both products have the capability to monitor Azure Service Bus, it's a valid question; let me try to clarify the positioning of both products.

BizTalk360

When we released BizTalk360 version 8.1, we introduced a bunch of Azure Monitoring capabilities in the product like:

The Web Endpoints Monitoring capability was also heavily enhanced to support features like adding query strings, body payloads, HTTP headers, etc. to the request message, and enriched validation like JSONPath, XPath, response time, etc. on the response message. These changes made the feature super powerful for monitoring SOAP and REST/HTTP-based web endpoints.

The long-term goal for us at BizTalk360 is to provide a consolidated, single-pane-of-glass operations, monitoring and analytics solution for customers who are using the Microsoft Integration Stack for their integration needs. In the upcoming 8.5 version, we are extending the Azure capability even further by bringing support for Azure Integration Accounts within BizTalk360.

If you are a Microsoft BizTalk Server customer and have slowly started leveraging Azure Service Bus, Logic Apps, API Apps, and Web APIs for your integration requirements, then BizTalk360 will be the ideal product for both managing and monitoring the entire infrastructure. Typically, Microsoft BizTalk Server customers who have started utilizing some of the Azure integration technology stack, like Azure Service Bus, Logic Apps, and API Apps, will benefit from using BizTalk360.

When it comes to Azure Service Bus monitoring in BizTalk360, we only cover Azure Service Bus Queues. Currently, we do not cover Azure Service Bus Topics, Azure Service Bus Relay and Azure Service Bus EventHubs. Therefore, if you are using any of these technologies (that are not monitored with BizTalk360), then you’ll also need ServiceBus360.

ServiceBus360

ServiceBus360 is designed and developed to provide complete operations and monitoring capabilities for Azure Service Bus Messaging, Relay and Event Hubs. ServiceBus360 provides in-depth monitoring capabilities for:

ServiceBus360 is also not just a monitoring solution for Azure Service Bus. The idea of ServiceBus360 is to make it a world class product for complete operations, monitoring and analytics of Azure Service Bus. The product already supports a variety of productivity and advanced operational capabilities like:

The above is not the complete list of features – it just gives you the flavor of what can be accomplished with ServiceBus360. Clearly, BizTalk360 will not have this level of coverage for Azure Service Bus.

Therefore, if you are using Azure Service Bus for mission critical integration work, then ServiceBus360 is the viable option to improve productivity and avoid disaster.

Author: Saravana Kumar

Saravana Kumar is the Founder and CTO of BizTalk360, an enterprise software that acts as an all-in-one solution for better administration, operation, support and monitoring of Microsoft BizTalk Server environments.

The post Choosing between BizTalk360 and ServiceBus360 for monitoring Azure Service Bus appeared first on BizTalkGurus.

Saving time via Logic Apps: a real world example


Introduction

At Codit, I manage the blog. We have some very passionate people on board who like to invest their time to get to the bottom of things and – also very important – share it with the world!
That small part of my job means I get to review blog posts on a technical level before they are published. It's always good to have one extra pair of eyes read a post before it goes public, so this definitely pays off!

An even smaller part of publishing blog posts is making sure they get enough coverage. Sharing them on Twitter, LinkedIn or even Facebook is part of the job for our devoted marketing department! And analytics around these shares on social media definitely come in handy! For that specific reason we use Bitly to shorten our URLs.
Every time a blog post gets published, someone needed to add it manually to our Bitly account and send out an e-mail. This takes a small amount of time, but as you can imagine it accumulates quickly with the amount of posts we have been generating lately!

Logic Apps to the rescue!

I was looking for an excuse to start playing with Logic Apps and they recently added Bitly as one of their Preview connectors, so I started digging!

First, let’s try and list the requirements of our Logic App to-be:

Must-haves:

  • The Logic App should trigger automatically whenever a new blog post is published.
  • It should create a short link, specifically for usage on Twitter.
  • It also should create a short link, specifically for LinkedIn usage.
  • It should send out an e-mail with the short links.
  • I want the short URLs to appear in the Bitly dashboard, so we can track click-through-rate (CTR).
  • I want to spend a minimum of Azure consumption.

Nice-to-haves:

  • I want the Logic App to trigger immediately after publishing the blog post.
  • I want the e-mail to be sent out to me, the marketing department and the author of the post for (possibly) immediate usage on social media.
  • If I resubmit a logic app, I don’t want new URLs (idempotency), I want to keep the ones already in the Bitly dashboard.
  • I want the e-mail to appear as if it was coming directly from me.

Logic App Trigger

I could easily fulfil one of the first requirements, since the Logic Apps RSS connector provides a very easy way to trigger a logic app based on an RSS feed. Our Codit blog RSS feed seemed to do the trick perfectly!

Now it’s all about timing the polling interval: if we poll every minute we get the e-mail faster, but will spend more on Azure consumption since the Logic App gets triggered more… I decided 30 minutes would probably be good enough.

Now I needed to get the URL of any new posts that were published. Luckily, the trigger's "links – Item" output provides the perfect way of doing that. The Logic Apps designer conveniently detects this might be an array of links (in case two posts get published at once) and places it within a "For each" shape!

Now that I had the URL(s), all I needed to do was save the Logic App and wait until a blog post was published to test the Logic App. In the Logic App “Runs history” I was able to click through and see for myself that I got the links array nicely:

Seems there is only one item in the array for each blog post, which is perfect for our use-case!

Shortening the URL

For this part of the exercise I needed several things:

  • I actually need two URLs: one for Twitter and one for LinkedIn, so I need to call the Bitly connector twice!
  • Each link gets a little extra information in the query string called UTM codes. If you are unfamiliar with those, read up on UTM codes here. (In short: it adds extra visibility and tracking in Google Analytics).
    So I needed to concatenate the original URL with some static UTM string + one part which needed to be dynamic: the UTM campaign.

For that last part (the campaign): our CMS already cleans up the title of a blog post and uses it as the last segment of the published URL! This seemed ideal for the campaign value.

However, due to a lack of knowledge of the Logic Apps syntax, I got a bit frustrated and – at first – created an Azure Function to do just that (extract the interesting part from the URL):

I wasn’t pleased with this, but at least I was able to get things running…
It did, however, mean I needed extra, unwanted Azure resources:

  • Extra Azure storage account (to store the function in)
  • Azure App Service Plan to host the function in
  • An Azure function to do the trivial task of some string manipulation.

After some additional (but determined) trial and error late in the evening, I ended up doing the same in a Logic App Compose shape! Happy days!

Inputs: @split(item(), '/')[add(length(split(item(), '/')), -2)]

It takes the URL, splits it into an array based on the slash ('/'), and takes the part which is interesting for my use-case: because the URL ends with a trailing slash, the last array element is empty, so the second-to-last element (index length minus 2) holds the cleaned-up post title. See for yourself:

Now I still needed to concatenate all pieces of string together. The concat() function seems to be able to do the trick, but an even easier solution is to just use another Compose shape:

Concatenation comes naturally to the Compose shape!
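For reference, a Compose input along these lines should do the trick. Treat it as a sketch: the UTM values and the name of the Compose action holding the extracted campaign part ('Compose') are made up for illustration:

Inputs: @concat(item(), '?utm_source=twitter&utm_medium=social&utm_campaign=', outputs('Compose'))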

Then I still needed to create the short links by calling the Bitly connector:

Let’s send out an e-mail

Sending out e-mail using my Office365 account is actually the easiest thing ever:

Conclusion

My first practical Logic App seems to be a hit! And probably saves us about half an hour of work every week. A few hours of Logic App “R&D” will definitely pay off in the long run!

Here’s the overview of my complete Logic App:

Some remarks

During development, I came across what appear – to me – to be some limitations:

  • The author of the blog post is not in the output of the RSS connector, which is a pity! This would have allowed me to use his/her e-mail address directly or, if it was his/her name, to look up the e-mail address using the Office 365 users connector!
  • I’m missing some kind of expression shape in Logic Apps!
    Coming from BizTalk Server where expression shapes containing a limited form of C# code are very handy in a BizTalk orchestration, this is something that should be included one way or the other (without the Azure function implementation).
    A few lines of code in there is awesome for dirty work like string manipulation for example.
  • It took me a while to get my head around Logic Apps syntax.
    It’s not really explained in the documentation when or when not to use @function() or @{function()}. It’s not that hard at all once you get the hang of it. Unfortunately it took me a lot of save errors and even some run-time errors (not covered at design time) to get to that point. Might be just me however…
  • I cannot rename API connections in my Azure Resource Group. Some generic names like ‘rss’, ‘bitly’ and ‘office-365’ are used. I can set some connection properties so they appear nicely in the Logic App however.
  • We have Office365 Multi-Factor Authentication enabled at our company. I can authorize the Office365 API connection, but this will only last for 30 days. I might need to change to an account without multi-factor authentication if I don’t want to re-authorize every 30 days…

Let me know what you think in the comments! Is this the way to go?
Any alternative versions I could use? Any feedback is more than welcome.

In a next blog post I will take some of our Logic Apps best practices to heart and optimize the Logic App.

Have a nice day!
Pieter

The post Saving time via Logic Apps: a real world example appeared first on BizTalkGurus.

Atlassian Bamboo–How to create multiple Remote Agents on single server to do continuous deployment for BizTalk / WCF.


Hi,

I’m writing this post to demonstrate how we can create multiple remote agent on a single server to do the parallel deployment to the BizTalk/WCF servers. Bamboo comes with the concept of local agents and remote agents. Remote agents are installed on the individual servers for the artefact/solution deployment. Remote agent runs on a windows wrapper service, whenever there is a new server, the project team need to install Remote Agent and run the services. This is trouble with large organisation, and Remote agents are not free.

Follow the steps below to create multiple remote agents on one (or a few) dedicated machines for Bamboo.

Step 1 – Download the remote agent installer

Download atlassian-bamboo-agent-installer-5.14.1.jar from the Bamboo agent page.

Step 2 – Copy the .jar file

Copy the .jar file to a folder on the server.

Step 3 – Create Remote Agent 1 – <ServerName>.<Env>.<Domain>.lan

Follow the steps below to install Remote Agent 1.
1 – Open a command prompt and CD into the folder where the .jar file exists.
2 – Run the command below:
java -Dbamboo.home=d:\bamboo-1 -jar atlassian-bamboo-agent-installer-5.14.1.jar http://<AgentServer>/agentServer/
The process will pause and ask you to approve the remote agent. Log in to the Bamboo portal, navigate to Agents and click Agent Authentication under Remote Agents. Approve the agent; the process will then resume.
3 – After completion of the above, navigate to the folder D:\bamboo-1\conf.
4 – Open the file wrapper.conf.
5 – Edit the file with the information below:
         wrapper.console.title=Bamboo Remote Agent 1
         wrapper.ntservice.name=bamboo-remote-agent-1
         wrapper.ntservice.displayname=Bamboo Remote Agent 1
6 – Navigate to d:\bamboo-1\bin and run the following .bat files in order:
         InstallBambooAgent-NT
         StartBambooAgent-NT
7 – A service named "Bamboo Remote Agent 1" will be installed and started. Use the bamboo user to log in to the service.

Step 4 – Remote Agent 1 – <ServerName>.<Env>.<Domain>.lan

This remote agent will now appear on the online remote agents tab under Remote Agents.

Step 5 – Create Remote Agent 2 – <ServerName>.<Env>.<Domain>.lan (2)

Follow the steps below to install Remote Agent 2.
1 – Open a command prompt and CD into the folder where the .jar file exists.
2 – Run the command below:
java -Dbamboo.home=d:\bamboo-2 -jar atlassian-bamboo-agent-installer-5.14.1.jar http://<AgentServer>/agentServer/
The process will pause and ask you to approve the remote agent. Log in to the Bamboo portal, navigate to Agents and click Agent Authentication under Remote Agents. Approve the agent; the process will then resume.
3 – After completion of the above, navigate to the folder D:\bamboo-2\conf.
4 – Open the file wrapper.conf.
5 – Edit the file with the information below:
         wrapper.console.title=Bamboo Remote Agent 2
         wrapper.ntservice.name=bamboo-remote-agent-2
         wrapper.ntservice.displayname=Bamboo Remote Agent 2
6 – Navigate to d:\bamboo-2\bin and run the following .bat files in order:
         InstallBambooAgent-NT
         StartBambooAgent-NT
7 – A service named "Bamboo Remote Agent 2" will be installed and started. Use the bamboo user to log in to the service.

Step 6 – Create Remote Agent 3 – <ServerName>.<Env>.<Domain>.lan (3)

Follow the steps below to install Remote Agent 3.
1 – Open a command prompt and CD into the folder where the .jar file exists.
2 – Run the command below:
java -Dbamboo.home=d:\bamboo-3 -jar atlassian-bamboo-agent-installer-5.14.1.jar http://<AgentServer>/agentServer/
The process will pause and ask you to approve the remote agent. Log in to the Bamboo portal, navigate to Agents and click Agent Authentication under Remote Agents. Approve the agent; the process will then resume.
3 – After completion of the above, navigate to the folder D:\bamboo-3\conf.
4 – Open the file wrapper.conf.
5 – Edit the file with the information below:
         wrapper.console.title=Bamboo Remote Agent 3
         wrapper.ntservice.name=bamboo-remote-agent-3
         wrapper.ntservice.displayname=Bamboo Remote Agent 3
6 – Navigate to d:\bamboo-3\bin and run the following .bat files in order:
         InstallBambooAgent-NT
         StartBambooAgent-NT
7 – A service named "Bamboo Remote Agent 3" will be installed and started. Use the bamboo user to log in to the service.

Step 7 – Three remote agents available.


Once the remote agents are created, you need to create a PowerShell script using New-PSSession and a remote connection, something like:


$LocalDir = "\\${bamboo.biztalk.server}\C$\Users\${bamboo.remote_username}\Documents"
$session = New-PSSession -ComputerName $biztalk_server -ConfigurationName Microsoft.PowerShell32
$LastExitCode = Invoke-Command -Session $session -FilePath "${LocalDir}\US_Controller_BizTalk_Database.ps1" -ArgumentList "undeploy","$list","$biztalk_sql_instance","$log_dir"

Some people might disagree with this approach, but if we can create multiple local agents on the same server then why not remote agents?

Many Thanks.

Regards,

Shadab


The post Atlassian Bamboo–How to create multiple Remote Agents on single server to do continuous deployment for BizTalk / WCF. appeared first on BizTalkGurus.

Celebrating 100th Integration Monday Episode – Live Q&A session with Microsoft Product Group


Background

It all started back in 2006 when Michael Stephenson and Saravana Kumar identified that people in the integration space lacked technical know-how of the underlying concepts. In an effort to bridge this gap, they decided to create a strong community where people could share their experience and learnings with others. This saw the birth of the BizTalk User Group. Later, when the integration scope expanded beyond BizTalk to WCF, AppFabric and BizTalk Services, the community was renamed the UK Connected Systems User Group (UKCSUG). In 2015, as the integration scope grew even wider, the user group was renamed the Integration User Group. You can read the detailed history behind organizing Integration Mondays in our Integration User Group launch blog.

The 100th Episode – A Milestone en route!

Since the launch of Integration Monday on January 19, 2015, it has taken us close to 29 months to hit the milestone of the 100th Integration Monday episode. We have strived to consistently deliver one session every Monday (except public and bank holidays). A separate team works to ensure the sessions are slotted out a quarter in advance: getting in touch with potential speakers and scheduling them, running test sessions before the webinar, handling registrations, social media promotions, uploading the videos and presentations after the event, and so on.

Statistics

A look at some of the statistics from the Integration Monday sessions.


We wanted to make the 100th Integration Monday episode a grand one. After a lot of email conversations and brainstorming, we narrowed it down to a one-hour Q&A session with the Microsoft Product Group. Then we realized that the 100th Integration Monday episode falls exactly one week before INTEGRATE 2017. So it made perfect sense for the 100th episode to be a prelude to the biggest integration-focused conference, starting on June 26th.

Join the community and get to share your knowledge with developers and architects to learn about the evolving integration technologies. Register for our future events.

Preparations for the Special Episode on Integration Monday

A few back-and-forth emails with the Microsoft Product Group later (thanks to Saravana), we were all set for the 100th Integration Monday episode. We learnt that we would have the Pro Integration team present across the different product offerings from Microsoft, such as BizTalk, Azure Logic Apps, Azure API Management and Azure Service Bus.


Jim Harrer – ‎Pro Integration Group PM, Jon Fancey – ‎Principal Program Manager (Azure Logic Apps & BizTalk), Tord Glad Nordahl – Program Manager (owning BizTalk Server), Dan Rosanova – ‎Principal Program Manager (Product Owner for Azure Messaging), Jeff Hollan – ‎Senior Program Manager at Microsoft (Azure), Kevin Lam – Principal Program Manager for Microsoft Azure Scheduler, Logic Apps, Azure Resource Manager and other services, Vladimir Vinogradsky – Principal PM Manager (Azure API Management).

Since it was only a one-hour Q&A session, we decided to collect the questions upfront from the registrants. So the team quickly set course to design an event landing page with all the session details and a simple form for users to submit their questions for the Pro Integration team.

Registrations

We received close to 200 registrations for the event and some very interesting questions from the attendees. We categorized the questions based on product offering and shared them in advance with the Pro Integration team so they could plan their responses in the best interest of time.

Recap from the 100th Integration Monday Episode

The stage was perfectly set for the 100th Integration Monday episode. As attendees started to join, Saravana Kumar kicked off the broadcast at 0735 BST, welcoming the Pro Integration team and the attendees to the webinar. After a round of quick self-introductions, it was time to get into the questions from the attendees. I'll try to highlight some of the key discussions from the webinar in this blog post.


Question: What does Microsoft see as the future of Integration and what it means to Microsoft?

Jim Harrer: The past year (since the major announcements at INTEGRATE 2016) has been extremely busy for Microsoft in terms of bringing the team together, responding better to customer requirements, catering to the demands of our partner ecosystem, and defining the strategy around application integration and enterprise integration. Microsoft has achieved this by building the Hybrid Integration platform. Microsoft has long talked about a "Better Together" strategy when it comes to cloud and on-premises offerings. Therefore, the entire team (under the Program Managers on the webinar) has been focussing on the integration strategy.

The team has really stuck to the Hybrid Integration platform and delivered some awesome stuff around it: Feature Pack 1 for BizTalk Server, Logic Apps and the BizTalk Connector to connect on-premises and cloud solutions, and a first-class experience with Azure Service Bus and API Management. The focus for the future is to extend these offerings into other Azure services in order to have a Better Together strategy across all product offerings. In the last year, the key highlights were the GA of BizTalk Server 2016 and Feature Pack 1 (a totally new concept from Microsoft), which received a lot of positive feedback from the community.

For more “exciting” information on the future of Microsoft and what’s lined up, you may have to wait one more week for INTEGRATE 2017 where the Pro Integration team will be releasing their vision, strategy and roadmap for the upcoming year. So stay tuned for our further blog posts from INTEGRATE 2017 🙂

Question: What kind of solutions are customers building with Microsoft's offerings? In other words, what kinds of features are customers leveraging Microsoft technologies for?

Tord Glad Nordahl: Customers are moving into the Digital Transformation world. For example, after the release of Feature Pack 1, BizTalk Server is being used in scenarios we would never have thought of in the past. Customers have been able to define their workflows and build their own integration solutions. BizTalk customers have started taking advantage of, for example, PowerApps to manage their BizTalk Server environment, connecting BizTalk to SignalR, etc., making their integration solutions more interesting, smart and predictive.

Jim Harrer: “Integration is HOT. We are enjoying the hotness of this concept.” All of Microsoft's products are seeing growth and customer numbers are on the rise. Customers can no longer have siloed applications; instead they need to extend them and maximize their value by integrating with other systems. Vlad's team (the API Management team) has enjoyed success where legacy systems are now starting to put their APIs onto the API Management platform.

Vladimir Vinogradsky – Previously, customers were exposing APIs for mobile apps and partner integrations (closed connections). The way customers expose their APIs is now changing. These days, companies use API Management to manage both their external and internal APIs across the organization.

Dan Rosanova – Enterprise integration has taken on its true meaning over the last few months or so. Earlier it was confined to a team, department or business. Previously, for instance, someone may have used only Service Bus and some compute to perform all their integration. Nowadays, you need not write any code to use all the functionality in Service Bus, as Logic Apps gives you complete control by means of its connectors.

Jon Fancey – Customers come to the Microsoft platform from different places for different reasons. The general feedback is that they value the fact that they can get started in one place and then expand using Microsoft's integration portfolio (rich services that are available on-premises and on Azure).

Question: How is being “Serverless” helping Microsoft?

Jeff Hollan: Serverless is the next step in the evolution of Platform as a Service (PaaS). It does not mean there are no servers! There are servers but, as an operator/developer, you need not worry about them: no worries about the servers being secure, scalable, etc.!

In Azure, there are a few core offerings that are serverless: Azure Functions and Azure Logic Apps. The unique advantage of the serverless story on Azure is that integration and serverless are treated as hand in glove. With serverless, customers feel they can get something into production really quickly and connect it to the systems and APIs they care about. This helps project IT move faster and keep up with the speed of business.

Question: How is Microsoft BizTalk Server 2016 Feature Pack 1 being received by the customers? What’s the plan moving forward?

Tord Glad Nordahl: We had to go through a complete team restructure during the release of Feature Pack 1, including a change to the release process (away from a major release once every 2 years). Feature Pack 1 was mainly intended to help customers do better integration. Most suggestions for the Feature Pack 1 features actually came from customers through the UserVoice (customer feedback) portal. With Feature Pack releases, customers can do more with the services provided by Microsoft and improve on what they already have in store.

The plan is to continue investing in and working on the features that were shipped as part of Feature Pack 1. For what's coming in upcoming Feature Packs, stay tuned for the INTEGRATE 2017 announcements in a week's time 🙂

Question: We see updates for the new Service Bus library for .NET clients to use Azure AD authentication. What will happen to the existing library that uses a connection string from a Shared Access Policy? Will that continue to be usable, with new updates added to it?

Dan Rosanova: Yes, both libraries will continue to support SAS, as it is very useful for high-volume messaging scenarios. For the new library, the team is working on implementing Active Directory MSI (secure managed identities for services).

Question: I have a multi-cloud environment. Are there any Logic Apps AWS connectors in the pipeline?

Jeff Hollan: At present, there are no out-of-the-box AWS connectors in the library (of the 160+ connectors available). If you would like to request such a connector, you can go to the Logic Apps Connectors UserVoice page and check whether a request for the connector already exists. If yes, vote for the request so the team knows which connectors to prioritize. If not, you can create the request, and the team will assess the demand based on the votes.

Request from the Pro Integration team: If you require a new connector or a feature in any of the products, the best place to request it and show your support is the UserVoice page for that particular product.

Question: Should I hollow out my EDI exchange running on BizTalk Server 2010 and move into Azure Logic Apps, or should I upgrade to BizTalk 2016?

Tord Glad Nordahl: This completely depends on where you are doing the workflow/integration. If it’s all on the cloud and you are communicating with your partners on the cloud, then Logic Apps is the best way to go forward. However, if you are doing a lot of stuff on-premise, then BizTalk is also the right choice. If there is a hybrid scenario where you do processing both on-premise and the cloud, then you can use both in conjunction. Therefore, it all depends on the need of the customer. So Microsoft’s responsibility is to provide features and products that customers ask for!

Question: When will we see a true PaaS implementation of API Management, with a corresponding price model?

Vladimir Vinogradsky: There are plans for a PaaS implementation of API Management, but no concrete timelines on the availability of this functionality.

Question: My question is around using SQL Availability Groups in a BizTalk setup. Currently, with BizTalk Server 2016 and SQL Server 2016, it takes at least 8 SQL instances to run a BizTalk HA environment with SQL Availability Groups. With the announcement that SQL Server 2017 supports distributed transactions for databases in availability groups, does it mean that the minimum number of instances required will reduce from 8 to 2?

Tord Glad Nordahl: Definitely, yes! This will be addressed. The BizTalk team is working hard with the SQL team to get this addressed.

Question: Now that BizTalk Services is dead, can we be certain that the two tools that will be kept are BizTalk Server (on-premises) and Logic Apps (cloud)?

Jon Fancey: A common question received by the Logic Apps team was "When should I use BizTalk Services and when should I use Logic Apps?" Since it's absurd to have the same offering in multiple products, the team worked hard over the last 18 months to make sure all features that were part of BizTalk Services were shifted to Logic Apps. This has ZERO impact on BizTalk Server. Although the name contains the word "BizTalk", it does not mean the end of the road for BizTalk Server. It's just a shift in capabilities and in what the team is focusing on: BizTalk Server, Logic Apps and Enterprise Integration.

Question: What’s the Future of Service Bus On-Premises?

Dan Rosanova: This was announced in January. The future is well defined: it goes out of mainstream support in January 2018. There are no plans to replace it. The on-premises roadmap involves Azure Stack, for better alignment with other services.

Question: Is Logic Apps a mature technology, considering that it's still a pretty new concept?

Jeff Hollan: Reading through the customer stories, where customers talk about how they have been using Logic Apps in their environments and the different scenarios they have implemented, it's unfair to question the maturity of the product as a whole. Logic Apps went GA just about 12 months ago, and the number of customer success stories and the numerous blog posts on how the community has been using Logic Apps make us feel we are heading in the right direction. Also, Logic Apps is connected to around 10-12 other services and products in Microsoft's offering, so if Logic Apps were ever to fall short of a great SLA, the ripple effect would be felt well beyond Logic Apps alone.

Logic Apps has been built very consciously, taking the learnings from BizTalk Server and using them to build a very strong cloud platform for our customers.

With that, we were almost an hour into the session! Time flew by in the blink of an eye, but boy, what an engrossing discussion that was from the team. You can watch the video of the session here –

Final Question: What’s the roadmap for Healthcare companies to move to the cloud?

Jim Harrer: The Pro Integration team is already working on improving the vertical strategy, given that really good functionality already exists in the products. The team is challenged to put together different solutions for different verticals, healthcare being one of them.

Jon Fancey: Microsoft is keen on developing and building a solid, stable platform that provides a lot of general-purpose integration capabilities across the board, so that people can build mission-critical integration solutions.

If you have any specific questions related to any vertical, you'll get a chance to meet the same team next week in London at INTEGRATE 2017.

Feedback from the Community

Here’s what the community had to say about the Integration User Group initiative and on reaching the 100th episode –

Integration User Group Evangelises Microsoft Integration developments

Dedicated people, talking about things they love; The sessions stimulate me to try new things

Big kudos to BizTalk360 team for doing an amazing job in evangelizing Microsoft Enterprise Integration.

Feedback like this drives us to move forward and deliver the best content to our attendees. If you have not signed up for our event updates, we recommend you register for the upcoming events on the Integration User Group website.

Final wrap up of the session

Jim Harrer thanked the attendees who joined the webcast, congratulated the team behind the Integration User Group on reaching their 100th milestone episode, and thanked the speakers who have presented sessions at the Integration User Group.

You can watch the previous episodes of Integration Monday on the Past Events section, and register for the upcoming events.

Author: Sriram Hariharan

Sriram Hariharan is the Senior Technical and Content Writer at BizTalk360. He has over 9 years of experience working as a documentation specialist for different products and domains. Writing is his passion and he believes in the following quote – “As wings are for an aircraft, a technical document is for a product — be it a product document, user guide, or release notes”.

The post Celebrating 100th Integration Monday Episode – Live Q&A session with Microsoft Product Group appeared first on BizTalkGurus.

The Routing Slip Pattern


The Pattern

Introduction

A routing slip is a configuration that specifies a sequence of processing steps (services). This routing slip must be attached to the message to be processed. Each service (processing step) is designed to receive the message, perform its functionality (based on the configuration) and invoke the next service. In that way, a message gets processed sequentially by multiple services, without the need for a coordinating component. The schema below is taken from Enterprise Integration Patterns.

Some examples of this pattern are:

Routing Slip

Routing slips can be configured in any language; JSON and XML are quite popular. An example of a simple routing slip can be found below. The header contains the name of the routing slip and a counter that carries the current step number. Each service is represented by a routing step. A step has its own name, to identify the service to be invoked, and specific key-value configuration pairs.
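As an illustration only (the field names and services below are invented for this example, not a standard):

{
  "name": "OrderProcessing",
  "currentStep": 0,
  "steps": [
    { "name": "Transform", "config": { "mapName": "Order_To_Invoice" } },
    { "name": "SendToSap", "config": { "destination": "SAP-PRD", "timeoutSeconds": "60" } }
  ]
}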

Remark that this is just one way to represent a routing slip. Feel free to add your personal flavor…

Assign Routing Slip

There are multiple ways to assign a routing slip to a message. Let’s have a look:

  • External: the source system already attaches the routing slip to the message
  • Static: when a message is received, a fixed routing slip is attached to it
  • Dynamic: when a message is received, a routing slip is attached, based on some business logic
  • Scheduled: the integration layer has scheduled routing slips that also contain a command to retrieve a message

Service

A service is considered as a “step” within your routing slip. When defining a service, you need to design it to be generic. The executed logic within the service must be based on the configuration, if any is required. Ensure your service has a single responsibility and there’s a clear boundary of its scope.

A service must consist of three steps:

  • Receive the message
  • Process the message, based on the routing slip configuration
  • Invoke the next service, based on the routing slip configuration

There are multiple ways to invoke services:

  • Synchronous: the next service is invoked without any persistence in between (e.g. in memory). This has the advantage that it will perform faster.
  • Asynchronous: the next service is invoked with persistence in between (e.g. a queue). This has the advantage that reliability increases, but performance degrades.

Think about the desired way to invoke services. If required, a combination of sync and async can be supported. A minimal sketch of such a generic service is shown below.
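The C# sketch below assumes a hypothetical RoutingSlip model matching the illustrative JSON from earlier; the transport behind Dispatch (in memory for sync, a queue for async) is deliberately left abstract:

using System.Collections.Generic;

public class RoutingStep
{
    public string Name { get; set; }
    public Dictionary<string, string> Config { get; set; }
}

public class RoutingSlip
{
    public string Name { get; set; }
    public int CurrentStep { get; set; }
    public List<RoutingStep> Steps { get; set; }
}

public abstract class RoutingService
{
    // Process the received message, based on the configuration of the current step.
    protected abstract string Process(string message, Dictionary<string, string> config);

    // Hand the message over to the next service: in memory (sync) or via a queue (async).
    protected abstract void Dispatch(string message, RoutingSlip slip, string nextService);

    // Receive the message, process it and invoke the next service.
    public void Handle(string message, RoutingSlip slip)
    {
        var step = slip.Steps[slip.CurrentStep];
        var result = Process(message, step.Config);

        slip.CurrentStep++;
        if (slip.CurrentStep < slip.Steps.Count)
        {
            Dispatch(result, slip, slip.Steps[slip.CurrentStep].Name);
        }
    }
}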

Advantages

Encourages reuse

Integrations are composed of reusable and configurable building blocks. The routing slip pattern forces you to analyze, develop and operate in a streamlined manner. Reuse is heavily encouraged on different levels: the way analysis is performed, how patterns are implemented, the way releases are rolled out and how operational tasks are performed. One unified way of working, built on reusability.

Configuration based

Your integration is completely driven by the assigned routing slip. There are no hard-coded links between components. This allows you to change its behavior without the need for a redeployment. This configuration also serves as a great source of documentation, as it explains exactly what message exchanges are running on your middleware and exactly what they do.

Faster release cycles

Once you have set up a solid routing slip framework, you can increase your release cadence. By leveraging your catalogue of reusable services, you heavily benefit from previous development efforts. The focus is only on the specifics of a new message exchange, which are mostly data bound (e.g. mapping). There’s also a tremendous increase of agility, when it comes to small changes. Just update the routing slip configuration and it has an immediate effect on your production workload.

Technology independent

A routing slip is agnostic to the underlying technology stack. The way the routing slip is interpreted is, of course, specific to the technology used. This introduces ways to have a unified integration solution, even if it is composed of several different technologies. It also enables cross-technology message exchanges. As an example, you can have an order that is received via an AS2 Logic App, transformed and sent to an on-premises BizTalk Server that inserts it into the mainframe, all governed by a single routing slip config.

Provides visibility

A routing slip can introduce more visibility into the message exchanges, especially from an operational perspective. If a message encounters an issue, operations personnel can immediately consult the routing slip to see where the message comes from, what steps have already been executed and where it is heading. This visibility can be improved by updating the routing slip with some extra historical information, such as the service start and end times. Why not even include a URL in the routing slip that points to a wiki page or knowledge base article about that interface type?

Pitfalls

Not enough reusability

Not every integration project is well-suited to the routing slip pattern. During the analysis phase, it's important to identify the integration needs and to see if there are a lot of similarities between all message exchanges. When a high level of reusability is detected, the routing slip pattern might be a good fit. If all integrations are too heterogeneous, you'll introduce more overhead than benefits.

Too complex logic

A common pitfall is adding too much complexity into the routing slip. Try to stick as much as possible to a sequential series of steps (services) that are executed. Some conditional decision logic inside a routing slip might be acceptable, but define clear boundaries for such logic. Do not start writing your own workflow engine, with its own workflow language. Keep the routing slip logic clean and simple, to stick to the purpose of a routing slip.

Limited control

In case of maintenance of the surrounding systems, you often need to stop a message flow. Let’s take the scenario where you face the following requirement: “Do not send orders to SAP for the coming 2 hours”. One option is to stop a message exchange at its source, e.g. stop receiving messages from an SFTP server. In case this is not accepted, as these orders are also sent to other systems that should not be impacted, things get more complicated. You can stop the generic service that sends a message to SAP, but then you also stop sending other message types… Think about this upfront!

Hard deployments

A very common pain point of a high level of reuse is the impact of upgrading a generic service that is used all over the place. There are different ways to reduce the risks of such upgrades, of which automated system testing is an important one. Within the routing slip, you can explicitly specify the version of a service you want to invoke, as illustrated below. In that way, you can upgrade services gradually to the latest version, without the risk of a big-bang deployment. Define a clear upgrade policy, to avoid too many different versions of a service running side-by-side.
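Building on the illustrative JSON from earlier, such a version pin could look like this (the "version" field is my own invention, not a standard):

{ "name": "SendToSap", "version": "1.2", "config": { "destination": "SAP-PRD" } }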

Monitoring

A message exchange is spread across multiple loosely coupled service instances, which can impose a monitoring challenge. Many technologies offer great monitoring insights for a single service instance, but lack an overall view across multiple service instances. Introducing a correlation ID into your routing slip can greatly improve the monitoring experience. This ID can be generated the moment you initialize a routing slip.

Conclusion

Routing slips are a very powerful mechanism to deliver unified and robust integrations in a fast way. The main key take-aways of this blog are:

  • Analyze in depth whether you can benefit from the routing slip pattern
  • Limit the complexity that the routing slip resolves
  • Have explicit versioning of services inside the routing slip
  • Include a unique correlation ID into the routing slip
  • Add historical data to the routing slip

Hope this was a useful read!
Toon

The post The Routing Slip Pattern appeared first on BizTalkGurus.


Setting unique Tracking Id in BizTalk Logic Apps send port


I was working on a POC which involved sending a message from a BizTalk send port to a logic app, with the message's HTTP header enriched with a unique tracking id. Achieving this was not straightforward. In this article, I will explain the issue I faced and its resolution.

Problem explained

I have a simple logic app with an HTTP request trigger that dumps the received message into a Google Drive folder.


My BizTalk application has an FTP receive port and a send port configured to use the Logic Apps adapter. The send port subscribes to messages from the FTP receive port and sends them out to a logic app.

As we are aware, logic apps provide an option to send a client tracking id in the form of a custom HTTP header, x-ms-client-tracking-id. Refer to the article https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-monitor-your-logic-apps to learn more about monitoring and tracking in Logic Apps.

The static Logic Apps send adapter provides an option to configure custom HTTP headers in the port configuration.


Since I wanted to send a unique tracking id per message, I could not set a static value in the port configuration. Hence I did what any other BizTalk developer would do: I tried to look for a property schema specific to the Logic Apps adapter. However, I could not find one in the list of property schemas deployed in my BizTalk environment. This put me in a situation where I had no option to send a unique tracking id per message.

Solution

I started to contemplate how a dynamic send port would send messages to logic apps without any property schema related to the Logic Apps adapter being deployed. With a little research, I came to know that the Logic Apps adapter internally leverages WCF WebHttp binding. This directed me toward the WCF property schema.

So I wrote the HttpHeaders context property in a custom pipeline component on the send port:


inMsg.Context.Write("HttpHeaders", "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties", "x-ms-client-tracking-id: " + trackingId);

This did the trick! Now I was able to view the tracking id in my logic app's run history.


However, when I sent another message, I saw that the Google Drive connector was failing due to a duplicate file name (I used the tracking id as the file name). This meant the tracking id I had set was somehow the same for subsequent runs. This was again a setback, as I was still not able to receive a unique tracking id per message.


Again with a little research, I understood that it is normal behavior for a static WCF send port to cache the headers set using context properties. One option was to create a dynamic port. Since I did not want to create a dynamic port, I instead tried setting a context property related to dynamic ports. So I added an additional line to my code in the pipeline component.

inMsg.Context.Write("HttpHeaders", "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties", "x-ms-client-tracking-id: " + trackingId);

inMsg.Context.Write("IsDynamicSend", "http://schemas.microsoft.com/BizTalk/2003/system-properties", true);

This solved the issue! I am now able to send a unique tracking id per message using the Logic Apps adapter on a BizTalk static send port.
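Putting it all together, the Execute method of the custom pipeline component could look roughly like the sketch below. Only the two Context.Write calls come from this article; the Guid-based tracking id and the surrounding plumbing are illustrative choices:

// Sketch: belongs inside an IComponent pipeline component implementation.
// Assumes: using Microsoft.BizTalk.Message.Interop;
//          using Microsoft.BizTalk.Component.Interop;
public IBaseMessage Execute(IPipelineContext context, IBaseMessage inMsg)
{
    // Generate a unique tracking id per message (illustrative choice).
    string trackingId = System.Guid.NewGuid().ToString();

    // Custom HTTP header that Logic Apps picks up as the client tracking id.
    inMsg.Context.Write("HttpHeaders",
        "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties",
        "x-ms-client-tracking-id: " + trackingId);

    // Make the static send port behave like a dynamic one,
    // so the WCF adapter does not cache the headers between messages.
    inMsg.Context.Write("IsDynamicSend",
        "http://schemas.microsoft.com/BizTalk/2003/system-properties",
        true);

    return inMsg;
}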

Summary

In summary, we need to remember the following points:

  • Logic Apps provide an option to send a client tracking id using the custom HTTP header "x-ms-client-tracking-id".
  • The Logic Apps send adapter leverages WCF WebHttp binding behind the scenes.
  • If you want a static Logic Apps send port to send a unique tracking id per message, you need to write two context properties: HttpHeaders and IsDynamicSend.

Author: Srinivasa Mahendrakar

Technical Lead at BizTalk360 UK – I am an Integration consultant with more than 11 years of experience in design and development of On-premises and Cloud based EAI and B2B solutions using Microsoft Technologies.

The post Setting unique Tracking Id in BizTalk Logic Apps send port appeared first on BizTalkGurus.

Integrate 2017 – Day 1 Recap


Introduction

Codit is back in London for Integrate 2017! This time with a record number of around 26 blue-shirted colleagues representing us. Obviously this makes sense now that Codit is bigger than ever, with offices in Belgium, France, The Netherlands, the UK, Switzerland, Portugal and Malta. This blog post was put together by each and every one of our colleagues attending Integrate 2017.

Keynote: Microsoft’s Brings Intelligence to its Hybrid Integration Platform – Jim Harrer

What progress has Microsoft made in the Integration space (and their Hybrid Integration Platform) over the last year? How is Artificial Intelligence changing the way we think about enterprise application integration? Jim Harrer, Pro Integration Program Manager for Microsoft, kicks off with the keynote here at Integrate 2017. 

With a “year in review” slide, Jim reminded us how a lot of new Azure services are now in GA. Microsoft also confirmed, once again, that hybrid integration is the path forward for Microsoft. Integration nowadays is a “Better Together“-story. Hybrid integration bringing together BizTalk Server, Logic Apps, API Management, Service Bus, Azure Functions and … Artificial Intelligence.

Microsoft is moving at an incredible pace and isn't showing any signs of slowing down. Jim also spoke briefly about some of the great benefits being seen now that Logic Apps, BizTalk, HIS and APIM fall under the same Pro Integration team.

Integration today is about making the impossible possible: Microsoft is working very hard to bring developers the necessary tooling and development experience to make it easier and faster to deliver complex integration solutions. It's about keeping up – AT THE SPEED OF BUSINESS – to increase value and to unlock "the impossible".

Jim made a very good point:

Your business has stopped asking if you can do this or that, because the answer has always been a story about delivering something which takes months or costs millions of dollars. Nowadays, you have the tools to deliver solutions at a fraction of the cost and in a fraction of the time. Integration specialists should now go and ask the business what they can do for them, to maximize the added value and make the business as efficient as possible.

Jim had fewer slides in favor of some short, teasing demos:

  • Jeff Hollan demonstrated how to use Logic Apps with the Cognitive Services Face API to build a kiosk application to on-board new members at a fictitious gym (“Contoso Fitness”), adding the ability to enter the gym without needing to bring a card or fob but simply by using face recognition when entering the building.
  • Jon Fancey showed off some great new batching features which are going to be released for Logic Apps soon.
  • Tord Glad Nordahl tackled the scenario where the gyms sell products like energy bars and protein powders and need to track sales and stock at all locations, to determine when new products need to be ordered. BizTalk was the technology behind the scenes, with some Azure Machine Learning thrown in.

Watch out for new integration updates to be announced later in the week.

Innovating BizTalk Server to bring more capabilities to the Enterprise customer – Tord Glad Nordahl

In the second session of the day, Tord walked us through the BizTalk lifecycle and emphasized that the product team is still putting a lot of effort into improving the product and its capabilities. He talked about the recent release of the first Feature Pack for BizTalk Server 2016 and how it tackles some of the pain points gathered from customer feedback. FP1 is just a first step in enriching BizTalk; more and more functionality will be added and further improved in the time to come.

“BizTalk is NOT dead”

Tord emphasized how important it is to receive feedback from partners and end-users. He urged everyone to report all bugs and inconveniences via the UserVoice page, so we can all help shape the future of BizTalk Server together.
The product team is working hard to release CU packs at a steady cadence, and plans on getting vNext of BizTalk ready before the end of 2018.

No breaking news unfortunately (other than more features coming to the new automated deployment that came in Feature Pack 1), but we’re looking forward to Tord’s in-depth session about FP1 coming Wednesday. If you can’t wait to have a look of what FP1 can do, check out Toon’s blog posts!

BTS2016 FP1: Scheduling Capabilities
BTS2016 FP1: Continuous Deployment
BTS2016 FP1: Management & Operational API
BTS2016 FP1: Continuous Deployment Walkthrough

Messaging yesterday, today, and tomorrow – Dan Rosanova

The third speaker of the day was Dan Rosanova, giving us an overview of the evolution of the Messaging landscape and its future.

He started with some staggering numbers: currently Azure Messaging is processing 23 trillion (23,000,000,000,000) messages per month. That is a giant increase from the 2.75 trillion per month last year (at Integrate 2016).

In the past, picking a messaging system was comparable to choosing a partner to marry: you pick one you like and you're stuck with the whole package, peculiarities and all. It wasn't easy, and it was very expensive, to change.

Messaging systems are now changing to more modular systems. From the giant pool of (Azure) offerings, you pick the services that best fit your entire solution. A single solution can now include multiple messaging products, depending on your (and their) specific use case.

“Event Hubs is the ideal service for telemetry ingestion from websites, apps and streams of big data.”

Where Event Hubs used to be seen as an IoT service, it has now been repositioned as part of the Big Data stack, although it still touches IoT at the edge.

The Microsoft messaging team has been very busy. Since last year they have implemented new Hybrid Connections, new Java and open-source .NET clients, Premium Service Bus went GA in 19 regions, and a new portal was created. They're currently working on more encryption (encryption at rest and Bring Your Own Key) and security: Managed Secure Identity and IP Filtering features will be coming soon. It looks to be a promising year!

Dan introduced Geo-DR, which is a dual-region active-passive disaster recovery tool coming this summer. The user decides when to trigger this fail-forward disaster recovery. However this is only meant as a disaster recovery solution, and is NOT intended for high-availability or other scenarios. 

Finally, Dan remarked that messaging is under-appreciated and that his goal is transparent messaging: making messaging as simple as possible.

Azure Event Hubs: the world’s most widely used telemetry service – Shubha Vijayasarathy

“Azure Event Hubs is based on three S's: Simple, Stable and Scalable.”

Shubha talked about Azure Event Hubs Capture replacing the existing Azure Event Hubs Archive service. With Event Hubs Capture there is no overhead in code or configuration, and the separate data transfer reduces the service-management hassle. It's possible to opt in or out at any time. Azure Event Hubs Capture will be GA on June 28th 2017; price changes will go into effect on August 1st 2017.

The next item was Event Hubs Auto-Inflate. With Auto-Inflate it's possible to auto-scale TUs (throughput units) to meet your usage needs. It also prevents throttling (when data ingress and egress rates exceed the preconfigured TUs). This is ideal for handling burst workloads. Its downside is that it only scales up and doesn't scale back down again.
 
Dedicated Event Hubs are designed for massive-scale usage scenarios. They run on a completely dedicated platform, so there are no noisy neighbours sharing resources on Azure. Dedicated Event Hubs are sold in Capacity Units (CUs). Message sizes go up to 1 MB.

Event Hubs Clusters will enable you to create your own clusters in less than 2 hours, with Azure Event Hubs Capture included. Message sizes go up to 1 MB and pricing starts at $5000. The idea is to start small and scale out as you go. Event Hubs Clusters is currently in private preview and will be available in public preview from September 2017 in all regions.

Coming soon

– Geo-DR capability
– Encryption at rest
– Metrics in the new portal
– ADLS for public preview
– Dedicated EH clusters for private preview

Azure Logic Apps – build cloud-scale integrations faster – Jeff Hollan / Kevin Lam

Jeff Hollan and Kevin Lam had a really entertaining session which was perfect to avoid an after-lunch-dip! 

Some great new connectors were announced, which will be added in the near future, among them: Azure Table storage, Oracle EBS, ServiceNow and SOAP. Besides the connectors that Microsoft will make available, the ability to create custom connectors, linked with custom API connections, sounds very promising! It's great to hear that Logic Apps is now certified for Drummond AS2, ISO 27001, SOC (I, II, III), HIPAA and PCI DSS.

Quite a lot of interesting new features will be released soon:

  • Expression authoring and intellisense will improve the user experience, especially combined with detailed tracing of expression runtime executions.
  • Advanced scheduling capabilities will remove the need to reach out to Azure Scheduler.  
  • The development cycle will be enhanced by executing Logic Apps in draft mode, which means your Logic Apps can be developed without being activated in production, together with the ability to promote them afterwards.
  • The announced mock testing features will be a great addition to the framework.
  • Monitoring across Logic Apps through OMS and resubmitting from a failed action, will definitely make our cloud integration a lot easier to manage!
  • And last, but not least: out-of-the-box batching functionality will be released next week!

Azure Functions – Serverless compute in the cloud – Jeff Hollan

Whereas Logic Apps executes workflows based on events, Azure Functions executes code on event triggers. They really complement each other. It's important to understand that both are serverless technologies, which brings the following advantages: reduced DevOps, more focus on business logic and faster time to market.

The Azure Functions product team has made a lot of investments to improve the developer experience. It is now possible to create Azure Functions locally in Visual Studio 2017, which gives developers the ability to use intellisense to test locally and to write unit tests.

There’s out-of-the-box Application Insights monitoring for Azure Functions. This provides real details on how your Azure Functions are performing. Very powerful insights on that data are available by writing fairly simple queries. Jeff finished his session by emphasizing that Azure Functions can also run on IoT edge. As data has “gravity”, some local processing on data is desired in many scenarios, to reduce network dependencies, cost and bandwith.

Integrating the last mile with Microsoft Flow – Derek Li

In the first session after the last break, Derek Li took us for a ride through Microsoft Flow, the solution to the “last mile” of integration challenges. Microsoft Flow helps non-developers work smarter by automating workflows across apps and services to provide value without code.

Derek explained why you should care about Flow, even if you’re a developer and already familiar with Logic Apps: 

  • You can advise business users how they can solve some of their problems themselves using Flow, while you concentrate on more complex integrations.
  • You’ll have more engaged customers and engaged customers are happy customers.
  • Integrations originally created in Flow can graduate to Logic Apps when they become popular or mission-critical, or when they need to scale.
  • With the ability to create custom connectors you can connect to your own services.

Some key differences between Flow and Logic Apps:

  • Target users: citizen developers (Flow) vs IT professionals (Logic Apps)
  • Authoring: web & mobile interface (Flow) vs Visual Studio or web interface (Logic Apps)
  • Access: Microsoft/O365 account (Flow) vs Azure subscription (Logic Apps)
  • Lifecycle: ad-hoc (Flow) vs source control (Logic Apps)
  • Flow only: deep SharePoint integration and an approval portal

In short: Use Flow to automate personal tasks and get notifications, use Logic Apps if someone must be woken up in the middle of the night to fix a broken (mission-critical) workflow.

To extend the reach of your custom connectors beyond your own tenant subscription, you can publish your custom connector by performing the following steps:

  1. Develop custom connector within your Flow tenant, using swagger/postman
  2. Test using the custom connector test wizard
  3. Submit your connector to Microsoft for review and certification to provide support for the customer connector
  4. Publish to Flow, Power Apps, and Logic Apps

State of Azure API Management – Vladimir Vinogradsky

This session started with Vladimir pointing out the importance of API’s, as API’s are everywhere: IoT, Machine Learning, Software as a Service, cloud computing, blockchain… The need to tie all of these things together is what makes API Management a critical component in Azure: abstracting complexity and thereby forming a base for digital transformation.

Discover, mediate and publish are the keywords in API Management. For instance: existing backend services can be discovered using the API management development portal.

There is no single strict versioning strategy in API Management, as this depends on the specific organization. The reason for this is that there is a lot of discussion on the versioning of APIs, with questions such as:

  • Is versioning a requirement?
  • When is a new version required?
  • What defines a breaking change?
  • Where to place versioning information? And in what format?

Microsoft chose an approach to versioning that is fully featured. It allows the user full control over whether or not to implement it. The approach is based on the following principles:

  • Versioning is opt-in.
  • Choose the API versioning scheme that is appropriate for you.
  • Seamlessly create new API versions without impacting legacy versions.
  • Make developers aware of revisions and versions.

The session concluded with an overview of upcoming features for API Management.

Integrate heritage IBM systems using new cloud and on-premises connectors – Paul Larsen / Steve Melan

The last session of the day was all about integrating heritage IBM systems with Microsoft Azure technologies. It's interesting to know that lots of organizations (small, medium and large) still run some form of IBM system.

Microsoft developed a brand new Microsoft MQSeries client: extremely lightweight, no more IBM binaries to be installed, and outstanding performance improvements (up to 4 times faster). Thanks to this, the existing integration capabilities with old-school mainframes can now run in the Azure cloud, e.g. as Logic Apps connectors. An impressive demo was shown, showcasing cloud integration with legacy mainframe systems.

The story will become even more compelling with the improvements that are on the way!

Thank you for reading our blog post, feel free to comment or give us feedback in person.

This blogpost was prepared by:

Pieter Vandenheede (BE)
Jonathan Gurevich (NL)
Toon Vanhoutte (BE)
Carlo Garcia-Mier (UK)
Jef Cools (BE)
Tom Burnip (UK)
Michel Pauwels (BE)
Pim Simons (NL)
Iemen Uyttenhove (BE)
Mariette Mak (NL)
Jasper Defesche (NL)
Robert Maes (BE)
Vincent Ter Maat (NL)
Henry Houdmont (BE)
René Bik (NL)
Bart Defoort (BE)
Peter Brouwer (NL)
Iain Quick (UK)
Ricardo Marques (PT)

The post Integrate 2017 – Day 1 Recap appeared first on BizTalkGurus.

INTEGRATE 2017 – Recap of Day 1 & Announcements from the Microsoft Pro Integration Team


After months of preparations and work behind the scenes, it's time for INTEGRATE 2017, the premier integration-focused conference, at Kings Place, London. A beautiful day greeted the attendees, the speakers from the Microsoft Pro Integration team in Redmond, and the Integration MVP speakers at the event.

Registrations

Registrations started off at 0730 hrs. To avoid making the attendees wait for long, we had four separate booths to ease the registration process. Indeed, this idea worked out quite well and we were perfectly set up for a 0845 start. The statistics after registrations on Day 1 stand as follows –

Welcome Speech at INTEGRATE 2017

At 0845, Saravana Kumar – Founder/CTO of BizTalk360 officially started the INTEGRATE 2017 event with his welcome speech.

He thanked the sponsors, the attendees, Jim Harrer for bringing the vast experience from Redmond to this event, and the rest of the speakers. Saravana also appreciated Kent Weare for consistently delivering the Middleware Friday sessions (for 6 months now) and the team working hard on the Integration User Group (which recently completed its 100th episode).

INTEGRATE 2017 USA Redmond – October 25 – 27, 2017

The key announcement during the welcome speech was that the INTEGRATE 2017 USA event will happen in Redmond, Seattle on October 25 – 27, 2017. Registrations for this event will open by July 1st. Stay tuned!

It was then time for Duncan Barker, Business Development Manager at BizTalk360, to give a sneak peek into the history of BizTalk360 and ServiceBus360 and to welcome the keynote speaker – Jim Harrer.

Keynote Speech

It was time for Jim Harrer – Principal Group Program Manager of the Microsoft Pro Integration team – to get started with his talk on how Microsoft brings intelligence to its Hybrid Integration Platform. Jim started off his talk by saying –

Microsoft is not a Cloud only company. We strongly believe and invest a lot in Hybrid.

How has the team evolved since INTEGRATE 2016?

That's one reason why Microsoft reinvested in BizTalk Server and also took steps to make sure BizTalk Server becomes stronger with cloud-based offerings (such as Azure Logic Apps). BizTalk Server and Logic Apps together form the core heartbeat of a hybrid integration solution. In addition, after INTEGRATE 2016, the Pro Integration team added the API Management team to its hierarchy and brought the Host Integration Server product into the Pro Integration team.

Jim showed the Pro Integration team’s year in review showing how they have progressed as a team in the 4 main departments – Logic Apps, BizTalk, Host Integration Server, and API Management.

Jim felt proud to announce that Microsoft has more datacenters around the globe than anyone else, and that Logic Apps is available in over half of these datacenters. The team is working hard to deploy the solutions across more datacenters in the months to come.

Jim was happy to introduce all the thought leaders who had taken the time to come from Redmond to this event, and thanked the team for making the flight across the pond for INTEGRATE 2017.

The first step for the team was to start connecting the different Azure services such as Logic Apps, Azure Functions, API Management, Cognitive Services, and so on, to unlock and take advantage of the value that Azure provides. After the Azure services, the team decided to interconnect the different Line of Business (LoB) applications so that connections could be established with on-premises systems. Jim had a huge round of congratulations for his team for building these integrations in a year's time and getting them to General Availability (GA) for the public.

Summary points from the Keynote Speech at INTEGRATE 2017

  • Jim announced that by Friday (June 30th), Logic Apps will be made available in both UK regions.
  • Logic Apps currently has 160+ connectors.
  • After the Azure services and Line of Business applications, the team at Microsoft started looking at how to get the maximum intelligence and insights. That's when they expanded their "Better Together" horizon with more offerings related to intelligence and insights (Cognitive Services, Sentiment Analysis, LUIS, Face API, Insights, Translator, Power BI and more). Initially, Better Together applied to Logic Apps and BizTalk Server, but today it is much broader.
  • Microsoft has aimed at reducing the time it takes to integrate solutions – integration at the Speed of Business. This has been the key focus of the team over the last year.
  • The Pro Integration team is the most sought-after team among companies that come to Redmond to integrate their business solutions. The team has been focused on improving the business value for customers who are using Microsoft's integration solutions.

Keynote – Demo (#ContosoFitness)

To help the audience understand the pulse of what Microsoft's integration solutions can offer, a demo was planned around a physical fitness company called #ContosoFitness. The idea was to show how they made use of the existing technologies in the Microsoft integration space to develop solutions for common problems. Jim hinted that most demos over the course of INTEGRATE 2017 would be focused on the #ContosoFitness example.

The demo had some very interesting answers to common problems faced by enterprises. The demos were presented by Jeff Hollan (Logic Apps and Cognitive Services), Jon Fancey (Logic Apps) and Tord Glad Nordahl (BizTalk). If you are keen to know more about it, please wait for the videos to be released in a few weeks' time. 🙂 I'll leave it there for the moment.

Jim wrapped up the keynote by telling the users how they can maximize application value –

  • Look for opportunities to improve the customer experience (with Cognitive Services, etc.)
  • Find ways to make it easier for prospects to turn into customers: identify customer pain points and work on solutions to fix them.
  • Seek integration opportunities that yield competitive advantage
  • Use Insights to gain business efficiency

That wrapped up a very interesting first hour at INTEGRATE 2017 and set the tone perfectly for the rest of the sessions lined up for the day.

Session 2 – Innovating BizTalk Server to bring more capabilities to the Enterprise customer

Tord Glad Nordahl took over from Jim Harrer to talk about Microsoft’s commitment to BizTalk Server. With BizTalk Server, customers can –

  • Connect on-premises, hybrid and cloud applications
  • Run mission-critical, complex integration scenarios with ease
  • Enhance business productivity by automating business processes

Tord showed a nice chart with the mainstream support lifecycle for different versions of BizTalk Server. He even hinted that a few customers are still running versions as old as BizTalk 2004 😉

Then Tord showed what Microsoft did with BizTalk Server 2016 (the 10th release in the series).

  • Microsoft created the Logic App adapter to help users connect with Logic Apps
  • High Availability through SQL Server AlwaysOn availability groups
  • SHA2 support certified by Drummond (last year)
  • Ordered delivery for Dynamic Ports
  • Improvements for Adapters (SAP, FTP, File, etc)
  • And then the major Feature Pack 1 release, a few months ago

Tord also showed Microsoft's release cadence, to properly answer the question "Is BizTalk Server dead?"

With that, Tord wrapped up his session and it was time for the attendees to take a break on Day 1 of INTEGRATE 2017.

Session 3 – Messaging yesterday, today, and tomorrow by Dan Rosanova

Dan started off his session by wowing the audience with the Azure Messaging numbers. Those numbers are simply awesome!

Similar to Azure Messaging, Event Hubs also has some staggering statistics (YoY comparison) –

Dan mentioned that customers can make use of ServiceBus360 to monitor their Azure Service Bus resources and keep a watch on the failure rate. Dan said the team easily sees about 28 million failures per week, and ServiceBus360 can help customers keep track of these failures and fix them at the earliest.

Dan then gave a brief history of messaging – how it was, and how the landscape looks today. Today's messaging landscape involves –

Microsoft is clearly the leader in the messaging space. Dan spoke about the concepts of Event Hubs, Messaging as a Service (MaaS) – queues and topics – and the features of Service Bus and Relays.

What's in store from the Service Bus team?

  • Encryption at rest (Event Hubs and Premium Service Bus)
  • Managed Secure Identity
  • Bring your Own Key (BYOK) encryption at rest for premium products
  • IP Filtering
  • VNet support
  • New metrics pipeline
  • GeoDR – coming to Event Hubs, Service Bus and Relays this summer (see the sketch after this list)
    • You can create an alias – an FQDN-like namespace
    • Select the primary region and namespace name
    • Select the secondary region and namespace name
    • The GeoDR service copies data between the regions
    • You make a REST call to initiate failover
      • GeoDR ejects the old primary and breaks the metadata sync
      • The alias connection string continues to work for send and receive
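
The key operational point is the alias: because clients connect through the alias rather than a concrete namespace, a failover does not require any client-side change. A minimal sketch, using the current Python Service Bus SDK purely for illustration (namespace, queue and key are placeholders):

from azure.servicebus import ServiceBusClient, ServiceBusMessage

# The connection string points at the Geo-DR alias, not at a namespace;
# after failover the alias resolves to the new primary automatically.
ALIAS_CONN_STR = (
    "Endpoint=sb://contoso-alias.servicebus.windows.net/;"
    "SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>"
)

with ServiceBusClient.from_connection_string(ALIAS_CONN_STR) as client:
    with client.get_queue_sender("orders") as sender:
        sender.send_messages(ServiceBusMessage("sent via the alias"))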

Session 4 – Azure Event Hubs: the world’s most widely used telemetry service by Shubha Vijayasarathi

Shubha started off her talk with the 3 main S's the product group is focusing on when building their solutions –

  1. Simple
  2. Stable
  3. Scalable

Event Hubs Archive CAPTURE

Shubha then deep-dived into the concept of Event Hubs. She mentioned that the majority of Event Hubs customers write their stream to a persistent store, mainly for long-term storage and batch processing of the information. To achieve this, the Event Hubs team is reintroducing the concept of Archive under a better name: Capture.

Event Hubs Capture allows you to do batch and real-time processing on the same stream. This feature will be GA on June 28, 2017 across all Azure regions.
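
In practice, nothing changes on the producer side when Capture is enabled: you keep sending events to the hub, and the service archives the same stream to storage for batch consumers. A minimal sketch using the current Python Event Hubs SDK for illustration (connection string and hub name are placeholders):

from azure.eventhub import EventHubProducerClient, EventData

CONN_STR = "Endpoint=sb://contoso-telemetry.servicebus.windows.net/;SharedAccessKeyName=send;SharedAccessKey=<key>"

producer = EventHubProducerClient.from_connection_string(
    CONN_STR, eventhub_name="clickstream"
)
with producer:
    batch = producer.create_batch()
    # Real-time consumers read this event from the hub; with Capture enabled
    # the service also lands it in blob storage for batch processing.
    batch.add(EventData('{"user": 42, "action": "page_view"}'))
    producer.send_batch(batch)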

Shubha then covered the conceptual architecture of Event Hubs and gave a clear distinction between Event Hubs and Service Bus Topics.

Event Hubs Auto-Inflate

Shubha then gave an insight into Event Hubs Auto-Inflate, a feature released earlier this month. For more information about this feature, we recommend you watch this Middleware Friday episode by Kent Weare.

Shubha wrapped up her session talking about Event Hubs pricing, dedicated Event Hubs and a soon-to-be-released feature called Event Hubs Clusters.

Event Hubs Clusters

You can stand up your own Event Hubs cluster in less than two hours. You can monitor your cluster health, and with this model you have the option to start small and then scale as you go. This feature is currently in private preview and will soon be available in public preview (by September).

What’s Coming – Azure Event Hubs

  • GeoDR Capability
  • Encryption at Rest
  • Metrics in the new portal
  • ADLS (Azure Data Lake Store) support for public preview
  • Dedicated EH clusters for private preview
  • Namespace updates
    • Sunsetting the old Azure portal
    • All services are independent (Messaging, Event Hubs, Relays)
    • Different portal experience

With that, it was the end of the first half of Day 1 sessions on INTEGRATE 2017.

During lunch, we generated a report on the sentiment of tweets for the hashtag #Integrate2017. The report was produced by a Logic App that connects with the Sentiment Analysis connector to identify the sentiment of tweets with the hashtag #Integrate2017; the results are then categorized based on a score on a scale of 0 to 1, and the data is loaded into a Power BI dashboard (a rough sketch of the underlying scoring call follows the note below).

Note: This report started collecting data from Saturday, when the buzz started for #Integrate2017.
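
For the curious, the heart of such a report is a single scoring call per tweet. The sketch below shows roughly what the Sentiment Analysis connector does under the hood, using the Text Analytics REST API; the region, key and tweet text are placeholders.

import requests

ENDPOINT = "https://westeurope.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

body = {"documents": [
    {"id": "1", "language": "en", "text": "Great first day at #Integrate2017!"}
]}

resp = requests.post(ENDPOINT, headers=HEADERS, json=body)
for doc in resp.json()["documents"]:
    score = doc["score"]  # 0.0 (negative) .. 1.0 (positive)
    print("positive" if score >= 0.5 else "negative", score)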

Post Lunch Sessions – Azure Logic Apps and Azure Functions

Post lunch, it was time for the dynamic crew of the Logic Apps Live webcast to take over the stage – Jeff Hollan and Kevin Lam. Kevin started off by taking the audience through a getting-started tour of Azure Logic Apps.

“Logic Apps means Powerful Integration” – Kevin Lam

What is Logic Apps?

Logic Apps is Microsoft’s strategy for Serverless technologies.

  • Faster integration using interactive visual designer
  • Easy workflow connector with triggers and actions
  • Built for mission critical integration
  • Create, Deploy, Manage and Monitor

What’s coming in Logic Apps?

  • Azure Storage Tables – in addition to Queues and Blobs
  • Oracle eBusiness Suite
  • ServiceNow
  • SOAP Connector (Yaaaayyyy!!!!!)
  • Service Principals
  • Custom Connectors – Build and deploy your own connector!!

Logic Apps is now CERTIFIED!!

Logic Apps is certified by –

  • Drummond AS2
  • ISO 27001
  • SOC (I, II, III)
  • HIPAA
  • PCI DSS

Logic Apps is Agile! What’s Coming?

Jeff & Kevin gave a very interesting demo based on the #ContosoFitness idea that was earlier discussed during the keynote.

Azure Functions

Jeff continued the session from Logic Apps to Azure Functions. He started off with the evolution of application platforms –

Serverless is the future

  • Abstraction of servers
  • Event Driven and instant scale
  • Micro-billing

Azure Functions is nothing but some code + event/data
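
A minimal illustration of that "code + event/data" idea: an HTTP-triggered function where the request is the event and the body below is the code. Python syntax from the current Functions runtime is used purely for illustration (the 2017-era tooling targeted C# and JavaScript):

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # The event/data: the incoming HTTP request and its parameters.
    name = req.params.get("name", "world")
    # The code: whatever you want to run when the event fires.
    return func.HttpResponse(f"Hello, {name}!", status_code=200)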

Points to Remember with Azure Functions

  • If you are using Visual Studio 2017 (currently in preview mode), you can download the Azure Functions Tools for Visual Studio

Session 8 – Microsoft Flows

Derek Li from the Microsoft Flow team started off his session talking about what Microsoft Flow is. Flow helps non-developers automate their workflows across different apps and services without having to write a single line of code.

Microsoft Flow solves the last mile of integration challenges.

Why Flows and not Logic Apps?

Why is Flow important for integration teams?

  • Flow lets business users self-serve and solve simple problems on their own
  • It helps customers be more engaged
  • Popular, advanced flows can translate into Logic Apps
  • For Flow to connect to your services, a custom connector has to be developed

Session 9 – Azure API Management

Vladimir Vinogradsky and Matthew Farmer started their session on Azure API Management. I will take you through the key highlights from this session –

  • The following are the updates that have gone into API Management since last year's INTEGRATE event
  • Versioning approach
    • Versioning is opt-in
    • Choose the appropriate scheme for your API
    • Create new API versions as first-class objects
    • Make developers aware of versions and revisions

Matt Farmer then showcased the new API Management portal and recommended everyone take a look at it. Paul gave an interesting demo walkthrough of the new portal, along with the #ContosoFitness concept that was discussed throughout the day.

Finally Vlad came back on stage to discuss the roadmap for the API Management team.

Towards the end of the day, Paul Larsen and Steve Melan showed how you can integrate heritage IBM systems using new cloud and on-premises connectors with an interesting demo.

With that it was a wrap on Day 1 at INTEGRATE 2017. It was time for the attendees to chill out over some drinks and networking.

Statistics of Tweets with #Integrate2017 – End of Day 1

The following screenshot shows the report of number of tweets and their sentiment at the end of Day 1 of #Integrate2017.

And that's a wrap on our summary of Day 1 at INTEGRATE 2017. We already look forward to the Day 2 and Day 3 sessions. Thanks for reading! Good night from Day 1 at INTEGRATE 2017, London.

Author: Sriram Hariharan

Sriram Hariharan is the Senior Technical and Content Writer at BizTalk360. He has over 9 years of experience working as a documentation specialist for different products and domains. Writing is his passion and he believes in the following quote – "As wings are for an aircraft, a technical document is for a product — be it a product document, user guide, or release notes".

The post INTEGRATE 2017 – Recap of Day 1 & Announcements from the Microsoft Pro Integration Team appeared first on BizTalkGurus.

Microsoft Integration Weekly Update: June 26


Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It's a weekly update on topics related to integration – enterprise integration, robust & scalable messaging capabilities and citizen integration capabilities – empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

On-Premise Integration:

Cloud and Hybrid Integration:

Feedback

Hope this is helpful. Please feel free to let me know your feedback on the Integration weekly series.


The post Microsoft Integration Weekly Update: June 26 appeared first on BizTalkGurus.

INTEGRATE 2017 – Recap of Day 2


After an exciting Day 1 at INTEGRATE 2017 with loads of valuable content from the Microsoft Pro Integration team, it was time to get started with Day 2 at INTEGRATE 2017.

Important Links – Recap of Day 1 at INTEGRATE 2017,
Photos from Day 1 at INTEGRATE 2017

Session 1 – Microsoft IT journey with Azure Logic Apps by MSCIT team

Day 2 at INTEGRATE 2017 started off with Duncan Barker of BizTalk360 introducing Mayank Sharma and Divya Swarnkar from the Microsoft IT Team. The key highlights from the session were –

    • The integration landscape at Microsoft has over 1,000 partners, 170M+ messages per month, 175+ BizTalk servers, 200+ Line of Business systems, 1,300+ transforms, and a multi-platform setup that spans BizTalk Server 2016, Azure Logic Apps, and MABS
    • The Microsoft IT team showed why they were motivated to move to Logic Apps –
      • Modernization of Integration (Serverless Computing + Managed Services, business agility and accelerated development)
      • Manage and Control Costs based on usage
      • Business Continuity
    • The MSCIT team also shared where they stand today in terms of the number of releases. Microsoft Azure BizTalk Services will be retired by the end of July.
    • The Microsoft IT team uses a Logic Apps pipeline to process EDI messages coming from partners
    • For testing purposes, the Microsoft IT team uses Azure API Management policies to route message flows to parallel pipelines
    • The team at Microsoft IT uses Operations Management Suite (OMS) for Logic Apps diagnostics. This was briefly covered earlier by Srinivasa Mahendrakar in one of the Integration Monday sessions – Business Activity tracking and monitoring in Logic Apps. Microsoft IT has migrated all their EDI workloads off of MABS and BizTalk and onto Logic Apps.
    • Microsoft IT only uses BizTalk for its adapters to connect to LOB systems, while all processing happens in Logic Apps.
    • Finally, the team shared their learnings while working with Logic Apps
      • Each Logic App has published limits – make sure you understand what they are
      • Consider the nature of flow you will create with Logic Apps – high throughput or long running workflows
      • Leverage the platform for concurrency (SplitOn vs. ForEach)
      • Understand the structure and behavior of data (batched vs. non-batched)
      • Consider a SxS strategy to enable test in production
      • In Logic Apps, your delivery options are 'at least once' or 'at most once' (not 'only once') – see the idempotency sketch after this list
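
That last point deserves a sketch: with at-least-once delivery, the same message can arrive twice, so downstream handlers should be idempotent. A minimal, hypothetical example of the pattern (a durable store would replace the in-memory set in production):

processed_ids = set()  # stand-in for durable deduplication storage

def handle(message_id: str, body: dict) -> None:
    # At-least-once delivery means duplicates are possible; checking the
    # message id keeps the side effects "effectively once".
    if message_id in processed_ids:
        return  # duplicate redelivery -- safe to ignore
    processed_ids.add(message_id)
    process(body)

def process(body: dict) -> None:
    print("processing", body)  # hypothetical downstream work

handle("msg-1", {"order": 42})
handle("msg-1", {"order": 42})  # redelivery: ignored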

Jim Harrer was really appreciative and thankful to the Microsoft IT team for making their trip to London to share their experiences.

Session 2 – Azure Logic Apps – Advanced integration patterns

This was one of the most anticipated sessions on Day 2 at INTEGRATE 2017, with Jeff Hollan (Sir Hollywood) and Derek Li talking about "Advanced integration patterns". The agenda of the session included talks on –

  • Logic Apps Architecture
  • Parallel Actions
  • Exception Handling
  • Other “Operation Options”
  • Workflow Expressions

The Logic Apps architecture under the hood looks as follows –

An important point to observe is that the ForEach loop in Logic Apps runs the tasks in parallel!

Awesome overview from @jeffhollan @logicappsio on how #LogicApps are executed by the runtime. No thread management needed!!

The Logic Apps designer is basically a TypeScript/React app that uses OpenAPI (Swagger) to render inputs and outputs. The designer can generate the workflow definition (JSON), and you can configure the runAfter options via the designer.
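
A trimmed, hypothetical fragment of such a workflow definition makes both behaviours visible: two actions that depend only on the same predecessor execute in parallel, and a final action joins them via its runAfter entries. The dictionary below is Python purely to render the JSON shape:

import json

actions = {
    "Fetch_Customer": {"type": "Http", "runAfter": {}},
    # Both branches depend only on Fetch_Customer, so they run in parallel.
    "Branch_A": {"type": "Http", "runAfter": {"Fetch_Customer": ["Succeeded"]}},
    "Branch_B": {"type": "Http", "runAfter": {"Fetch_Customer": ["Succeeded"]}},
    # Join waits for both branches to succeed before it runs.
    "Join": {
        "type": "Compose",
        "runAfter": {"Branch_A": ["Succeeded"], "Branch_B": ["Succeeded"]},
    },
}

print(json.dumps(actions, indent=2))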

This statement by Jeff Hollan was probably the highlight of the show –

In the history of #LogicApps, there hasn’t been a single run that hasn’t executed at least once.

After a very interesting demo by Derek Li, Jeff Hollan started his talk on workflow expressions. An expression is any input that is dynamic (i.e., it changes at every run). Jeff explained the different elements of the expression syntax in an easy-to-understand way –

@ – Used to indicate an expression. It can be escaped with @@. Example – @foo()

() – Encapsulates the expression parameters. Example – @foo('Hello World')

{} – "Curly braces means string!!!" – @{…} is the same as @string(…). Example – @{add(1,1)}

[] – Used to parse properties in JSON objects. Example – @foo('JsonBody')['person']['address']
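
Putting the four syntax elements together, here is a hypothetical set of action inputs showing how each form is written; the dictionary is Python purely to render the JSON shape (triggerBody() stands in for whatever function your flow actually uses):

import json

action_inputs = {
    "expression": "@add(1, 1)",             # @ starts an expression; evaluates to 2
    "escaped": "@@add(1, 1)",               # @@ escapes @; stays a literal string
    "interpolated": "total: @{add(1, 1)}",  # @{...} stringifies inside text
    "property": "@triggerBody()['person']['address']",  # [] walks JSON properties
}

print(json.dumps(action_inputs, indent=2))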

This session from Jeff Hollan and Derek Li was well received by the audience at #Integrate2017.

Jeff also mentioned an upcoming feature where customers can test expressions in the designer – coming soon!

Session 3 – Enterprise Integration with Logic Apps by Jon Fancey

In this session, Jon Fancey started off his presentation by talking about Batching in Logic Apps and how it works –

  • There are basically two Logic Apps – Sender and Receiver
  • The batcher is aware of the batching Logic App, whereas the batching Logic App is not aware of the batchers (1:n)

What’s coming in Batching?

  1. Batch Flush
  2. Time based Batch release trigger options
  3. EDI Batching

Jon Fancey then moved to the concept of the Integration Account (IA) and mentioned that the VETER pipeline is available as a template in Azure Logic Apps using an Integration Account.

  • Integration Account is the core to XML and B2B capabilities
  • IA provides partner creation and management
  • IA provides for XML validation, mapping and flat file conversion
  • IA provides tracking

Jon listed the Logic Apps enhancements coming soon for working with XML, such as:

  • XML parameters
  • Code and functoids
  • Transform output format (XML, HTML, Text)
  • BOM handling

Jon showed a very interesting demo about how to transform an XML message with C# and XSLT in Logic Apps. You'll have to wait a little longer until the videos are made available on the INTEGRATE 2017 event website 🙂

Disaster Recovery with B2B – how does it work?

In the final section of his presentation, Jon discussed the monitoring and tracking of Azure Logic Apps. This topic was covered by Srinivasa Mahendrakar in one of his recent Integration Monday sessions.

Jon showed an early preview (mockup) of the upcoming OMS dashboard for Azure Logic Apps. With this, you can perform operational monitoring for Logic Apps in OMS with a powerful query engine. You can expect this feature to be rolled out mid-July!

With that, the first set of morning sessions on Day 2 at INTEGRATE 2017 came to a close.

Session 4 – Bringing Logic Apps into DevOps with Visual Studio and monitoring by Jeff Hollan/Kevin Lam

Once again, but unfortunately for the last time on stage, it was time for Sir Hollywood Jeff Hollan to rock the stage with his partner Kevin Lam to talk about bringing Logic Apps into DevOps with Visual Studio and monitoring.

The key highlights from the session include –

Visual Studio tooling to manage Logic Apps

  • Hosted the Logic App Designer within Visual Studio
  • Resource Group Project (same project that manages the ARM projects)
  • Cloud Explorer integration
  • XML/B2B artifacts

Make sure you have installed the "Cloud Explorer for Visual Studio 2015" and "Azure Logic Apps Tools for Visual Studio" extensions in order to be able to use Logic Apps from Visual Studio. The tooling works with both Visual Studio 2015 and 2017.

Kevin and Jeff showed the demo of the Visual Studio tooling with a real time example of using Logic Apps in Visual Studio.

Azure Resource Templates

  • You can create Azure Resource Templates that get shipped to Azure Resource Manager.
    • Azure resources can be represented and created via the programmatic APIs available at http://resources.azure.com. This is a pivot on Azure where you look at the API view of your resources.
  • Resource templates define a collection of resources to be created
  • Templates include the following (see the skeleton after this list) –
    • Resources that you want to create
    • Parameters that you want to pass in at deployment time
    • Variables (specific calculated values)
    • Outputs
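
A skeleton with those four sections might look as follows – rendered here as a Python dictionary for illustration, with a placeholder (deliberately empty) Logic App resource:

import json

template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {"logicAppName": {"type": "string"}},       # passed at deployment
    "variables": {"location": "[resourceGroup().location]"},  # calculated value
    "resources": [
        {
            "type": "Microsoft.Logic/workflows",
            "apiVersion": "2016-06-01",
            "name": "[parameters('logicAppName')]",
            "location": "[variables('location')]",
            "properties": {"definition": {}},  # placeholder workflow definition
        }
    ],
    "outputs": {},
}

print(json.dumps(template, indent=2))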

Service Principal

With this, you can authorize an application that you create and grant that application access to the resources (a sketch follows below).
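
A hedged sketch of what that looks like from a deployment script, using the current Azure SDK for Python; the tenant, app and subscription identifiers are placeholders:

from azure.identity import ClientSecretCredential
from azure.mgmt.resource import ResourceManagementClient

# The pipeline authenticates as the service principal (an app registration),
# not as a human user.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-id>",
    client_secret="<app-secret>",
)

client = ResourceManagementClient(credential, "<subscription-id>")
for rg in client.resource_groups.list():
    print(rg.name)  # demonstrates the access granted to the application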

Jeff wrapped up the session by showing a demo of how the deployment process works in detail. You can watch the video, which will be available in a week's time on the BizTalk360 website, for a detailed understanding of the steps to perform a deployment.

With this, the first 1.5 days of sessions from Microsoft – on core integration technologies and what's coming from them in the months ahead – wrapped up. It was now time for the Integration MVPs to take the stage and show what they've done and achieved, and what they can do with the various offerings from Microsoft.

Session 5 – What’s there & what’s coming in BizTalk360 & ServiceBus360 by Saravana Kumar

Saravana was given a "warm" welcome with nice music and loud applause from the audience! 🙂 Saravana thanked the entire Microsoft team for their presence and effort at INTEGRATE 2017 over the last 1.5 days.

Key Highlights from Saravana’s session

BizTalk360 Updates

  • BizTalk Server License Calculator
  • Folder Location Monitoring
    • File, FTP/FTPS, SFTP
  • Queue Monitoring
  • Email Templates
  • Throttling Monitoring
  • On-Premise + Cloud features
    • Azure Logic Apps Management
    • Azure Logic Apps Monitoring
    • Azure Integration Account
    • Azure Service Bus Queues (monitoring)

You can get started with a 14-day FREE TRIAL of BizTalk360 to experience the full-blown capabilities of the product.

ServiceBus360

Saravana discussed the challenges with Azure Service Bus and how ServiceBus360 helps to solve the Operations, Monitoring and Analytics issues of Azure Service Bus.

ServiceBus360 has a pricing model starting as low as $15; we wanted to go with a low-cost, high-volume model. You can also try the product for FREE if you are keen on giving it a go. And if you are an INTEGRATE 2017 attendee, we have a special offer for you that you cannot afford to miss.

With that it was time for the attendees to break for lunch on Day 2 at INTEGRATE 2017. Lots more in store over the remaining 1.5 days!

Post Lunch Sessions – Session 6 – Give your Bots connectivity, with Azure Logic Apps by Kent Weare

We’ll take you through a quick recap of the post lunch sessions on Day 2 at INTEGRATE 2017.

Kent Weare started off by talking about his company and how they are coping with the business transformation demands from the government and local bodies in Canada. Kent then showed how the company has grown over the years and what that growth means in terms of the cost of business transformation. The approach they have taken is moving towards "Automating Insight, Artificial Intelligence, Machine Learning, and BOTS".

Kent then showed why bots are gaining popularity these days – to improve productivity! Bots are very similar to IMs, which users are already familiar with.

Kent then stepped into his demo where the concept was as follows –

Kent wrapped up his session with a summary of how companies can take advantage of the latest technology available these days.

Session 7 – Empowering the business using Logic Apps by Steef-Jan Wiggers

After Kent Weare, Steef-Jan Wiggers took the stage to talk about empowering the business using Logic Apps. This talk from Steef-Jan Wiggers was more from the end user/consumer perspective of using Logic Apps.

Steef took a business case of a company called "Cloud First" that wanted to move to the cloud (and chose Azure). His entire talk in this session focused on this company, which wanted to migrate to the cloud with minimal customization and a unified landscape. Steef also showed some sentiment around the developer experience with Logic Apps.

Steef showed a demo that calculates the sentiment of #Integrate2017 (very similar to what the folks at BizTalk360 tried and reproduced in the Day 1 recap blog).

After the end of the demo, Steef talked about the Business Value of Logic Apps –

  • Solving business problem first
  • Fit for purpose for cloud integration
  • Less cost; Faster time to market

Session 8 – Logic App continuous integration and deployment with Visual Studio Team Services

After Steef, Johan Hedberg took the stage to talk about Logic App continuous integration and deployment with Visual Studio Team Services. Johan set the stage for the session with an example –

  • Pete is a web developer who loves the Azure Portal and has an amazing time to market. He is generally fast, but has no process.
  • Charlotte loves Visual Studio. She wants to build the Logic App from Visual Studio with source control.
  • Bruce is an operations guy. He does not like Pete and Charlotte having direct access to production. He likes to have a process over everything and wants to approve things before they go out.

Therefore, what all three of them are missing is a common process/pipeline, resulting in –

  • Lack of development standards
  • Process standards
  • Security standards
  • Deployment standards
  • Team communication and culture, and more

Therefore, in this session (and demo), Johan showed how users can apply continuous integration and deployment to Logic Apps with Visual Studio Team Services.

Sessions 9 & 10 – Internet of Things

In the last two sessions of Day 2 at INTEGRATE 2017, Sam Vanhoutte and Mikael Hakansson talked about the Integration of Things (IoT).

Sam Vanhoutte talked about why integration people are forced to build good IoT solutions. He showed the IoT End-to-End value chain with a nice diagrammatic representation.

Then Sam talked about the different options in the industrial IoT connectivity challenge. The options are –

  • Direct connectivity (feels less secure)
  • Cloud gateways (easier to start in a cloud setup)
  • Field gateways (feels more secure)

Sam spoke about Azure IoT Edge, the required hardware for Azure IoT Edge, and flexible business rules for IoT solutions.

Mikael Hakansson started off his IoT talk where Sam Vanhoutte left off, but then came the fun part of the session: Sandro Pereira stopped Mikael from delivering his presentation and made him wear a green shirt for losing a bet on a football match (well, we are not sure if Mikael was part of that bet at all, but his friends unanimously agreed he lost it 🙂). Steef-Jan Wiggers had lost as well, and he was wearing a green shirt too!

Mikael started off his talk with "IoT === Integration" and introduced the concept of Microsoft Azure IoT Hub in detail.

  • Available as a stand-alone service or as one of the services in the new Azure IoT Suite
  • With Azure IoT Hub, you can connect your devices to Azure (see the device sketch after this list):
    • Millions of simultaneously connected devices
    • Per-device authentication
    • High throughput data ingestion
    • Variety of communication patterns
    • Reliable command and control
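
A small device-side sketch ties the list above together: each device authenticates with its own credentials and sends telemetry through the hub. The current Python device SDK is used purely for illustration; the hub, device id and key are placeholders.

from azure.iot.device import IoTHubDeviceClient, Message

# Per-device authentication: the connection string is scoped to one device.
DEVICE_CONN_STR = (
    "HostName=contoso-hub.azure-devices.net;DeviceId=treadmill-01;"
    "SharedAccessKey=<device-key>"
)

client = IoTHubDeviceClient.create_from_connection_string(DEVICE_CONN_STR)
client.connect()
client.send_message(Message('{"speed_kmh": 9.5}'))  # device-to-cloud ingestion
client.disconnect()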

Mikael gave a very cool demo on IoT with Azure Functions, in his usual calm way of coding while on stage. We recommend you watch the video to see the effort that went into preparing the demo, and to see him actually code while presenting the session.

End of the Sessions

At the end of the sessions, it was curtains down on what proved to be another spectacular day at INTEGRATE 2017. The team gathered for a lovely photo shoot, courtesy of photographer Tariq Sheikh.

With that, we would like to wrap up our exhaustive coverage of the Day 2 proceedings at INTEGRATE 2017. Stay tuned for updates from Day 3. Until then, good night from London!

ICYMI: Recap of Day 1 at INTEGRATE 2017

Author: Sriram Hariharan

Sriram Hariharan is the Senior Technical and Content Writer at BizTalk360. He has over 9 years of experience working as a documentation specialist for different products and domains. Writing is his passion and he believes in the following quote – "As wings are for an aircraft, a technical document is for a product — be it a product document, user guide, or release notes".

The post INTEGRATE 2017 – Recap of Day 2 appeared first on BizTalkGurus.
