


Microsoft Integrate 2017 Event


Don’t miss the greatest integration event in the world.

Name: Integrate 2017

When: 26-28 June 2017

Where: Kings Place, London

Discount Code: MVPSPEAK2017REF

Link: https://www.biztalk360.com/integrate-2017/

Topics: Microsoft BizTalk, Host Integration Server, Logic Apps, IBM Legacy Integration, and much more

Speakers: Product Group and MVPs (Most Valuable Professionals)

Audience: From all over the world



Azure announcements from Microsoft Build 2017



Microsoft Build (often stylized as //build/) is an annual conference held by Microsoft, aimed at software engineers and web developers using Windows, Windows Phone, Microsoft Azure and other Microsoft technologies. This year’s conference was held in Seattle, WA, from May 10 to May 12.

Unlike the previous year, which concentrated mainly on Visual Studio and .NET Core, Microsoft this time concentrated on AI and Azure services. This blog provides a compilation of all the Azure announcements from the 3-day event.

Azure IoT Edge


IoT Edge provides easy orchestration between code and services, so they flow securely between cloud and edge to distribute intelligence across IoT devices. It leverages Azure Stream Analytics, Microsoft Cognitive Services, and Azure Machine Learning to create more advanced IoT solutions with less time and effort.

To get more info on Azure IoT Edge please click here or check out the video on channel9.

Azure Batch AI Training

On Wednesday Microsoft announced a new service called Azure Batch AI Training. It uses Azure to train deep neural networks, which means developers can now train their AI models without having to worry about hardware.

To get more info on Azure Batch AI Training please click here.

Azure Cloud Shell


Microsoft has made a huge investment in the command-line interface. Now you can use Azure Cloud Shell from inside the Azure portal! Yes, you heard me right; the Azure portal now has a real Bash command-line interface. It is also preloaded with the Azure CLI, so you can directly use commands like azure vm list right from the portal.

Azure Cloud Shell is still in preview; please click here to find more information on Azure Cloud Shell.

MySQL and PostgreSQL on Azure

With this support, developers will now be able to use their favourite databases on Azure. Developers will surely appreciate this, since more options mean more flexibility.

The Azure Database for MySQL and Azure Database for PostgreSQL services are built on the intelligent, trusted and flexible Azure relational database platform. This platform extends managed service benefits, like global Azure region reach, and innovations that currently power the Azure SQL Database and Azure SQL Data Warehouse services, to the MySQL and PostgreSQL database engines.

Cosmos DB

Azure Cosmos DB is Microsoft’s first globally distributed, multi-model database. It enables you to elastically and independently scale throughput and storage across any number of Azure’s geographic regions.

To find more information on Azure Cosmos DB please click here.

Cognitive Services

Microsoft also announced new cognitive services on top of the 25 existing ones. These new services include a machine vision service, a Bing-based search engine powered by AI, a video indexer, and a new online lab where more experimental services may be unveiled.

Check out this page to find all the available cognitive services in the Azure platform.

Conclusion

And that’s all of it! Since Microsoft Build is a developer conference, most of the feature announcements targeted developers, but they will probably influence the future of AI, since Microsoft is making it easy for developers to include the power of AI with minimal effort.

Author: Umamaheswaran Manivannan

Umamaheswaran is a Senior Software Engineer at BizTalk360 with 6 years of experience. He is a full-stack developer who has worked with various technologies like .NET, AngularJS, etc.


Using IoT Hub for Cloud to Device Messaging


In the previous blog posts of this IoT Hub series, we have seen how we can use IoT Hub to administer our devices, and how to do device-to-cloud messaging. In this post we will see how we can do cloud-to-device messaging, something which is much harder when not using Azure IoT Hub.

IoT devices will normally be low-power, low-performance devices, like small-footprint devices and purpose-specific devices. This means they are not meant to (and most often won’t be able to) run antivirus applications, firewalls, and other types of protection software. We want to minimize the attack surface they expose, meaning we can’t expose any open ports or other means of remoting into them. IoT Hub uses Service Bus technologies to make sure there is no inbound traffic needed toward the device, instead using per-device topics, allowing us to send commands and messages to our devices without making them vulnerable to attacks.


Send Message To Device

When we want to send one-way notifications or commands to our devices, we can use cloud to device messages. To do this, we will expand on the EngineManagement application we created in our earlier posts, by adding the following controls, which, in our scenario, will allow us to start the fans of the selected engine.


To be able to communicate with our devices, we will first implement a ServiceClient in our class.

private readonly ServiceClient serviceClient = ServiceClient.CreateFromConnectionString("HostName=youriothubname.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=yoursharedaccesskey"); 

Next we implement the event handler for the Start Fans button. This type of communication targets a specific device by using the DeviceID from the device twin.

private async void ButtonStartFans_Click(object sender, EventArgs e)
{
    var message = new Microsoft.Azure.Devices.Message();
    message.Properties.Add(new KeyValuePair<string, string>("StartFans", "true"));
    message.Ack = DeliveryAcknowledgement.Full; // Used for getting delivery feedback
    await serviceClient.SendAsync(comboBoxSerialNumber.Text, message);
}

Process Message On Device

Once we have sent our message, we will need to process it on our device. For this, we are going to update the client application of our simulated engine (which we also created in the previous blog posts) by adding the following method.

private static async void ReceiveMessageFromCloud(object sender, DoWorkEventArgs e)
{
    // Continuously wait for messages
    while (true)
    {
        var message = await client.ReceiveAsync();

        // Check if message was received
        if (message == null)
        {
            continue;
        }

        try
        {
            if (message.Properties.ContainsKey("StartFans") && message.Properties["StartFans"] == "true")
            {
                // This would start the fans
                Console.WriteLine("Fans started!");

            }

            await client.CompleteAsync(message);
        }
        catch (Exception)
        {
            // Send to deadletter
            await client.RejectAsync(message);
        }
    }
}

We will run this method in the background, so update the Main method, and insert the following code after the call for updating the firmware.

// Wait for messages in background
var backgroundWorker = new BackgroundWorker();
backgroundWorker.DoWork += ReceiveMessageFromCloud;
backgroundWorker.RunWorkerAsync();

Message Feedback

Although cloud to device messages are a one-way communication style, we can request feedback on the delivery of the message, allowing us to invoke retries or start compensation when the message fails to be delivered. To do this, implement the following method in our EngineManagement backend application.

private async void ReceiveFeedback(object sender, DoWorkEventArgs e)
{
    var feedbackReceiver = serviceClient.GetFeedbackReceiver();
    
    while (true)
    {
        var feedbackBatch = await feedbackReceiver.ReceiveAsync();

        // Check if feedback messages were received
        if (feedbackBatch == null)
        {
            continue;
        }

        // Loop through feedback messages
        foreach(var feedback in feedbackBatch.Records)
        {
            if(feedback.StatusCode != FeedbackStatusCode.Success)
            {
                // Handle compensation here
            }
        }

        await feedbackReceiver.CompleteAsync(feedbackBatch);
    }
}

And add the following code to the constructor.

var backgroundWorker = new BackgroundWorker();
backgroundWorker.DoWork += ReceiveFeedback;
backgroundWorker.RunWorkerAsync();

Call Remote Method

Another feature when sending messages from the cloud to our devices is to call a remote method on the device, which we call invoking a direct method. This type of communication is used when we want to have an immediate confirmation of the outcome of the command (unlike setting the desired state and communicating back reported properties, which has been explained in the previous two blog posts). Let’s update the EngineManagement application by adding the following controls, which would allow us to send an alarm message to the engine, sounding the alarm and displaying a message.


Now add the following event handler for clicking the Send Alarm button.

private async void ButtonSendAlarm_Click(object sender, EventArgs e)
{
    var methodInvocation = new CloudToDeviceMethod("SoundAlarm") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = textBoxMessage.Text }));

    CloudToDeviceMethodResult response = null;

    try
    {
        response = await serviceClient.InvokeDeviceMethodAsync(comboBoxSerialNumber.Text, methodInvocation);
    }
    catch (IotHubException)
    {
        // Do nothing
    }

    if (response != null && JObject.Parse(response.GetPayloadAsJson()).GetValue("acknowledged").Value<bool>())
    {
        MessageBox.Show("Message was acknowledged.", "Information", MessageBoxButtons.OK, MessageBoxIcon.Information);
    }
    else
    {
        MessageBox.Show("Message was not acknowledged!", "Warning", MessageBoxButtons.OK, MessageBoxIcon.Warning);
    }
}

And in our simulated device, implement the SoundAlarm remote method which is being called.

 
private static Task<MethodResponse> SoundAlarm(MethodRequest methodRequest, object userContext)
{
    // On a real engine this would sound the alarm as well as show the message
    Console.ForegroundColor = ConsoleColor.Red;
    Console.WriteLine($"Alarm sounded with message: {JObject.Parse(methodRequest.DataAsJson).GetValue("message").Value<string>()}! Type yes to acknowledge.");
    Console.ForegroundColor = ConsoleColor.White;
    var response = JsonConvert.SerializeObject(new { acknowledged = Console.ReadLine() == "yes" });
    return Task.FromResult(new MethodResponse(Encoding.UTF8.GetBytes(response), 200));
}

And finally, we need to map the SoundAlarm method to the incoming remote method call. To do this, add the following line in the Main method.

client.SetMethodHandlerAsync("SoundAlarm", SoundAlarm, null);

Call Remote Method On Multiple Devices

When invoking direct methods on devices, we can also use jobs to send the command to multiple devices. We can use our custom tags here to broadcast our message to a specific set of devices.
In this case, we will add a filter on the engine type and manufacturer, so we can, for example, send a message to all main engines manufactured by Caterpillar. In our first blog post, we added these properties as tags on the device twin, so we now use these in our filter. Start by adding the following controls to our EngineManagement application.


Now add a JobClient to the application, which will be used to broadcast and monitor our messages.

 
private readonly JobClient jobClient = JobClient.CreateFromConnectionString("HostName=youriothubname.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=yoursharedaccesskey");

To broadcast our message, update the event handler for the Send Alarm button to the following.

 
private async void ButtonSendAlarm_Click(object sender, EventArgs e)
{
    var methodInvocation = new CloudToDeviceMethod("SoundAlarm") { ResponseTimeout = TimeSpan.FromSeconds(300) };

    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = textBoxMessage.Text }));

    if (checkBoxBroadcast.Checked)
    {
        try
        {
            var jobResponse = await jobClient.ScheduleDeviceMethodAsync(Guid.NewGuid().ToString(), $"tags.engineType = '{comboBoxEngineTypeFilter.Text}' and tags.manufacturer = '{textBoxManufacturerFilter.Text}'", methodInvocation, DateTime.Now, 10);
            
            await MonitorJob(jobResponse.JobId);
        }
        catch (IotHubException)
        {
            // Do nothing
        }
    }
    else
    {
        CloudToDeviceMethodResult response = null;

        try
        {
            response = await serviceClient.InvokeDeviceMethodAsync(comboBoxSerialNumber.Text, methodInvocation);
        }
        catch (IotHubException)
        {
            // Do nothing
        }

        if (response != null && JObject.Parse(response.GetPayloadAsJson()).GetValue("acknowledged").Value<bool>())
        {
            MessageBox.Show("Message was acknowledged.", "Information", MessageBoxButtons.OK, MessageBoxIcon.Information);
        }
        else
        {
            MessageBox.Show("Message was not acknowledged!", "Warning", MessageBoxButtons.OK, MessageBoxIcon.Warning);
        }
    }
}

And finally, add the MonitorJob method with the following implementation.

 
public async Task MonitorJob(string jobId)
{
    JobResponse result;

    do
    {
        result = await jobClient.GetJobAsync(jobId);
        await Task.Delay(2000); // Wait 2 seconds between polls without blocking the thread
    }
    while (result.Status != JobStatus.Completed && result.Status != JobStatus.Failed);

    // Check if all devices successful
    if (result.DeviceJobStatistics.FailedCount > 0)
    {
        MessageBox.Show("Not all engines reported success!", "Warning", MessageBoxButtons.OK, MessageBoxIcon.Warning);
    }
    else
    {
        MessageBox.Show("All engines reported success.", "Information", MessageBoxButtons.OK, MessageBoxIcon.Information);
    }
}

Conclusion

By using IoT Hub we have a safe and secure way of communicating from the cloud and our backend to devices out in the field. We have seen how we can use cloud-to-device messages in case we want to send one-way messages to our device, or use direct methods when we want to be informed of the outcome of our invocation. By using jobs, we can also call out to multiple devices at once, limiting the devices being called by using (custom) properties of the device twin. The code for this post can be found here.

IoT Hub Blog Series

In case you missed the other articles from this IoT Hub series, take a look here.

Blog 1: Device Administration Using Azure IoT Hub
Blog 2: Implementing Device To Cloud Messaging Using IoT Hub
Blog 3: Using IoT Hub for Cloud to Device Messaging

Author: Eldert Grootenboer

Eldert is a Microsoft Integration Architect and Azure MVP from the Netherlands, currently working at Motion10, mainly focused on IoT, BizTalk Server, and Azure integration. He comes from a .NET background, and has been in IT since 2006. He has been working with BizTalk since 2010 and has since expanded into Azure and surrounding technologies as well. Eldert loves working on integration projects, as each project brings new challenges and there is always something new to learn. In his spare time Eldert likes to be active in the integration community and get his hands dirty on new technologies. He can be found on Twitter at @egrootenboer and has a blog at http://blog.eldert.net/.


Reliably receive SQL data in Logic Apps


Scenario

Let’s discuss the scenario briefly.  We need to consume data from the following table.  All orders with the status New must be processed!

The table can be created with the following SQL statement:
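The original post shows the SQL statement only as a screenshot. A minimal sketch of what it could look like (the table and column names other than Id are assumptions; the LastModified column is assumed here because the timeout procedure in the later attempts needs a timestamp):

CREATE TABLE [dbo].[Orders] (
    [Id] INT IDENTITY(1,1) PRIMARY KEY,
    [Description] NVARCHAR(100) NOT NULL,
    [Status] NVARCHAR(20) NOT NULL DEFAULT 'New',       -- New / Processed (plus Peeked / Completed in the later attempts)
    [LastModified] DATETIME NOT NULL DEFAULT GETDATE()  -- Used later to detect orders stuck in Peeked
);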

First Attempt

Solution

To receive the data, I prefer to create a stored procedure. This avoids maintaining potentially complex SQL queries within your Logic App. The following stored procedure selects the first order with status New and updates its status to Processed in the same statement. Remark that it also returns the @@ROWCOUNT, as this will come in handy in the next steps.
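The procedure itself is also only pictured in the original post; a hedged sketch matching the description (the procedure name is an assumption) could be:

CREATE PROCEDURE [dbo].[GetNewOrder]
AS
BEGIN
    -- Select the first order with status New and mark it Processed in a single statement
    UPDATE TOP (1) [dbo].[Orders]
    SET [Status] = 'Processed',
        [LastModified] = GETDATE()
    OUTPUT inserted.[Id], inserted.[Description]
    WHERE [Status] = 'New';

    -- Surface @@ROWCOUNT as the ReturnCode, so the caller knows whether an order was returned
    RETURN @@ROWCOUNT;
END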

The Logic App fires with a Recurrence trigger.  The stored procedure gets executed and via the ReturnCode we can easily determine whether it returned an order or not.  In case an order is retrieved, its further processing can be performed, which will not be covered in this post.
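As an illustration, a condition on the ReturnCode inside the Logic App could look like the following expression (the action name is hypothetical):

@equals(body('Execute_GetNewOrder_stored_procedure')?['ReturnCode'], 1)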

Evaluation

If you have a BizTalk background, this is a similar approach to using a polling SQL receive location, with one very important difference: the BizTalk receive adapter executes the stored procedure within the same distributed transaction in which it persists the data in the MessageBox, whereas Logic Apps is completely built on APIs that have no notion of MSDTC at all.

In failure situations, when a database shuts down or the network connection drops, it could be that the order is already marked as Processed but never reaches the Logic App. Depending on the returned error code, your Logic App will end up in a Failed state without a clear description, or it will retry automatically (for error codes 429 and 5xx). In both situations you’re facing data loss, which is not acceptable for our scenario.

Second Attempt

Solution

We need to come up with a reliable way of receiving the data. Therefore, I suggest implementing a pattern similar to the Azure Service Bus Peek-Lock. Data is received in two phases:

  1. You mark the data as Peeked, which means it has been assigned to a receiving process
  2. You mark the data as Completed, which means it has been received by the receiving process

Next to these two explicit processing steps, there must be a background task which reprocesses messages that have had the Peeked status for too long. This makes our solution more resilient.

Let’s create the first stored procedure that marks the order as Peeked.
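The original shows it as an image; a minimal sketch, reusing the PeekNewOrder name referenced by the expression later in this post (the column names remain assumptions):

CREATE PROCEDURE [dbo].[PeekNewOrder]
AS
BEGIN
    -- Assign the first New order to a receiving process and return it
    UPDATE TOP (1) [dbo].[Orders]
    SET [Status] = 'Peeked',
        [LastModified] = GETDATE()
    OUTPUT inserted.[Id], inserted.[Description]
    WHERE [Status] = 'New';

    RETURN @@ROWCOUNT;
END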

The second stored procedure accepts the OrderId and marks the order as Completed.
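A sketch of what it could look like (the procedure name is an assumption):

CREATE PROCEDURE [dbo].[CompleteOrder]
    @OrderId INT
AS
BEGIN
    -- Mark the order as received by the Logic App
    UPDATE [dbo].[Orders]
    SET [Status] = 'Completed',
        [LastModified] = GETDATE()
    WHERE [Id] = @OrderId;
END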

The third stored procedure should be executed by a background process, as it sets the status back to New for all orders that have the Peeked status for more than 1 hour.
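A sketch, assuming the LastModified column from the table definition above:

CREATE PROCEDURE [dbo].[UnpeekStaleOrders]
AS
BEGIN
    -- Orders that have been Peeked for more than one hour go back to New
    UPDATE [dbo].[Orders]
    SET [Status] = 'New',
        [LastModified] = GETDATE()
    WHERE [Status] = 'Peeked'
      AND [LastModified] < DATEADD(HOUR, -1, GETDATE());
END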

Let’s now consume the two stored procedures from within our Logic App. First we peek for a new order and, once we have received it, the order gets completed. The OrderId is retrieved via this expression: @body('Execute_PeekNewOrder_stored_procedure')?['ResultSets']['Table1'][0]['Id']

The background task could be executed by a SQL Agent Job (SQL Server only) or by another Logic App that is fired every hour.

Evaluation

Happy with the result? Not 100%! What if something goes wrong during further downstream processing of the order? The only way to reprocess the message is by changing its status in the origin database, which can be quite a cumbersome experience for operators. Why can’t we just resume the Logic App in case of an issue?

Third Attempt

Solution

As explained over here, Logic Apps has an extremely powerful mechanism for resubmitting workflows. Because Logic Apps has – at the time of writing – no triggers for SQL Server, a resubmit of the Recurrence trigger is quite useless. Therefore I only want to complete my order when I’m sure that I’ll be able to resubmit it if something fails during its further processing. This can be achieved by splitting the Logic App into two separate workflows.

The first Logic App peeks for the order and parses the result into a JSON representation. This JSON is passed to the next Logic App.

The second Logic App gets invoked by the first one. This Logic App completes the order first and afterwards performs the further processing. In case something goes wrong, a resubmit of the second Logic App can be initiated.

Evaluation

Very happy with the result as:

  • The data is received from the SQL table in a reliable fashion
  • The data can be resumed in case further processing fails

Conclusion

Don’t forget that every action is HTTP-based, which can have an impact on reliability. Consider a two-phased approach for receiving data in case you cannot afford message loss. The same principle can also be applied to receiving files: read the file content in one action and delete the file in another action. Always think upfront about resume/resubmit scenarios. Triggers are better suited for resubmit than actions, so if there are triggers available: always use them!

This may sound like overkill, as these considerations require some additional effort. My advice is to first determine whether your business scenario must cover such edge-case failure situations. If yes, this post can be a starting point for your final solution design.

Liked this post? Feel free to share with others!
Toon


Microsoft Integration Weekly Update: May 22


Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It’s a weekly update on topics related to Integration – enterprise integration, robust and scalable messaging capabilities, and citizen integration capabilities empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

On-Premise Integration:

Cloud and Hybrid Integration:

Feedback

Hope this is helpful. Please feel free to let me know your feedback on the Integration weekly series.



SendPort is not showing in BizTalk360


BizTalk360 v8.4 is now released to the public with lots of exciting new features and enhancements. Many of our customers have upgraded to the latest version and started enjoying the new features. We at BizTalk360 support get a lot of queries in the form of tickets. Many customers ask about the installation path, a few raise clarifications, and others report issues. We categorize the tickets as clarifications, feature requests, and bugs.

Our support team often gets strange issues which do not belong to any of these categories. I am here to explain one such interesting case and how we identified the root cause and resolved it. As per the below quote,

“The job isn’t to just fix the problem. It is also to restore the customer’s confidence. DO BOTH!” – Shep Hyken

we, the BizTalk360 support team, always work hard to resolve customers’ issues and achieve customer satisfaction.

The original case stated by the customer

There was a ticket from a customer stating that “Send port is not showing on the BizTalk360 Application portal”. In the BizTalk360 console, the artifacts get listed when we navigate to Operations -> Application Support -> Applications. The case was that a send port was not getting listed here, but all the send ports were getting listed at the time of assigning an alarm for monitoring. There was no issue with the other artifacts; they were getting listed properly.

Sendport information missing in BizTalk360

Backtracking and Analyzing the case with the Network Response

Generally, when there is any issue related to the UI, we ask for the JSON response from the Network tab in the browser’s developer console. By pressing F12, we can open the browser console and check for any exceptions in the service calls. In the Network tab of the console, the service calls for each operation get listed, from which we can get the request headers and JSON response. This way we can check the exception details and work on them. So, we replied to the ticket asking for the network response, but we did not get the required information from the JSON response. The next step was to go on a call with the customer, involving one of our technical team members, through a web meeting with a screen sharing session.

In the web meeting, we tried different scenarios to check the send ports. We tried the Search Artifacts section and the send port was getting listed without any problem. But we noticed something weird: there were multiple entries for the same send port, with different URIs configured. This was the first time we had come across such an issue. But could this be the reason for the send port not getting listed? Let’s see what’s happening.

Discrepancy in the Send Port information

We exported the send port data from the customer and checked it. There were multiple entries for the same send port, but with different transport types and protocols configured. But BizTalk Server does not allow us to create send ports with duplicate names. Then how could this happen at the customer end? We investigated further and found that the multiple entries were due to the backup transport configured for the send ports. But this was not the cause of the issue either. What a strange issue! Let’s move further with the analysis.

Discrepancy in send port information pattern

Was the DB2 Adapter causing the real problem?

On further analysis of this case, we found that the DB2 adapter was being used in one of the send ports, and it is not a standard BizTalk adapter. The BizTalk Adapter for DB2 is a send and receive adapter that enables BizTalk orchestrations to interact with host systems. Specifically, the adapter enables send and receive operations over TCP/IP and APPC connections to DB2 databases running on mainframe, AS/400, and UDB platforms. Based on Host Integration Server (HIS) technology, the adapter uses the Data Access Library to configure DB2 connections, and the Managed Provider for DB2 to issue SQL commands and stored procedures.

The trace logs also indicated a NULL assignment for the transport type of the send port with this adapter. It is a prerequisite for BizTalk360 that the BizTalk Admin components are installed on the BizTalk360 server in case of a BizTalk360 standalone installation. Since the DB2 adapter comes with HIS, we suggested that the customer install HIS on the BizTalk360 server and observe the send port listing. But even after installing HIS, the same issue persisted. We also tried to replicate the scenario by installing HIS with the DB2 adapter on a BizTalk360 standalone server. Send ports with different combinations of adapters in the transport types were created and tested, but the issue was not reproducible. So, we concluded that the DB2 adapter was not the real cause of the problem.

The Console App made the trick

Sometimes an issue may seem simple, but identifying its root cause can be very difficult, especially for strange issues like this one which are not reproducible. It is also not good to disturb customers often, since they might be busy. Our next plan was to provide a console app to get the complete details of the configured send ports. This app was quite helpful for us to find the root cause. Read further to know the real cause.

The console app was given to the customer to get the complete details of the send ports. The app returns the result in JSON format with all the details, like the name, configured URI, transport type, send handlers, etc. The BizTalk application which contained the send port, and the database details, must be entered in the app to fetch the response.

JSON response with the sendport details for the console app

From the screenshot, we can see that the sendHandler for the secondaryTransport does not contain the value of the transport type. This was the cause of the send port not getting displayed; it was causing the exception.

Finally, the Backup Transport configuration was the cause

We probed further into why the sendHandler details were not coming up. The backup transport was configured as “None” in the BizTalk admin console for that send port. Even though it was already configured as None, we asked them to set it to None again and save it. This time, the issue was resolved and the send port got listed in the BizTalk360 UI. It might have happened that, when importing the send port configuration, the backup Transport Type was set to something other than “None” (the type can be empty or NULL).

If the transport type is anything other than None, the code will generate the send handler and look for the transport type. But it could not find the transport type and hence threw an error. The same issue happened in the production environment as well and was resolved the same way.

To configure BackupTransport for send port in BizTalk server

Conclusion

When we import a send port configuration, we must make sure that the backup transport type is properly set to None. It should not be set to NULL or empty. This way we can make sure that all the send ports get listed in the BizTalk360 UI without any problem. We could identify this with the help of the console app.

Author: Praveena Jayanarayanan

I am working as a Senior Support Engineer at BizTalk360. I always believe in teamwork leading to success because “We all cannot do everything or solve every issue. ‘It’s impossible’. However, if we each simply do our part, make our own contribution, regardless of how small we may think it is… together it adds up and great things get accomplished.”


Customer story: BizTalk management through PowerApps


It’s been a month since Feature Pack 1 was released for the BizTalk Server 2016 Enterprise and Developer editions. The Feature Pack introduced a set of new features, and helped customers leverage new technologies as well as take advantage of tools they already used in their organization, in this case to enable more streamlined management of their BizTalk installation.

One of these customers was FortisAlberta, an energy company located in Alberta, Canada. With the new management REST APIs with full Swagger support, they were able to take parts of the operational management of their environments out of the BizTalk Servers and Administration Console, and build a PowerApp to create and maintain their applications, making the 24/7 support of their environment easier for the operational teams.

Having the option to add, update and even start existing artifacts directly from the PowerApp has helped FortisAlberta enhance their productivity and speed of resolving live incidents.

“FortisAlberta has been using BizTalk since 2006 and is currently migrating to BizTalk 2016 due to its versatility, adaptability and ability to integrate disparate systems with ease.”

Anthony See, FortisAlberta




Step by step configuration to publish BizTalk operational data on Power BI whitepaper


After some requests from the community, and after publishing it on my blog as a series of blog posts, Step by step configuration to publish BizTalk operational data on Power BI is now available as a whitepaper!

Recently, the Microsoft product team released the first feature pack for BizTalk Server 2016 (only available for the Enterprise and Developer editions). This whitepaper will help you understand how to install and configure one of the new features of BizTalk Server 2016:

  • Leverage operational data – view operational data from anywhere and with any device using Power BI – how it works and how we can configure it.


What to expect about Step by step configuration to publish BizTalk operational data on Power BI

This whitepaper will give a step-by-step explanation of which components or tools you need to install and configure to enable BizTalk operational data to be published in a Power BI report.

Table of Contents

  1. About the Author
  2. Introduction
  3. What is Operational Data?
  4. System Requirements to Enable BizTalk Server 2016 Operational Data
  5. Step-by-step Configuration to Enable BizTalk Server 2016 Operational Data Feed
    1. First step: Install Microsoft Power BI Desktop
    2. Second step: Enable operational data feed
    3. Third step: Use the BizTalk Server Operational Data Power BI template to publish the report to Power BI
    4. Fourth step: Connect Power BI BizTalkOperationalData dataset with your on-premise BizTalk environment

Where can I download it

You can download the whitepaper here:

I would also like to take this opportunity to thank my amazing team at BizTalk360 for the proofreading, and for once again joining forces with me to publish another free whitepaper.

I hope you enjoy reading this paper and any comments or suggestions are welcome.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.


Techorama 2017 Keynote – Recap


Techorama is a yearly international technology conference which takes place at Metropolis, Antwerp. With 1500+ physical participants from across the globe, the stage was all set to witness the intelligence of Azure. Among the thousands of virtual participants, I am happy to document the keynote on developing with the cloud, presented by Scott Guthrie, Executive Vice President of the Cloud and Enterprise Group at Microsoft. The most interesting feature of this keynote is that Scott built the whole demo around a scenario-driven approach from the perspective of an everyday developer. Let me take you through this keynote quickly.

Azure Mobile app

The inception of the cloud inside a mobile! Yes, you heard it right. The Microsoft team has come up with the Azure app for iOS/Android/Windows to manage all your cloud services. You can now easily manage all your cloud functionality from your mobile.


Integrated Bash Shell Client

The Bash shell is now integrated into the Azure cloud, letting you manage and retrieve all the Azure services with just a typed command. The Bash shell client opens in a browser pop-up and gets connected to the cloud without any keys. More automation scripts can be executed easily in the future with this Bash integration in place. It also provides CLI documentation for the list of commands/arguments. You can expect a PowerShell client soon!

Azure Cloud Shell

Application Map

The flow between different cloud services and their status, with all diagnostic logs and charts, is displayed at the dashboard level. In a top-down approach, you can drill into per-instance tracking based on failure/success/slow-response scenarios, with all diagnostics, stack traces, and the creation of work items from the failure stack traces. From the admin/operations perspective, this feature is a great value add.

Stack trace with work item creation

Security Center

Managing the security of a cloud system can be a complex task. With the Security Center in place, we can easily manage all the VMs and other cloud services. The machine learning algorithms at the backend will fetch all the possible recommendations for an environment or its services.

Recommendations

The possible recommendations for virtual machines are provided with the help of machine learning algorithms.

Essentials for Mobile success

To deliver a seamless mobile experience to the user, you need an interactive, user-friendly UI, BTD (build, test, deploy automation), and scalability with the cloud infrastructure. These are the essentials for mobile success, and Microsoft, with the Xamarin platform, has nailed it.

A favorite area of mine has gained a much-needed intelligent feature: the Xamarin and VS2017 combo is now making its way into real-time debugging!

You can pair your iPhone or any mobile device to Visual Studio with the Xamarin Live Player, which allows you to perform live debugging. DevOps support for Xamarin has also been extended: you can build-test-deploy to any firmware connected to the cloud, just like a continuous integration build. Automation in testing and deployment for the mobile framework is the best part. You can get real-time memory usage statistics for your application in a single window. Also, you can now run VS2017 on iOS as well. 🙂

Xamarin Live Player

The mobile features do not stop there. The VS Mobile Center is also integrated here, letting you run a staging test with your friends’ community to get feedback on your mobile application before you submit it to any mobile stores. Cool, isn’t it?

SQL server 2017

Scott also revealed some features of the upcoming SQL Server 2017, which is capable of running on Linux and Docker, apart from Windows.

The new SQL Server 2017 has Adaptive Query Processing and advanced machine learning features, and can offer in-memory support for advanced analytics. SQL Server is also capable of seamless failover between on-premises and cloud SQL with no downtime, along with the Azure Database Migration Service.

Azure Database- SQL Injection Alerts

SQL injection is one of the most common attacks an application faces. As a remedy, Azure SQL Database can now detect SQL injection using machine learning algorithms. It can send you an alert when an abnormal query gets executed.

Security Alert

Showing the vulnerability in the query

SQL Injection

New Relational Database service

The Relational Database service is now extended to PostgreSQL as a service and MySQL as a service, which can seamlessly integrate with your application.

Data at Planet Scale: Cosmos DB

This could be the right statement to explain Cosmos DB. Azure has come up with a globally distributed, multi-model database service for higher scalability and geographical access. You can easily replicate/mirror/clone the database to any geographical location based on the user base. To give an example, you can scale from gigabytes to petabytes of data, and from hundreds to millions of transactions, with all metrics in place. And this makes the name COSMOS!

Scott also showed us a video on how JET, an online retailer, is using Cosmos DB, including a chatbot that runs against Cosmos DB to answer intelligent human queries. With Cosmos DB and the Gremlin API you can retrieve a comprehensive graphical analysis of the data. Here, he showed us the Marvel comics characters and the friends chart of Mr. Stark, quite cool!

Replicate Data Globally

Convert existing apps to a container-based microservice architecture

You may wonder how to move your existing application to the Azure container-based architecture, and here is a solution with the support of Docker. In your existing application project, you can easily add Docker support, which lets you run your application on an ASP.NET image, with which it can easily plug into the cloud build-deploy-test framework of continuous integration. The simple addition of a Docker metadata file has made DevOps much easier.

Azure stack

There are a lot of case studies which indicate the love for Azure functionality, but enterprises were not able to use it for tailor-made solutions. Here comes Azure Stack, a private cloud hosting capability for your data center, letting you use all the cloud expertise on your own ground.

Conclusion

As more features, including Azure Functions, Service Fabric, etc., are being introduced, this gist of the keynote should have given you an overall view of the Intelligent Cloud, with much more to come: tune in to the Techorama Channel 9 feed for more updates from the second-day events. With the cloud scaling out with new capabilities, there will never be an application in the future that does not rely on cloud services.

Happy Cloud Engineering!!!

Author: Vignesh Sukumar

Vignesh, a Senior BizTalk Developer at BizTalk360, has crossed half a decade of BizTalk experience. He is passionate about evolving integration technologies. Vignesh has worked on several BizTalk projects involving various integration patterns and has expertise in BAM. His hobbies include training, mentoring and travelling.


BizTalk Server: Teach me something new about Flat Files (or not) video and slides are available at Integration Monday


Last Monday I presented, once again, a session in the Integration Monday series. This time the topic was BizTalk Server: Teach me something new about Flat Files (or not). This was the fifth session I have delivered:

And I think it will not be the last! However, this time was different in many aspects, and in a certain way it was a crazy session… Despite having some posts about BizTalk Server: Teach me something new about Flat Files on my blog, I didn’t have time to prepare this session (I was sent on a crazy mission for a client, and I also had to organize the integration track at the TUGA IT event), I had a small problem with my BizTalk Server 2016 machine which forced me to switch to my BizTalk Server 2013 R2 VM, and I was interrupted by the kids in the middle of the session because the girls wanted me to have dinner with them (worthy of being in this series)… but it all ended well and I think it was a very nice session with two great real-case samples:

  • Removing headers from a flat file (CSV) using only the schema (without any custom pipeline component)
  • And removing empty lines from a delimited flat file, again, using only the schema (without any custom pipeline component)

For those who were online, I hope you enjoyed it, and sorry for all the confusion. And for those who did not have the chance to be there, you can now view it, because the session was recorded and is available on the Integration Monday website. I hope you like it!

Session Name: BizTalk Server: Teach me something new about Flat Files (or not)


Session Overview: Despite new protocols, formats and patterns emerging over the years, like Web Services, WCF RESTful services, XML, JSON, among others, the use of text files (flat files) such as CSV (Comma Separated Values) or TXT, one of the oldest common patterns for exchanging messages, still remains today one of the most used standards in systems integration and/or communication with business partners.

While tools like Excel can help us interpret such files, this type of process is always iterative and requires a few user hints so that the software can determine where the fields/columns need to be separated, as well as the data type of each field. But for a system integration platform (Enterprise Application Integration) like BizTalk Server, you must remove any ambiguity, so that these kinds of operations can be performed thousands of times with confidence and without recourse to a manual operator.

In this session we will first address how we can easily implement a robust file transfer integration in BizTalk Server (using content-based routing in BizTalk with retries, a backup channel, and so on).
And second, how to process flat file documents (TXT, CSV, …) in BizTalk Server: which types of flat files are supported; how the process of transforming text files (also called flat files) into XML documents (syntax transformations) works, where it happens and which components are needed; and how to perform flat file validation.

Integration Monday is full of great sessions that you can watch and I will also take this opportunity to invite you all to join us next Monday.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.


BizTalk Application Insights in depth – Part 1


In my previous blog, I explained installing BizTalk Server 2016 Feature Pack 1 and configuring it for Application Insights integration. In this article, I want to go a bit deeper and try to demonstrate:

  • The nature of tracking data sent to Application Insights
  • The structure of the data
  • Querying data in Application Insights
  • Some practical examples of getting sensible analytics for BizTalk interfaces

I am hoping to give a jump start to anyone who wants to use Application Insights with BizTalk Server 2016.

Tracking data

As you know, the term tracking data in BizTalk refers to different types of data emitted from different artifacts: in/out events from ports, orchestrations and pipeline components; system context properties; custom properties tracked via custom property schemas; message bodies at various artifacts; events fired from the rule engine; etc. So we would like to know whether we will be able to get all this data in Application Insights, or just a subset. I will try to answer this question based on the POC I have created.

The POC I created is pretty simple. It has one receive port which receives an order XML file, processes it in an orchestration, and sends it to two different send ports. It can be pictorially represented as below.

XML transmit receive pipeline

  • I have enabled pipeline tracking on XML receive and XML transport pipelines.

biztalk pipeline properties

  • Enabled track message body and analytics on the receive port (if you want to know about the Analytics option, please refer to my first article).

Receive port properties

  • Enabled the Track Events, Track Message Bodies, and Analytics on orchestration.

biztalk orchestration properties

  • Enabled the Analytics on Send Port

send port properties

Note: I enabled different levels of tracking on different artifacts to see if it had an impact on the analytics data sent to Application Insights. Later I realized that different tracking levels do not have any impact on the analytics data.

Analytics Data in Application Insights

I placed a single file into the receive location and started observing the events pushed to Application Insights. In general, applications integrated with Application Insights can send data belonging to various categories, such as traces, customEvents, pageViews, requests, dependencies, exceptions, availabilityResults, customMetrics, and browserTimings. With BizTalk, I have observed that the data belongs to the “customEvents” category. Following are the custom events which were ingested from my BizTalk interface.

  • There are two events for a receive port.
  • There is an event for every logical port inside the orchestration. And hence we can see three events in total for orchestration.
  • There are two events for each send port.

All these events can be correlated to the events logged in the “Tracked events” query results shown below.

biztalk application query results

Structure of a BizTalk custom event

In the previous section, we saw that our BizTalk interface emitted various custom events for ports and orchestration. In this section, we will look into the structure of data which is captured in a custom event.

Event Metadata

Event metadata is the list of values which defines an event. Following are the event metadata in one of the custom events.

biztalk client details

Custom Dimensions

Custom dimensions consist of the service instance details and the context properties promoted in the message instance. Hence we can observe two different kinds of data under custom dimensions.

Service instance properties: These are the values specific to the service instance associated with the messaging event.

biztalk service instance properties

Context properties: All the context properties which are of non-integer type will be listed under the custom dimensions.

Custom Measurements

As per my observation, custom measurements only contain the context properties of integer type.

Since there is no proper documentation regarding this, I tried to prove this theory by creating three custom properties in a property schema and promoting the fields in the incoming message. Following is the property schema that I defined.

biztalk property schema

I observed that the PartID and AskPrice properties, which are of type string and decimal respectively, were moved to the Custom Dimensions section. The Quantity property, which is of type integer, was moved to Custom Measurements.

Querying data

As discussed in the above section, all the BizTalk events are tracked under the customEvents category. Hence our query will start with customEvents.

querying data in biztalk

The query language in Application Insights is very straightforward and yet very powerful. If you want to find out all the constructs of this query language, please refer to the Application Insights Analytics Reference.

In this section, I would like to cover some concepts and techniques which are relevant for querying BizTalk events.

Convert the context property values to specific types

In Application Insights, the context property values are stored as dynamic types. When you use them directly in queries, especially in aggregations, you will receive a type casting exception as shown below.

converting property values

To overcome this error, you will need to convert the context property to a specific type, using a function like tostring() or toint(), as shown below.

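The corrected query is shown as a screenshot in the original post; a minimal sketch of such a conversion, reusing the Quantity custom measurement from the sample property schema introduced later in this post, could look like:

customEvents
| summarize sum(toint(customMeasurements.["Quantity (https_//SampleBizTalkApplication.PropertySchema)"]))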

Easy way to bring a context property key into the query

Since context properties are a combination of namespace and property name, it takes a bit of effort to type them in the queries that we create. To bring a context property onto the query page easily, follow the steps below.

  • Query for custom events and navigate to the property you are interested in within the results section.
  • When you hover the mouse over the desired property, you will get two buttons: '+' for inclusion and '-' for exclusion.

property context key

Selecting specific fields

If you already know the Application Insights query language, this tip is not so special. But if you are new to it and trying to find out how to select a column, you will face some difficulty, as I did. The main reason for this is that there is no construct called “select”. Instead, you will have to use something called “project”. Below is an example query.

selecting specific fields

Some useful sample queries

In this section, I will try to list some queries which I found useful.

Message count by port names

Query


customEvents
| where customDimensions.Direction == "Receive" 
| summarize count() by tostring(customDimensions.["PortName (http_//schemas.microsoft.com/BizTalk/2003/messagetracking-properties)"])

Chart

biztalk analytics

Messaging Volume by schema

Query

customEvents
| where customDimensions.Direction == "Receive" 
| summarize count() by tostring(customDimensions.["MessageType (http_//schemas.microsoft.com/BizTalk/2003/system-properties)"])

Chart

biztalk server analytics

Analytics with custom context properties

The ability to generate analytics reports based on custom promoted properties is a very powerful feature which really makes using Application Insights interesting. As I explained in the previous sections, I created a custom property schema to track the PartID, Quantity and AskPrice fields. Now we will see some example reports based on this.

Total quantity by part id

Query

customEvents
| where customDimensions.PortType == "ReceivePort" 
| where customDimensions.Direction == "Send" 
|summarize sum(toint(customMeasurements.["Quantity (https_//SampleBizTalkApplication.PropertySchema)"])) by PartId = tostring(customDimensions.["PartID (https_//SampleBizTalkApplication.PropertySchema)"])

Chart
biztalk analytics data

Total sales over a period of time

Query

customEvents
| where customDimensions.PortType == "ReceivePort" 
| where customDimensions.Direction == "Send" 
| summarize sum(todouble(customDimensions.["AskPrice (https_//SampleBizTalkApplication.PropertySchema)"])) by bin( timestamp,10m)

Chart
sales chart with biztalk server

Pinning charts to the Azure dashboard

All the charts that you have created can be pinned to an Azure dashboard, and you can combine these charts with other application dashboards as well. My dashboard with the charts that we created looks as below.

azure dashboard

Summary

In summary, the BizTalk analytics option introduced in BizTalk Server 2016 Feature Pack 1 is useful for getting analytics out of tracking data. I would like to conclude by stating the following points.

  • Only the tracked messaging events, service instance information, and context properties of the associated service instance are sent to Application Insights by the analytics feature. Message bodies, pipeline events, business rule engine events, etc. are not pushed out.
  • Under messaging events, I was unable to find the Transmission Failure events for send ports. These would be useful for getting metrics on failure rates. If you agree with this observation, please vote here.
  • The different levels of tracking on ports and orchestrations do not have an impact on the data being transmitted to Application Insights.
  • Orchestration failure/suspend events are not pushed to Application Insights. It would be good if Microsoft provided an extensible feature to push exceptions from orchestrations. If you agree, please vote here.
  • There is no control over which context properties are published to Application Insights. It is an all-or-nothing scenario. It would be good to have control over this, especially when you are promoting and tracking business data. If you agree, please vote here.
  • The ability to perform analytics based on context property values can turn out to be a powerful feature for BizTalk implementations.

Author: Srinivasa Mahendrakar

Technical Lead at BizTalk360 UK – I am an integration consultant with more than 11 years of experience in the design and development of on-premises and cloud-based EAI and B2B solutions using Microsoft technologies.


Microsoft Integration (Azure and much more) Stencils Pack v2.5 for Visio 2016/2013 is now available


Once again, my Microsoft Integration Stencils Pack has been updated with new stencils. This time I added nearly 193 new shapes and did an additional reorganization of the shapes by adding two new files/categories: MIS Power BI and MIS Developer. With these new additions, this package now contains an astounding total of ~1287 shapes (symbols/icons) that will help you visually represent integration architectures (on-premises, cloud or hybrid scenarios) and cloud solution diagrams in Visio 2016/2013. It provides symbols/icons to visually represent features, systems, processes and architectures that use BizTalk Server, API Management, Logic Apps, Microsoft Azure and related technologies.

  • BizTalk Server
  • Microsoft Azure
    • BizTalk Services
    • Azure App Service (API Apps, Web Apps, Mobile Apps and Logic Apps)
    • API Management
    • Event Hubs
    • Service Bus
    • Azure IoT and Docker
    • Virtual Machines and Network
    • SQL Server, DocumentDB, CosmosDB, MySQL, …
    • Machine Learning, Stream Analytics, Data Factory, Data Pipelines
    • and so on
  • Microsoft Flow
  • PowerApps
  • Power BI
  • Office365, SharePoint
  • DevOps: PowerShell, Containers
  • And much more…

Microsoft Integration Stencils Pack v2.5

The Microsoft Integration Stencils Pack v2.5 is composed of 13 files:

  • Microsoft Integration Stencils v2.5
  • MIS Apps and Systems Logo Stencils v2.5
  • MIS Azure Portal, Services and VSTS Stencils v2.5
  • MIS Azure SDK and Tools Stencils v2.5
  • MIS Azure Services Stencils v2.5
  • MIS Deprecated Stencils v2.5
  • MIS Developer v2.5 (new)
  • MIS Devices Stencils v2.5
  • MIS IoT Devices Stencils v2.5
  • MIS Power BI v2.5 (new)
  • MIS Servers and Hardware Stencils v2.5
  • MIS Support Stencils v2.5
  • MIS Users and Roles Stencils v2.5

These are some of the new shapes you can find in this new version:

Microsoft Integration Stencils Pack v2.5

You can download Microsoft Integration Stencils Pack for Visio 2016/2013 from:

Microsoft Integration Stencils Pack for Visio 2016/2013 (10.1 MB)
Microsoft | TechNet Gallery

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

The post Microsoft Integration (Azure and much more) Stencils Pack v2.5 for Visio 2016/2013 is now available appeared first on BizTalkGurus.

TUGA IT 2017 – Recap of an amazing event


Last week I was in Lisbon for TUGA IT, one of the greatest events here in Europe. A full day of workshops, followed by two days of sessions in multiple tracks, with attendees and presenters from all around Europe. For those who missed it this year, make sure to be there next time!


On Saturday I did a session on Industrial IoT using Azure IoT Hub. The industrial space is where we will be seeing a huge growth in IoT, and I showed how we can use Azure IoT Hub to manage our devices and do bi-directional communication. Dynamics 365 was used to give a familiar and easy to use interface to work with these devices and visualize the data.

And of course, I was not alone. The other speakers in the integration track are community heroes and my good friends Sandro, Nino, Steef-Jan, Tomasso and Ricardo, who all gave amazing sessions as well. It is great to be able to present side by side with these amazing guys, to learn and to discuss.

There were some other great sessions in the other tracks as well, like Karl's session on DevOps, Kris' session on the Bot Framework, and many more. At an event like this there is always so much content being presented that you can't see every session you would like to, but luckily the speakers are always willing to have a discussion with you outside of the sessions as well. And with 8 different tracks running side by side, there's always something interesting going on.

One of the advantages of attending all these conferences is that I get to see a lot of cities as well. This was the second time I was in Lisbon, and Sandro showed us a lot of beautiful spots in this great city. We enjoyed traditional food and drinks, a lot of ice cream, and had a lot of fun together.

The post TUGA IT 2017 – Recap of an amazing event appeared first on BizTalkGurus.

Stef’s Monthly Update – May 2017


The month of May went by quicker than I realized. We are almost halfway through 2017, and I must say I have enjoyed it to the fullest. Speaking, travelling, working on an interesting project with the latest Azure services, and recording another Middleware Friday show. It was the best, it was amazing!

The Month of May

In May I started off with a recording for Middleware Friday: a demo showing how one can distinguish Flow from Logic Apps. You can view the recording, named Task Management Face off with Logic Apps and Flow.

The next thing I did was prepare myself for TUGA IT, where I had two sessions. One session on Friday in the Azure track, where I talked about Azure Functions and WebJobs.

And one session on Saturday in the integration track, about the many integration options with Azure.

I enjoyed both and was able to crack a few jokes. Especially on Saturday, where I kept using Trump and his hair as a running joke.

TUGA IT 2017 was an amazing event and I enjoyed it: hanging out with Sandro, Nino, Eldert and Tomasso, and the food!

During the TUGA event I did three new interviews for my YouTube series “Talking with Integration Pros”. And this time I interviewed:

I will continue the series next month.

Books

In May I was able to read a few books again. I started with a book about genes. Before I started my career in IT, I was a biotech researcher and worked in the field of DNA, biotechnology and immunology. The book is called The Gene by Siddhartha Mukherjee.

I loved the storyline and went through the 500 pages pretty quickly (still two weeks, in the evenings). The other book I read was Sapiens by Yuval Noah Harari, which is a good follow-up to the previous one!

The final book I read this month was about Graph databases. In my current project we have started with a proof of concept/architecture on Azure Cosmos DB, Graph and Azure Search.

The book helped me understand Graph databases better.

Music

My favorite albums that were released in May were:

  • God Dethroned – The World Ablaze
  • Voyager – Ghost Mile
  • Sólstafir – Berdreyminn
  • Avatarium – Hurricanes And Halos
  • The Night Flight Orchestra – Amber Galactic

There you have it, Stef's fourth Monthly Update, and I can look back again with great joy. Not much running this month, as I was recovering a bit from the marathon in April. I am looking forward to June, as I will be speaking at the BTUG June event in Belgium and at Integrate 2017 in London.

Cheers,

Steef-Jan

Author: Steef-Jan Wiggers

Steef-Jan Wiggers is all in on Microsoft Azure, Integration, and Data Science. He has over 15 years' experience in a wide variety of scenarios such as custom .NET solution development, overseeing large enterprise integrations, building web services, managing projects, designing web services, experimenting with data, SQL Server database administration, and consulting. Steef-Jan loves challenges in the Microsoft playing field, combining them with his domain knowledge in energy, utility, banking, insurance, health care, agriculture, (local) government, bio-sciences, retail, travel and logistics. He is very active in the community as a blogger, TechNet Wiki author, book author, and global public speaker. For these efforts, Microsoft has recognized him as a Microsoft MVP for the past 6 years.

The post Stef’s Monthly Update – May 2017 appeared first on BizTalkGurus.


Microsoft Integration Weekly Update: May 29


Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It's a weekly update on topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

On-Premise Integration:

Cloud and Hybrid Integration:

Feedback

Hope this would be helpful. Please feel free to let me know your feedback on the Integration weekly series.


The post Microsoft Integration Weekly Update: May 29 appeared first on BizTalkGurus.

Announcing BizTalk Server 2016 CU2


We are happy to announce the second cumulative update for BizTalk Server 2016.

This cumulative update package for Microsoft BizTalk Server 2016 contains hotfixes for the BizTalk Server 2016 issues that were resolved after the release of BizTalk Server 2016.

NOTE: This CU is not applicable to environments where Feature Pack 1 is installed; a new cumulative update for BizTalk Server 2016 Feature Pack 1 is coming soon.

We recommend that you test hotfixes before you deploy them in a production environment. Because the builds are cumulative, each new update release contains all the hotfixes and all the security updates that were included in the previous BizTalk Server 2016 CUs. We recommend that you consider applying the most recent BizTalk Server 2016 update release.

Cumulative update package 2 for Microsoft BizTalk Server 2016 contains the following hotfixes:

  • BizTalk Server Adapter
    • 4010116: FIX: “Unable to allocate client in pool” error in NCo in BizTalk Server
    • 4013857: FIX: The WCF-SAP adapter crashes after you change the ConnectorType property to NCo in BizTalk Server
    • 4011935: FIX: MQSeries receive location fails to recover after MQ server restarts
    • 4012183: FIX: Error while retrieving metadata for IDOC/RFC/tRFC/BAPI operations when SAP system uses non-Unicode encoding and the connection type is NCo in BizTalk Server
    • 4020011: FIX: WCF-WebHTTP Two-Way Send Response responds with an empty message and causes the JSON decoder to fail in BizTalk Server
    • 4020012: FIX: MIME/SMIME decoder in the MIME decoder pipeline component selects an incorrect MIME message part in BizTalk Server
    • 4020014: FIX: SAP adapter in NCo mode does not trim trailing non-printable characters in BizTalk Server
    • 4020015: FIX: File receive locations with alternate credentials get stuck in a tight loop after network failure
  • BizTalk Server Accelerators
    • 4020010: FIX: Dynamic MLLP One-Way Send fails if you enable the “Solicit Response Enabled” property in BizTalk Server
  • BizTalk Server Design Tools
    • 4022593: Update for BizTalk Server adds support for the .NET Framework 4.7
  • BizTalk Server Message Runtime, Pipelines, and Tracking
    • 3194297: “Document type does not match any of the given schemas” error message in BizTalk Server
    • 4020018: FIX: BOM isn’t removed by MIME encoder when Content-Transfer-Encoding is 8-bit in BizTalk Server

Download and read more here

The post Announcing BizTalk Server 2016 CU2 appeared first on BizTalkGurus.

Get access to a great range of BizTalk360’s Value added services


At BizTalk360, we provide some essential services which benefit the customers and our partners immensely. Businesses trust and use our expertise to assist them on a wide range of projects relating to their BizTalk needs. This includes, but is not limited to:

  • Setting up our monitoring product BizTalk360 and integrating their BizTalk servers.
  • Advising on the best practices used in the community for monitoring.
  • Helping in BizTalk migration projects via support and licensing discounts.
  • In-Depth training and advice on end to end BizTalk solutions.

There are 3 types of services we offer, namely:

  1. Product Demos
  2. Best Practice and Installation service
  3. In-Depth Training for BizTalk360

Product Demos

request biztalk360 demo session

Demos can turn prospects into customers. They address client concerns and provide proof of what the product can do for you (our customers). Customers often want to see it in action before they commit to a purchase. We also offer a Free Trial.

Free Trial of BizTalk360
It gives the customer the best opportunity to experience what it would be like to own the product

Most of the time, we are approached by customers themselves, who googled around, came across our product BizTalk360 and want to know more about it.

Sometimes we are approached by consultants who say “We love your product and need your help to convince Management to get it for our company”.

We also give demos to our Partners, to keep them well-equipped and confident in the product.

BizTalk360 Demo – Request one today!

We avoid giving generic demos. We try to learn about our audience's specific challenges and what they want to achieve, and tailor the demo accordingly. We want customers to feel empowered and confident in the product they are going to purchase. The customer should be happy that BizTalk360 is going to be a good fit for their requirements.
We have also printed some documentation which outlines all the main features of BizTalk360 – What's the business value of using BizTalk360 – which we will be giving out at our upcoming INTEGRATE 2017 event.

Best Practice installation and configuration

important troubleshooting steps

Customers like hearing about best practices, especially if they're easy to implement and result in an immediate benefit. We provide this service to help set up BizTalk360 at the customer site. While the actual setup can be quite simple and quick, depending on your environment you might face some issues. We provide a 2-hour service (chargeable) where we set up the product (via a web call) and go through a few basic setup tasks:

  • SMTP settings – These are the first thing to setup to ensure that the notifications for any alarm are successfully sent.
  • User access policy – Add user specific permissions to view different parts of the product.
  • Setting up Alarms/Monitoring Dashboard – Monitoring artifacts is the core of our product. We explain the types of alarms available and which are more suited to certain scenarios. The dashboard helps you see the status of all mapped artifacts at a glance.
  • Monitoring of SQL Servers
  • Event Log setup – This will enable you to view Windows logs across multiple BizTalk servers
  • Purging Policy – This will help set up purging to avoid bloating of the database.

Often a simple task can turn out to be complex when dealing with multiple environments. A lot of our customers also take this opportunity to give their BizTalk administrators a quick crash course in the product, to see how they can utilize it in the best way possible.

Some important troubleshooting steps during installation

Best Practice installation and configuration

  • Run the MSI with logging enabled, so that if any issues occur they are detailed in the logs.
  • If the installation completes successfully but the home page is not displayed, or you see errors like missing icons/images, check that the ‘Static Content’ checkbox is activated under ‘IIS’ in Server Manager -> Add Roles & Features.
  • If you are installing BizTalk360 on a completely standalone server and there is no trace of the remote BizTalk environment you are going to configure, simply enter the corresponding values for the BizTalkMgmtSqlInstanceName and BizTalkMgmtDb columns as shown below.
    trouble shooting steps in biztalk360
  • If that doesn’t help, kindly run the troubleshooter tool on the machine where BizTalk360 is installed. The troubleshooter will help you find any missing permissions.

We are soon releasing a ‘User Guide’ of around 400 pages covering the basic setup tasks required when you install BizTalk360, written by Eva De Jong and Lex Hegt. Please look out for it at our ‘INTEGRATE 2017’ event.

BizTalk360 Training

BizTalk Server & BizTalk360 Product experts training

BizTalk360 has a lot of features, which can be quite overwhelming for someone new. Many companies therefore appreciate an in-depth, intensive BizTalk360 training. This is an 8-hour training given by our BizTalk Server & BizTalk360 product experts.

This session is spread over 4 days in 2-hour slots (or as desired by the customer). The main idea is for customers to get a better understanding of how the product can help them achieve certain scenarios. Our experts interact with the audience to understand the environment, the architecture and what customizations they want to achieve.

Initially, they go through the various features available in BizTalk360; on subsequent days the trainer does a deep dive into more complicated technical scenarios and how the customer can benefit from the product. Many customers use this as a training session for their technical staff. It is conducted via a remote webinar session (GoToMeeting) and is a very interactive and good learning experience for most customers. The session is not a straightforward monologue; it provides plenty of space for the customer to pose questions regarding their own circumstances.

Exciting upcoming Services/Training

We also have a new training for ‘BizTalk Server Administrators’ run by our very own BizTalk expert Lex Hegt.

The audience consists of systems administrators who deploy and manage (multi-server) BizTalk Server environments, SQL Server DBAs who are responsible for maintaining the BizTalk Server databases, and BizTalk developers who need to support a BizTalk environment.

During this course, attendees will get thorough training in everything they need to know to properly administer BizTalk Server.

Some of the course topics are:

  1. Installing and configuring BizTalk Groups
  2. Operating and monitoring BizTalk Server
  3. Deployment of BizTalk applications
  4. Periodic administrative tasks and best practices

Stay tuned to our website for more information, or contact support@biztalk360.com to arrange any of these services.

The post Get access to a great range of BizTalk360’s Value added services appeared first on BizTalkGurus.

Test-First Mindset


Introduction

One of the mind-blowing development techniques that radically changed the programming world is Test-Driven Development.

Writing tests before we start coding? Who will do that?

I must admit that I personally wasn't really convinced by the idea at first; maybe because I didn't quite understand why we should write our tests first, or how we should do it. Can you have a bad software design with TDD? Can you break your architecture with TDD? Yes! TDD is a discipline that you should follow, and like any discipline you must hold to a certain number of requirements. At the end of the day, it's YOUR task to follow this simple mindset.

In this article, I will talk about the Mindset introduced by Kent Beck when writing in a Test-Driven Development environment.

Too many developers don't see the added value of this technique and/or don't believe it works.

TDD works!

“Testing is not the point; the point is about Responsibility”
-Kent Beck

Benefits

Because so many of us don't see the benefits of TDD, I thought it would make sense to spell them out for you. Robert C. Martin inspired me with this list of benefits.

Increased Certainty

One of the benefits is that you're certain that it works. Users have more responsibility in a Test-Driven team, because they will write the Acceptance Tests (with help, of course) and they will define what the system must do. By doing so, you're certain that what you write is what the customer wants.

The amount of uncertainty that builds up by writing code that isn’t exactly what the customer wants is called: The Uncertainty Principle. We must always eliminate this uncertainty.

By writing tests first; you can tell your manager and customer: “Yes, it will work; yes, it’s what you want”.

Defect Reduction

Before I wrote with a Test-First mindset, I always thought that my code was full of bugs and didn't handle unexpected behavior.
Maybe that was because I'm very sure of myself; but also because I wrote the tests after the code and so was testing what I had just written, not what I wanted to test.

This increases the Fake Coverage of your code.

Increased Courage

So many developers are “afraid” to change something in their code base. They are afraid to break something. Why are they afraid? Because they don’t have tests!

“When programmers lose the fear of cleaning; they clean”
– Robert C. Martin

A professional developer doesn't allow his/her code to rot, so you must refactor with courage.

In-Sync Documentation

Tests are the lowest form of documentation of your code base; always 100% in sync with the current implementation in the production code.

Simple Design

TDD is an analysis/design technique and not necessarily a development technique. Tests force you to think about good design, certainly if you write them BEFORE you write the actual implementation. If you do so, you're writing them on offense and not in defense (as you are when you write them afterwards).

Test-First also helps you think about the Simplest Thing That Could Possibly Work, which automatically helps you write simply structured, well-designed code.

Test-First Mindset

When you’re introduced into the Test-First methodology, people often get Test Infected. The amount of stress that’s taking from you is remarkable. You refactor more aggressively your code without any fear that you might break something.

Test-Driven Development is based on the very simple idea of first writing your test, and only then writing your production code. People underestimate the "first write your test" part. When you write your tests, you're solving more problems than you think.

Where should I place this code? Who's responsible for this logic? What names should I use for my methods and classes? What result must I get from this? What isn't valid data? What will my class interface look like? …

After trying to use TDD in my daily practice, I found myself always asking the same question:

“I would like to have a … with … and …”

Such a simple idea changed my vision so radically about development and I’m convinced that by using this technique, you’re writing simpler code because you always think about:

“What’s the simplest thing that could make this test work”

If you find that you can implement something that isn’t the right implementation, write another test to expose this behavior you want to implement.

TDD is – in a way – a psychological methodology. What they say is true: you DO get addicted to that nice green bar that indicates that your tests all pass. You want that bar as green as possible, you want it always green, and you want the tests to run as fast as possible so you can quickly see it's green…

To be a Green-Bar-Addict is a nice thing.

Kent Beck Test-Patterns

It felt a little weird to just list all the patterns Kent Beck introduced. Maybe you should just read the book Test-Driven Development: By Example; he's a very nice writer and I learned a lot from the examples, patterns and ideas.

What I will do is give you some basic patterns I will use later in the example, plus some patterns that were very eye-opening for me the first time.

Fake It

When Kent talked about "What's the simplest thing that could work", I was thinking about my implementation, but what he meant was: "What's the simplest thing that could work for this test".

If you’re testing that 2 x 3 is 6 than when you implement it, you should Fake It and just return the 6.

Very weird at first, especially because the whole Fake It approach is based on duplication, the root of all software evil. Maybe that's the reason experienced software engineers have problems with this approach.

But it’s a very powerful approach. Using this technique, you can quickly get the bar green (testing bar). And the quicker you get that bar green, the better. And if that means you must fake something; then you should do that.

Triangulation

I found this technique very interesting, as it really drives the abstraction of your design. When you find yourself not knowing what to do next, or how you should go further with your refactoring, write another test to support new knowledge of the system and the start of new refactorings in your design. This is especially useful when you're unsure what to do next.

If you’re testing that 2 x 3 is 6 than in a Triangulation approach you will first return 6 and only change that if you’re testing again but then for 2 x 2 is 4.

Obvious Implementation

Of course, when the implementation is so simple, so obvious… then you could always implement it directly after your test. But remember that this approach is only an option after Fake It and Triangulation.

When you find yourself taking steps that are too big, you can always take smaller steps.

If you’re testing that 2 x 3 is 6, in an Obvious Implementation approach you will just write 2 x 3 right away.

By Example

I thought it would be useful to show you an example of the TDD workflow. Since everyone is so stoked about test-driving Fibonacci, I thought it would be fun to test-drive another integer sequence.

Let’s test-drive the Factorial Sequence!

What happens when we take the factorial of 4, for example? 4! = 4 x 3 x 2 x 1 = 24

Test

But let’s start with something super simple:

Always start with the same sentence: "I would like to have…". I would like to have a method called Factorial which I can pass an integer to, and which will calculate the factorial of that integer for me.
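As a sketch of what that first test might look like (assuming an xUnit-style test; the class, method and value here are illustrative, not the original snippet):

using Xunit;

public class FactorialTests
{
    [Fact]
    public void Factorial_Of_1_Returns_1()
    {
        // Start super simple: 1! == 1.
        Assert.Equal(1, Factorial(1));
    }
}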

Now we have created a test before anything about factorial is implemented.

Compile

Now that we have the test, let's make our code compile again.
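A bare stub inside the test class would do; for example:

private static int Factorial(int number)
{
    // Just enough to make everything compile; no real logic yet.
    return 0;
}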

Let’s test this:

Hooray! We have a failed test == progress!

Implement

First Steps

What’s the simplest thing that we could write in order that this test will run?

Hooray! Our test passed, we can go home, right?

A Bit Harder

What’s next? Let’s check. What happens if we would test for another value?

I know, I know. Duplication, duplication, duplication. But we're just testing now, right? We're not yet at the last step of the TDD mantra.

What is the simplest thing we could change to make this test pass?
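One more special case will do:

private static int Factorial(int number)
{
    // Still faking: special-case the new value, keep the old one green.
    if (number == 2)
    {
        return 2;
    }

    return 1;
}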

Yes, I’m feeling good right now. A nice green bar.

One Step Before Generalize

Let’s add just another test, a bit harder this time. But these duplication is starting to irritate me; you know the mantra: One-Two-Three-Refactor? This is the third time, so let’s start refactoring!

Ok, what’s the simplest thing?

Generalize

Ok, we could add if/else-statements all day long, but I think it's time for some generalization. Look at what we've been implementing. We write 24, but do we mean 24?

Remembering Factorial, we mean something else:
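Rewriting the constant as the multiplication it stands for (a sketch of that step):

if (number == 4)
{
    // 24 written as what we actually mean.
    return 4 * 3 * 2 * 1;
}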

That all still works, yeah. Now, we don't actually mean 4 by 4, do we? We actually mean the original number:
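if (number == 4)
{
    // The first factor is really the input itself.
    return number * 3 * 2 * 1;
}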

And we don’t actually mean 3, 2, and 1 by 3, 2 and 1, we actually mean the original number each time mins one. So, actually the Factorial of the 3! could you say, no?

Let’s try:

Wow, still works. Wait, isn't that if-statement for 2 redundant? 2 x 1! == 2, right?
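Which leaves:

private static int Factorial(int number)
{
    if (number == 1)
    {
        return 1;
    }

    return number * Factorial(number - 1);
}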

Exploration

Now, the factorial of 0 is also 1. We haven't tested that, have we? We have found a boundary condition!
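The new test, as a sketch:

[Fact]
public void Factorial_Of_0_Returns_1()
{
    // Boundary condition: 0! is defined as 1.
    Assert.Equal(1, Factorial(0));
}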

With the current implementation, this results in endless recursion, because we will try to take the factorial of a negative number; and factorials only happen with positive numbers (the formula with negative integers would result in a division by zero, blocking us from calculating a factorial value for these negative integers).

Again, what's the simplest thing that could work?
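A guard for zero, perhaps:

private static int Factorial(int number)
{
    if (number == 0)
    {
        return 1;
    }

    if (number == 1)
    {
        return 1;
    }

    return number * Factorial(number - 1);
}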

Now, the last step of TDD is always Remove Duplication, which in this case is the 1 that's used two times. Let's take care of that:
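Folding the two base cases into one could look like this:

private static int Factorial(int number)
{
    // Both base cases return the same value, so fold them together.
    if (number == 0 || number == 1)
    {
        return 1;
    }

    return number * Factorial(number - 1);
}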

Hmm, someone may have noticed something. We could actually remove the check for 1 as well, if we adapt the check for 0. The recursive call will then return 1 for us:
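A sketch of the final version:

private static int Factorial(int number)
{
    // number == 1 now falls through: 1 * Factorial(0) == 1.
    // Checking <= 0 also guards against negative input.
    if (number <= 0)
    {
        return 1;
    }

    return number * Factorial(number - 1);
}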

By doing this, we have also ruled out all the negative numbers passed into the method.

Conclusion

Why, oh why, are people so skeptical about Test-Driven Development? If you look at the way you use it in your daily practice, you'll find yourself writing simpler and more robust code.

TDD is actually a design methodology, not a development methodology. The way you think about the design, the names, the structure… all of that is part of the design process of your project. The tests that you end up with are the added value of this approach; they make sure that you can refactor safely and are always certain of your software.

Start trying it today in your daily practice, so you stop thinking about "How will I implement it?" but rather:

How will you test it?

 

The post Test-First Mindset appeared first on BizTalkGurus.

Microsoft Integration Weekly Update: June 5


Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It's a weekly update on topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

On-Premise Integration:

Cloud and Hybrid Integration:

Feedback

Hope this would be helpful. Please feel free to let me know your feedback on the Integration weekly series.


The post Microsoft Integration Weekly Update: June 5 appeared first on BizTalkGurus.
