Host Integration Product Feedback
Are you using Microsoft Host Integration Server?
Please share your product feedback with us: https://aka.ms/hisfeedback
Thank you 🙂
The post Host Integration Product Feedback appeared first on BizTalkGurus.
Don’t miss the greatest integration event in the world.
Name : Integrate 2017
When : 26-28 June 2017
Where : Kings Place, London
Discount Code : MVPSPEAK2017REF
Link : https://www.biztalk360.com/integrate-2017/
Topics : Microsoft BizTalk, Host Integration Server, Logic Apps, IBM Legacy Integration, and much more
Speakers : Product Group, MVP (Most Valuable Professional)
Audience : From all over the world
The post Microsoft Integrate 2017 Event appeared first on BizTalkGurus.
Microsoft Build (often stylized as //build/) is an annual conference event held by Microsoft, aimed towards software engineers and web developers using Windows, Windows Phone, Microsoft Azure and other Microsoft Technologies. This year’s conference was conducted in Seattle, WA from May 10 to May 12.
Unlike the previous year, which focused mainly on Visual Studio and .NET Core, Microsoft this time concentrated on AI and Azure services. This blog provides a compilation of all the Azure announcements from the 3-day event.
IoT Edge provides easy orchestration between code and services, so they flow securely between cloud and edge to distribute intelligence across IoT devices. It leverages Azure Stream Analytics, Microsoft Cognitive Services, and Azure Machine Learning to create more advanced IoT solutions with less time and effort.
To get more info on Azure IoT Edge please click here or check out the video on channel9.
On Wednesday Microsoft announced a new service called Azure Batch AI training. It uses Azure to train deep neural networks, which means that now it is possible for developers to train their AI without having to worry about hardware.
To get more info on Azure Batch AI Training please click here.
Microsoft has invested heavily in the command-line experience. You can now use Azure Cloud Shell from inside the Azure portal! Yes, you heard me right; the Azure portal now has a real Bash command-line interface. It also comes preloaded with the Azure CLI, so you can run commands like azure vm list right from the portal.
Azure Cloud Shell is still in preview; please click here to find more information on Azure Cloud Shell.
With this support, developers will now be able to use their favourite databases on Azure. They will surely appreciate this, since more options mean more flexibility.
Azure Database for MySQL and Azure Database for PostgreSQL services are built on the intelligent, trusted and flexible Azure relational database platform. This platform extends similar managed services benefits like Global Azure region reach, and innovations that currently power Azure SQL database and Azure SQL Data warehouse services to the MySQL and PostgreSQL database engines.
Azure Cosmos DB is Microsoft’s first globally distributed, multi-model database. It enables you to elastically and independently scale throughput and storage across any number of Azure’s geographic regions.
To find more information on Azure Cosmos DB please click here.
Microsoft also announced new cognitive services on top of the 25 existing ones. These new services include a machine vision service, a Bing-based search engine powered by AI, a video indexer, and a new online lab where more experimental services may be unveiled.
Check out this page to find all the available cognitive services in Azure Platform.
And that’s actually all of it! Since Microsoft Build is a developer conference, most of the announcements targeted developers, but they will probably influence the future of AI, since Microsoft is making it easy for developers to include the power of AI with minimal effort.
Umamaheswaran is a Senior Software Engineer at BizTalk360 with 6 years of experience. He is a full-stack developer who has worked with various technologies like .NET, AngularJS, etc. View all posts by Umamaheswaran Manivannan
The post Azure announcements from Microsoft Build 2017 appeared first on BizTalkGurus.
In the previous blog posts of this IoT Hub series, we have seen how we can use IoT Hub to administrate our devices, and how to do device to cloud messaging. In this post we will see how we can do cloud to device messaging, something which is much harder when not using Azure IoT Hub. IoT devices will normally be low power, low performance devices, like small footprint devices and purpose-specific devices. This means they are not meant to (and most often won’t be able to) run antivirus applications, firewalls, and other types of protection software. We want to minimize the attack surface they expose, meaning we can’t expose any open ports or other means of remoting into them. IoT Hub uses Service Bus technologies to make sure there is no inbound traffic needed toward the device, but instead uses per-device topics, allowing us to send commands and messages to our devices without the need to make them vulnerable to attacks.
When we want to send one-way notifications or commands to our devices, we can use cloud to device messages. To do this, we will expand on the EngineManagement application we created in our earlier posts, by adding the following controls, which, in our scenario, will allow us to start the fans of the selected engine.
To be able to communicate to our devices, we will first implement a ServiceClient in our class.
private readonly ServiceClient serviceClient = ServiceClient.CreateFromConnectionString("HostName=youriothubname.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=yoursharedaccesskey");
Next we implement the event handler for the Start Fans button. This type of communication targets a specific device by using the DeviceID from the device twin.
private async void ButtonStartFans_Click(object sender, EventArgs e)
{
    var message = new Microsoft.Azure.Devices.Message();
    message.Properties.Add(new KeyValuePair<string, string>("StartFans", "true"));
    message.Ack = DeliveryAcknowledgement.Full; // Used for getting delivery feedback

    await serviceClient.SendAsync(comboBoxSerialNumber.Text, message);
}
Once we have sent our message, we will need to process it on our device. For this, we are going to update the client application of our simulated engine (which we also created in the previous blog posts) by adding the following method.
private static async void ReceiveMessageFromCloud(object sender, DoWorkEventArgs e)
{
    // Continuously wait for messages
    while (true)
    {
        var message = await client.ReceiveAsync();

        // Check if message was received
        if (message == null)
        {
            continue;
        }

        try
        {
            if (message.Properties.ContainsKey("StartFans") && message.Properties["StartFans"] == "true")
            {
                // This would start the fans
                Console.WriteLine("Fans started!");
            }

            await client.CompleteAsync(message);
        }
        catch (Exception)
        {
            // Send to deadletter
            await client.RejectAsync(message);
        }
    }
}
We will run this method in the background, so update the Main method, and insert the following code after the call for updating the firmware.
// Wait for messages in background
var backgroundWorker = new BackgroundWorker();
backgroundWorker.DoWork += ReceiveMessageFromCloud;
backgroundWorker.RunWorkerAsync();
Although cloud to device messages are a one-way communication style, we can request feedback on the delivery of the message, allowing us to invoke retries or start compensation when the message fails to be delivered. To do this, implement the following method in our EngineManagement backend application.
private async void ReceiveFeedback(object sender, DoWorkEventArgs e)
{
    var feedbackReceiver = serviceClient.GetFeedbackReceiver();

    while (true)
    {
        var feedbackBatch = await feedbackReceiver.ReceiveAsync();

        // Check if feedback messages were received
        if (feedbackBatch == null)
        {
            continue;
        }

        // Loop through feedback messages
        foreach (var feedback in feedbackBatch.Records)
        {
            if (feedback.StatusCode != FeedbackStatusCode.Success)
            {
                // Handle compensation here
            }
        }

        await feedbackReceiver.CompleteAsync(feedbackBatch);
    }
}
And add the following code to the constructor.
var backgroundWorker = new BackgroundWorker();
backgroundWorker.DoWork += ReceiveFeedback;
backgroundWorker.RunWorkerAsync();
Another feature when sending messages from the cloud to our devices is to call a remote method on the device, which we call invoking a direct method. This type of communication is used when we want to have an immediate confirmation of the outcome of the command (unlike setting the desired state and communicating back reported properties, which has been explained in the previous two blog posts). Let’s update the EngineManagement application by adding the following controls, which would allow us to send an alarm message to the engine, sounding the alarm and displaying a message.
Now add the following event handler for clicking the Send Alarm button.
private async void ButtonSendAlarm_Click(object sender, EventArgs e)
{
    var methodInvocation = new CloudToDeviceMethod("SoundAlarm") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = textBoxMessage.Text }));

    CloudToDeviceMethodResult response = null;

    try
    {
        response = await serviceClient.InvokeDeviceMethodAsync(comboBoxSerialNumber.Text, methodInvocation);
    }
    catch (IotHubException)
    {
        // Do nothing
    }

    if (response != null && JObject.Parse(response.GetPayloadAsJson()).GetValue("acknowledged").Value<bool>())
    {
        MessageBox.Show("Message was acknowledged.", "Information", MessageBoxButtons.OK, MessageBoxIcon.Information);
    }
    else
    {
        MessageBox.Show("Message was not acknowledged!", "Warning", MessageBoxButtons.OK, MessageBoxIcon.Warning);
    }
}
And in our simulated device, implement the SoundAlarm remote method which is being called.
private static Task<MethodResponse> SoundAlarm(MethodRequest methodRequest, object userContext)
{
    // On a real engine this would sound the alarm as well as show the message
    Console.ForegroundColor = ConsoleColor.Red;
    Console.WriteLine($"Alarm sounded with message: {JObject.Parse(methodRequest.DataAsJson).GetValue("message").Value<string>()}! Type yes to acknowledge.");
    Console.ForegroundColor = ConsoleColor.White;

    var response = JsonConvert.SerializeObject(new { acknowledged = Console.ReadLine() == "yes" });

    return Task.FromResult(new MethodResponse(Encoding.UTF8.GetBytes(response), 200));
}
And finally, we need to map the SoundAlarm method to the incoming remote method call. To do this, add the following line in the Main method.
client.SetMethodHandlerAsync("SoundAlarm", SoundAlarm, null);
When invoking direct methods on devices, we can also use jobs to send the command to multiple devices. We can use our custom tags here to broadcast our message to a specific set of devices.
In this case, we will add a filter on the engine type and manufacturer, so we can, for example, send a message to all main engines manufactured by Caterpillar. In our first blog post, we added these properties as tags on the device twin, so we now use these in our filter. Start by adding the following controls to our EngineManagement application.
Now add a JobClient to the application, which will be used to broadcast and monitor our messages.
private readonly JobClient jobClient = JobClient.CreateFromConnectionString("HostName=youriothubname.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=yoursharedaccesskey");
To broadcast our message, update the event handler for the Send Alarm button to the following.
private async void ButtonSendAlarm_Click(object sender, EventArgs e)
{
    var methodInvocation = new CloudToDeviceMethod("SoundAlarm") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = textBoxMessage.Text }));

    if (checkBoxBroadcast.Checked)
    {
        try
        {
            var jobResponse = await jobClient.ScheduleDeviceMethodAsync(
                Guid.NewGuid().ToString(),
                $"tags.engineType = '{comboBoxEngineTypeFilter.Text}' and tags.manufacturer = '{textBoxManufacturerFilter.Text}'",
                methodInvocation,
                DateTime.Now,
                10);

            await MonitorJob(jobResponse.JobId);
        }
        catch (IotHubException)
        {
            // Do nothing
        }
    }
    else
    {
        CloudToDeviceMethodResult response = null;

        try
        {
            response = await serviceClient.InvokeDeviceMethodAsync(comboBoxSerialNumber.Text, methodInvocation);
        }
        catch (IotHubException)
        {
            // Do nothing
        }

        if (response != null && JObject.Parse(response.GetPayloadAsJson()).GetValue("acknowledged").Value<bool>())
        {
            MessageBox.Show("Message was acknowledged.", "Information", MessageBoxButtons.OK, MessageBoxIcon.Information);
        }
        else
        {
            MessageBox.Show("Message was not acknowledged!", "Warning", MessageBoxButtons.OK, MessageBoxIcon.Warning);
        }
    }
}
And finally, add the MonitorJob method with the following implementation.
public async Task MonitorJob(string jobId)
{
    JobResponse result;

    do
    {
        result = await jobClient.GetJobAsync(jobId);
        Thread.Sleep(2000);
    }
    while (result.Status != JobStatus.Completed && result.Status != JobStatus.Failed);

    // Check if all devices successful
    if (result.DeviceJobStatistics.FailedCount > 0)
    {
        MessageBox.Show("Not all engines reported success!", "Warning", MessageBoxButtons.OK, MessageBoxIcon.Warning);
    }
    else
    {
        MessageBox.Show("All engines reported success.", "Information", MessageBoxButtons.OK, MessageBoxIcon.Information);
    }
}
By using IoT Hub we have a safe and secure way of communicating from the cloud and our backend to devices out in the field. We have seen how we can use the cloud to device messages in case we want to send one-way messages to our device or use direct methods when we want to be informed of the outcome from our invocation. By using jobs, we can also call out to multiple devices at once, limiting the devices being called by using (custom) properties of the device twin. The code for this post can be found here.
In case you missed the other articles from this IoT Hub series, take a look here.
Blog 1: Device Administration Using Azure IoT Hub
Blog 2: Implementing Device To Cloud Messaging Using IoT Hub
Blog 3: Using IoT Hub for Cloud to Device Messaging
Eldert is a Microsoft Integration Architect and Azure MVP from the Netherlands, currently working at Motion10, mainly focused on IoT, BizTalk Server and Azure integration. He comes from a .NET background, and has been in IT since 2006. He has been working with BizTalk since 2010 and has since expanded into Azure and surrounding technologies as well. Eldert loves working on integration projects, as each project brings new challenges and there is always something new to learn. In his spare time Eldert likes to be active in the integration community and get his hands dirty on new technologies. He can be found on Twitter at @egrootenboer and has a blog at http://blog.eldert.net/. View all posts by Eldert Grootenboer
The post Using IoT Hub for Cloud to Device Messaging appeared first on BizTalkGurus.
Let’s discuss the scenario briefly. We need to consume data from the following table. All orders with the status New must be processed!
The table can be created with the following SQL statement:
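As a minimal sketch, such an Orders table could look like this (Id and Status are the columns the rest of this post relies on; the other columns and all names are placeholders):

CREATE TABLE dbo.Orders
(
    Id           INT IDENTITY(1,1) PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL,
    Quantity     INT           NOT NULL,
    -- New / Processed in the first approach; New / Peeked / Completed in the peek-lock approach
    Status       NVARCHAR(20)  NOT NULL DEFAULT ('New'),
    -- Used further down to detect orders that remain Peeked for too long
    LastModified DATETIME2     NOT NULL DEFAULT (SYSUTCDATETIME())
);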
To receive the data, I prefer to create a stored procedure. This avoids maintaining potentially complex SQL queries within your Logic App. The following stored procedure selects the first order with status New and updates its status to Processed in the same statement. Note that it also returns @@ROWCOUNT, as this will come in handy in the next steps.
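A sketch of what this stored procedure could look like, assuming the placeholder table above (the essential parts are the single atomic UPDATE with an OUTPUT clause and the RETURN @@ROWCOUNT):

CREATE PROCEDURE dbo.GetNewOrder
AS
BEGIN
    SET NOCOUNT ON;

    -- Select the first order with status New and mark it as Processed in one atomic statement
    UPDATE TOP (1) dbo.Orders
    SET    Status = 'Processed',
           LastModified = SYSUTCDATETIME()
    OUTPUT inserted.Id, inserted.CustomerName, inserted.Quantity
    WHERE  Status = 'New';

    -- The return code tells the Logic App whether an order was returned (1) or not (0)
    RETURN @@ROWCOUNT;
END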
The Logic App fires with a Recurrence trigger. The stored procedure gets executed and via the ReturnCode we can easily determine whether it returned an order or not. In case an order is retrieved, its further processing can be performed, which will not be covered in this post.
If you have a BizTalk background, this is a similar approach to using a polling SQL receive location. One very important difference: the BizTalk receive adapter executes the stored procedure within the same distributed transaction in which it persists the data to the MessageBox, whereas Logic Apps is completely built on APIs that have no notion of MSDTC at all.
In failure situations, when a database shuts down or the network connection drops, it could be that the order is already marked as Processed, but it never reaches the Logic App. Depending on the returned error code, your Logic App will either end up in a Failed state without a clear description, or it will retry automatically (for error codes 429 and 5xx). In both situations you’re facing data loss, which is not acceptable for our scenario.
We need to come up with a reliable way of receiving the data. Therefore, I suggest implementing a pattern similar to the Azure Service Bus Peek-Lock. Data is received in two phases: first the order is peeked and marked as Peeked, and only after it has been handed over for further processing is it marked as Completed.
Next to these two explicit processing steps, there must be a background task that reprocesses messages that have had the Peeked status for too long. This makes our solution more resilient.
Let’s create the first stored procedure that marks the order as Peeked.
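A sketch, again using the placeholder table and names introduced above:

CREATE PROCEDURE dbo.PeekNewOrder
AS
BEGIN
    SET NOCOUNT ON;

    -- Atomically claim the first order with status New by marking it as Peeked
    UPDATE TOP (1) dbo.Orders
    SET    Status = 'Peeked',
           LastModified = SYSUTCDATETIME()
    OUTPUT inserted.Id, inserted.CustomerName, inserted.Quantity
    WHERE  Status = 'New';

    RETURN @@ROWCOUNT;
END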
The second stored procedure accepts the OrderId and marks the order as Completed.
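Again as a sketch with placeholder names:

CREATE PROCEDURE dbo.CompleteOrder
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;

    -- Mark the peeked order as Completed, so it will not be picked up again
    UPDATE dbo.Orders
    SET    Status = 'Completed',
           LastModified = SYSUTCDATETIME()
    WHERE  Id = @OrderId
      AND  Status = 'Peeked';
END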
The third stored procedure should be executed by a background process, as it sets the status back to New for all orders that have the Peeked status for more than 1 hour.
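A sketch of such a clean-up procedure, assuming the LastModified column is updated on every status change:

CREATE PROCEDURE dbo.ReleaseExpiredPeekedOrders
AS
BEGIN
    SET NOCOUNT ON;

    -- Orders that have been Peeked for more than one hour are considered abandoned
    -- and are set back to New, so they will be picked up again
    UPDATE dbo.Orders
    SET    Status = 'New',
           LastModified = SYSUTCDATETIME()
    WHERE  Status = 'Peeked'
      AND  LastModified < DATEADD(HOUR, -1, SYSUTCDATETIME());
END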
Let’s now consume the two stored procedures from within our Logic App. First we peek for a new order and, when we have received it, the order gets completed. The OrderId is retrieved via this expression: @body('Execute_PeekNewOrder_stored_procedure')?['ResultSets']['Table1'][0]['Id']
The background task could be executed by a SQL Agent Job (SQL Server only) or by another Logic App that is fired every hour.
Happy with the result? Not 100%! What if something goes wrong during further downstream processing of the order? The only way to reprocess the message is by changing its status in the origin database, which can be quite a cumbersome experience for operators. Why can’t we just resume the Logic App in case of an issue?
As explained over here, Logic Apps has an extremely powerful mechanism of resubmitting workflows. Because Logic Apps has – at the time of writing – no triggers for SQL Server, a resubmit of the Recurrence trigger is quite useless. Therefore I only want to complete my order when I’m sure that I’ll be able to resubmit it if something fails during its further processing. This can be achieved by splitting the Logic App in two separate workflows.
The first Logic App peeks for the order and parses the result into a JSON representation. This JSON is passed to the next Logic App.
The second Logic App gets invoked by the first one. This Logic App completes the order first and afterwards performs the further processing. In case something goes wrong, a resubmit of the second Logic App can be initiated.
Very happy with the result as:
Don’t forget that every action is HTTP based, which can have an impact on reliability. Consider a two-phased approach for receiving data in case you cannot afford message loss. The same principle can also be applied to receiving files: read the file content in one action and delete the file in another action. Always think upfront about resume / resubmit scenarios. Triggers are better suited for resubmit than actions, so if there are triggers available: always use them!
This may sound like overkill to you, as these considerations require some additional effort. My advice is to first determine whether your business scenario must cover such edge case failure situations. If yes, this post can be a starting point for your final solution design.
Liked this post? Feel free to share with others!
Toon
The post Reliably receive SQL data in Logic Apps appeared first on BizTalkGurus.
Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?
Integration weekly update can be your solution. It’s a weekly update on the topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities empowered by Microsoft platform to deliver value to the business.
If you want to receive these updates weekly, then don’t forget to Subscribe!
Hope this would be helpful. Please feel free to let me know your feedback on the Integration weekly series.
The post Microsoft Integration Weekly Update: May 22 appeared first on BizTalkGurus.
BizTalk360 v8.4 has now been released to the public with lots of exciting new features and enhancements. Many of our customers have upgraded to the latest version and started enjoying the new features. We at BizTalk360 support get a lot of queries in the form of tickets. Many customers ask about the installation process, a few ask for clarifications and others report issues. We categorize the tickets as clarifications, feature requests, and bugs.
Our support team often gets strange issues which do not belong to any of these categories. I am here to explain one such interesting case and how we identified the root cause and resolved it. As per the below quote,
“The job isn’t to just fix the problem. It is also to restore the customer’s confidence. DO BOTH!” – Shep Hyken
we, the BizTalk360 support team, always work hard to resolve the customers’ issues and achieve customer satisfaction.
There was a ticket from a customer stating that “Send port is not showing on BT360 Application portal”. In the BizTalk360 console, the artifacts get listed when we navigate to Operations -> Application Support -> Applications. The case was that a send port was not getting listed here, even though all the send ports were getting listed when assigning an alarm for monitoring. There was no issue with the other artifacts; they were getting listed properly.
Generally, when there is any issue related to UI, we ask for the JSON response from the Network tab in the Developer’s console of the browser. By pressing F12, we can open the browser console and check for any exceptions in the service calls. In the Network tab of the console, the service calls for each operation gets listed from which we can get the request headers and JSON response. This way we can check for the exception details and work on the same. So, we replied to the ticket asking for the network response. But we did not get the required information from the JSON response. The next step was to go on a call with the customer involving one of our technical team members through the web meeting with a screen sharing session.
In the web meeting, we tried different scenarios to check for the send ports. We tried the Search Artifacts section and the send port was getting listed there without any problem. But we noticed something weird: there were multiple entries for the same send port, each with a different URI configured. This was the first time we had come across such an issue. But could this be the reason for the send port not getting listed? Let’s see what was happening.
We exported the send port data from the customer and checked it. There were multiple entries for the same send port, but with different transport types and protocols configured. BizTalk Server, however, does not allow us to create send ports with duplicate names. Then how could this happen at the customer’s end? We continued our investigation and found that the multiple entries were due to the backup transport configured for the send ports. But this was not the cause of the issue. What a strange issue! Shall we move further with the analysis?
On further analysis of this case, we found that the DB2 adapter was being used in one of the send ports, and it is not a standard BizTalk adapter. The BizTalk Adapter for DB2 is a send and receive adapter that enables BizTalk orchestrations to interact with host systems. Specifically, the adapter enables send and receive operations over TCP/IP and APPC connections to DB2 databases running on mainframe, AS/400, and UDB platforms. Based on Host Integration Server (HIS) technology, the adapter uses the Data Access Library to configure DB2 connections, and the Managed Provider for DB2 to issue SQL commands and stored procedures.
The trace logs also indicated a NULL assignment for the Transport type for the send port with this adapter. It’s a prerequisite for BizTalk360, that BizTalk Admin components must be installed in the BizTalk360 server, in case of the BizTalk360 standalone installation. Since DB2 adapter comes with HIS, it was suggested to the customer to install HIS in the BizTalk360 server and observe for the send ports listing. But even after installing HIS, the same issue persisted. We also tried to replicate the same scenario by installing HIS with the DB2 adapter in a BizTalk360 standalone server. The send ports with different combinations of adapters in the transport types were created and tested. But the issue was not reproducible. So, we concluded that the DB2 adapter was not the real cause of the problem.
Sometimes an issue may seem simple, but identifying its root cause is very difficult. For strange issues like this one, it becomes extremely difficult when the issue is not reproducible. It is also not good to disturb the customer too often, since they might be busy. Our next plan was to provide a console app to get the complete details of the configured send ports. This app was quite helpful for us to find the root cause. Read further to learn the real cause.
The console app was given to the customer to get the complete details of the send ports. The app would give the result in the JSON format with all the details like the name, URI configured, transport type, send handlers etc., The BizTalk application which contained the send port and the database details must be entered in the app to fetch the response.
From the screenshot, we can see that the sendHandler for the secondaryTransport does not contain the value of the transport type. This was the cause of the send port not getting displayed; it was causing the exception.
We probed further into why the sendHandler details were not coming up. The backup transport was configured to “None” in the BizTalk admin console for that send port. Even though it was already configured to None, we asked the customer to set it to None once more and save it. This time, the issue was resolved and the send port got listed in the BizTalk360 UI. It might have happened that, when importing the send port configuration, the backup transport type was set to something other than “None” (the type can be empty or NULL).
If the transport type is anything other than None, the code generates the send handler and looks for the transport type. Since it could not find the transport type, it threw an error. The same issue happened in the production environment as well and was resolved in the same way.
When we import the send port configuration, we must make sure that the Backup transport type data is properly set to None. It should not be set to NULL or empty. This way we can make sure that all the send ports are getting listed in the BizTalk360 UI without any problem. We could identify this with the help of the console app.
I am working as Senior Support Engineer at BizTalk360. I always believe in team work leading to success because “We all cannot do everything or solve every issue. ‘It’s impossible’. However, if we each simply do our part, make our own contribution, regardless of how small we may think it is…. together it adds up and great things get accomplished.” View all posts by Praveena Jayanarayanan
The post SendPort is not showing in BizTalk360 appeared first on BizTalkGurus.
It’s been a month since Feature Pack 1 was released for BizTalk Server 2016 Enterprise and Developer edition. The Feature Pack introduced a set of new features and helped customers leverage new technologies, as well as take advantage of tools they already used in their organization, in this case to enable more streamlined management of their BizTalk installation.
One of these customers is FortisAlberta, an energy company located in Alberta, Canada. With the new management REST APIs with full Swagger support, they were able to move parts of the operational management of their environments out of the BizTalk Servers and the Administration Console and build a PowerApp to create and maintain their applications, making 24/7 support of their environment easier for the operational teams.
Having the option to add, update and even start existing artifacts directly from the PowerApp has helped FortisAlberta enhance their productivity and the speed of resolving live incidents.
“FortisAlberta has been using BizTalk since 2006 and is currently migrating to BizTalk 2016 due to its versatility, adaptability and ability to integrate disparate systems with ease.”
Anthony See, FortisAlberta
The post Customer story: BizTalk management through PowerApps appeared first on BizTalkGurus.
After some requests from the community, and after publishing it on my blog as a series of blog posts, Step by step configuration to publish BizTalk operational data on Power BI is now available as a whitepaper!
Recently, the Microsoft Product team released a first feature pack for BizTalk Server 2016 (only available for Enterprise and Developer edition). This whitepaper will help you understand how to install and configure one of the new features of BizTalk Server 2016:
This whitepaper will give a step-by-step explanation of what component or tools you need to install and configure to enable BizTalk operational data to be published in a Power BI report.
Table of Contents
You can download the whitepaper here:
I would like to take this opportunity to say thanks to my amazing team at BizTalk360 for the proofreading and for once again joining forces with me to publish another free white paper.
I hope you enjoy reading this paper and any comments or suggestions are welcome.
Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community. View all posts by Sandro Pereira
The post Step by step configuration to publish BizTalk operational data on Power BI whitepaper appeared first on BizTalkGurus.
Techorama is a yearly international technology conference which takes place at Metropolis, Antwerp. With 1500+ physical participants from across the globe, the stage was all set to witness the intelligence of Azure. As one of the thousands of virtual participants, I am happy to document the keynote presented by Scott Guthrie, Executive Vice President of Microsoft’s Cloud and Enterprise Group, on developing with the cloud. The most interesting aspect of this keynote is that Scott built the whole demo around scenarios, from the perspective of an everyday developer. Let me take you through this keynote quickly.
The inception of the cloud inside a mobile! Yes, you heard it right. The Microsoft team has come up with the Azure app for iOS/Android/Windows to manage all your cloud services. You can now easily manage all your cloud functionality from your mobile.
Now the Bash shell is integrated into the Azure cloud to manage and retrieve all the Azure services with just a command. The Bash shell client opens in a browser pop-up and connects to the cloud without any keys. More automation scripts can be executed easily with this Bash support in place. It also provides CLI documentation for the list of commands and arguments. You can expect a PowerShell client soon!
The flow between different cloud services and their status with all diagnostic logs and charts are displayed in the dashboard level. As a top-down approach, you can get to the in-depth level of tracking per instance based on failure/success/slow response scenarios with all diagnostics, stack trace and creation of a work item from the failure stack traces. From the admin/operations perspective, this feature is a great value add.
Stack trace with Work item creation
Managing the security of the cloud system could be a complex task. With the Security center in place, we can easily manage all the VMs/other cloud services. The machine learning algorithms at the backend will fetch all the possible recommendations for an environment or the services.
Recommendations
The possible recommendations for virtual machines are provided with the help of Machine learning Algorithms.
To deliver a seamless mobile experience to the user, you need an interactive, user-friendly UI, BTD (build, test, deploy automation) and scalability with the cloud infrastructure. These are the essentials for mobile success, and Microsoft, with the Xamarin platform, has nailed it.
A favorite area of mine has gained a much needed intelligent feature. The Xamarin and VS2017 combo is now making its way into real-time debugging!!!
You can pair your iPhone or any mobile device with Visual Studio using the Xamarin Live Player, which allows you to perform live debugging. DevOps support for Xamarin has also been extended: you can now build, test and deploy to any firmware connected to the cloud, just like a continuous integration build. Automation in testing and deployment for the mobile framework is the best part. You can get real-time memory usage statistics for your application in a single window. Also, you can now run VS2017 on iOS as well. 🙂
The mobile features do not stop there. The VS Mobile Center is also integrated here, so you can run a staging test with a community of friends to get feedback on your mobile application before you submit it to any mobile store. Cool, isn’t it?
Scott also revealed some features of the upcoming SQL Server 2017, which is capable of running on Linux and Docker apart from Windows.
The new SQL Server 2017 has Adaptive Query Processing and advanced machine learning features, and can offer in-memory support for advanced analytics. SQL Server is also capable of seamless failover between on-premises and cloud SQL with no downtime, along with the Azure Database Migration Service.
SQL injection is one of the most common problems an application faces. As a remedy, Azure SQL Database can now detect SQL injection using machine learning algorithms. It can send you an alert when an abnormal query gets executed.
Showing the vulnerability in the query
The Relational Database service is now extended to PostgreSQL as a service and MySQL as a service which can seamlessly integrate with your application.
This could be the right statement to explain Cosmos DB. Azure now offers a globally distributed, multi-model database service for higher scalability and geographical access. You can easily replicate, mirror or clone the database to any geographical location based on your user base. To give you an example, you can scale from gigabytes to petabytes of data and from hundreds to millions of transactions, with all metrics in place. And this is what makes the name COSMOS!
Scott also showed us a video on how the online retailer Jet is using Cosmos DB, and a chat bot running on Cosmos DB to answer intelligent human queries. With Cosmos DB and the Gremlin API you can retrieve a comprehensive graph analysis of the data. Here, he showed us the Marvel comics characters and the friends chart of Mr. Stark, quite cool!
You may wonder how to move your existing application to an Azure container-based architecture, and here is a solution with the support of Docker. In your existing application project you can easily add Docker support, which lets you run your application on an ASP.NET image, with which it can easily plug into the cloud build-deploy-test framework of continuous integration. The simple addition of a Docker metadata file has made DevOps much easier.
There are a lot of case studies that show the love for Azure functionality, but enterprises were not always able to use it for tailor-made solutions. Here comes Azure Stack, a private cloud hosting capability for your data center, letting you use all that cloud expertise on your own ground.
As more features, including Azure Functions, Service Fabric, etc., are being introduced, this gist of the keynote should have given you an overall view of the Intelligent Cloud, with much more to come: tune in to the Techorama Channel 9 page for more updates from the second-day events. With the cloud scaling out with new capabilities, in the future there will hardly be an application that does not rely on cloud services.
Happy Cloud Engineering!!!
Vignesh, a Senior BizTalk Developer @BizTalk360, has crossed half a decade of BizTalk experience. He is passionate about evolving integration technologies. Vignesh has worked on several BizTalk projects using various integration patterns and has expertise in BAM. His hobbies include training, mentoring and travelling. View all posts by Vignesh Sukumar
The post Techorama 2017 Keynote – Recap appeared first on BizTalkGurus.
Last Monday I presented, once again, a session in the Integration Monday series. This time the topic was BizTalk Server: Teach me something new about Flat Files (or not). This was the fifth session that I have delivered:
And I think it will not be the last! However, this time was different in many aspects, and in a certain way it was a crazy session… Despite having some posts about BizTalk Server: Teach me something new about Flat Files on my blog, I didn’t have time to prepare this session (I was sent on a crazy mission for a client, and I also had to organize the integration track at the TUGA IT event). I had a small problem with my BizTalk Server 2016 machine, which forced me to switch to my BizTalk Server 2013 R2 VM, and I was interrupted by the kids in the middle of the session because the girls wanted me to have dinner with them (worthy of being in this series)… but it all ended well and I think it was a very nice session with two great real case samples:
For those who were online, I hope you have enjoyed it and sorry for all the confusion. And for those who did not have the chance to be there, you can now view it because the session is recorded and available on the Integration Monday website. I hope you like it!
Session Name: BizTalk Server: Teach me something new about Flat Files (or not)
Session Overview: Despite the emergence over the years of new protocols, formats and patterns like Web Services, WCF RESTful services, XML and JSON, among others, the use of text files (Flat Files) such as CSV (Comma Separated Values) or TXT, one of the oldest common patterns for exchanging messages, still remains one of the most used standards in systems integration and/or communication with business partners.
While tools like Excel can help us interpret such files, this type of process is always iterative and requires a few user hints so that the software can determine where the fields/columns need to be separated, as well as the data type of each field. But for a system integration (Enterprise Application Integration) platform like BizTalk Server, you must remove any ambiguity, so that these kinds of operations can be performed thousands of times with confidence and without recourse to a manual operator.
In this session we will first address: How we can easily implement a robust File Transfer integration in BizTalk Server (using Content-Based Routing in BizTalk with retries, backup channel and so on).
And second: how to process Flat File documents (TXT, CSV …) in BizTalk Server, addressing which types of flat files are supported, how the process of transforming text files (also called Flat Files) into XML documents (syntax transformations) works, where it happens and which components are needed, and how to perform flat file validation.
Integration Monday is full of great sessions that you can watch and I will also take this opportunity to invite you all to join us next Monday.
Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community. View all posts by Sandro Pereira
The post BizTalk Server: Teach me something new about Flat Files (or not) video and slides are available at Integration Monday appeared first on BizTalkGurus.
In my previous blog, I explained how to install BizTalk Server 2016 Feature Pack 1 and configure it for Application Insights integration. In this article, I want to go a bit deeper and demonstrate what you can do with the tracking data that ends up in Application Insights.
I am hoping to give a jump start to someone who wants to use the Application Insights for BizTalk Server 2016.
As you know, the term tracking data in BizTalk refers to different types of data emitted from different artifacts: in/out events from ports, orchestrations and pipeline components, system context properties, custom properties tracked via custom property schemas, message bodies in various artifacts, events fired from the rule engine, and so on. So we would like to know whether we will be able to get all this data in Application Insights, or just a subset. I will try to answer this question based on the POC I have created.
The POC I created is pretty simple. It has one receive port which receives an order XML file, processes it in an orchestration and sends it to two different send ports. It can be pictorially represented as below.
Note: I enabled a different level of tracking at different artifacts to see if it has an impact on the analytics data sent to Application Insights. Later I realized that different tracking levels do not have any impact on the analytics data.
I placed a single file into the receive location and started observing the events pushed to Application Insights. In general, applications integrated with Application Insights can send data belonging to various categories, such as traces, customEvents, pageViews, requests, dependencies, exceptions, availabilityResults, customMetrics, and browserTimings. With BizTalk, I have observed that the data belongs to the “customEvents” category. Following are the custom events which are ingested from my BizTalk interface.
All these events can be related to events logged into “Tracked events” query results which are shown below.
In the previous section, we saw that our BizTalk interface emitted various custom events for ports and orchestration. In this section, we will look into the structure of data which is captured in a custom event.
Event metadata is the list of values which defines an event. Following are the event metadata in one of the custom events.
Custom dimensions consist of the service instance details and context properties promoted in the messaging instance. Hence we can observe two different kinds of data under custom dimensions.
Service instance properties: These are the values specific to service instance associated with the messaging event.
Context properties: All the context properties which are non-integer type will be listed under the custom dimensions.
As per my observation, custom measurements only contain the context properties of integer type.
Since there is no proper documentation regarding this, I tried to prove this theory by creating three custom properties in a property schema and promoted the fields in the incoming message. Following is the property schema that I defined.
I observed that the PartyID and AskPrice properties, which are of type string and decimal respectively, are moved to the Custom Dimensions section. The Quantity property, which is of type integer, is moved to Custom Measurements.
As discussed in the above section, all the BizTalk events are tracked under the customEvents category. Hence our queries will start with customEvents.
The query language in Application Insights is very straightforward and yet very powerful. If you want to find out about all the constructs of this query language, please refer to this link: Application Insights Analytics Reference.
In this section, I would like to cover some concepts or techniques which are relevant for querying BizTalk events.
In Application Insights, the context property values are stored as dynamic types. When you use them directly in queries, especially in aggregations, you will receive a type casting exception as shown below.
To overcome this error, you will need to convert the context property to a specific type as shown below.
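For example (a sketch only, reusing the PortName context property and the custom Quantity property that appear in the queries further down; the exact property keys depend on what you track in your own environment):

// A string-valued context property: wrap it in tostring() before grouping on it
customEvents
| summarize count() by PortName = tostring(customDimensions.["PortName (http_//schemas.microsoft.com/BizTalk/2003/messagetracking-properties)"])

// A numeric custom measurement: wrap it in toint() or todouble() before aggregating
customEvents
| summarize sum(toint(customMeasurements.["Quantity (https_//SampleBizTalkApplication.PropertySchema)"]))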
Since context properties are a combination of namespace and property name, it takes a bit of effort to type them into the queries that we create. To bring the context property onto the query page easily, follow the steps below.
If you already know the Application Insights query language, this tip is not so special. But if you are new to it and trying to find out how to select a column, you will face some difficulty, as I did. The main reason is that there is no construct called “select”; instead, you have to use something called “project”. Below is an example query.
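A simple example of this (a sketch only; the chosen columns reuse the PortName property from the queries below and will differ in your environment):

// "project" plays the role of SELECT: it picks, and optionally renames, columns
customEvents
| where customDimensions.Direction == "Receive"
| project timestamp,
          EventName = name,
          PortName = tostring(customDimensions.["PortName (http_//schemas.microsoft.com/BizTalk/2003/messagetracking-properties)"])
| take 50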
In this section, I will try to list some queries which I found useful.
query
customEvents
| where customDimensions.Direction == "Receive"
| summarize count() by tostring(customDimensions.["PortName (http_//schemas.microsoft.com/BizTalk/2003/messagetracking-properties)"])
Chart
Query
customEvents
| where customDimensions.Direction == "Receive"
| summarize count() by tostring(customDimensions.["MessageType (http_//schemas.microsoft.com/BizTalk/2003/system-properties)"])
Chart
The ability to generate analytics reports based on custom promoted properties is a very powerful feature which really makes using Application Insights interesting. As I explained in previous sections, I created a custom property schema to track the PartId, Quantity and AskPrice fields. Now we will see some example reports based on this.
Query
customEvents
| where customDimensions.PortType == "ReceivePort"
| where customDimensions.Direction == "Send"
| summarize sum(toint(customMeasurements.["Quantity (https_//SampleBizTalkApplication.PropertySchema)"])) by PartId = tostring(customDimensions.["PartID (https_//SampleBizTalkApplication.PropertySchema)"])
Chart
Query
customEvents
| where customDimensions.PortType == "ReceivePort"
| where customDimensions.Direction == "Send"
| summarize sum(todouble(customDimensions.["AskPrice (https_//SampleBizTalkApplication.PropertySchema)"])) by bin(timestamp, 10m)
Chart
All the charts that you have created can be pinned to an Azure dashboard and you can club these charts with other application dashboards as well. My dashboard with the charts that we created looks as below.
In summary, the BizTalk analytics option introduced in BizTalk Server 2016 Feature Pack 1 is useful for getting analytics out of tracking data. I would like to conclude by stating the following points.
Technical Lead at BizTalk360 UK – I am an Integration consultant with more than 11 years of experience in design and development of On-premises and Cloud based EAI and B2B solutions using Microsoft Technologies. View all posts by Srinivasa Mahendrakar
The post BizTalk Application Insights in depth – Part 1 appeared first on BizTalkGurus.
Once again, my Microsoft Integration Stencils Pack was updated with new stencils. This time I added around 193 new shapes and reorganized the shapes further by adding two new files/categories: MIS Power BI and MIS Developer. With these new additions, this package now contains an astounding total of ~1287 shapes (symbols/icons) that will help you visually represent Integration architectures (on-premises, cloud or hybrid scenarios) and cloud solution diagrams in Visio 2016/2013. It provides symbols/icons to visually represent features, systems, processes and architectures that use BizTalk Server, API Management, Logic Apps, Microsoft Azure and related technologies.
The Microsoft Integration Stencils Pack v2.5 is composed of 13 files:
These are some of the new shapes you can find in this new version:
You can download Microsoft Integration Stencils Pack for Visio 2016/2013 from:
Microsoft Integration Stencils Pack for Visio 2016/2013 (10.1 MB)
Microsoft | TechNet Gallery
Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community. View all posts by Sandro Pereira
The post Microsoft Integration (Azure and much more) Stencils Pack v2.5 for Visio 2016/2013 is now available appeared first on BizTalkGurus.
Last week I was in Lisbon for TUGA IT, one of the greatest events here in Europe. A full day of workshops, followed by two days of sessions in multiple tracks, with attendees and presenters from all around Europe. For those who missed it this year, make sure to be there next time!
On Saturday I did a session on Industrial IoT using Azure IoT Hub. The industrial space is where we will be seeing a huge growth in IoT, and I showed how we can use Azure IoT Hub to manage our devices and do bi-directional communication. Dynamics 365 was used to give a familiar and easy to use interface to work with these devices and visualize the data.
And of course, I was not alone. The other speakers in the integration track are community heroes and my good friends, Sandro, Nino, Steef-Jan, Tomasso and Ricardo, who all did some amazing sessions as well. It is great to be able to present side-by-side with these amazing guys, and to learn and discuss.
There were some other great sessions as well in the other tracks, like Karl’s session on DevOps, Kris‘ session on the Bot Framework, and many more. At an event like this there is always so much content being presented that you can’t see every session you would like, but luckily the speakers are always willing to have a discussion with you outside of the sessions as well. And with 8 different tracks running side-by-side, there’s always something interesting going on.
One of the advantages of attending all these conferences is that I get to see a lot of cities as well. This was the second time I was in Lisbon, and Sandro showed us a lot of beautiful spots in this great city. We enjoyed traditional food and drinks, a lot of ice cream, and had a lot of fun together.
The post TUGA IT 2017 – Recap of an amazing event appeared first on BizTalkGurus.
The month of May went by quicker than I realized. We are almost halfway through 2017 and I must say I have enjoyed it to the fullest. Speaking, travelling, working on an interesting project with the latest Azure services, and recording another Middleware Friday show. It was the best, it was amazing!
In May I started off with working on a recording for Middleware Friday, I recorded a demo to show how one can distinguish Flow from Logic Apps. You can view the recording named Task Management Face off with Logic Apps and Flow.
The next thing I did was prepare myself for TUGAIT, where I had two sessions. One session on Friday in the Azure track, where I talked about Azure Functions and WebJobs.
And one session on Saturday in the integration track about the number of options with integration and Azure.
I enjoyed both and was able to crack a few jokes. Especially on Saturday, where I kept using Trump and his hair as a running joke.
TUGA IT 2017 was an amazing event and I enjoyed it: hanging out with Sandro, Nino, Eldert and Tomasso, and the food!
During the TUGA event I did three new interviews for my YouTube series “Talking with Integration Pros”. And this time I interviewed:
I will continue the series next month.
In May I was able to read a few books again. I started with a book about genes. Before I started my career in IT, I was a biotech researcher and worked in the field of DNA, biotechnology and immunology. The book is called The Gene by Siddhartha Mukherjee.
I loved the story line and went through the 500 pages pretty quickly (still, two weeks of evenings). The other book I read was Sapiens by Yuval Noah Harari, and it is a good follow-up to the previous one!
The final book I read this month was about Graph databases. In my current project we have started with a proof of concept/architecture on Azure Cosmos DB, Graph and Azure Search.
The book helped me understand Graph databases better.
My favorite albums that were released in May were:
There you have it: Stef’s fourth Monthly Update, and I can look back again with great joy. Not much running this month, as I was recovering a bit from the marathon in April. I am looking forward to June, as I will be speaking at the BTUG June event in Belgium and at Integrate 2017 in London.
Cheers,
Steef-Jan
Steef-Jan Wiggers is all in on Microsoft Azure, Integration, and Data Science. He has over 15 years’ experience in a wide variety of scenarios such as custom .NET solution development, overseeing large enterprise integrations, building web services, managing projects, designing web services, experimenting with data, SQL Server database administration, and consulting. Steef-Jan loves challenges in the Microsoft playing field, combining them with his domain knowledge in energy, utility, banking, insurance, health care, agriculture, (local) government, bio-sciences, retail, travel and logistics. He is very active in the community as a blogger, TechNet Wiki author, book author, and global public speaker. For these efforts, Microsoft has recognized him as a Microsoft MVP for the past 6 years. View all posts by Steef-Jan Wiggers
The post Stef’s Monthly Update – May 2017 appeared first on BizTalkGurus.
Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?
Integration weekly update can be your solution. It’s a weekly update on the topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities empowered by Microsoft platform to deliver value to the business.
If you want to receive these updates weekly, then don’t forget to Subscribe!
Hope this would be helpful. Please feel free to let me know your feedback on the Integration weekly series.
The post Microsoft Integration Weekly Update: May 29 appeared first on BizTalkGurus.
We are happy to announce the second cumulative update for BizTalk Server 2016.
This cumulative update package for Microsoft BizTalk Server 2016 contains hotfixes for the BizTalk Server 2016 issues that were resolved after the release of BizTalk Server 2016.
NOTE: This CU is not applicable to environments where Feature Pack 1 is installed; a new cumulative update for BizTalk Server 2016 Feature Pack 1 is coming soon.
We recommend that you test hotfixes before you deploy them in a production environment. Because the builds are cumulative, each new update release contains all the hotfixes and all the security updates that were included in the previous BizTalk Server 2016 CUs. We recommend that you consider applying the most recent BizTalk Server 2016 update release.
Cumulative update package 2 for Microsoft BizTalk Server 2016:
The post Announcing BizTalk Server 2016 CU2 appeared first on BizTalkGurus.
At BizTalk360, we provide some essential services which benefit the customers and our partners immensely. Businesses trust and use our expertise to assist them on a wide range of projects relating to their BizTalk needs. This includes, but is not limited to:
There are 3 types of services we offer, namely:
Demos can turn prospects into customers. They combat client concerns and provide proof of what the product can do for you (our customers). Customers often want to see it in action before they commit to a purchase. We also offer a Free Trial.
Free Trial of BizTalk360
It gives the customer the best opportunity to experience what it would be like to own the product.
Most of the time, we are approached by customers who searched online, came across our product BizTalk360, and want to know more about it.
Sometimes we are approached by consultants who say “We love your product and need your help to convince Management to get it for our company”.
We also give demos to our partners to keep them well-equipped and confident in the product.
BizTalk360 Demo – Request one today!
We avoid giving generic demos. We try to learn about our audience’s specific challenges and what they want to achieve and tailor the demo accordingly. We want customers to feel empowered and be confident of the product they are going to be purchasing. The customer should be happy that BizTalk360 is going to be a good fit for their requirements.
We have also printed some documentation outlining all the main features of BizTalk360 – "What's the business value of using BizTalk360?" – which we will be giving out at our upcoming INTEGRATE 2017 event.
Customers like hearing about best practices, especially if they're easy to implement and result in an immediate benefit. We provide this service to help set up BizTalk360 at the customer site. While the actual setup can be quite simple and quick to complete, depending on your environment you might face some issues. We provide a 2-hour service (chargeable) where we set up the product (via a web call) and go through a few basic setup tasks.
Often a simple task can also turn out to be complex when dealing with multiple environments. A lot of our customers also take this opportunity for their BizTalk administrators to get a quick crash course in the product to see how they can utilize the product in the best way possible.
We are soon releasing a ‘User Guide’ of around 400 pages for the basic setup tasks required when you install BizTalk360 – written by Eva De Jong and Lex Hegt. Please look out for it at our ‘INTEGRATE 2017’ event.
BizTalk360 has a lot of features which can be quite overwhelming for someone new. Many companies appreciate an in-depth intensive BizTalk360 training. This is an 8-hour training that we provide and is given by our BizTalk Server & BizTalk360 Product experts.
This session is spread over 4 days in 2-hour slots (or as desired by the customer). The main idea is for customers to get a better understanding of how the product can help them achieve certain scenarios. Our experts interact with the audience to understand their environment and architecture and what customizations they want to achieve.
Initially, they go through the various features available in BizTalk360; on subsequent days the trainer does a deep dive into more complicated technical scenarios and how the customer can benefit from the product. Many customers use this as a training session for their technical staff. It is conducted via a remote webinar session (GoToMeeting) and is a very interactive and good learning experience for most customers. The session is not a straightforward monologue; it provides plenty of space for the customer to pose questions about their own circumstances.
We also have a new training for ‘BizTalk Server Administrators’ run by our very own BizTalk expert Lex Hegt.
The audience would be Systems Administrators who deploy and manage (multi-server) BizTalk Server environments, SQL Server DBAs who are responsible for maintaining the BizTalk Server databases, or BizTalk Developers who need to support a BizTalk environment.
During this course, attendees get thorough training in everything they need to know to properly administer BizTalk Server.
Some of the course topics are:
Stay tuned to our website for more information, or contact support@biztalk360.com to arrange any of these services.
The post Get access to a great range of BizTalk360’s Value added services appeared first on BizTalkGurus.
One of the mind-blowing development techniques that radically changed the programming world is Test-Driven Development.
Writing tests before we start coding? Who will do that?
I must admit that I personally wasn't really convinced by the idea at first; maybe because I didn't quite understand why we should write our tests first and how we should do it. Can you have a bad software design with TDD? Can you break your architecture with TDD? Yes! TDD is a discipline that you should be following, and like any discipline you must hold yourself to a certain number of requirements. At the end of the day, it's YOUR task to follow this simple mindset.
In this article, I will talk about the Mindset introduced by Kent Beck when writing in a Test-Driven Development environment.
Too many developers don't see the added value of this technique and/or don't believe it works.
TDD works!
“Testing is not the point; the point is about Responsibility”
-Kent Beck
Because so many of us don’t see the benefits of TDD, I thought it would make sense to specify them for you. Robert C. Martin has inspired me with this list of benefits.
One of the benefits is that you're certain that it works. Users have more responsibility in a Test-Driven team, because they write the Acceptance Tests (with help, of course) and they define what the system must do. By doing so, you're certain that what you write is what the customer wants.
The amount of uncertainty that builds up by writing code that isn’t exactly what the customer wants is called: The Uncertainty Principle. We must always eliminate this uncertainty.
By writing tests first; you can tell your manager and customer: “Yes, it will work; yes, it’s what you want”.
Before I wrote in a Test-First mindset, I always thought that my code was full of bugs and didn't handle unexpected behavior.
Maybe it was because I'm very certain of myself, but also because I wrote the tests after the code and so was testing what I had just written, not what I wanted to test.
This increases the Fake Coverage of your code.
So many developers are “afraid” to change something in their code base. They are afraid to break something. Why are they afraid? Because they don’t have tests!
“When programmers lose the fear of cleaning; they clean”
– Robert C. Martin
A professional developer doesn't let his/her code rot, so you must refactor with courage.
Tests are the lowest form of documentation of your code base; always 100% in sync with the current implementation in the production code.
TDD is an analysis/design technique and not necessarily a development technique. Tests force you to think about good design, certainly if you write them BEFORE you write the actual implementation. If you do so, you're writing them in offense and not in defense (as you are when you write them afterwards).
Test-First also helps you think about the simplest thing that could possibly work, which automatically helps you write simply structured, well-designed code.
When you’re introduced into the Test-First methodology, people often get Test Infected. The amount of stress that’s taking from you is remarkable. You refactor more aggressively your code without any fear that you might break something.
Test-Driven Development is based on the very simple idea of first writing your test, and only then writing your production code. People underestimate the "first write your test" part. When you write your tests, you're solving more problems than you think.
Where should I place this code? Who's responsible for this logic? What names should I use for my methods and classes? What result must I get from this? What isn't valid data? What will my class interface look like? …
After trying to use TDD in my daily practice, I found myself always asking the same question:
“I would like to have a … with … and …”
Such a simple idea so radically changed my vision of development, and I'm convinced that by using this technique you write simpler code, because you always think about:
“What’s the simplest thing that could make this test work”
If you find that you can make the test pass with an implementation that isn't the right one, write another test to expose the behavior you actually want to implement.
TDD is – in a way – a psychological methodology. What they say is true: you DO get addicted to that nice green bar that indicates that your tests all pass. You want that bar as green as possible, you want it always green, and you want it to run as fast as possible so you can quickly see that it's green…
To be a Green-Bar-Addict is a nice thing.
It felt a little weird to just state all the patterns Kent Beck introduced. Maybe you should just read the book Test-Driven Development by Example; he’s a very nice writer and I learned a lot from the examples, patterns and ideas.
What I will do is give you some basic patterns that I will use later in the example, and some patterns that were very eye-opening for me the first time.
When Kent talked about “What’s the simplest thing that could work”, I was thinking about my implementation but what he meant was “What’s the simplest thing that could work for this test”.
If you’re testing that 2 x 3 is 6 than when you implement it, you should Fake It and just return the 6.
Very weird at first, especially because the whole Fake It approach is based on duplication; the root of all software evil. Maybe that’s the reason experienced software engineers are having problems with this approach.
But it’s a very powerful approach. Using this technique, you can quickly get the bar green (testing bar). And the quicker you get that bar green, the better. And if that means you must fake something; then you should do that.
This technique I found very interesting. This approach really drives the abstraction of your design. When you find yourself not knowing what to do next, or how you should go further with your refactoring; write another test to support new knowledge of the system and the start of new refactorings in your design.
Especially when you’re unsure what to do next.
If you’re testing that 2 x 3 is 6 than in a Triangulation approach you will first return 6 and only change that if you’re testing again but then for 2 x 2 is 4.
Of course: when the implementation is so simple, so obvious, … Than you could always implement it directly after your test. But remember that this approach is only the second option after Fake It and Triangulation.
When you find yourself taking steps that are too big, you can always take smaller steps.
If you’re testing that 2 x 3 is 6, in an Obvious Implementation approach you will just write 2 x 3 right away.
I thought it would be useful to show you an example of the TDD workflow. Since everyone is so stoked about test-driving Fibonacci, I thought it would be fun to test-drive another integer sequence.
Let’s test-drive the Factorial Sequence!
What happens when we factorial 4 for example? 4! = 4 x 3 x 2 x 1 = 24
But let’s start with something super simple:
Always start with the same sentence: "I would like to have a…". I would like to have a method called Factorial to which I can send an integer and which will calculate the factorial of that integer for me.
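The original code isn't reproduced in this post, so here is a rough sketch of what that very first test could look like (Java/JUnit again; Factorial and factorialOf are assumed, illustrative names):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class FactorialTest {

    @Test
    void factorialOfOneIsOne() {
        // Start super simple: the factorial of 1 is 1.
        assertEquals(1, Factorial.factorialOf(1));
    }
}
```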
Now we have created a test before anything about factorial is implemented.
Now that we have the test, let's start by making our code compile again.
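A minimal stub is enough for that; something along these lines (a sketch, not the original code):

```java
class Factorial {

    static int factorialOf(int number) {
        // Just enough to make the test compile; it should still fail when run.
        throw new UnsupportedOperationException("not implemented yet");
    }
}
```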
Let’s test this:
Hooray! We have a failed test == progress!
What’s the simplest thing that we could write in order that this test will run?
Hooray! Our test passed, we can go home, right?
What’s next? Let’s check. What happens if we would test for another value?
I know, I know: duplication, duplication, duplication. But we're in the testing step now, right? We're not yet at the last (refactoring) step of the TDD mantra.
What is the simplest we could change to make this test pass?
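Sticking to the fake-it mindset, one possible "simplest change" is a branch per example the tests know about:

```java
class Factorial {

    static int factorialOf(int number) {
        // Still faking it: one branch per known example.
        if (number == 2) return 2;
        return 1;
    }
}
```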
Yes, I’m feeling good right now. A nice green bar.
Let’s add just another test, a bit harder this time. But these duplication is starting to irritate me; you know the mantra: One-Two-Three-Refactor? This is the third time, so let’s start refactoring!
Ok, what’s the simplest thing?
OK, we could add if/else statements all day long, but I think it's time for some generalization. Look at what we've been implementing. We write 24, but do we mean 24?
Remembering what a factorial is, we mean something else: 4 x 3 x 2 x 1.
All still works, yeah. Now, we don't actually mean 4 by 4, do we? We actually mean the original number: number x 3 x 2 x 1.
And we don't actually mean 3, 2, and 1 by 3, 2 and 1; we actually mean the original number minus one each time. So actually, that's 3!, the factorial of (number - 1), you could say, no?
Let’s try:
Wow, still works. Wait, isn't that if-statement redundant? 2 x 1! == 2, right?
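Dropping that redundant check might leave us with:

```java
class Factorial {

    static int factorialOf(int number) {
        if (number == 1) return 1;
        // The explicit check for 2 is gone: 2 * factorialOf(1) == 2 anyway.
        return number * factorialOf(number - 1);
    }
}
```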
Now, the factorial of 0 is also 1. We haven't tested that yet, have we? We have found a boundary condition!
With the current implementation this would result in an endless loop, because we would try to take the factorial of a negative number; and factorials only exist for positive numbers (the formula with negative integers would result in a division by zero), which blocks us from calculating a factorial value for those negative integers.
Again, simplest thing that could work?
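An explicit check for the boundary (together with a new test case asserting that 0! = 1) would do; for example:

```java
class Factorial {

    static int factorialOf(int number) {
        if (number == 0) return 1;  // the boundary condition: 0! = 1
        if (number == 1) return 1;
        return number * factorialOf(number - 1);
    }
}
```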
Now, the last step of TDD is always Remove Duplication, which in this case is the 1 that's used two times. Let's take care of that.
Hmm, someone may have noticed something. We could actually remove the other if-statement, the one checking for 1, if we adapt the check for 0; the recursive call will then return 1 for us.
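A sketch of that final version, using <= 0 so that negative input is covered as well (which is what the next sentence refers to):

```java
class Factorial {

    static int factorialOf(int number) {
        // 0! = 1, and factorialOf(1) = 1 * factorialOf(0) = 1,
        // so the separate check for 1 is no longer needed.
        // Using <= 0 also stops the recursion for negative input.
        if (number <= 0) return 1;
        return number * factorialOf(number - 1);
    }
}
```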
By doing this, we have also ruled out all the other negative numbers passed into the method.
Why, oh why, are people so skeptical about Test-Driven Development? If you look at the way you use it in your daily practice, you find yourself writing simpler and more robust code.
TDD is actually a Design Methodology and not a Development Methodology. The way you think about the design, the names, the structure… all of that is part of the design process of your project. The tests that you end up with are the added value of this approach; they make sure that you can refactor safely and are always certain of your software.
Start trying it today in your daily practice, so that you stop thinking about "How will you implement it?" but rather:
How will you test it?
The post Test-First Mindset appeared first on BizTalkGurus.
Do you find it difficult to keep up to date with all the frequent updates and announcements on the Microsoft Integration platform?
The Integration weekly update can be your solution. It's a weekly update on topics related to Integration – enterprise integration, robust and scalable messaging capabilities, and Citizen Integration capabilities empowered by the Microsoft platform to deliver value to the business.
If you want to receive these updates weekly, then don’t forget to Subscribe!
The post Microsoft Integration Weekly Update: June 5 appeared first on BizTalkGurus.