Microservices Software Architecture Patterns & Techniques.

Characteristics

Services are

  • fine-grained
  • lightweight

Advantages

  • Low coupling
  • Improves modularity
  • Promotes parallel development
  • Promotes scalability

Drawbacks

  • Infrastructure costs are usually higher
  • Integration testing complexity
  • Service management and deployment
  • Nanoservice anti-pattern, i.e. where a service is too fine-grained

Why do many microservice projects fail?

  • Lack of planning
  • Lack of knowledge
  • Lack of skills
  • Lack of time

How to prevent your projects from failing?

  • Determine applicability
  • Prioritise automation
  • Have a clear plan
  • Avoid common pitfalls

Microservice Template

This is a starter code project that a new microservice can be built from, to save setup time.

Why is it important?

  • A significant amount of time is spent setting up each new microservice
  • The setup code is similar for every microservice

What should the template contain?

  • Cross-cutting concerns
  • Logging
  • Metrics
  • Connection setup and configuration to databases and message brokers
  • Project structure

Code Repository Setup

  • Mono: where all code lives in a single repository
  • Discrete: where the codebase is split so that each microservice has its own separate repository

Mono Repo

Pros

  1. Easier to keep input/output contracts in sync
  2. Can version the entire repo with a build number

Cons

  1. Different teams working in the same repo can break the build, disrupting CI/CD for other teams
  2. Easier to create tight coupling
  3. Long build times, large code repo to download

Discrete Repo

Pros

  1. Different teams can ‘own’ different repositories
  2. The scope of a single repo is clearer

Cons

  1. Contract versioning becomes more complex
  2. Unless managed properly, discrete repositories can easily become monoliths
  3. More up front cost in setting up repos and CI/CD pipeline

Microservice Decomposition

Microservices should be loosely coupled from each other but internally highly cohesive, i.e. each microservice contains only things that are strongly related to each other. Remember the Common Closure Principle, which states that things that change together should be packaged together. The most widely used tactic is to decompose along business cases, technical capabilities or functional objectives so that each microservice ends up with a suitable level of granularity, e.g. Order Management and Shopping Cart Management.

Order Management can be decomposed into the following microservices:

  • Order history
  • Order tracking
  • Order placement
  • Order dispute

Shopping Cart Management can be decomposed into the following microservices:

  • Cart Upselling
  • Cart Promotions
  • Cart Cost Calculator
  • Cart Recovery

Microservice Communication

Inter-service communication

  1. Remote Procedure Invocation: simple and easy to understand, and can be implemented with REST or Apache Thrift. It follows the request/response pattern, i.e. it is synchronous (a sketch follows this list).
  2. Asynchronous Message-Based Communication: used where the response is not required immediately; e.g. with messaging, a microservice can publish a message on the message bus for other services to consume. It follows the publish/subscribe pattern. This can, however, add complexity, so it is advisable to stick to synchronous communication except where the interaction genuinely must be asynchronous.
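As a minimal illustration of the synchronous style, the sketch below calls a hypothetical order service over REST with HttpClient; the service address and route are assumptions made for the example.

using System.Net.Http;
using System.Threading.Tasks;

public class OrderClient
{
    private static readonly HttpClient Http = new HttpClient();

    // Request/response (synchronous) style: the caller awaits until the other
    // service replies. The address would normally come from service discovery.
    public async Task<string> GetOrderAsync(string orderId)
    {
        var response = await Http.GetAsync($"http://order-service/api/orders/{orderId}");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}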

Microservice Registry

This is an extremely important component of a microservice architecture: it is required in order to support dynamic scaling. If one microservice needs to send a request to another, it needs to be aware of the available instances and their network locations. The number of instances of each microservice may be scaled dynamically to adjust to changes in load, so services communicating with them must be aware of these changes. To solve this problem, we introduce a service registry component that holds the currently available instances of each microservice and their network locations.

How it works: when a microservice starts, it registers itself with the service registry, which adds it to its database; likewise, when an instance shuts down it is removed from the registry. In addition, a health check API can be called at regular intervals so that unhealthy instances are also removed. When a microservice is required, the service registry is queried for the available instances and their network locations. A sketch of such a registry follows.
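The registry contract itself is small. The sketch below is an illustrative in-memory version; real deployments would typically use something like Consul, Eureka or etcd, and all names here are assumptions.

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

// Network location of one running instance of a service.
public class ServiceInstance
{
    public string ServiceName { get; set; }
    public string Host { get; set; }
    public int Port { get; set; }
}

public class InMemoryServiceRegistry
{
    private readonly ConcurrentDictionary<string, List<ServiceInstance>> _instances =
        new ConcurrentDictionary<string, List<ServiceInstance>>();

    // Called by a service instance on startup.
    public void Register(ServiceInstance instance)
    {
        var list = _instances.GetOrAdd(instance.ServiceName, _ => new List<ServiceInstance>());
        lock (list) list.Add(instance);
    }

    // Called on shutdown, or by the registry itself when a health check fails.
    public void Deregister(ServiceInstance instance)
    {
        if (_instances.TryGetValue(instance.ServiceName, out var list))
            lock (list) list.RemoveAll(i => i.Host == instance.Host && i.Port == instance.Port);
    }

    // Called by clients (or a load balancer) to discover available instances.
    public IReadOnlyList<ServiceInstance> Lookup(string serviceName)
    {
        if (!_instances.TryGetValue(serviceName, out var list)) return new List<ServiceInstance>();
        lock (list) return list.ToList();
    }
}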

Microservice Discovery

This describes how services query the service registry, either directly or indirectly.

Client-Side Discovery: the calling service queries the service registry directly to obtain the network location of an instance of the required service; the registry replies with this information and the caller uses it to call the instance.

Server-Side Discovery: here the microservice making the request has no knowledge of the service registry; it simply sends the request to a load-balanced endpoint, and the load balancer queries the service registry for an instance and the network location of the required microservice.

Server-side discovery has the advantage that clients do not need to query the service registry; the disadvantage is that more network hops are involved before the request arrives at the destination microservice. This can be mitigated by building the service registry directly into the load balancer.

Data

Database Patterns

Shared Database

The Order Placement, Customer Details and Product Details microservices all use the same database. In this case a database transaction is used to guarantee data consistency and integrity. There is a possibility of performance issues due to deadlocks, and of schema changes made by another team breaking your service.

Database Per Service

The Order Placement, Customer Details and Product Details microservices each have their own separate database. Different services can use different database technologies that best suit their requirements, e.g. a SQL database or a NoSQL store.

It is difficult to get aggregate data across services; however, this can be achieved with API composition or an event store.

API Composition

A service referred to as the API composer queries data from the multiple services and then performs an in-memory join, as in the sketch below.
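A minimal sketch of the idea, assuming two hypothetical endpoints for the order and customer services and using Newtonsoft.Json (which is already used elsewhere in this article):

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class OrderSummaryComposer
{
    private static readonly HttpClient Http = new HttpClient();

    // Queries two services and joins the results in memory.
    // The endpoints and the anonymous result shape are assumptions for illustration.
    public async Task<object> GetOrderSummaryAsync(string customerId)
    {
        var customerJson = await Http.GetStringAsync($"http://customer-service/api/customers/{customerId}");
        var ordersJson = await Http.GetStringAsync($"http://order-service/api/orders?customerId={customerId}");

        var customer = JsonConvert.DeserializeObject<dynamic>(customerJson);
        var orders = JsonConvert.DeserializeObject<dynamic>(ordersJson);

        // In-memory "join" of the two result sets.
        return new { Customer = customer, Orders = orders };
    }
}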

Event Sourcing

Event sourcing helps services keep track of the state changes of an object in a reliable way. The difference from using a database directly is that instead of storing only the current state, every change is appended as an event to an event store. Services can subscribe to the different events handled by the event store; in this way the event store also acts as a message broker.

Two-Phase Commit

A distributed transaction alters data in multiple databases, which arises mostly with the database-per-service pattern. This is complex because the commit/rollback must be coordinated so that the transaction behaves as a self-contained unit: either the entire transaction commits or the whole thing rolls back.

Phase 1 : Commit Request
     - Coordinator service sends a query-to-commit message
     - Services execute the transaction but do not commit
     - Services reply Yes/No depending on whether they were successful
Phase 2 : Commit
     - If all services replied Yes
          - Coordinator sends a commit message
          - Services commit the transaction
          - Services reply with an acknowledgement
     - If at least one service replied No
          - Coordinator sends a rollback message
          - Services roll back the transaction

The disadvantage is that this is a synchronous, blocking operation: the services have to wait for instructions from the coordinator before they can proceed, and if the coordinator goes down the services hang indefinitely.

Saga Pattern

The saga pattern is an alternative to two-phase commit for managing distributed transactions. A saga is a sequence of local transactions in different microservices; each local transaction updates the database of its own microservice and then publishes an event or a message to trigger the next local transaction in the saga. If one local transaction fails, the saga executes a series of compensating transactions that roll back the changes made by the local transactions that have already been executed. There are 2 main types of saga implementation:

  1. Choreography-based sagas: each local transaction publishes domain events that trigger local transactions in other services until the saga is completed.
  2. Orchestrator-based sagas: an orchestrator, usually created per saga, coordinates the whole saga. The orchestrator is itself a microservice. It tells each microservice in the saga when to run its local transaction and when to roll back if any step fails. A sketch of this style follows the list.
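As an illustration of the orchestrator style, the sketch below drives a hypothetical order saga and runs a compensating transaction when a later step fails; every interface and method name here is an assumption made for the example.

using System;
using System.Threading.Tasks;

// Hypothetical local-transaction steps exposed by the participating services.
public interface IOrderService   { Task CreateOrder(Guid orderId); Task CancelOrder(Guid orderId); }
public interface IPaymentService { Task ChargeCard(Guid orderId); }

public class CreateOrderSagaOrchestrator
{
    private readonly IOrderService _orders;
    private readonly IPaymentService _payments;

    public CreateOrderSagaOrchestrator(IOrderService orders, IPaymentService payments)
    {
        _orders = orders;
        _payments = payments;
    }

    public async Task RunAsync(Guid orderId)
    {
        await _orders.CreateOrder(orderId);        // local transaction 1
        try
        {
            await _payments.ChargeCard(orderId);   // local transaction 2
        }
        catch (Exception)
        {
            // Compensating transaction: undo the step that already committed.
            await _orders.CancelOrder(orderId);
            throw;
        }
    }
}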

Fault Tolerance and Monitoring

It is important to have redundancy and high availability. We must have a redundant service registry, and redundant microservice instances hosted on another server entirely. If a service is unable to connect to another service, it should immediately connect to the redundant instance and also log that the failover resource has been used, so that the main service can be fixed.

Circuit breaker pattern

This helps prevent failures in one part of the network, or in a single microservice, from bringing down the whole system. It should be a cross-cutting module in the microservice architecture.
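One common way to get this as a cross-cutting module in .NET is the Polly library; a minimal sketch, assuming Polly is referenced and with a hypothetical downstream address:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

public class CatalogClient
{
    private static readonly HttpClient Http = new HttpClient();

    // After 3 consecutive failures the circuit opens for 30 seconds and calls
    // fail fast instead of waiting on a broken downstream service.
    private static readonly IAsyncPolicy Breaker = Policy
        .Handle<HttpRequestException>()
        .CircuitBreakerAsync(3, TimeSpan.FromSeconds(30));

    public Task<string> GetProductAsync(string id) =>
        Breaker.ExecuteAsync(() => Http.GetStringAsync($"http://catalog-service/api/products/{id}"));
}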

Health Check API

Used to get the status of a service. This is typically used by the service registry, which calls it at regular intervals to check whether registered services are up and running. A minimal example follows.
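In ASP.NET Core this can be as simple as a dedicated controller that the registry polls; the sketch below is illustrative and only reports a static status:

using Microsoft.AspNetCore.Mvc;

// Polled by the service registry (or a load balancer) at regular intervals.
// Return 200 when the service and its critical dependencies are usable;
// anything else marks this instance as unhealthy.
[Route("api/[controller]")]
public class HealthController : Controller
{
    [HttpGet]
    public IActionResult Get()
    {
        // A real implementation would also verify database/broker connectivity here.
        return Ok(new { status = "Healthy" });
    }
}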

Logging Technique

Always tag each request with a GUID so it can be correlated across the logs of the different microservices, and preferably use a log aggregation technology, e.g. the Elasticsearch/Logstash/Kibana (ELK) stack, AWS CloudWatch or Splunk. A sketch of the tagging idea follows.
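A minimal sketch of the tagging idea as ASP.NET Core middleware; the header name is an arbitrary choice for the example:

using System;
using Microsoft.AspNetCore.Builder;

public static class CorrelationIdExtensions
{
    private const string HeaderName = "X-Correlation-Id";

    // Ensures every request carries a GUID that can be forwarded to downstream
    // services and written with every log entry, so a single request can be
    // traced across all microservices in the log aggregator.
    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app) =>
        app.Use(async (context, next) =>
        {
            if (!context.Request.Headers.ContainsKey(HeaderName))
                context.Request.Headers[HeaderName] = Guid.NewGuid().ToString();

            context.Response.Headers[HeaderName] = context.Request.Headers[HeaderName];
            await next();
        });
}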


RabbitMQ Simplified

RabbitMQ is the most widely deployed open source message broker.

What is RabbitMQ?

  • The most widely used message broker
  • Open source
  • Lightweight and easy to deploy
  • Supports different messaging protocols
  • Can be deployed to clusters to provide high availability and scalability, necessary in enterprise solutions
  • Used by many companies on a large scale including Google, VMWare and Mozilla

Main Features

  • Variable levels of reliability; generally, configuring for increased reliability will reduce performance, so this can be tuned as required.
  • Complex routing capabilities
  • Different configurations to group together brokers for different purposes eg. Clusters, Federation, Shovel models
  • Highly available message queues
  • Support for multiple protocols eg amqp , mqtt
  • Client available in a large number of languages including C#, Java, Go, Erlang etc

Installation on a Mac

Install RabbitMQ on macOS (for example via Homebrew), then enable the RabbitMQ management plugin by running:

rabbitmq-plugins enable rabbitmq_management

If RabbitMQ is running and the management plugin is enabled, http://localhost:15672 should load the management UI. The default credentials for the management console are Username: guest, Password: guest.

Troubleshooting

  • Make sure that no other service was already running on port 15672 of your local machine
  • Check that RabbitMQ service is started
  • Installation may require a restart

RabbitMQ UI

  • Connections: a TCP connection between an application and the RabbitMQ broker.
  • Channels: a virtual connection inside a TCP connection; publishing and consuming are done over a channel rather than over the connection itself, and each TCP connection can hold multiple channels.
  • Nodes: lets us see the health of our cluster.
  • Admin tab: here we can create users and virtual hosts. A virtual host is simply a way to segregate applications within the same RabbitMQ instance; different users can have different rights per virtual host, and queues and exchanges can be configured at creation time so that they exist only in one particular virtual host.

Introduction to Messages, Queues and Exchanges

Messages

  • A message is a binary blob of data that is handled by RabbitMQ and can be routed to queues and exchanges.
  • Throughout this material we will use the term Producer to refer to any program producing messages and sending them to a RabbitMQ queue/exchange,
  • and the term Consumer to refer to any program receiving messages from RabbitMQ.

Queues

  • A queue is where messages flowing through RabbitMQ can be stored functioning similarly to a post box
  • Can also be seen as a large message buffer
  • A queue’s message storing limit is only bound by the host’s memory and disk limits

Exchanges

  • An exchange receives messages from producers and pushes them to queues
  • An exchange can be set to forward messages to zero or more queues depending on the routing parameters and exchange configurations
  • If we relate this to the post office analogy, where the queues are the post boxes, then the exchange is the postman that delivers messages to the post boxes

Messages

Message Acknowledgements

Message acknowledgements are a key mechanism for guaranteeing reliable message transfer in RabbitMQ.

  • If a connection fails between a RabbitMQ server and a client (producer or consumer), messages in transit may not have all been processed correctly and need to be re-sent.
  • To detect such instances, message acknowledgements can be used. If the sender does not receive a positive acknowledgement (ack) before the connection fails it will re-queue the message
  • It is, therefore, good practice to acknowledge a message after any required operations on the message are performed.
  • There are different configurations of message acknowledgements, by enabling automatic acknowledgement mode the message will be considered acknowledged as soon as it is sent – acting in a fire and forget mode.
  • This will reduce the safety check that a message has been received successfully but allows for increased throughput.
  • Consumers can also send a negative acknowledgement for a message and instruct the message broker to re-queue them.
  • Both positive and negative message acknowledgements can be sent in bulk by setting the multiple flags of the acknowledgement command to true.
  • Protocol methods for acknowledgements:
      * basic.ack is used for positive acknowledgements
      * basic.nack is used for negative acknowledgements
      * basic.reject is also used for negative acknowledgements but is only capable of rejecting one message at a time
    NOTE: the exact command names may vary slightly between client libraries of different programming languages. A consumer sketch follows this list.
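A minimal C# consumer sketch using the RabbitMQ.Client library (the 6.x API is assumed, and the queue name is made up): the message is acknowledged only after processing succeeds, and negatively acknowledged with re-queueing on failure.

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class AckConsumer
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "work", durable: true, exclusive: false, autoDelete: false, arguments: null);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                try
                {
                    var message = Encoding.UTF8.GetString(ea.Body.ToArray());
                    Console.WriteLine("Processing: " + message);
                    channel.BasicAck(ea.DeliveryTag, multiple: false);                 // positive ack
                }
                catch (Exception)
                {
                    channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: true); // negative ack, re-queue
                }
            };

            // autoAck: false puts the consumer in manual acknowledgement mode.
            channel.BasicConsume(queue: "work", autoAck: false, consumer: consumer);
            Console.ReadLine();
        }
    }
}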

Message Ordering

  • By default, message ordering in queues is First In, First Out (FIFO)
  • However, queues can be configured to act as priority queues, in which case messages will be ordered depending on their priority which is set by the sender

Durability

  • Durable queues are persisted to disk and thus survive broker restarts. Queues that are not durable are called transient.
  • Setting a queue to durable does not make messages that are routed to that queue durable. If the message broker is restarted, a durable queue will be re-declared during broker startup, however, only persistent messages will be recovered.
  • Individual messages can be marked as persistent, in which case they will be persisted to disk as soon as they are received by a durable queue (see the sketch after this list).
  • In some cases, non-persistent messages are also written to disk when there is a shortage of memory. However, this will not make them durable
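A sketch of the two settings working together (again assuming the RabbitMQ.Client 6.x API and a made-up queue name): the queue is declared durable and each message is marked persistent.

using System.Text;
using RabbitMQ.Client;

class DurablePublisher
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Durable queue: the queue definition survives a broker restart.
            channel.QueueDeclare(queue: "orders", durable: true, exclusive: false, autoDelete: false, arguments: null);

            // Persistent message: the message itself is written to disk, so it is
            // recovered along with the durable queue after a restart.
            var properties = channel.CreateBasicProperties();
            properties.Persistent = true;

            var body = Encoding.UTF8.GetBytes("order-123 placed");
            channel.BasicPublish(exchange: "", routingKey: "orders", basicProperties: properties, body: body);
        }
    }
}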

Queues

Temporary Queues

  • Queues can be configured to be deleted automatically in 3 ways :
  • Exclusive queues can only be used by their declaring connection and will be automatically deleted once this connection is closed.
  • An expiry time (also known as a time to live) can be set for the queue. If the queue is left unused for a duration exceeding this period, the broker will automatically delete the queue.
  • Auto-delete queues will be automatically deleted once their last consumer has cancelled through the basic.cancel protocol or gone (e.g closed connection)

Exchange

  • An exchange receives messages from a producer (sender) and pushes them to queues or to other exchanges.
  • An exchange can be set to forward messages to zero or more queues depending on the routing parameters and exchange configurations
  • We will now go through each of the following exchanges types and learn about each of their routing capabilities
  • Types of exchanges:
      * Fanout
      * Direct
      * Topic
      * Headers

Fanout Exchange

 * This is the most simple kind of exchange.
 * A Fanout Exchange routes a message to all queues bound to it
 * Ignores any routing key provided with the message 


Direct Exchange

* A direct exchange routes a message with a particular routing key to queues bound to that exchange with that exact routing key.


Topic Exchange

  • A topic exchange routes a message with a particular routing key to queues whose binding pattern matches all or a portion of that routing key
  • Messages are published with routing keys containing one or more words separated by a dot, e.g. multi.word.test
  • Queues that bind to a topic exchange supply a matching routing key pattern for the server to use when routing the message. These patterns may contain an asterisk (*) to match exactly one word in a specific position of the routing key, or a hash (#) to match zero or more words.
  • Examples: a message published with routing key multi.word.test
  • will match queues bound with multi.#, *.word.*, multi.word.test or #
  • but will not match queues bound with multi.*, single.# or multi.foo.test
  • The # matches zero or more words while the * matches exactly one word. A binding sketch follows this list.
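A sketch of the bindings from the example above, assuming the RabbitMQ.Client 6.x API; exchange and queue names are made up:

using System.Text;
using RabbitMQ.Client;

class TopicExample
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.ExchangeDeclare(exchange: "demo.topic", type: ExchangeType.Topic);

            channel.QueueDeclare("all-multi", durable: false, exclusive: false, autoDelete: false, arguments: null);
            channel.QueueDeclare("word-middle", durable: false, exclusive: false, autoDelete: false, arguments: null);

            // "multi.#"  -> anything starting with "multi." (zero or more further words)
            channel.QueueBind(queue: "all-multi", exchange: "demo.topic", routingKey: "multi.#");
            // "*.word.*" -> exactly three words with "word" in the middle
            channel.QueueBind(queue: "word-middle", exchange: "demo.topic", routingKey: "*.word.*");

            // This message is routed to both queues above.
            var body = Encoding.UTF8.GetBytes("hello");
            channel.BasicPublish(exchange: "demo.topic", routingKey: "multi.word.test", basicProperties: null, body: body);
        }
    }
}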


Headers Exchange

  • A Headers exchange routes messages based upon a matching of the message’s headers to the expected headers specified by the binding queue.
  • It is important to note the difference between the headers exchanges and the topic exchange type :
     * The topic exchange type matches on the routing key
     * The header exchange matches on the message header
  • More than one header criteria can be specified as a filter by the binding queue, in which case the binding queue can specify if the message headers need to contain ‘any’ or ‘all’ of the header criteria
  • Message headers can be matched in any order. A binding sketch follows this list.
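A sketch of binding a queue to a headers exchange with the x-match argument (RabbitMQ.Client 6.x assumed; names and header values are made up):

using System.Collections.Generic;
using System.Text;
using RabbitMQ.Client;

class HeadersExample
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.ExchangeDeclare(exchange: "demo.headers", type: ExchangeType.Headers);
            channel.QueueDeclare("pdf-reports", durable: false, exclusive: false, autoDelete: false, arguments: null);

            // "x-match" = "all" means every listed header must match;
            // "any" would route the message if at least one header matches.
            var binding = new Dictionary<string, object>
            {
                { "x-match", "all" },
                { "format", "pdf" },
                { "type", "report" }
            };
            channel.QueueBind(queue: "pdf-reports", exchange: "demo.headers", routingKey: "", arguments: binding);

            // Publish with matching headers; the routing key is ignored.
            var properties = channel.CreateBasicProperties();
            properties.Headers = new Dictionary<string, object> { { "format", "pdf" }, { "type", "report" } };

            var body = Encoding.UTF8.GetBytes("monthly report");
            channel.BasicPublish(exchange: "demo.headers", routingKey: "", basicProperties: properties, body: body);
        }
    }
}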


One API, multiple clients, but different result sets.

This issue arose when I got a requirement at the office to build an API for clients, and a tricky one indeed, as my philosophy is to always build systems that are highly configurable rather than systems that need code changes every time a new requirement arises.

Hence I took my time to analyze and plan carefully how to come up with a system that can handle the scenario on the ground.

The problem statement is to build a web API that will be used by various clients, where each client requires a different level of data exposure.

Imagine a system to display customer information, where the business team has decided that the data exposed to each client should be based on the agreement it signed, e.g.
Client A can get [Name, Age, Sex], Client B can get [Name, Age] only, Client C can get [Name] only.

Let us get into the raw action.

First, let us create a sample repository

public class DataRepository
    {
        public static IList<Data> GetData()
        {
            return new List<Data>()
            {
                new Data(){ Age = 45, Sex = "Male" , Name = "Adeyinka Oluwaseun", PhoneNumber = "1111"},
                new Data(){ Age = 23, Sex = "Female" , Name = "Doyin Solomon", PhoneNumber = "2222"},
                new Data(){ Age = 56, Sex = "Male" , Name = "John Doe", PhoneNumber = "3333"},
            };
        }
    }

    public class Data
    {
        public string Name { get; set; }
        public string Sex { get; set; }
        public int Age { get; set; }
        public string PhoneNumber { get; set; }
    }

Next is to create our custom response class, which is a dictionary with an indexer to store the information retrieved from our repository.

public class Response
    {
        private readonly Dictionary<string, string> _store;

        public Response()
        {
            _store = new Dictionary<string, string>();
        }

        public string this[string key]
        {
            get => _store[key];
            set => _store[key] = value;
        }

        public Dictionary<string, string> Get()
        {
            return _store;
        }
    }

Next is to configure the clients in appsettings.json, where we define each client and the data that will be available to it based on the business agreement signed.

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*",
  "Clients": {
    "Banks": "CLIENTA:Name,Sex|CLIENTB:Name|CLIENTC:Name,Age,Sex"
  } 
}

Next is our controller class

[Route("api/[controller]")]
    public class InformationController : Controller
    {
        private static Clients _clients;

        public InformationController(IOptions<Clients> clients)
        {
            _clients = clients.Value;
        }

        [HttpGet()]
        public IActionResult Get([FromQuery]string number , [FromQuery]string clientName)
        {
            var response = new Response();

            foreach (var info in ClientData())
            {
                if (info.Key == clientName)
                {
                    var props = info.Value.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

                    var result = DataRepository.GetData().FirstOrDefault(c => c.PhoneNumber == number);

                    foreach (var prop in props)
                    {
                        if (result != null)
                        {
                            response[prop] = GetPropertyValue(result, prop);
                        }
                    }
                }
            }

            
            return new ObjectResult(JsonConvert.SerializeObject(response.Get(), Formatting.Indented));
        }

        private static Dictionary<string, string> ClientData()
        {

            var clients = _clients.Banks.Split(new[] { '|' }, StringSplitOptions.RemoveEmptyEntries);
            var clientInfo = new Dictionary<string, string>();

            foreach (var client in clients)
            {
                var split = client.Split(new[] { ':' }, StringSplitOptions.RemoveEmptyEntries);
                clientInfo.Add(split.First(), split.Last());
            }

            return clientInfo;
        }

        private static string GetPropertyValue(Data.Data data, string property)
        {
            return data.GetType().GetProperty(property)?.GetValue(data, null).ToString() ?? string.Empty;
        }
    }

The model to bind to the appsettings.json

public class Clients
    {
        public string Banks { get; set; }
    }

Finally, in Startup.cs we bind the Clients section of appsettings.json to the model class, i.e. Clients.

public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

            services.Configure<Clients>(settings => Configuration.GetSection("Clients").Bind(settings));

        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseHsts();
            }

            app.UseHttpsRedirection();
            app.UseMvc();
        }
    }
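Based on the configuration above, a request such as GET /api/information?number=1111&clientName=CLIENTA (CLIENTA is configured for Name and Sex only) should return roughly:

{
  "Name": "Adeyinka Oluwaseun",
  "Sex": "Male"
}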

Result screenshots from tests carried out for the different clients using Postman.

[Screenshot: Client A response]

[Screenshot: Client C response]

[Screenshot: Client B response]

The source code is available for download on GitHub.

Filter Table or List Result In VueJs

In Vue.js 1 there was a built-in filterBy filter for filtering table results, but it was removed in Vue.js 2.

Hence you need to write your own filter logic to achieve the same result.

This tutorial will show you how.

The application server is written in Node.js, with Express as the web framework and Vash as the template engine.

I will be using a fake API from https://jsonplaceholder.typicode.com/users

This is the nodejs server code

var express = require("express");
var app = express();

var port = process.env.PORT || 3000;

app.use("/assets", express.static(__dirname + "/public"));

app.set("view engine", "vash");

app.get("/", (req, res) => {
  res.render("index");
});

app.listen(port);

Next is the View i.e the html to display our data and provide search.

<link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" media="screen"
/>
<div class="container">
    <h1>Filter Todo Table By Search in VueJs
        <small>(
            <i class="glyphicon glyphicon-filter"></i>)</small>
    </h1>
    <div class="row">
        <div class="col-md-6" id="todo"> 
            <div class="panel panel-success">
                <div class="panel-heading">
                    <h3 class="panel-title">TODOS</h3> 
                   
                </div>
                <div class="panel-body">
                    <input type="text" class="form-control" placeholder="search" v-model="search"/>
                </div>
                <table class="table table-hover">
                    <thead>
                        <tr>
                            <th>#</th>
                            <th>Name</th>
                            <th>UserName</th>
                            <th>Email</th>
                        </tr>
                    </thead>
                    <tbody>
                        <tr v-for="(item,index) in computedData" :key="item.id">
                            <td>{{ index + 1 }}</td>
                            <td>{{ item.name }}</td>
                            <td>{{ item.username }}</td>
                            <td>{{ item.email }}</td>
                        </tr>

                    </tbody>
                </table>
            </div>
        </div>
    </div>
</div>

<script src="https://unpkg.com/axios/dist/axios.min.js"></script>
<script src="https://unpkg.com/vue"></script>  
<script src="/assets/data.js"></script>

Finally , the Vuejs code

var app = new Vue({
  el: "#todo",
  data: {
    todos: [],
    search: ""
  },
  mounted() {
    // Load the full list once the component is mounted.
    axios
      .get("https://jsonplaceholder.typicode.com/users")
      .then(response => {
        this.todos = response.data;
      })
      .catch(function(error) {});
  },
  computed: {
    // Re-evaluated whenever `todos` or `search` changes.
    computedData: function() {
      if (!this.search) {
        return this.todos;
      }
      var term = this.search.toUpperCase();
      return this.todos.filter(
        item =>
          item.name.toUpperCase().includes(term) ||
          item.username.toUpperCase().includes(term) ||
          item.email.toUpperCase().includes(term)
      );
    }
  }
});

Snapshot of the HTML table:

[Screenshot: the full todo table]

Search for BRET:

[Screenshot: the filtered table]

Source on my github page

 

Post data from VueJs to ASP.NET CORE using Axios

As simple as this sounds, if you do not get it right you can spend days trying to figure out why the data you post from Vue.js never reaches the ASP.NET Core controller action.

This was my scenario until I finally got it right.

First , what is Vuejs ?

According to Wikipedia, Vue.js is a popular JavaScript front-end framework that was built to organize and simplify web development.

Secondly , what is Axios ?

Axios is a promise-based HTTP client library for the browser and Node.js.

I spent hours trying to figure out why the data I posted from Vue.js didn't reach my controller action in ASP.NET Core, until I figured out that for this to work I need to set the [FromBody] attribute on my action.

Let us look at a sample scenario.

This is my View : Index.cshtml

@{
    ViewData["Title"] = "Home Page";
}

<!doctype html>
<html lang="en">

<head>
    <meta charset="utf-8">
    <title>Vue To .NET CORE</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="icon" type="image/x-icon" href="favicon.ico">
</head>

<body>

<div id="app">
   
    <input name="firstName" v-model="firstName" placeholder="First Name"/>
    <br/>
    <input name="lastName" v-model="lastName" placeholder="Last Name"/>
    <br/>
    <button v-on:click="sendToServer">Submit</button>

</div>
</body>
</html>
<script src="https://unpkg.com/vue/dist/vue.js"></script>
<script src="https://unpkg.com/axios/dist/axios.min.js"></script>
<script src="/app/app.js"></script>

I added the Vue.js and Axios libraries via CDN.

Next is my Vuejs code

new Vue({

    el: "#app",
    data: {
        firstName: "",
        lastName: ""
    },
    methods: {
        sendToServer: function () {

            axios({
                    method: 'post',
                    url: '/home/index',
                    data: {
                        "firstName": this.firstName,
                        "lastName": this.lastName
                    }
                })
                .then(function (response) {
                    console.log(response);
                })
                .catch(function (error) {
                    console.log(error);
                });

        }
    }

});

The ViewModel 

namespace VueJsToNetCore.ViewModel
{
    public class User
    {
        public string LastName { get; set; }
        public string FirstName { get; set; }
    }
}

And lastly, my ASP.NET Core controller.

using Microsoft.AspNetCore.Mvc;
using VueJsToNetCore.ViewModel;

namespace VueJsToNetCore.Controllers
{
    public class HomeController : Controller
    {
        [HttpGet]
        public IActionResult Index()
        {
            return View();
        }

        [HttpPost]
        public IActionResult Index([FromBody]User user)
        {
            return View();
        }
    }
}

NOTE: If you post without the [FromBody] attribute on the controller action, the posted parameters will not reach the action.

[Screenshot: the page and the logged response in the browser console]

 

Source code available on my github page.

 

Make WebService Calls in ASP.NET CORE

Moving to ASP.NET Core is fun and interesting, but then you begin to run into problems trying to do things you used to do easily in ASP.NET MVC.

One of those is making a call to a web service. But where is the web.config XML file that used to hold web service configuration? It no longer exists in ASP.NET Core; what we have now is appsettings.json, a JSON file.

In this post I will show how you can make web service calls easily.

To have a web service to work with, I went online and got a free one from http://www.dneonline.com/calculator.asmx. This service has 4 simple operations: Add, Divide, Multiply and Subtract.

In this demo I will use the Add operation, which simply adds 2 numbers together. After going through this tutorial you should be able to call basically any web service in the same way.

The first thing we are going to do is generate a proxy class for the web service using the SVCUTIL.EXE tool available from the Visual Studio (Developer) Command Prompt.

[Screenshot: running svcutil in the Developer Command Prompt]
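The invocation is along these lines, pointing svcutil at the service's WSDL (run from the Developer Command Prompt; exact options may vary with your setup):

svcutil.exe http://www.dneonline.com/calculator.asmx?wsdl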

This will automatically generate a proxy class and output.xml file but in this case we don’t need the output.xml in our code.

[Screenshot: the generated proxy files]

Simply copy the generated .cs file into your project. For this service, this is the generated code (auto-generated by the command above):

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated by a tool.
//     Runtime Version:4.0.30319.42000
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------



[System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
[System.ServiceModel.ServiceContractAttribute(ConfigurationName="CalculatorSoap")]
public interface CalculatorSoap
{
    
    [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/Add", ReplyAction="*")]
    int Add(int intA, int intB);
    
    [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/Add", ReplyAction="*")]
    System.Threading.Tasks.Task<int> AddAsync(int intA, int intB);
    
    [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/Subtract", ReplyAction="*")]
    int Subtract(int intA, int intB);
    
    [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/Subtract", ReplyAction="*")]
    System.Threading.Tasks.Task<int> SubtractAsync(int intA, int intB);
    
    [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/Multiply", ReplyAction="*")]
    int Multiply(int intA, int intB);
    
    [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/Multiply", ReplyAction="*")]
    System.Threading.Tasks.Task<int> MultiplyAsync(int intA, int intB);
    
    [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/Divide", ReplyAction="*")]
    int Divide(int intA, int intB);
    
    [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/Divide", ReplyAction="*")]
    System.Threading.Tasks.Task<int> DivideAsync(int intA, int intB);
}

[System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
public interface CalculatorSoapChannel : CalculatorSoap, System.ServiceModel.IClientChannel
{
}

[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
public partial class CalculatorSoapClient : System.ServiceModel.ClientBase<CalculatorSoap>, CalculatorSoap
{
    
    public CalculatorSoapClient()
    {
    }
    
    public CalculatorSoapClient(string endpointConfigurationName) : 
            base(endpointConfigurationName)
    {
    }
    
    public CalculatorSoapClient(string endpointConfigurationName, string remoteAddress) : 
            base(endpointConfigurationName, remoteAddress)
    {
    }
    
    public CalculatorSoapClient(string endpointConfigurationName, System.ServiceModel.EndpointAddress remoteAddress) : 
            base(endpointConfigurationName, remoteAddress)
    {
    }
    
    public CalculatorSoapClient(System.ServiceModel.Channels.Binding binding, System.ServiceModel.EndpointAddress remoteAddress) : 
            base(binding, remoteAddress)
    {
    }
    
    public int Add(int intA, int intB)
    {
        return base.Channel.Add(intA, intB);
    }
    
    public System.Threading.Tasks.Task<int> AddAsync(int intA, int intB)
    {
        return base.Channel.AddAsync(intA, intB);
    }
    
    public int Subtract(int intA, int intB)
    {
        return base.Channel.Subtract(intA, intB);
    }
    
    public System.Threading.Tasks.Task<int> SubtractAsync(int intA, int intB)
    {
        return base.Channel.SubtractAsync(intA, intB);
    }
    
    public int Multiply(int intA, int intB)
    {
        return base.Channel.Multiply(intA, intB);
    }
    
    public System.Threading.Tasks.Task<int> MultiplyAsync(int intA, int intB)
    {
        return base.Channel.MultiplyAsync(intA, intB);
    }
    
    public int Divide(int intA, int intB)
    {
        return base.Channel.Divide(intA, intB);
    }
    
    public System.Threading.Tasks.Task<int> DivideAsync(int intA, int intB)
    {
        return base.Channel.DivideAsync(intA, intB);
    }
}

 

Next, add the class below, which I call my GenericProxy class and use to make calls to any web service in my code. It is a generic class, so it is highly re-usable.

using System;
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.Threading.Tasks;

namespace CallWebService.Core
{
    internal sealed class GenericProxy<TContract> : IDisposable where TContract : class
    {
        private readonly ChannelFactory<TContract> _channelFactory;
        private TContract _channel;

        public GenericProxy(Binding binding, EndpointAddress remoteAddress)
        {
            _channelFactory = new ChannelFactory<TContract>(binding, remoteAddress);
        }

        public void Execute(Action<TContract> action)
        {
            action.Invoke(Channel);
        }

        public TResult Execute<TResult>(Func<TContract, TResult> function)
        {
            return function.Invoke(Channel);
        }

        private TContract Channel
        {
            get
            {
                if (_channel == null)
                {
                    _channel = _channelFactory.CreateChannel();
                }

                return _channel;
            }
        }

        public void Dispose()
        {
            try
            {
                if (_channel != null)
                {
                    var currentChannel = _channel as IClientChannel;
                    if (currentChannel != null && currentChannel.State == CommunicationState.Faulted)
                    {
                        currentChannel.Abort();
                    }
                    else
                    {
                        currentChannel?.Close();
                    }
                }
            }
            finally
            {
                _channel = null;
                GC.SuppressFinalize(this);
            }
        }
    }
}

 

Now that all the setup is ready, we can write our code.

Add an interface for our calculator, which I call ICalculatorService.

namespace CallWebService.Core
{
    public interface ICalculatorService
    {
        int Sum(int a, int b);
    }
}


Next is the CalculatorService, which implements the ICalculatorService interface above and calls the Add operation on the web service. We need to pass the web service's interface to the GenericProxy class along with the binding settings and the endpoint of the service.

The endpoint of the web service is stored in appsettings.json, which I read through a class called Configurations.

using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.Text;
using Serilog;

namespace CallWebService.Core
{
    public class CalculatorService : ICalculatorService
    {
        private static readonly EndpointAddress Endpoint = new EndpointAddress(Configurations.CalculatorServiceEndPoint);
        private static readonly BasicHttpBinding Binding = new BasicHttpBinding
        {
            MaxReceivedMessageSize = 2147483647,
            MaxBufferSize = 2147483647
        };

        public int Sum(int a, int b)
        {
            int result = 0;
            using (var proxy = new GenericProxy<CalculatorSoap>(Binding, Endpoint))
            {
                try
                {
                    result = proxy.Execute(c => c.Add(a, b));
                }
                catch (FaultException ex)
                {
                    Log.Logger.Error(ex, "error while adding numbers.");

                }
                catch (Exception ex)
                {
                    Log.Logger.Error(ex, "error while adding numbers.");
                }

                return result;
            }

        }

    }
}

Configurations Class

using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Hosting.Internal;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using System.IO;

namespace CallWebService.Core
{
    public static class Configurations
    {
        private static readonly IConfigurationRoot Configuration = new BootStrap().Setup();

        public static readonly string CalculatorServiceEndPoint = Configuration["WebService:CalculatorEndPoint"];

        private class BootStrap
        {
            public IConfigurationRoot Setup()
            {
                IHostingEnvironment environment = new HostingEnvironment();

                // Enable to app to read json setting files
                var builder = new ConfigurationBuilder()
                    .SetBasePath(Directory.GetCurrentDirectory())
                    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                    .AddJsonFile($"appsettings.{environment.EnvironmentName}.json", optional: true)
                    .AddEnvironmentVariables();

                if (environment.IsDevelopment())
                {
                    // This will push telemetry data through Application Insights pipeline faster, allowing you to view results immediately.
                    builder.AddApplicationInsightsSettings(developerMode: true);
                }

                return builder.Build();
            }
        }
    }
}

 

AppSettings.json

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "WebService": {
    "CalculatorEndPoint": "http://www.dneonline.com/calculator.asmx?wsdl"
  } 
}

At this stage we are done with everything needed to call our web service from the library project above. Next we move to the ASP.NET Core project: in Startup.cs we need to register the ICalculatorService dependency as below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using CallWebService.Core;

namespace CallWebService
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            // Add framework services.
            services.AddMvc();

            services.AddTransient<ICalculatorService, CalculatorService>();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
                app.UseBrowserLink();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
            }

            app.UseStaticFiles();

            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");
            });
        }
    }
}

 

For testing purposes we will call the web service from our HomeController.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using CallWebService.Core;
using Microsoft.AspNetCore.Mvc;

namespace CallWebService.Controllers
{
    public class HomeController : Controller
    {
        private readonly ICalculatorService _calculatorService;

        public HomeController(ICalculatorService calculatorService)
        {
            _calculatorService = calculatorService;
        }

        [HttpGet]
        public IActionResult Index()
        {
            return View();
        }

        [HttpPost]
        public IActionResult Index(int firstNumber , int secondNumber)
        {
            var result = _calculatorService.Sum(firstNumber, secondNumber);
            ViewBag.Result = $"{firstNumber} + {secondNumber} = {result}";
            return View();
        }
    }
}

Index.cshtml for the Index view in the HomeController

@{
    ViewData["Title"] = "Home Page";
}

<form asp-controller="Home" asp-action="Index" method="post">
    <div class="row">
        <div class="col-md-6">
            <input type="text" name="firstNumber" />
        </div>
        <br />
        <div class="col-md-6">
            <input type="text" name="secondNumber" />
        </div>
        <br />
        <input type="submit" value="Add Numbers" />
        <br/>
        @ViewBag.Result 
    </div>
</form>

[Screenshots: the input form and the addition result]


Finally, we have our result from the web service.

The complete source code is available on my GitHub repo for download.

NOTE: You can only run it with Visual Studio 2017.

SweetAlert Notification in ASP.NET MVC & ASP.NET CORE

I got a requirement in an application to display notification messages as beautiful pop-ups instead of plain text on the web page.

Note, this is not a toastr-style notification; it is the normal JavaScript alert-type notification.

A quick search and I got hold of this beautiful alert project by Tristan Edwards called SweetAlert.

I thought carefully about how to implement it so that it is re-usable and available in all my controllers without too much front-end work each time.

Hence, I came up with a BaseController which all other controllers simply inherit from.

BASECONTROLLER DESIGN

    public abstract class BaseController : Controller
    {
        public void Alert(string message, NotificationType notificationType)
        {
            // Wrap the SweetAlert call in a script tag so @Html.Raw can emit it directly.
            var msg = "<script>swal('" + notificationType.ToString().ToUpper() + "', '" + message + "', '" + notificationType + "');</script>";
            TempData["notification"] = msg;
        }
    }

Then a sample controller that inherits from the BaseController above.

public class TestController : BaseController
{
    [HttpGet]
    public IActionResult Index()
    {
        return View();
    }
}

Since this TestController inherits from the BaseController, it has access to the Alert method defined there.

The alert has icons that depict the kind of alert I want to show, i.e. error, success, warning and info notifications.

Hence the need for an enum, so I created the enum class below, which was already used in the BaseController above.

   public class Enum
    {
        public enum NotificationType
        {
            error,
            success,
            warning,
            info
        }

    }

With re-usability in mind, let us create a partial view that will display the alert notification passed into TempData["notification"] in the BaseController above.

So let us create a partial view called _NotificationPanel.cshtml, which we will reference from the _Layout.cshtml file in the Views/Shared folder.

@if (TempData["notification"] != null)
{
    @Html.Raw(TempData["notification"])
}

Then in _Layout.cshtml we render _NotificationPanel.cshtml like this:

@Html.Partial("_NotificationPanel")

@RenderBody()

Also, we need to add the CSS and JavaScript we got from SweetAlert to the HEAD section of the _Layout.cshtml file:

<link href="/sweetAlert/sweetalert.css" rel="stylesheet" />
<script src="/sweetAlert/sweetalert.min.js"></script>

Finally, note that the view of our TestController above needs to use _Layout as its layout, so the view page needs to look like this:

@{
    ViewBag.Title = "title";
    Layout = "_Layout";
}

Now we are set. So let us create a sample for the SweetAlert notification types.

For success messages, our TestController will be:

   public class TestController : BaseController
    {
        [HttpGet]
        public IActionResult Index()
        {          
            Alert("This is success message",NotificationType.success);
            return View();
        }
    }

With the help of our enum class we can choose any kind of message we want.

Source code available on my github repo.

Thanks…please do leave your comments below.