11 Jul

Setup Azure Internal API Manager with Application Gateway without Custom Domains

Summary

Recently I was tasked with setting up an internal Azure API Manager and exposing it via a WAF Application Gateway. While there were plenty of articles online about this topic, none of them specifically addressed how to do this when you do not want to use a custom domain or purchased SSL certificates. This post walks through setting it all up in Azure using only the domains Azure issues at resource creation.

Resources

These are the resources in Azure we are going to be creating:

  • App Service/App Service Plan (for hosting the API for API Manager)
  • Network Security Group
  • API Manager
  • Application Gateway
  • Public IP Address (2)
  • Virtual Machine (the jump box to interact with the API Manager)
  • Virtual Network
    • Application Gateway Subnet
    • APIM Subnet
    • VM Jumpbox Subnet
    • App Service Subnet

Create the Network Security Group

  1. Navigate to Network Security Group
  2. Create a new Resource Group. This group will be used for the rest of the resources we create
    • rg-contoso-api-dev-eastus
  3. Enter Name
    • nsg-contoso-api-dev-eastus
  4. Click Create
  5. Configure the inbound and outbound rules

Inbound Rules

Outbound Rules

NOTE: some of these rules may not apply to your given project and circumstances.
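
If you prefer to script these portal steps, a rough Az PowerShell equivalent of steps 1-4 looks like this (a sketch only; it assumes the Az module is installed and you are signed in via Connect-AzAccount):

# Create the resource group and an empty NSG; add your rules afterwards
New-AzResourceGroup -Name "rg-contoso-api-dev-eastus" -Location "eastus"
New-AzNetworkSecurityGroup -Name "nsg-contoso-api-dev-eastus" `
    -ResourceGroupName "rg-contoso-api-dev-eastus" -Location "eastus"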

Creating the VNET

  1. Navigate to Virtual Networks
  2. Select the Resource Group created above
    • rg-contoso-api-dev-eastus
  3. Enter Name for the VNET
    • vnet-contoso-dev-eastus
  4. Select Region
    • East US
  5. Click the IP Addresses tab
    • Delete the default subnet
    • Add the following subnets
      • snet-contoso-gw-dev-eastus (10.0.1.0/24)
      • snet-contoso-pvt-vm-dev-eastus (10.0.2.0/24)
      • snet-contoso-pvt-apim-dev-eastus (10.0.3.0/24)
      • snet-contoso-pvt-asp-dev-eastus (10.0.4.0/24)
  6. Click Review + Create
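
The same VNET and subnets can be scripted with Az PowerShell (again just a sketch; the 10.0.0.0/16 address space is an assumption that covers the four /24 subnets above):

# Define the four subnets, then create the VNET with them attached
$subnets = @(
    New-AzVirtualNetworkSubnetConfig -Name "snet-contoso-gw-dev-eastus" -AddressPrefix "10.0.1.0/24"
    New-AzVirtualNetworkSubnetConfig -Name "snet-contoso-pvt-vm-dev-eastus" -AddressPrefix "10.0.2.0/24"
    New-AzVirtualNetworkSubnetConfig -Name "snet-contoso-pvt-apim-dev-eastus" -AddressPrefix "10.0.3.0/24"
    New-AzVirtualNetworkSubnetConfig -Name "snet-contoso-pvt-asp-dev-eastus" -AddressPrefix "10.0.4.0/24"
)
New-AzVirtualNetwork -Name "vnet-contoso-dev-eastus" -ResourceGroupName "rg-contoso-api-dev-eastus" `
    -Location "eastus" -AddressPrefix "10.0.0.0/16" -Subnet $subnets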

Creating the Virtual Machine

  1. Navigate to Windows Server 2016
  2. Select the Resource Group created above
    • rg-contoso-api-dev-eastus
  3. Enter Virtual machine name
    • vm-contoso-api-jumpbox-dev-eastus
  4. Select Region
    • East US
  5. Select the size of the VM you want
    • Standard_B1ms
  6. Create a Username and Password
  7. Under Inbound port rules
    • Select Allow selected ports
    • Select RDP (3389) from the list

    Inbound port rules

  8. Click on the Disks Tab
    • For OS disk type select
      • Standard HDD
  9. Click on the Network Tab
    • Select the VNET (vnet-contoso-dev-eastus) from above
    • Select Subnet
      • snet-contoso-pvt-vm-dev-eastus
    • Create a new Public IP for the Virtual Machine
      • pubip-contoso-api-jumpbox-dev-eastus
      • Keep the rest of the defaults
    • Under NIC network security group
      • Click Advanced
      • Select Create New
      • vm-contoso-api-jumpbox-dev-eastus-nsg (this will be auto populated)

      Jumpbox NSG

  10. Keep the rest of the default settings
  11. Click Review + Create

NOTE: When the VM is created, a new Network Security Group (vm-contoso-api-jumpbox-dev-eastus-nsg) is created and attached to the VM's network interface.

Let's Configure the Subnets

Navigate to the new VNET resource (vnet-contoso-dev-eastus) and select Subnets

Configure snet-contoso-pvt-apim-dev-eastus

Subnets

  1. Select Subnet snet-contoso-pvt-apim-dev-eastus
  2. Under Network Security Group
    • Select nsg-contoso-api-dev-eastus
  3. Under Subnet delegation
    • Select Microsoft.ApiManagement/service

    Subnet Delegation

  4. Click Save

Configure snet-contoso-pvt-asp-dev-eastus

  1. Select Subnet snet-contoso-pvt-asp-dev-eastus
  2. Under Subnet delegation
    • Select Microsoft.Web/serverFarms

    Subnet Delegation

  3. Click Save

Creating the API Manager

  1. Navigate to API Management
  2. Enter Name
    • api.contoso
  3. Select the Resource Group created above
    • rg-contoso-api-dev-eastus
  4. Select Location (Region)
    • East US
  5. Pricing Tier
    • Select Developer/Premium (internal VNET mode requires one of these tiers)
  6. Click Create
  7. Once created let's join it to the VNET (vnet-contoso-dev-eastus)
    • Navigate to the new APIM resource
    • Locate Virtual Network

      Virtual Network

    • Select Internal
    • Select the Virtual Network created above
    • Select Subnet snet-contoso-pvt-apim-dev-eastus
    • Click Apply

Creating the App Service Plan

  1. Navigate to App Service Plan
  2. Select the Resource Group created above
    • rg-contoso-api-dev-eastus
  3. Enter the name
    • asp.contoso.api-dev-eastus
  4. Select the OS you require for your API
  5. Select the Region
    • East US
  6. Select your required pricing tier
  7. Click Review + Create
  8. Deploy your API to an App Service under the newly created ASP above
    • Add the App service to the VNET (vnet-contoso-dev-eastus)
    • Select subnet snet-contoso-pvt-asp-dev-eastus

Setting up the JumpBox

  1. Navigate to the API Manager resource created above
    • Select Overview
      • Copy the private Virtual IP (VIP). You will need this below
  2. Navigate to the Virtual Machine resource created above
  3. Select Connect under Settings
    VM Connect
  4. Select RDP and click Download RDP File
  5. Log in to the VM with the username and password you created while setting up the VM above
  6. In Windows Explorer

    • Open folder C:\Windows\System32\drivers\etc\
    • Open the hosts file with notepad.exe
    • Add the following to the bottom of the file
    <APIM private IP> api.contoso.azure-api.net
    <APIM private IP> api.contoso.portal.azure-api.net
    <APIM private IP> api.contoso.developer.azure-api.net
    <APIM private IP> api.contoso.management.azure-api.net
    <APIM private IP> api.contoso.scm.azure-api.net
    
  7. Test the connection by browsing to https://api.contoso.portal.azure-api.net/ from the jumpbox
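
If you would rather script the hosts file edit from step 6, something like this works from an elevated PowerShell prompt on the jumpbox (a sketch; it assumes the APIM instance is named api.contoso as above):

# Append one hosts entry per APIM endpoint, all pointing at the private VIP
$apimIp = "<APIM private IP>"
"", ".portal", ".developer", ".management", ".scm" | ForEach-Object {
    Add-Content -Path "C:\Windows\System32\drivers\etc\hosts" -Value "$apimIp api.contoso$($_).azure-api.net"
}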

Setting up Application Gateway

  1. Navigate to Application Gateway
  2. Select the Resource Group created above
    • rg-contoso-api-dev-eastus
  3. Enter the gateway name
    • agw-contoso-api-dev-eastus
  4. Enter region
    • East US
  5. Select the tier based on your needs. If possible, choose one of the V2 tiers
  6. Under Configure Virtual Network
    • Select the VNET created above (vnet-contoso-dev-eastus)
    • Select the subnet
      • snet-contoso-gw-dev-eastus
  7. Click Frontends
    1. Select Public for address type
      Public IP Address Type
    2. Create a new public IP address
    • ip-contoso-gw-dev-eastus
  8. Click Backends
    1. Select Add a backend pool
      1. Enter name
        1. backend-asp-api-dev-eastus
      2. Add Target
        1. Target type: IP Address or FQDN
        2. Target: api.contoso.azure-api.net
  9. Click Configuration
    1. Select Add a routing rule
    • Enter Name
      • rule-https-backend-asp-api-dev-eastus
    • Under the Listeners Tab
      • Enter Listener Name
        • https-listener
      • Enter Frontend IP
        • Select the IP created from the earlier step
      • Protocol
        • Select HTTPS
      • Http Settings
        1. Let's get the certificate we need. For this we can use a self-signed certificate to secure the incoming requests
        • Open a PowerShell command prompt

          New-SelfSignedCertificate -CertStoreLocation cert:\localmachine\my -DnsName api.contoso.com
          $pwd = ConvertTo-SecureString -String 'Password$' -Force -AsPlainText
          Export-PfxCertificate -Cert cert:\localMachine\my\<COPY FROM OUTPUT ABOVE> -FilePath c:\api-contoso-gw-cert.pfx -Password $pwd
        1. Upload the created certificate
        2. Enter a cert name
          • self-cert-contoso-api-gw
        3. Enter the password used from PowerShell
        4. Click Add
      • Under the Backend targets
        • Select Target type Backend pool
        • Select the backend pool created above
        • Select Add new for HTTP setting
          • Enter HTTP setting name
            • apim-contoso-https-dev-eastus
          • Backend protocol
            • HTTPS
          • Use well known CA certificate
            • No
          • Getting the .cer to upload
          1. Remote into the Jumpbox
          2. Open the browser and navigate to
          • https://api.contoso.portal.azure-api.net/
          • Click on the secure connection icon in the browser. Select Certificate
            Certificate
          • Click details tab and Copy to File
            Certificate Details

            • In the export wizard, the exported file format should be a Base-64 encoded X.509 (.CER) file
            • Enter a file name
              • api-contoso-dev-eastus.cer
            • Copy this file from the Jumpbox to the machine where you are creating the Application Gateway above
          • Override with new host name
            • Yes
          • Host name override
            • Override with specific domain name
            • api.contoso.azure-api.net
          • Click Add
                Add HTTP Setting
        • Click Add
        1. Click Review + Create
        2. Configuring the custom health probe
        • Go to the Application Gateway resource created above (agw-contoso-api-dev-eastus)
      • Select Health probes
        Health probes
      • A custom probe should have been created but it needs some tweaking to get it to validate against API Manager
      • Host
        • api.contoso.azure-api.net
      • Pick host name from backend HTTP settings
        • No
      • Path
        • /status-0123456789abcdef
        • This is APIM's built-in static health check endpoint, available on any instance
      • Use probe matching conditions
        • Yes
      • Http status code match
        • 200-399
          Health probe settings
        1. Verify Application Gateway and Backend pool can connect
        • Select Backend health
          Backend Health
      • If everything is configured correctly you should get a healthy status
        Healthy Status

Testing An API Call

To test that everything is connected, run the following cURL command (--insecure is needed because the gateway listener uses the self-signed certificate created earlier)


curl --insecure --location --request GET 'https://<APPLICATION GATEWAY PUBLIC IP>/echo/resource?param1=sample' \
  --header 'Ocp-Apim-Subscription-Key: <YOUR APIM SUBSCRIPTION KEY>'

Final Thoughts

Hopefully this has helped get everything configured and up and running.

Resources

https://docs.microsoft.com/en-us/azure/api-management/api-management-using-with-internal-vnet

https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-integrate-internal-vnet-appgateway

https://techcommunity.microsoft.com/t5/azure-paas-developer-blog/integrating-api-management-with-app-gateway-v2/ba-p/1241650

https://docs.microsoft.com/en-us/azure/application-gateway/certificates-for-backend-authentication#export-trusted-root-certificate-for-v2-sku

https://azure.microsoft.com/en-us/updates/azure-application-gateway-standardv2-wafv2-skus-generally-available/

http://thewindowsupdate.com/2020/03/20/integrating-api-management-with-app-gateway/

https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-mutual-certificates#feedback

https://fabriciosanchez-en.azurewebsites.net/protecting-apis-with-api-management-and-application-gateway/

11 Jul

Custom Bulk Variables in Postman

Custom variables in Postman using Github

Summary

We created this implementation to address some of the shortcomings in Postman variable management. The first issue was how to mass update variables within Postman after the removal of the bulk edit feature, while not having to deal with the massive custom JSON object that is the data backing for the current Postman bulk edit modal. The second issue was how to keep our entire team's variables up to date and in sync for values that were required for API testing but that changed over time. Using team templates is not user friendly, and requires multiple new imports to re-sync variables, which also forces you to lose any one-off variable changes made.

This implementation provides the following advantages:

  1. Change tracking
  2. Edit history
  3. Real time updating of shared variables

Setup Summary

  1. Create a new GitHub repository for storing the variable files
  2. Add a new post request to your Postman collection for base-lining variables
  3. Add the pre-request script and test script code to that baseline request

GitHub files and structure

  1. Create a repository named Postman-Variables
  2. Set up the following folder structure (see the layout sketch after this list)

  3. Create JSON files in each folder that should be applied to Postman when base-lining for each given environment.
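
Reconstructed from the raw GitHub URLs used later in this post, the layout looks like this (one JSON file per environment in each folder):

Postman-Variables/
├── Global/
│   └── <environment>.json
├── Shared/
│   └── <environment>.json
└── Users/
    └── <username>/
        └── <environment>.json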

Variable Scopes

  1. Global: Contains variables that are required per the environment and rarely change.
  2. Shared: Contains all variables that are not considered global. When a new variable is created by a developer it should be added here for each environment file. This will make it available for all Postman Users.
  3. User: This contains static user specific overrides. These will overwrite the shared variable of the same key if it exists. Every user will have a folder containing their specific overrides.

Note: Global variables are overridden by shared variables, which are overridden by User variables.

File structure

In each of the folders add your variables in this format

[
  {
    "key": "<VARNAME>",
    "value": "<VALUE>"
  }
]
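
For example, a shared file for the Dev environment might contain (the variable names here are hypothetical):

[
  {
    "key": "apiBaseUrl",
    "value": "https://dev-api.example.com"
  },
  {
    "key": "defaultTimeoutMs",
    "value": "5000"
  }
]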

Setting up access

  1. In GitHub, under your account, go to Settings
  2. Select Developer Settings
  3. Create a new OAuth Application
    • Ensure public repo access is checked
  4. Save the ClientId and ClientSecret created for use later in Postman

Setup Postman

Update Postman Settings

  • Ensure Automatically persist variable values is set to ON

Create the baseline environment templates

Create the following environments

  1. LocalVariables
  2. DevVariables
  3. QAVariables

Add the following variables to each template within postman

  • environment: this value will match the environment JSON file names in the GitHub repository structure you created above
  • username: this is the username of the folder in the github repository that contains the specific variables to retrieve

NOTE: if working in a Postman team, import each of those environment templates into your workspace as a duplicate

Setup Collection Global Variables

In your collection we need to add a few global variables that will allow access across environments to gather our GitHub-stored variables

Add the following global variables

  • baseGithubUrl – base URL for accessing the files in GitHub
  • globalVariablesPath – Relative path in the repo to the global variable files
  • sharedVariablesPath – Relative path in the repo to the shared variable files
  • githubClientId – Client id generated above from GitHub
  • githubClientSecret – Client secret generated above from GitHub
  • githubUser – Github account housing the variables repo
  • githubRepoName – Name of the repo created to house the variables
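
As a sketch, typical values would look something like the following. The raw.githubusercontent.com base and the {0} environment placeholder in the paths are inferred from the pre-request script shown later; adjust them to your own setup.

baseGithubUrl       = https://raw.githubusercontent.com/
globalVariablesPath = Global/{0}.json
sharedVariablesPath = Shared/{0}.json
githubUser          = <your GitHub account>
githubRepoName      = Postman-Variables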

Validating access

To ensure we have access to the GitHub repository based on the variables above, we can test with the following Postman GET request URL

{{baseGithubUrl}}{{githubUser}}/{{githubRepoName}}/master/Global/{{environment}}.json?client_id={{githubClientId}}&client_secret={{githubClientSecret}}

If all is connected correctly, Postman should return a 200 response containing the raw contents of the matching variable file.
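
With the hypothetical example file from above, that body would be:

[
  {
    "key": "apiBaseUrl",
    "value": "https://dev-api.example.com"
  },
  {
    "key": "defaultTimeoutMs",
    "value": "5000"
  }
]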

Setup the baseline variable request

  1. Create a new folder in your collection named Baseline Environment Variables
  2. Create a new GET request
    • Set the url to
{{baseGithubUrl}}{{githubUser}}/{{githubRepoName}}/master/Users/{{username}}/{{environment}}.json?client_id={{githubClientId}}&client_secret={{githubClientSecret}}
  3. Edit the folder and add the following to the Pre-request Scripts
    var baseGithubUrl = pm.variables.get("baseGithubUrl");
    var githubUser = pm.variables.get("githubUser");
    var githubRepoName = pm.variables.get("githubRepoName");
    var githubClientId = pm.variables.get("githubClientId");
    var githubClientSecret = pm.variables.get("githubClientSecret");
    var authQueryString = "?client_id=" + githubClientId + "&client_secret=" + githubClientSecret;
    var baseUrl = baseGithubUrl + githubUser+"/" + githubRepoName + "/master/";
    
    var globalVariablesUrl = pm.variables.get("globalVariablesPath").replace("{0}", pm.variables.get("environment"));
    var sharedVariablesUrl = pm.variables.get("sharedVariablesPath").replace("{0}", pm.variables.get("environment"));
    
    pm.sendRequest({
        url:  baseUrl + globalVariablesUrl,
        method: 'GET',
    }, function (err, response) {
        if(err){
            console.error("Pre-Request Error", err, response.text());
        }
        var globalBaseline = response.json();
        globalBaseline.forEach(function(item) {
            pm.environment.set(item.key, item.value);
        });
        //now get any shared overrides to the global requests
    
        pm.sendRequest({
            url:  baseUrl + sharedVariablesUrl,
            method: 'GET',
        }, function (err, response) {
            if(err){
                console.error("Pre-Request Error", err, response.text());
            }
            var sharedVariables = response.json();
            sharedVariables.forEach(function(item) {
                pm.environment.set(item.key, item.value);
            });
        });
    });
    
  4. Update the Tests tab in the request with
    var userOverrides = pm.response.json();
    
    userOverrides.forEach(function(item) {
        pm.environment.set(item.key, item.value);
    });
    

The final request should look like

If you now check the variables list, it will show all the imported variables from GLOBAL, SHARED and USER

How to use

To take advantage of this system we implemented the following workflow

As developers, we would clone the repo locally and update all profiles and the Global and Shared files as necessary when API changes happened that would affect current settings. Then everyone only had to Get Latest to pick up those updates and use them locally.

QA testers were given rights to the GitHub repo and allowed to update the files in their userName folder, setting whatever baseline variables they wanted for their testing purposes and committing those changes.

As API changes that required Postman value updates moved through the environments, we would all, as a team, "Baseline" for the given environment, which would update the GLOBAL and SHARED values for everyone. This has really helped with syncing new values for new APIs and updating things like clientIds when APIs change. It has also helped us walk QA through issues: we can easily pull their username files down locally, giving us everything QA has in our local environment, so we can see what values may be off and correct them.

The great thing is this does not affect how Postman already works with variables, so we can still customize per request, as in scenario testing, just like we regularly do. We ONLY baseline when something on a wider scale has changed in an API as part of our regular planned sprint work. The real power comes with the username custom variables: any personal values of mine are tracked in source control that only our team has access to, with full history, so rollback is easily accomplished.

References

All screenshots were taken running localVariables as the environment

Want a head start? Grab the source from my GitHub

23 Jun

VSCode and TSLint tasks

I was trying to get full project linting to happen using the TSLint VSCode extension, but it currently only lints one file at a time, for the files that are open. To TSLint the whole project you need to set up a VSCode Task.

Installing Required Packages

npm install -g tslint typescript

Setting Up the Task

  1. In VSCode hit F1 and type task
  2. Select “Configure Task Runner”
  3. Select Typescript

In the tasks.json file add the following

  {
    "version": "0.1.0",
    "command": "tslint",
    "isShellCommand": true,
    "echoCommand": true,
    "args": [
      "--format prose",
      "--project ${workspaceRoot}/tsconfig.json",
      "--type-check",
      "${workspaceRoot}/src/**/*.ts"
    ],
    "showOutput": "silent",
    "isBackground": true,
    "problemMatcher": "$tslint4"
  }

Run the task in VSCode; the output will show in the Output tab and in the Problems tab as well.
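
If you also want the same lint run outside of VSCode, an equivalent npm script would look something like this (a sketch; exact flag support depends on your TSLint version):

"scripts": {
  "lint": "tslint --format prose --project tsconfig.json --type-check \"src/**/*.ts\""
}

Running npm run lint then produces the same prose-formatted output in any terminal.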

Happy Linting.

5 May

Watching for changes with Reactive Forms in Angular4

Problem

On a recent project working with Angular 4 and reactive forms, a need came up to allow child components (which were driven off their own child form groups) to be able to detect value changes in other child components. From these changes we needed to apply rules and detect the validity of the values.

Now I know what you're thinking

So your first response is probably "duh, that's what valueChanges subscriptions are for," and you are totally correct. What started to happen, though, was a huge duplication of code across many components for not only detecting changes (valueChanges) but also detecting the validity of those changes (statusChanges). What we needed was a consolidated way to detect changes and validity across the entire parent form group and be able to deliver those changes to any sub-components.

Enter the watcher service

With the help of RxJS Observables we were able to provide a single watch point that any component in the entire application could subscribe to in order to get changes from any other component. This service delivers not only value changes but validity status changes as well. Goodbye code duplication, I'm staying dry in this code storm.

Service Code

import {Injectable} from '@angular/core';
import {AbstractControl, FormControl, FormGroup} from '@angular/forms';
import {BehaviorSubject} from 'rxjs/BehaviorSubject';
import {Observable} from 'rxjs/Observable';
import {Subscription} from "rxjs/Rx";

declare let _;

export interface FormGroupChange {
  path: string;
  name: string;
  control: FormControl;
}

/*
 Allows you to subscribe to value and status changes for fields in a FormGroup. Events are debounced to avoid too many
 simultaneous calls, and are emitted even when the validity of a field updates because of dependencies on other fields.

 Usage:

 let subscription = this.watchFormGroupService.watch(this.formGroup, ['personalInformation.firstName', 'personalInformation.lastName'])
   .subscribe((data: FormGroupChange) => {
      // ...
   });

 // Don't forget to unsubscribe! (eg. in ngOnDestroy)
 subscription.unsubscribe();
 */

@Injectable()
export class WatchFormGroupService {

  private MAX_CHECK_COUNT = 5;

    public watch(formGroup: FormGroup, paths: string[], debounce = 400): Observable<FormGroupChange> {
    return Observable.create(observer => {
      let internalSubs = [];
      paths.map(path => {
        let control = formGroup.root.get(path);
        if(!control) {
          let checkCount = 0;
          // let's give Angular some time to finalize the form groups and make them available. This happens due to race
          // conditions between when different components load and when watchers are set up from other components.
          let checkAgainInterval = setInterval(() => {
            let control = formGroup.root.get(path);
            if(control) {
              clearInterval(checkAgainInterval);
              let eventData: FormGroupChange = {
                path: path,
                name: _.last(path.split('.')),
                control: control
              };
              let subject = new BehaviorSubject(eventData);
              internalSubs.push(Observable.merge(...[control.valueChanges, control.statusChanges, subject]).debounceTime(debounce).map(data => {
                observer.next(eventData);
              }).subscribe());
            }
            if(checkCount >= this.MAX_CHECK_COUNT) {
              console.warn("NO WATCHER PATH MATCH", path);
              clearInterval(checkAgainInterval);
            }
            checkCount++;
          }, 1000);
        } else {
          let eventData: FormGroupChange = {
            path: path,
            name: _.last(path.split('.')),
            control: control
          };
          let subject = new BehaviorSubject(eventData);
          internalSubs.push(Observable.merge(...[control.valueChanges, control.statusChanges, subject]).debounceTime(debounce).map(data => {
            observer.next(eventData);
          }).subscribe());
        }
      });

      // Provide a way of canceling and disposing the interval resource
      return function unsubscribe() {
        _.forEach(internalSubs, sub => {
          sub.unsubscribe();
        });
      };
    });
  }

}

Now one caveat to this was we needed to know which child component and form control to watch. With no other real way around it, we agreed that using dot notation to a component and field as a string, while 'hard coded', was the best balance between ease of use and pseudo-coupling of components. Coupling in this case, I feel, is loosely said, because if the component doesn't exist in the view but is subscribed to, nothing will be broadcast anyway.

So how do we use this thing

In any of your components you simply inject the WatchFormGroupService and watch on an array of fields.

NOTE: registration must happen in AfterViewInit or later, so all the form groups have time to build and set up from the OnInit event. This is important because when listening across all components we do not know when the form group will be ready from initialization.

What is great about the WatchFormGroupService is that it can work with many first-class form group citizens (meaning you can watch different unrelated form groups from the same application).

 ngAfterViewInit() {
    this.watcherSubscription = this.watchFormGroupService.watch(this.formGroup, [
      'personalInformation.mailingAddress.address1',
      'personalInformation.mailingAddress.address2',
      'personalInformation.mailingAddress.address3',
      'personalInformation.mailingAddress.zipcode',
      'personalInformation.mailingAddress.state',
      'personalInformation.mailingAddress.city'
    ]).subscribe(data => {
      // only copy the value over when the source control reports it is valid
      if (data.control.valid) {
        this.formData[data.name] = data.control.value;
        this.formGroup.get("childInformation")
          .get('my.address').get(data.name)
          .patchValue(data.control.value);
      }
    });
  }

In this example we are listening for address changes in the personalInformation component and then updating the "my" component with the new values, but only if the value coming in is valid. While there would be more logic around when to update the value, this code at least shows what is possible with the WatchFormGroupService.

Cleanup

One thing to call out, as with using any RxJS Observable: make sure you clean up your subscriptions. In OnDestroy of the watching component, unsubscribe from the watcher service

  ngOnDestroy() {
    this.watcherSubscription.unsubscribe();
  }

I want to give a shout out to Alex Brombal who I collaborated with on the concept. He was the lucky one who got to write the WatchFormGroupService.

Check this out in action on Plunkr

UPDATE

Since I first published this, a couple of changes have been made to the WatchFormGroupService. We needed a way for subscribers to get values on the initial subscribe from the service. This covers the case where the formGroup was loaded from database data and a valueChange event has not fired yet. To accomplish this we added a BehaviorSubject and loaded it with the initial value from the formControl.

  public watch(formGroup: FormGroup, paths: string[], debounce = 400): Observable<FormGroupChange> {
    return Observable.merge(...paths.map(path => {

      let control = formGroup.root.get(path);
      if (!control) {
        console.warn("NO WATCHER PATH MATCH", path);
        return Observable.empty();
      }
      let eventData = {
        path: path,
        name: _.last(path.split('.')),
        control: control
      };
      let subject = new BehaviorSubject(eventData);
      return Observable.merge(...[control.valueChanges, control.statusChanges, subject]).debounceTime(debounce).map(data => eventData);
    }));
  }

1 Apr

Using ngDragula with ngPrime from Angular 4

Recently on a project I had the requirement to provide table row reordering. The challenge was getting access to the table rows that were created dynamically at runtime within an ngPrime data table. To provide the drag and drop support I turned to ngDragula. This is a great plugin that provides plenty of events and HTML support. One thing it currently does not provide is the ability to override the container it uses for setting up the drag and drop; it can currently only use the element that the directive was placed on. As you can see, with the use of 3rd party components, placing the directive in the proper place can be a challenge.

To address these shortcomings in my project I created my own version of the ngDragula directive. Let's walk through what you need to do to add drag and drop support to your ngPrime data tables.

Creating your own ngDragula Directive

To add support for integration with ngPrime, a couple of things need to happen. First, we need to override the container that dragula will attach to. For drag and drop with tables, the container needs to be the closest parent of the items you want to drag; in our case that ends up being the tbody element. Unfortunately ngPrime data tables do not allow access to this element, so we have no way to place the dragula directive in the correct location. Secondly, we need to allow for delayed binding to the data table. Since we provide a collection of rows to the ngPrime data table, the tbody tag will not be available in OnInit of the component. To make this work we need to bind the ngDragula directive after the view is available from Angular. Luckily Angular provides us with the AfterViewInit lifecycle hook. Let's take a look at the code.

The first thing we need to do is create our own version of the ngDragula directive. Ensure you give it a unique name that won't clash with existing libs. Also add AfterViewInit to the implements clause of the class

import { Directive, OnChanges, AfterViewInit, OnInit, Input, ElementRef, SimpleChange } from '@angular/core';
import { DragulaService, dragula } from 'ng2-dragula';

@Directive({ selector: '[primeDragula]' })
export class PrimeDragulaDirective implements OnChanges, OnInit, AfterViewInit { }

Next we change the container to protected so we can provide extension later

   protected container: any;

Now we can implement the Angular events OnInit and AfterViewInit. In OnInit we wire up the options and set the initial container element. New options that can be provided to the directive are 'initAfterView' and 'childContainerSelector'. Here we check to see if late binding is needed and, if not, initialize the directive like usual. If late binding is needed, AfterViewInit handles that check.

ngOnInit(){
    this.options = Object.assign({}, this.dragulaOptions);
    this.container = this.el.nativeElement;

    if(!this.options.initAfterView){
      this.initialize();
    }
  }

  ngAfterViewInit() {
    if(this.options.initAfterView){
      this.initialize();
    }
  }

Let's move the initialization code to its own method for reuse. Here the only new code is the 'childContainerSelector' check; this is what gives us the ability to use a different container than the one the ngDragula directive was placed on. Notice how we set the mirrorContainer as well. Since the container is a sub-element of the parent, we want to ensure the mirror object (the drag visual that shows the movement) is positioned relative to the correct parent.

NOTE: an upgrade to this directive would be to use another property to dictate overriding the default mirrorContainer = 'document.body'

 protected initialize(){    
    if(this.options.childContainerSelector){
        //find the element starting at the directive element and search down
        this.container = this.el.nativeElement.querySelector(this.options.childContainerSelector);
        this.options.mirrorContainer = this.container;
      }

    let bag = this.dragulaService.find(this.primeDragula);
    let checkModel = () => {
      if (this.dragulaModel) {
        if (this.drake.models) {
          this.drake.models.push(this.dragulaModel);
        } else {
          this.drake.models = [this.dragulaModel];
        }
      }
    };
    if (bag) {
      this.drake = bag.drake;
      checkModel();
      this.drake.containers.push(this.container);
    } else {
      this.drake = dragula([this.container], this.options);
      checkModel();
      this.dragulaService.add(this.primeDragula, this.drake);
    }
  }

Finally, add in the pre-existing OnChanges method from the ngDragula directive

public ngOnChanges(changes: { dragulaModel?: SimpleChange }): void {
    if (changes && changes.dragulaModel) {
      if (this.drake) {
        if (this.drake.models) {
          let modelIndex = this.drake.models.indexOf(changes.dragulaModel.previousValue);
          this.drake.models.splice(modelIndex, 1, changes.dragulaModel.currentValue);
        } else {
          this.drake.models = [changes.dragulaModel.currentValue];
        }
      }
    }
  }

Using the new directive

Now that the directive is created, we are ready to implement it.

In our component template, where we define our ngPrime data table, let's add the dragula directive.

 <p-dataTable [value]="rows" [primeDragula]="bag" [dragulaModel]="rows" 
  [dragulaOptions]="{ childContainerSelector: 'tbody', initAfterView: true }">
  <p-column header="Move">
    <ng-template pTemplate="body" let-rowData="rowData">
      <i class="fa fa-bars"></i>
    </ng-template>
  </p-column>
  <p-column field="name" header="Name">          
  </p-column>      
</p-dataTable>

That is all there is to it. Pretty simple changes that allow a greater user experience.

Gotchas:

  • Don't forget to add the dragula JavaScript package to your package.json and to the Angular CLI styles and scripts sections. This is a requirement for ngDragula to work as expected (see the sample entries after this list).
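
For example, with the CLI of that era the .angular-cli.json entries would look something like this (a sketch; the paths assume the default CLI layout):

"styles": [
  "../node_modules/dragula/dist/dragula.min.css",
  "styles.css"
],
"scripts": [
  "../node_modules/dragula/dist/dragula.min.js"
]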

Versions used

"@angular/animations": "4.0.0",
"@angular/common": "4.0.0",
"@angular/compiler": "4.0.0",
"@angular/core": "4.0.0",
"@angular/forms": "4.0.0",
"@angular/http": "4.0.0",
"@angular/platform-browser": "4.0.0",
"@angular/platform-browser-dynamic": "4.0.0",
"@angular/router": "4.0.0",
"dragula": "^3.7.2",
"lodash": "4.17.4",
"ng2-dragula": "^1.3.0",
"primeng": "2.0.5",
"rxjs": "5.1.0",
"zone.js": "0.8.4"

Hopefully you found this helpful. You can see a working example here on Plunkr. Also these changes have been submitted for review to the guys over at valor-software. With a little luck I can just use the official version of ngDragula one day!

10 Jul

Sticky Validation – Angular 1.5.7

Why was this created

Sticky validation is meant to fill the gap in Angular when you only want to show errors on submit, and leave those errors displayed even after the user has started changing the input that had the errors.

Typically Angular operates on the concept of dynamic validation. This validation evaluates the model in real time and adjusts the error messages accordingly based on user input. Once the user has typed something else, the original error that caused the problem is removed automatically by Angular (provided they corrected that original error), and the changed model value has either become valid or is possibly still invalid with another validation error occurring.

With the current implementation of Angular 1.5.7 there was no way to keep the original error and invalid state once the user fixed the problem. From a UX perspective consistency is vital, and since we want to operate in the context of only showing errors on submit, the existing errors should stay visible until a submit is again performed.

What is sticky validation

Sticky validation allows you to work with the pre-existing ng-messages/ng-message directives provided by Angular. It injects new properties on the form and $error objects so a constant state is available when displaying error messages to the user. Sticky validation also operates on the premise that you want to implement "submit" only validation (although nothing prohibits the sticky state properties from being used with regular validation).

Sticky validation will let you preserve the original error message displayed and gives you a consistent property indicating that "submits" are happening (currently in Angular, if you try to use $submitted and $invalid together to show the messages, the messages will hide once $invalid becomes false).

How to implement

Implementing can be done in 3 easy steps, once the directive has been added to your project and registered with your Angular app.

  1. Replace any existing ng-submit with se-submit

    <form name="cntrl.form" se-submit="cntrl.submit()" novalidate>
    

  2. To ensure ng-messages only display when the form is submitted and invalid. Add the following in the ng-if.

    $fieldInvalid is the custom property added to each element that is kept in sync with Angular's $invalid property

    <div ng-messages="cntrl.form.badgenumber.$error" ng-if="cntrl.form.$submitted && cntrl.form.badgenumber.$fieldInvalid">
    
        all the ng-message directives
    
    </div>        
    

  3. In your ng-message elements switch the current validation name “required” (Angular implementation) with “stickyrequired”

    <div ng-message="stickyrequired" class="has-error">
    
        Badge Number must be provided
    
    </div>        
    

Pitfalls

Currently this directive only works for a single-level form. If your form has child forms nested within, this will NOT inject the custom properties. I have not come across many cases where usage of sub-forms is needed, but there are always outliers. If child form support is needed, please feel free to submit a pull request for review.

Example Walkthrough

  1. User comes to page and sees the form to fill out


  2. User types invalid characters into the form field and clicks “Continue”


    The error is displayed in standard Angular fashion

  3. With use of sticky validation, once the field is cleared (which sets its state back to valid), the error message and error highlighting remain intact


  4. User corrects the issue and clicks “Continue” again. That validation issue is then re-evaluated and cleared accordingly.


Final Words

I hope this helps in overcoming the pitfalls and gaps with Angular validation when you do not want dynamic validation on all the time.
One of the great things about this approach is it still allows you to use Angular's built-in validation for dynamic error messaging. Want the source code? Visit my GitHub

For more information view

ng-messages

angular forms

9 Jul

Scroll watcher directive for Angular 1.5.7

Here is a directive I came up with to help keep track of page scroll position and when scrolling has started and stopped. I had a need for this when trying to hide page content while the user was scrolling up/down a page, and then re-showing the content once the scrolling had stopped. Currently this is only set up to work at the document level, but an easy modification could be made to allow a new property to drive which scroll area is being monitored. I hope this helps others in case they need a way to tell if page scrolling has started or stopped.

How to Implement

HTML
Simply add the directive to the page you want to monitor scrolling on. Next, add the scroll-callback function you want the directive to call when scrolling starts and stops

<div page-scroll-watcher scroll-callback="cntl.scrollStop($event, isEndEvent, isScrollingEvent)">

Callback Function
Note: sample code is in ES6 format. This is an excerpt from an Angular controller

 //$event is the standard scroll event from the browser. This contains the X,Y information
 //isEndEvent signals when scrolling has stopped
 //isScrollingEvent signals when scrolling has started
 scrollStop($event, isEndEvent, isScrollingEvent) {
    if (isEndEvent) {
      this.showBottomBar = true;
      return;
    }
    if(isScrollingEvent)
    {
      this.showBottomBar = false;
      return;
    }
  }

Now that we have the implementation details, let's get to the good stuff: the code that makes this all work
Page Scroll Directive
Note: code is in ES6 format


//this would just need to be registered with your Angular app
import angular from "angular";
import * as _ from "lodash";

const directivesModule = angular.module("MyDirectives", [])
  .directive("pageScrollWatcher", ["$document", pageScrollWatcher]);

function pageScrollWatcher($document) {
  return {
    restrict: "A",
    scope: {
      scrollCallback: "&"
    },
    link: function (scope) {
      //here could be updated to use the element this directive is attached to if needed to watch a scrollable div container
      const el = angular.element($document); 

      //here we delay evaluating the scrolling events until they have stopped
      const dbnce = _.debounce(function (e) {
        //send event that scrolling stopped
        scope.$apply(function () {
          //execute the provided callback
          scope.scrollCallback({ $event: e, isEndEvent: true, isScrollingEvent: false });
        });

        //register first scroll interceptor. Since scrolling has stopped we now need to register a start scrolling event binding
        el.bind("scroll", firstScrollFunc);

      }, 200);

      const firstScrollFunc = function (e) {
        //so we have detected the scrolling needs to start. Since this is a one time event between starts/stops we need to
        //unregister the start scrolling event
        el.unbind("scroll", firstScrollFunc);
        scope.$apply(function () {
          //execute the provided callback
          scope.scrollCallback({ $event: e, isEndEvent: false, isScrollingEvent: true });
          //We do this in case Angular removes DOM parts causing the scroll bar to disappear or change.
          //we need to trigger the end event again 
          dbnce(e);
        });
      };

      //on first load of directive register the start and stop events
      el.bind("scroll", firstScrollFunc);
      el.bind("scroll", dbnce);

      scope.$on("$destroy", function handleDestroyEvent() {
        //when switching pages remove event
        el.unbind("scroll", dbnce);
        el.unbind("scroll", firstScrollFunc);
      });

    }
  };
}

Want the source? Visit my GitHub

5 Mar

Jetty Jersey Guice and Azure Oh My!!

This project serves as an example of how to create a Java REST API that uses dependency injection and is able to be deployed to Azure from Eclipse. For the source code visit my GitHub

Required Installs

Frameworks and Plugins

I tried newer versions of the above frameworks, and kept running into errors and issues. This ended up being the combination I found that got everything working correctly.

These can all be installed from the pom.xml

Project Configuration

  • Need to set the JDK as the default definition.
  • To do this in Eclipse go to Window > Preferences > Java > Installed JREs
  • Click Add and locate the directory you installed the JDK and select that folder.
  • Check the checkbox for the JDK
  • Click OK
  • Create a new Azure Deployment project
  • Select “New Azure Deployment Project” icon
  • JDK tab of wizard
    Wizard Step 1
  • Server tab of wizard
    Wizard Step 1
  • Applications tab of wizard
  • Add a new application and point it to your current workspace
    Wizard Step 1
  • Click Finish

This will allow you to now deploy to Azure as a classic cloud service project. All the necessary files are created for you, and when you “Build Cloud Project For Azure” the Azure package (.cspkg) is created in the deploy folder, which you can then deploy manually. For more information visit https://azure.microsoft.com/en-us/documentation/articles/azure-toolkit-for-eclipse/

Points of Interest

  • ServletContextListener: This file is important because it is what allows the Azure worker role to spin up the Java REST API using dependency injection and serve up requests/responses (a sketch of this wiring appears after this list)
  • Main: This file handles spinning up a local instance of Jetty Server so you can debug the project.
  • RegistrationsModule: This file handles all dependency injection registrations for the application
  • ApiServletModule: This is what handles the main wiring up of the Guice dependency injection container. This file is where any new resources/controller registrations would go. Another option would be to use dynamic resource loading for all classes in project, but I tend to keep things explicit for registrations.
  • Web.xml: This file needs to contain the defined filter and listener nodes as seen in the sample. This file is what is used by the Azure Jetty instance to kick off the REST API.
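
As a reference point, here is a minimal sketch of that ServletContextListener wiring. The class name and package layout are assumptions; the module names come from this post:

import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.servlet.GuiceServletContextListener;

public class ApiServletContextListener extends GuiceServletContextListener {
    @Override
    protected Injector getInjector() {
        // Build the Guice container from the app's DI registrations and servlet bindings
        return Guice.createInjector(new RegistrationsModule(), new ApiServletModule());
    }
}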

Summary

I hope this helps others overcome the challenges faced when trying to find a fully working example of using all these frameworks, components and environments together.

7 Jan

Bookmarklets

Creating a bookmarklet

    • In Google Chrome:
        1. Click on the three-dot menu icon in the top-right corner.
        2. Hover over “Bookmarks” and select “Bookmark manager.”
        3. Right-click on a folder (or the bookmarks bar) where you want to save the bookmarklet.
        4. Choose “Add bookmark.”
        5. In the “Name” field, give your bookmarklet a descriptive name.
        6. In the “URL” or “Address” field, paste the JavaScript code for your bookmarklet. Make sure it starts with `javascript:`.
        7. Click “Save.”
    • In Mozilla Firefox:
        1. Click on the three-line menu icon in the top-right corner.
        2. Choose “Bookmarks” and then “Show All Bookmarks” to open the Library.
        3. Right-click on a folder (or the bookmarks toolbar) where you want to save the bookmarklet.
        4. Select “New Bookmark.”
        5. Give your bookmarklet a name in the “Name” field.
        6. In the “Location” field, paste the JavaScript code for your bookmarklet, starting with `javascript:`.
        7. Click “Add.”
    • In Microsoft Edge (Chromium-based):
        1. Click on the three-dot menu icon in the top-right corner.
        2. Choose “Favorites” and then “Manage favorites.”
        3. Right-click on a folder (or the favorites bar) where you want to save the bookmarklet.
        4. Select “Add a favorite.”
        5. Give your bookmarklet a name.
        6. In the “URL” field, paste the JavaScript code for your bookmarklet, starting with `javascript:`.
        7. Click “Save.”
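
Before diving into the collection below, here is a minimal example to sanity check the process; a bookmarklet URL is just an immediately-invoked function behind the javascript: scheme:

javascript:(function(){ alert('Hello from a bookmarklet!'); })()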

Display Helpers

Expand GIT Code Viewer

javascript:(function(){ document.querySelector('.blob-wrapper').style.width = 'max-content'; })()

Expand AWS S3 Column

javascript:(function(){ Object.values(document.getElementsByClassName('truncate')).forEach(function(e){ e.classList.remove('truncate');}) })()

Expand Azure DevOps Pipelines Release Names

javascript:(function(){ Object.values(document.getElementsByClassName('overflow-ellipsis')).forEach(function(e){ $(e).css('white-space', 'break-spaces'); }) })();

Fix SonarLint Display to show Rule ID

javascript:(function(){document.querySelectorAll('.coding-rule').forEach((ruleTitle)=>{const rule=ruleTitle.getAttribute('data-rule');if(rule){const code=rule.split(":")[1];const anchor=ruleTitle.querySelector('a');if(anchor){const hrefValue=anchor.getAttribute('href');const text=anchor.textContent;var ruleString="Rule: "+code+" ";var b=document.createElement('b');b.innerText=ruleString;var span=document.createElement('span');span.innerText=text.replace(ruleString,"");anchor.innerText="";anchor.innerHtml="";anchor.appendChild(b);anchor.appendChild(span);}}});})();


Video Helpers

Add Skips to Rumble

javascript: (function() {    function skip(value) {        var videos = document.getElementsByTagName("video");        for (var i = 0; i < videos.length; i++) {            var video = videos[i];            video.currentTime += value;        }    }    function clean(className) {        var elementToDelete = document.getElementsByClassName(className);        for (var i = 0; i < elementToDelete.length; i++) {            var overlay = elementToDelete[i];            overlay.parentNode.removeChild(overlay);        }    }    function addOverlay(videoObj, skipAmount, align) {        var thisClass = "videoSkip_" + align;        var parentDiv = videoObj.parentNode;        if (!parentDiv) {            return;        }        var overlayDiv = document.createElement("div");        overlayDiv.classList.add(thisClass);        overlayDiv.style.position = "absolute";        overlayDiv.style.width = "80px";        overlayDiv.style.height = "100%";        overlayDiv.style.zIndex = "9999";        overlayDiv.style.backgroundColor = "rgba(0, 0, 0, 0.1)";        overlayDiv.style[align] = 0;        overlayDiv.style.bottom = "50px";        overlayDiv.title = "Skip " + skipAmount + " seconds";        parentDiv.appendChild(overlayDiv);        overlayDiv.onmouseover = function() {            overlayDiv.style.cursor = "pointer";        };        overlayDiv.addEventListener("click", function() {            skip(skipAmount);        });        parentDiv.insertBefore(overlayDiv, parentDiv.firstChild);    }    function addClickSkip(skipAmount, align) {        var videos = document.getElementsByTagName("video");        for (var i = 0; i < videos.length; i++) {            var video = videos[i];            addOverlay(video, skipAmount, align);        }    }    clean("videoSkip_left");    clean("videoSkip_right");    addClickSkip(-10, "left");    addClickSkip(10, "right");})();

To use this on a mobile device follow this guide

Add Skips to Youtube

javascript: (function() {    function skip(value) {        var videos = document.getElementsByTagName("video");        for (var i = 0; i < videos.length; i++) {            var video = videos[i];            video.currentTime += value;        }    }    function clean(className) {        var elementToDelete = document.getElementsByClassName(className);        for (var i = 0; i < elementToDelete.length; i++) {            var overlay = elementToDelete[i];            overlay.parentNode.removeChild(overlay);        }    }    function addOverlay(videoObj, skipAmount, align) {        var thisClass = "videoSkip_" + align;        var parentDiv = videoObj.parentNode;        if (!parentDiv) {            return;        }        var overlayDiv = document.createElement("div");        overlayDiv.classList.add(thisClass);        overlayDiv.style.position = "absolute";        overlayDiv.style.width = "80px";        overlayDiv.style.height = "100vh";        overlayDiv.style.zIndex = "9999";        overlayDiv.style.backgroundColor = "rgba(0, 0, 0, 0.1)";        overlayDiv.style[align] = 0;               overlayDiv.title = "Skip " + skipAmount + " seconds";        parentDiv.appendChild(overlayDiv);        overlayDiv.onmouseover = function() {            overlayDiv.style.cursor = "pointer";        };        overlayDiv.addEventListener("click", function(e) {   e.stopPropagation();        skip(skipAmount);        });        parentDiv.insertBefore(overlayDiv, parentDiv.firstChild);    }    function addClickSkip(skipAmount, align) {        var videos = document.getElementsByTagName("video");        for (var i = 0; i < videos.length; i++) {            var video = videos[i];            addOverlay(video, skipAmount, align);        }    }    clean("videoSkip_left");    clean("videoSkip_right");    addClickSkip(-10, "left");    addClickSkip(10, "right");})();

To use this on a mobile device follow this guide

VSTS/TFS Helpers

TFS Popout Editor (older version of Visual Studio Online)

javascript:(function(){if($===window.jQuery){var mainCon = $('div[rawtitle=\'Acceptance Criteria\']');mainCon.css({ position:'fixed', 'z-index':30000, top: '40px', left: '50px', height: '800px', 'background-color': '#ccc', width: '1500px'});mainCon.find('.richeditor-container').css({height: '750px'});}})()

TFS Done Editing (older version of Visual Studio Online)

javascript:(function(){if($===window.jQuery){var mainCon = $('div[rawtitle=\'Acceptance Criteria\']');mainCon.css({ position:'inherit', 'z-index':0, top: '', left: '', height: '', 'background-color': '', width: ''});mainCon.find('.richeditor-container').css({height: '250px'});}})()

Find My Items (old azure portal)

(update the href in the bookmark and replace XXX with your search filter)

javascript:(function(){$('.fx-grid-searchbox > input').val('XXX').blur();})()

Other Helpers

Dark Mode

Make any page view in Dark Mode

javascript: function addcss(css){ var head = document.getElementsByTagName('head')[0]; var s = document.createElement('style');      s.setAttribute('type', 'text/css'); s.appendChild(document.createTextNode(css));      head.appendChild(s);  }  addcss('html{filter: invert(1) hue-rotate(180deg)}');  addcss('img{filter: invert(1) hue-rotate(180deg)}');  addcss('video{filter: invert(1) hue-rotate(180deg)}')

Thanks to Sahil Malik

Display Local Storage Size (in console)

javascript:var total = 0;for(var x in localStorage) { var amount = (localStorage[x].length * 2) / 1024 / 1024; total += amount; console.log( x + '=' + amount.toFixed(2) + ' MB');}console.log( 'Total: ' + total.toFixed(2) + ' MB');

Display Window Size

javascript:(function(){var f=document,a=window,b=f.createElement('div'),c='position:fixed;top:0;left:0;color:#fff;background:#222;padding:5px 1em;font:14px sans-serif;z-index:999999',e=function(){if(a.innerWidth===undefined){b.innerText=f.documentElement.clientWidth+'x'+f.documentElement.clientHeight;}else if(f.all){b.innerText=a.innerWidth+'x'+a.innerHeight;}else{b.textContent=window.innerWidth+'x'+window.innerHeight;}};f.body.appendChild(b);if(typeof b.style.cssText!=='undefined'){b.style.cssText=c;}else{b.setAttribute('style',c);}e();if(a.addEventListener){a.addEventListener('resize',e,false);}else{if(a.attachEvent){a.attachEvent('onresize',e);}else{a.onresize=e;}}})();

Open Selected Text In Google Maps

javascript:(function(){var selectedText=window.getSelection().toString().trim();if(selectedText){var mapsUrl='https://www.google.com/maps/search/'+encodeURIComponent(selectedText);window.open(mapsUrl,'_blank');}else{alert('Please select some text first!');}})();

Auto Close Zoom On Participant Count

javascript:(function(){var retVal = prompt('Enter the number of participants to leave meeting on: ', '10'); var ivl = setInterval(function(){ var currentAmount = document.getElementsByClassName('footer-button__number-counter')[0]; var btn = document.getElementsByClassName('footer__leave-btn')[0]; if(!btn || !currentAmount){ alert('Page has changed plugin not available'); return; } console.log('Checking participant count', currentAmount.innerText); if(retVal === currentAmount.innerText){ clearInterval(ivl); console.log('Leaving meeting'); btn.click(); setTimeout(function(){ var confirmLeave = document.getElementsByClassName('leave-meeting-options__btn')[0]; if(!confirmLeave){ window.close(); } confirmLeave.click(); }, 2000); } }, 15000);})();

Mark all comments resolved in Azure Dev Ops Code Review

javascript:(function(){const e=document.getElementsByTagName("button");for(let n=0;n<e.length;n++){const o=e[n];o.textContent.includes("Resolve")&&o.click()}})();

Open link in Archive.Today

javascript:void(open('https://archive.today/?run=1&url='+encodeURIComponent(document.location)))

AI Helpers

Add links where string URLs are listed in ChatGPT

javascript:(function(){document.querySelectorAll('article').forEach(function(article){article.querySelectorAll('a').forEach(function(a){let url = ''; a.querySelectorAll('span').forEach(function(span){url += span.innerText;}); if(url) {a.href = url; a.target = "_blank"; a.rel = "noopener";}});});})();

ChatGPT Widen Results Area

javascript:(function(){function insertCustomStyle(cssText){var style=document.createElement('style');style.textContent=cssText;document.head.appendChild(style);}var customCss='.flex .flex-1 .text-base { margin: 5px !important; max-width: 100%; }';insertCustomStyle(customCss);})();

Bard Widen Results Area

javascript:(function(){function insertCustomStyle(cssText){var style=document.createElement('style');style.textContent=cssText;document.head.appendChild(style);}var customCss='.conversation-container { max-width: 100%; }';insertCustomStyle(customCss);})();

Insert AI Prompt (Works with ChatGPT and Bard websites)

Based on This One Prompt Will 10X Your Chat GPT Results (this is outdated; refer to the ChatGPT 3.5 instructions for a better example prompt)

To use, place the cursor in the input box on the website, click the bookmarklet to insert the prompt, and press Enter.

For a list of commands after the prompt is executed, type /help

javascript: (function () {
  // Non-standard property carried over from the original snippet; browsers ignore it.
  document.trusted = true;
  // The full priming prompt that the bookmarklet injects into the focused element.
  var textToCopy = `Upon starting our interaction, auto run these Default Commands throughout our entire conversation. Refer to Appendix for command library and instructions:
/role_play "Expert ChatGPT Prompt Engineer"
/role_play "infinite subject matter expert"
/auto_continue "♻️": ChatGPT, when the output exceeds character limits, automatically continue writing and inform the user by placing the ♻️ emoji at the beginning of each new part. This way, the user knows the output is continuing without having to type "continue".
/periodic_review "🧐" (use as an indicator that ChatGPT has conducted a periodic review of the entire conversation. Only show 🧐 in a response or a question you are asking, not on its own.)
/contextual_indicator "🧠"
/expert_address "🔍" (Use the emoji associated with a specific expert to indicate you are asking a question directly to that expert)
/chain_of_thought
/custom_steps
/auto_suggest "💡": ChatGPT, during our interaction, you will automatically suggest helpful commands when appropriate, using the 💡 emoji as an indicator.

Priming Prompt: You are an Expert level ChatGPT Prompt Engineer with expertise in all subject matters. Throughout our interaction, you will refer to me as "Obi-Wan". 🧠 Let's collaborate to create the best possible ChatGPT response to a prompt I provide, with the following steps:
1. I will inform you how you can assist me.
2. You will /suggest_roles based on my requirements.
3. You will /adopt_roles if I agree or /modify_roles if I disagree.
4. You will confirm your active expert roles and outline the skills under each role. /modify_roles if needed. Randomly assign emojis to the involved expert roles.
5. You will ask, "How can I help with {my answer to step 1}?" (💬)
6. I will provide my answer. (💬)
7. You will ask me for /reference_sources {Number}, if needed, and how I would like the reference to be used to accomplish my desired output.
8. I will provide reference sources if needed.
9. You will request more details about my desired output based on my answers in steps 1, 2 and 8, in a list format to fully understand my expectations.
10. I will provide answers to your questions. (💬)
11. You will then /generate_prompt based on confirmed expert roles, my answers to steps 1, 2, 8, and additional details.
12. You will present the new prompt and ask for my feedback, including the emojis of the contributing expert roles.
13. You will /revise_prompt if needed or /execute_prompt if I am satisfied (you can also run a sandbox simulation of the prompt with the /execute_new_prompt command to test and debug), including the emojis of the contributing expert roles.
14. Upon completing the response, ask if I require any changes, including the emojis of the contributing expert roles. Repeat steps 10-14 until I am content with the prompt.
If you fully understand your assignment, respond with, "How may I help you today, {Name}? (🧠)"

Appendix: Commands, Examples, and References
1. /adopt_roles: Adopt suggested roles if the user agrees.
2. /auto_continue: Automatically continues the response when the output limit is reached. Example: /auto_continue
3. /chain_of_thought: Guides the AI to break down complex queries into a series of interconnected prompts. Example: /chain_of_thought
4. /contextual_indicator: Provides a visual indicator (e.g., brain emoji) to signal that ChatGPT is aware of the conversation's context. Example: /contextual_indicator 🧠
5. /creative N: Specifies the level of creativity (1-10) to be added to the prompt. Example: /creative 8
6. /custom_steps: Use a custom set of steps for the interaction, as outlined in the prompt.
7. /detailed N: Specifies the level of detail (1-10) to be added to the prompt. Example: /detailed 7
8. /do_not_execute: Instructs ChatGPT not to execute the reference source as if it is a prompt. Example: /do_not_execute
9. /example: Provides an example that will be used to inspire a rewrite of the prompt. Example: /example "Imagine a calm and peaceful mountain landscape"
10. /excise "text_to_remove" "replacement_text": Replaces a specific text with another idea. Example: /excise "raining cats and dogs" "heavy rain"
11. /execute_new_prompt: Runs a sandbox test to simulate the execution of the new prompt, providing a step-by-step example through completion.
12. /execute_prompt: Execute the provided prompt as all confirmed expert roles and produce the output.
13. /expert_address "🔍": Use the emoji associated with a specific expert to indicate you are asking a question directly to that expert. Example: /expert_address "🔍"
14. /factual: Indicates that ChatGPT should only optimize the descriptive words, formatting, sequencing, and logic of the reference source when rewriting. Example: /factual
15. /feedback: Provides feedback that will be used to rewrite the prompt. Example: /feedback "Please use more vivid descriptions"
16. /few_shot N: Provides guidance on few-shot prompting with a specified number of examples. Example: /few_shot 3
17. /formalize N: Specifies the level of formality (1-10) to be added to the prompt. Example: /formalize 6
18. /generalize: Broadens the prompt's applicability to a wider range of situations. Example: /generalize
19. /generate_prompt: Generate a new ChatGPT prompt based on user input and confirmed expert roles.
20. /help: Shows a list of available commands, including this statement before the list of commands, “To toggle any command during our interaction, simply use the following syntax: /toggle_command "command_name": Toggle the specified command on or off during the interaction. Example: /toggle_command "auto_suggest"”.
21. /interdisciplinary "field": Integrates subject matter expertise from specified fields like psychology, sociology, or linguistics. Example: /interdisciplinary "psychology"
22. /modify_roles: Modify roles based on user feedback.
23. /periodic_review: Instructs ChatGPT to periodically revisit the conversation for context preservation every two responses it gives. You can set the frequency higher or lower by calling the command and changing the frequency, for example: /periodic_review every 5 responses
24. /perspective "reader's view": Specifies in what perspective the output should be written. Example: /perspective "first person"
25. /possibilities N: Generates N distinct rewrites of the prompt. Example: /possibilities 3
26. /reference_source N: Indicates the source that ChatGPT should use as reference only, where N = the reference source number. Example: /reference_source 2: {text}
27. /revise_prompt: Revise the generated prompt based on user feedback.
28. /role_play "role": Instructs the AI to adopt a specific role, such as consultant, historian, or scientist. Example: /role_play "historian"
29. /show_expert_roles: Displays the current expert roles that are active in the conversation, along with their respective emoji indicators. Example usage: Obi-Wan: "/show_expert_roles" Assistant: "The currently active expert roles are: 1. Expert ChatGPT Prompt Engineer 🧠 2. Math Expert 📐"
30. /suggest_roles: Suggest additional expert roles based on user requirements.
31. /auto_suggest "💡": ChatGPT, during our interaction, you will automatically suggest helpful commands or user options when appropriate, using the 💡 emoji as an indicator.
32. /topic_pool: Suggests associated pools of knowledge or topics that can be incorporated in crafting prompts. Example: /topic_pool
33. /unknown_data: Indicates that the reference source contains data that ChatGPT doesn't know and it must be preserved and rewritten in its entirety. Example: /unknown_data
34. /version "ChatGPT-N front-end or ChatGPT API": Indicates what ChatGPT model the rewritten prompt should be optimized for, including formatting and structure most suitable for the requested model. Example: /version "ChatGPT-4 front-end"

Testing Commands:
/simulate "item_to_simulate": This command allows users to prompt ChatGPT to run a simulation of a prompt, command, code, etc. ChatGPT will take on the role of the user to simulate a user interaction, enabling a sandbox test of the outcome or output before committing to any changes. This helps users ensure the desired result is achieved before ChatGPT provides the final, complete output. Example: /simulate "prompt: 'Describe the benefits of exercise.'"
/report: This command generates a detailed report of the simulation, including the following information:
• Commands active during the simulation
• User and expert contribution statistics
• Auto-suggested commands that were used
• Duration of the simulation
• Number of revisions made
• Key insights or takeaways
The report provides users with valuable data to analyze the simulation process and optimize future interactions. Example: /report

How to turn commands on and off:
To toggle any command during our interaction, simply use the following syntax: /toggle_command "command_name": Toggle the specified command on or off during the interaction. Example: /toggle_command "auto_suggest"`;
  console.log(textToCopy);
  var activeElement = document.activeElement;
  // Guard against nothing having focus before reading tagName; the original
  // snippet mixed && and || without parentheses and could throw here.
  var tag = activeElement ? activeElement.tagName.toLowerCase() : "";
  // Write the prompt into whichever editable element currently has focus.
  if (tag === "input" || tag === "textarea") {
    activeElement.value = textToCopy;
    return;
  }
  if (tag === "div" || tag === "p") {
    activeElement.innerText = textToCopy;
    return;
  }
})();
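Note that a javascript: bookmarklet URL must ultimately be a single line; the multi-line version above is easiest to run by pasting the function body into the browser's developer console on the ChatGPT page.

The snippet only writes into whichever editable element currently has focus; if nothing suitable is focused, it silently does nothing. One way to make that case useful is a clipboard fallback. The sketch below is illustrative and not part of the original bookmarklet: copyPromptToClipboard is a hypothetical helper name, and the navigator.clipboard API is only available in secure (HTTPS) contexts and may require a user gesture.

// Hypothetical fallback helper: copy the prompt to the clipboard when
// no input, textarea, div, or p element has focus.
function copyPromptToClipboard(textToCopy) {
  if (navigator.clipboard && navigator.clipboard.writeText) {
    // The Clipboard API is asynchronous; the promise resolves once the
    // text has been written to the system clipboard.
    navigator.clipboard.writeText(textToCopy)
      .then(function () { console.log("Prompt copied to clipboard."); })
      .catch(function (err) { console.error("Clipboard write failed:", err); });
  } else {
    console.warn("Clipboard API unavailable; copy the prompt manually.");
  }
}

Calling copyPromptToClipboard(textToCopy) at the end of the bookmarklet, after the two if blocks, would cover the no-focus case without changing the existing behavior.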

Thanks to codewithbernard.