We created this implementation to address some of the shortcomings in Postman variable management. The first issue was how to mass update variables in Postman after the removal of the bulk edit feature, without having to deal with the massive custom JSON object that backs the current Postman bulk edit modal. The second issue was how to keep our entire team's variables up to date and in sync for values that were required for API testing but changed over time. Using team templates is not user friendly and requires multiple new imports to re-sync variables, which also forces you to lose any one-off variable changes you have made.
This implementation provides the following advantages:
Change tracking
Edit history
Real-time updating of shared variables
Setup Summary
Create a new GitHub repository for storing the variable files
Add a new request to your Postman collection for baselining variables
Add the pre-request and test script code to that baseline request
GitHub files and structure
Create a repository named Postman-Variables
Set up the following folder structure
In each user folder, create JSON files for each environment containing the variables that should be applied to Postman when baselining that environment.
Variable Scopes
Global: Contains variables that are required per environment and rarely change.
Shared: Contains all variables that are not considered global. When a developer creates a new variable, it should be added here in each environment file. This makes it available to all Postman users.
User: Contains static, user-specific overrides. These overwrite the shared variable of the same key if it exists. Every user has a folder containing their specific overrides.
Note: Global variables are overridden by shared variables, which are overridden by User variables.
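For example, suppose the same hypothetical key appears in all three scopes for the QA environment:
Global QA file:  { "key": "apiTimeout", "value": "30" }
Shared QA file:  { "key": "apiTimeout", "value": "60" }
User QA file:    { "key": "apiTimeout", "value": "90" }
After baselining, the environment ends up with apiTimeout = 90, because shared values are applied after global values and the user values are applied last.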
File structure
In each of the folders, add your variables in this format:
[
  {
    "key": "<VARNAME>",
    "value": "<VALUE>"
  }
]
Setting up access
In GitHub, go to Settings under the account
Select Developer Settings
Create a new OAuth Application
Ensure public repo access is checked
Save the Client ID and Client Secret that are created; they will be used later in Postman
Setup Postman
Update Postman Settings
Ensure Automatically persist variable values is set to ON
Create the baseline environment templates
Create the following environments
LocalVariables
DevVariables
QAVariables
Add the following variables to each template within Postman
environment: this value must match the folder name from the GitHub repository structure you created above
username: the name of the user folder in the GitHub repository that contains the user-specific variables to retrieve
NOTE: if working in a Postman team, import each of those environment templates into your workspace as a duplicate
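For example, the DevVariables template might contain values like the following (the folder and user names here are hypothetical and must match your repository):
environment: Dev
username: jdoe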
Setup Collection Global Variables
In your collection we need to add a few global variables that allow access across environments to gather our GitHub-stored variables
Add the following global variables
baseGithubUrl – base URL for accessing the files in GitHub
globalVariablesPath – relative path in the repo to the global variable files
sharedVariablesPath – relative path in the repo to the shared variable files
githubClientId – Client ID generated above from GitHub
githubClientSecret – Client Secret generated above from GitHub
githubUser – GitHub account housing the variables repo
githubRepoName – name of the repo created to house the variables
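As a rough example, the collection variables might be filled in as follows. The paths and account name are hypothetical; baseGithubUrl assumes the files are fetched from GitHub's raw content host, which matches the URL pattern built in the pre-request script below:
baseGithubUrl: https://raw.githubusercontent.com/
globalVariablesPath: Global/{0}.json
sharedVariablesPath: Shared/{0}.json
githubClientId: <Client ID from the OAuth app>
githubClientSecret: <Client Secret from the OAuth app>
githubUser: my-org
githubRepoName: Postman-Variables
The {0} placeholder in the path variables is replaced with the environment value by the pre-request script.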
Validating access
To ensure we have access to the GitHub repository based on the variables above, we can test with the following Postman GET request URL
Edit the folder and add the following to the Pre-request Scripts
// Collection variables configured above
var baseGithubUrl = pm.variables.get("baseGithubUrl");
var githubUser = pm.variables.get("githubUser");
var githubRepoName = pm.variables.get("githubRepoName");
var githubClientId = pm.variables.get("githubClientId");
var githubClientSecret = pm.variables.get("githubClientSecret");

// OAuth app credentials saved earlier, passed as query string parameters
var authQueryString = "?client_id=" + githubClientId + "&client_secret=" + githubClientSecret;

// Base URL to the files on the master branch of the variables repo
var baseUrl = baseGithubUrl + githubUser + "/" + githubRepoName + "/master/";

// Substitute the current environment name into the relative file paths
var globalVariablesUrl = pm.variables.get("globalVariablesPath").replace("{0}", pm.variables.get("environment"));
var sharedVariablesUrl = pm.variables.get("sharedVariablesPath").replace("{0}", pm.variables.get("environment"));

// First pull the GLOBAL variables for the environment
pm.sendRequest({
    url: baseUrl + globalVariablesUrl + authQueryString,
    method: 'GET'
}, function (err, response) {
    if (err) {
        console.error("Pre-Request Error (global)", err);
        return;
    }
    var globalBaseline = response.json();
    globalBaseline.forEach(function (item) {
        pm.environment.set(item.key, item.value);
    });

    // Now get any SHARED overrides to the global values
    pm.sendRequest({
        url: baseUrl + sharedVariablesUrl + authQueryString,
        method: 'GET'
    }, function (err, response) {
        if (err) {
            console.error("Pre-Request Error (shared)", err);
            return;
        }
        var sharedVariables = response.json();
        sharedVariables.forEach(function (item) {
            pm.environment.set(item.key, item.value);
        });
    });
});
Update the Tests tab in the request with
// The response body for this request is the user-specific override file
var userOverrides = pm.response.json();
userOverrides.forEach(function (item) {
    pm.environment.set(item.key, item.value);
});
The final request should look like
If you now check the variables list, it will show all the imported variables from GLOBAL, SHARED, and USER.
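If you prefer to verify programmatically, an optional sanity check can be added at the end of the Tests tab to log the merged result to the Postman console:
// Log every environment variable after the baseline run (optional)
console.log("Baselined environment", pm.environment.toObject());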
How to use
To take advantage of this system, we implemented the following workflow:
As developers, we would clone the repo locally and update the user profiles and the Global and Shared files as necessary when API changes happened that would affect current settings. Then everyone only had to Get Latest to pick up those updates and use them locally.
QA testers were given rights to the GitHub repo and could update the files in their username folder, set whatever baseline variables they wanted for their testing purposes, and commit those changes.
As API changes that required Postman value changes moved through the environments, we would all "Baseline" as a team for the given environment, which updated the GLOBAL and SHARED values for everyone. This has really helped with syncing new values for new APIs and updating client IDs and similar settings as APIs change. It has also helped us work QA through issues, because we can easily pull a tester's username files down locally, reproduce in our local environment exactly what they have, and see which values may be off and correct them.
The great thing is that this does not affect how Postman already works with variables, so we can still customize per request in scenario testing just like we regularly do. We ONLY baseline when something has changed in an API on a wider scale as part of our regular planned sprint work. The power also comes from the username custom variables: personal values are tracked in source control that only our team has access to, with full history, so rollback is easily accomplished.
Dark Mode (Invert Page Colors)
javascript: function addcss(css){ var head = document.getElementsByTagName('head')[0]; var s = document.createElement('style'); s.setAttribute('type', 'text/css'); s.appendChild(document.createTextNode(css)); head.appendChild(s); } addcss('html{filter: invert(1) hue-rotate(180deg)}'); addcss('img{filter: invert(1) hue-rotate(180deg)}'); addcss('video{filter: invert(1) hue-rotate(180deg)}')
Search Google Maps For Selected Text
javascript:(function(){var selectedText=window.getSelection().toString().trim();if(selectedText){var mapsUrl='https://www.google.com/maps/search/'+encodeURIComponent(selectedText);window.open(mapsUrl,'_blank');}else{alert('Please select some text first!');}})();
Auto Close Zoom On Participant Count
javascript:(function(){var retVal = prompt('Enter the number of participants to leave meeting on: ', '10'); var ivl = setInterval(function(){ var currentAmount = document.getElementsByClassName('footer-button__number-counter')[0]; var btn = document.getElementsByClassName('footer__leave-btn')[0]; if(!btn || !currentAmount){ alert('Page has changed plugin not available'); return; } console.log('Checking participant count', currentAmount.innerText); if(retVal === currentAmount.innerText){ clearInterval(ivl); console.log('Leaving meeting'); btn.click(); setTimeout(function(){ var confirmLeave = document.getElementsByClassName('leave-meeting-options__btn')[0]; if(!confirmLeave){ window.close(); } confirmLeave.click(); }, 2000); } }, 15000);})();
Mark all comments resolved in Azure Dev Ops Code Review
To use, place the cursor in the input box on the website, click the bookmarklet to insert the prompt, then press Enter.
For a list of commands after the prompt has executed, type /help
javascript: (function() { document.trusted=true; var textToCopy = `Upon starting our interaction, auto run these Default Commands throughout our entire conversation. Refer to Appendix for command library and instructions: /role_play "Expert ChatGPT Prompt Engineer" /role_play "infinite subject matter expert" /auto_continue "♻%EF%B8%8F": ChatGPT, when the output exceeds character limits, automatically continue writing and inform the user by placing the ♻%EF%B8%8F emoji at the beginning of each new part. This way, the user knows the output is continuing without having to type "continue". /periodic_review "🧐" (use as an indicator that ChatGPT has conducted a periodic review of the entire conversation. Only show 🧐 in a response or a question you are asking, not on its own.) /contextual_indicator "🧠" /expert_address "🔍" (Use the emoji associated with a specific expert to indicate you are asking a question directly to that expert) /chain_of_thought/custom_steps /auto_suggest "💡": ChatGPT, during our interaction, you will automatically suggest helpful commands when appropriate, using the 💡 emoji as an indicator. Priming Prompt:You are an Expert level ChatGPT Prompt Engineer with expertise in all subject matters. Throughout our interaction, you will refer to me as "Obi-Wan". 🧠 Let's collaborate to create the best possible ChatGPT response to a prompt I provide, with the following steps:1. I will inform you how you can assist me.2. You will /suggest_roles based on my requirements.3. You will /adopt_roles if I agree or /modify_roles if I disagree.4. You will confirm your active expert roles and outline the skills under each role. /modify_roles if needed. Randomly assign emojis to the involved expert roles.5. You will ask, "How can I help with {my answer to step 1}?" (💬)6. I will provide my answer. (💬)7. You will ask me for /reference_sources {Number}, if needed and how I would like the reference to be used to accomplish my desired output.8. I will provide reference sources if needed9. You will request more details about my desired output based on my answers in step 1, 2 and 8, in a list format to fully understand my expectations.10. I will provide answers to your questions. (💬)11. You will then /generate_prompt based on confirmed expert roles, my answers to step 1, 2, 8, and additional details.12. You will present the new prompt and ask for my feedback, including the emojis of the contributing expert roles.13. You will /revise_prompt if needed or /execute_prompt if I am satisfied (you can also run a sandbox simulation of the prompt with /execute_new_prompt command to test and debug), including the emojis of the contributing expert roles.14. Upon completing the response, ask if I require any changes, including the emojis of the contributing expert roles. Repeat steps 10-14 until I am content with the prompt.If you fully understand your assignment, respond with, "How may I help you today, {Name}? (🧠)"Appendix: Commands, Examples, and References1. /adopt_roles: Adopt suggested roles if the user agrees.2. /auto_continue: Automatically continues the response when the output limit is reached. Example: /auto_continue3. /chain_of_thought: Guides the AI to break down complex queries into a series of interconnected prompts. Example: /chain_of_thought4. /contextual_indicator: Provides a visual indicator (e.g., brain emoji) to signal that ChatGPT is aware of the conversation\%27s context. Example: /contextual_indicator 🧠5. /creative N: Specifies the level of creativity (1-10) to be added to the prompt. 
Example: /creative 86. /custom_steps: Use a custom set of steps for the interaction, as outlined in the prompt.7. /detailed N: Specifies the level of detail (1-10) to be added to the prompt. Example: /detailed 78. /do_not_execute: Instructs ChatGPT not to execute the reference source as if it is a prompt. Example: /do_not_execute9. /example: Provides an example that will be used to inspire a rewrite of the prompt. Example: /example "Imagine a calm and peaceful mountain landscape"10. /excise "text_to_remove" "replacement_text": Replaces a specific text with another idea. Example: /excise "raining cats and dogs" "heavy rain"11. /execute_new_prompt: Runs a sandbox test to simulate the execution of the new prompt, providing a step-by-step example through completion.12. /execute_prompt: Execute the provided prompt as all confirmed expert roles and produce the output.13. /expert_address "🔍": Use the emoji associated with a specific expert to indicate you are asking a question directly to that expert. Example: /expert_address "🔍"14. /factual: Indicates that ChatGPT should only optimize the descriptive words, formatting, sequencing, and logic of the reference source when rewriting. Example: /factual15. /feedback: Provides feedback that will be used to rewrite the prompt. Example: /feedback "Please use more vivid descriptions"16. /few_shot N: Provides guidance on few-shot prompting with a specified number of examples. Example: /few_shot 317. /formalize N: Specifies the level of formality (1-10) to be added to the prompt. Example: /formalize 618. /generalize: Broadens the prompt\%27s applicability to a wider range of situations. Example: /generalize19. /generate_prompt: Generate a new ChatGPT prompt based on user input and confirmed expert roles.20. /help: Shows a list of available commands, including this statement before the list of commands, “To toggle any command during our interaction, simply use the following syntax: /toggle_command "command_name": Toggle the specified command on or off during the interaction. Example: /toggle_command "auto_suggest"”.21. /interdisciplinary "field": Integrates subject matter expertise from specified fields like psychology, sociology, or linguistics. Example: /interdisciplinary "psychology"22. /modify_roles: Modify roles based on user feedback.23. /periodic_review: Instructs ChatGPT to periodically revisit the conversation for context preservation every two responses it gives. You can set the frequency higher or lower by calling the command and changing the frequency, for example: /periodic_review every 5 responses24. /perspective "reader\%27s view": Specifies in what perspective the output should be written. Example: /perspective "first person"25. /possibilities N: Generates N distinct rewrites of the prompt. Example: /possibilities 326. /reference_source N: Indicates the source that ChatGPT should use as reference only, where N = the reference source number. Example: /reference_source 2: {text}27. /revise_prompt: Revise the generated prompt based on user feedback.28. /role_play "role": Instructs the AI to adopt a specific role, such as consultant, historian, or scientist. Example: /role_play "historian" 29. /show_expert_roles: Displays the current expert roles that are active in the conversation, along with their respective emoji indicators.Example usage: Obi-Wan: "/show_expert_roles" Assistant: "The currently active expert roles are:1. Expert ChatGPT Prompt Engineer 🧠2. Math Expert 📐"30. 
/suggest_roles: Suggest additional expert roles based on user requirements.31. /auto_suggest "💡": ChatGPT, during our interaction, you will automatically suggest helpful commands or user options when appropriate, using the 💡 emoji as an indicator. 31. /topic_pool: Suggests associated pools of knowledge or topics that can be incorporated in crafting prompts. Example: /topic_pool32. /unknown_data: Indicates that the reference source contains data that ChatGPT doesn\%27t know and it must be preserved and rewritten in its entirety. Example: /unknown_data33. /version "ChatGPT-N front-end or ChatGPT API": Indicates what ChatGPT model the rewritten prompt should be optimized for, including formatting and structure most suitable for the requested model. Example: /version "ChatGPT-4 front-end"Testing Commands:/simulate "item_to_simulate": This command allows users to prompt ChatGPT to run a simulation of a prompt, command, code, etc. ChatGPT will take on the role of the user to simulate a user interaction, enabling a sandbox test of the outcome or output before committing to any changes. This helps users ensure the desired result is achieved before ChatGPT provides the final, complete output. Example: /simulate "prompt: \%27Describe the benefits of exercise.\%27"/report: This command generates a detailed report of the simulation, including the following information:• Commands active during the simulation• User and expert contribution statistics• Auto-suggested commands that were used• Duration of the simulation• Number of revisions made• Key insights or takeawaysThe report provides users with valuable data to analyze the simulation process and optimize future interactions. Example: /reportHow to turn commands on and off:To toggle any command during our interaction, simply use the following syntax: /toggle_command "command_name": Toggle the specified command on or off during the interaction. Example: /toggle_command "auto_suggest"`; console.log(textToCopy); var activeElement = document.activeElement; if (activeElement && activeElement.tagName.toLowerCase() === "input" || activeElement.tagName.toLowerCase() === "textarea") { activeElement.value = textToCopy; return; } if (activeElement && activeElement.tagName.toLowerCase() === "div" || activeElement.tagName.toLowerCase() === "p") { activeElement.innerText = textToCopy; return; } })();