diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 00000000..e69de29b diff --git a/404.html b/404.html new file mode 100644 index 00000000..6a39ba3f --- /dev/null +++ b/404.html @@ -0,0 +1,3206 @@ + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ + + +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ +

404 - Not found

+ +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/about/index.html b/about/index.html new file mode 100644 index 00000000..2480511c --- /dev/null +++ b/about/index.html @@ -0,0 +1,3276 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ + + +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + +

About NERC

+

We are currently in the pilot phase of the project and are focusing on developing the technology to make it easy for researchers to take advantage of a suite of services (IaaS, PaaS, SaaS) that are not readily available today. This includes:

+
  1. The creation of the building blocks needed for production cloud services
  2. Begin collaboration with Systems Engineers from other institutions with well established RC groups
  3. On-board select proof of concept use cases from institutions within the MGHPCC consortium and other institutions within Massachusetts
+

The longer-term objectives will center on the following activities:

+
  1. Engaging with various OpenStack communities by sharing best practices and setting standards for deployments
  2. Connecting regularly with the Mass Open Cloud (MOC) leadership to understand when new technologies they are developing with Red Hat, Inc. – and as part of the new NSF-funded Open Cloud Testbed – might be ready for adoption into the production NERC environment
  3. Broadening the local deployment team of NERC to include partner universities within the MGHPCC consortium.
+

NERC-overview
Figure 1: NERC Overview

+

NERC production services (red) stand on top of the existing NESE storage services (blue) that are built on the strong foundation of MGHPCC (green), which provides core facility and network access. The Innovation Hub (grey) enables new technologies to be rapidly adopted by the NERC or NESE services. On the far left (purple) are the Research and Learning communities, which are the primary customers of NERC. As users proceed down the stack of production services from web apps to services that require more technical skills, the Cloud Facilitators (orange) in the middle guide and educate users on how best to use the services.

+

For more information, view NERC's concept document.

+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/assets/images/MGHPCC_logo.png b/assets/images/MGHPCC_logo.png new file mode 100644 index 00000000..a0b1d002 Binary files /dev/null and b/assets/images/MGHPCC_logo.png differ diff --git a/assets/images/boston-university-logo.png b/assets/images/boston-university-logo.png new file mode 100644 index 00000000..466e9889 Binary files /dev/null and b/assets/images/boston-university-logo.png differ diff --git a/assets/images/favicon.ico b/assets/images/favicon.ico new file mode 100644 index 00000000..d86d064d Binary files /dev/null and b/assets/images/favicon.ico differ diff --git a/assets/images/favicon.png b/assets/images/favicon.png new file mode 100644 index 00000000..1cf13b9f Binary files /dev/null and b/assets/images/favicon.png differ diff --git a/assets/images/harvard-university_logo.png b/assets/images/harvard-university_logo.png new file mode 100644 index 00000000..1b6b3272 Binary files /dev/null and b/assets/images/harvard-university_logo.png differ diff --git a/assets/images/logo.png b/assets/images/logo.png new file mode 100644 index 00000000..f955141b Binary files /dev/null and b/assets/images/logo.png differ diff --git a/assets/images/logo_original.png b/assets/images/logo_original.png new file mode 100644 index 00000000..f955141b Binary files /dev/null and b/assets/images/logo_original.png differ diff --git a/assets/javascripts/bundle.5a2dcb6a.min.js b/assets/javascripts/bundle.5a2dcb6a.min.js new file mode 100644 index 00000000..6f9720b6 --- /dev/null +++ b/assets/javascripts/bundle.5a2dcb6a.min.js @@ -0,0 +1,29 @@ +"use strict";(()=>{var aa=Object.create;var wr=Object.defineProperty;var sa=Object.getOwnPropertyDescriptor;var ca=Object.getOwnPropertyNames,kt=Object.getOwnPropertySymbols,fa=Object.getPrototypeOf,Er=Object.prototype.hasOwnProperty,fn=Object.prototype.propertyIsEnumerable;var cn=(e,t,r)=>t in e?wr(e,t,{enumerable:!0,configurable:!0,writable:!0,value:r}):e[t]=r,H=(e,t)=>{for(var r in t||(t={}))Er.call(t,r)&&cn(e,r,t[r]);if(kt)for(var r of kt(t))fn.call(t,r)&&cn(e,r,t[r]);return e};var un=(e,t)=>{var r={};for(var n in e)Er.call(e,n)&&t.indexOf(n)<0&&(r[n]=e[n]);if(e!=null&&kt)for(var n of kt(e))t.indexOf(n)<0&&fn.call(e,n)&&(r[n]=e[n]);return r};var yt=(e,t)=>()=>(t||e((t={exports:{}}).exports,t),t.exports);var ua=(e,t,r,n)=>{if(t&&typeof t=="object"||typeof t=="function")for(let o of ca(t))!Er.call(e,o)&&o!==r&&wr(e,o,{get:()=>t[o],enumerable:!(n=sa(t,o))||n.enumerable});return e};var Ye=(e,t,r)=>(r=e!=null?aa(fa(e)):{},ua(t||!e||!e.__esModule?wr(r,"default",{value:e,enumerable:!0}):r,e));var ln=yt((Sr,pn)=>{(function(e,t){typeof Sr=="object"&&typeof pn!="undefined"?t():typeof define=="function"&&define.amd?define(t):t()})(Sr,function(){"use strict";function e(r){var n=!0,o=!1,i=null,s={text:!0,search:!0,url:!0,tel:!0,email:!0,password:!0,number:!0,date:!0,month:!0,week:!0,time:!0,datetime:!0,"datetime-local":!0};function a(_){return!!(_&&_!==document&&_.nodeName!=="HTML"&&_.nodeName!=="BODY"&&"classList"in _&&"contains"in _.classList)}function c(_){var We=_.type,Fe=_.tagName;return!!(Fe==="INPUT"&&s[We]&&!_.readOnly||Fe==="TEXTAREA"&&!_.readOnly||_.isContentEditable)}function f(_){_.classList.contains("focus-visible")||(_.classList.add("focus-visible"),_.setAttribute("data-focus-visible-added",""))}function u(_){!_.hasAttribute("data-focus-visible-added")||(_.classList.remove("focus-visible"),_.removeAttribute("data-focus-visible-added"))}function 
p(_){_.metaKey||_.altKey||_.ctrlKey||(a(r.activeElement)&&f(r.activeElement),n=!0)}function l(_){n=!1}function d(_){!a(_.target)||(n||c(_.target))&&f(_.target)}function h(_){!a(_.target)||(_.target.classList.contains("focus-visible")||_.target.hasAttribute("data-focus-visible-added"))&&(o=!0,window.clearTimeout(i),i=window.setTimeout(function(){o=!1},100),u(_.target))}function b(_){document.visibilityState==="hidden"&&(o&&(n=!0),U())}function U(){document.addEventListener("mousemove",W),document.addEventListener("mousedown",W),document.addEventListener("mouseup",W),document.addEventListener("pointermove",W),document.addEventListener("pointerdown",W),document.addEventListener("pointerup",W),document.addEventListener("touchmove",W),document.addEventListener("touchstart",W),document.addEventListener("touchend",W)}function G(){document.removeEventListener("mousemove",W),document.removeEventListener("mousedown",W),document.removeEventListener("mouseup",W),document.removeEventListener("pointermove",W),document.removeEventListener("pointerdown",W),document.removeEventListener("pointerup",W),document.removeEventListener("touchmove",W),document.removeEventListener("touchstart",W),document.removeEventListener("touchend",W)}function W(_){_.target.nodeName&&_.target.nodeName.toLowerCase()==="html"||(n=!1,G())}document.addEventListener("keydown",p,!0),document.addEventListener("mousedown",l,!0),document.addEventListener("pointerdown",l,!0),document.addEventListener("touchstart",l,!0),document.addEventListener("visibilitychange",b,!0),U(),r.addEventListener("focus",d,!0),r.addEventListener("blur",h,!0),r.nodeType===Node.DOCUMENT_FRAGMENT_NODE&&r.host?r.host.setAttribute("data-js-focus-visible",""):r.nodeType===Node.DOCUMENT_NODE&&(document.documentElement.classList.add("js-focus-visible"),document.documentElement.setAttribute("data-js-focus-visible",""))}if(typeof window!="undefined"&&typeof document!="undefined"){window.applyFocusVisiblePolyfill=e;var t;try{t=new CustomEvent("focus-visible-polyfill-ready")}catch(r){t=document.createEvent("CustomEvent"),t.initCustomEvent("focus-visible-polyfill-ready",!1,!1,{})}window.dispatchEvent(t)}typeof document!="undefined"&&e(document)})});var mn=yt(Or=>{(function(e){var t=function(){try{return!!Symbol.iterator}catch(f){return!1}},r=t(),n=function(f){var u={next:function(){var p=f.shift();return{done:p===void 0,value:p}}};return r&&(u[Symbol.iterator]=function(){return u}),u},o=function(f){return encodeURIComponent(f).replace(/%20/g,"+")},i=function(f){return decodeURIComponent(String(f).replace(/\+/g," "))},s=function(){var f=function(p){Object.defineProperty(this,"_entries",{writable:!0,value:{}});var l=typeof p;if(l!=="undefined")if(l==="string")p!==""&&this._fromString(p);else if(p instanceof f){var d=this;p.forEach(function(G,W){d.append(W,G)})}else if(p!==null&&l==="object")if(Object.prototype.toString.call(p)==="[object Array]")for(var h=0;hd[0]?1:0}),f._entries&&(f._entries={});for(var p=0;p1?i(d[1]):"")}})})(typeof global!="undefined"?global:typeof window!="undefined"?window:typeof self!="undefined"?self:Or);(function(e){var t=function(){try{var o=new e.URL("b","http://a");return o.pathname="c d",o.href==="http://a/c%20d"&&o.searchParams}catch(i){return!1}},r=function(){var o=e.URL,i=function(c,f){typeof c!="string"&&(c=String(c)),f&&typeof f!="string"&&(f=String(f));var u=document,p;if(f&&(e.location===void 
0||f!==e.location.href)){f=f.toLowerCase(),u=document.implementation.createHTMLDocument(""),p=u.createElement("base"),p.href=f,u.head.appendChild(p);try{if(p.href.indexOf(f)!==0)throw new Error(p.href)}catch(_){throw new Error("URL unable to set base "+f+" due to "+_)}}var l=u.createElement("a");l.href=c,p&&(u.body.appendChild(l),l.href=l.href);var d=u.createElement("input");if(d.type="url",d.value=c,l.protocol===":"||!/:/.test(l.href)||!d.checkValidity()&&!f)throw new TypeError("Invalid URL");Object.defineProperty(this,"_anchorElement",{value:l});var h=new e.URLSearchParams(this.search),b=!0,U=!0,G=this;["append","delete","set"].forEach(function(_){var We=h[_];h[_]=function(){We.apply(h,arguments),b&&(U=!1,G.search=h.toString(),U=!0)}}),Object.defineProperty(this,"searchParams",{value:h,enumerable:!0});var W=void 0;Object.defineProperty(this,"_updateSearchParams",{enumerable:!1,configurable:!1,writable:!1,value:function(){this.search!==W&&(W=this.search,U&&(b=!1,this.searchParams._fromString(this.search),b=!0))}})},s=i.prototype,a=function(c){Object.defineProperty(s,c,{get:function(){return this._anchorElement[c]},set:function(f){this._anchorElement[c]=f},enumerable:!0})};["hash","host","hostname","port","protocol"].forEach(function(c){a(c)}),Object.defineProperty(s,"search",{get:function(){return this._anchorElement.search},set:function(c){this._anchorElement.search=c,this._updateSearchParams()},enumerable:!0}),Object.defineProperties(s,{toString:{get:function(){var c=this;return function(){return c.href}}},href:{get:function(){return this._anchorElement.href.replace(/\?$/,"")},set:function(c){this._anchorElement.href=c,this._updateSearchParams()},enumerable:!0},pathname:{get:function(){return this._anchorElement.pathname.replace(/(^\/?)/,"/")},set:function(c){this._anchorElement.pathname=c},enumerable:!0},origin:{get:function(){var c={"http:":80,"https:":443,"ftp:":21}[this._anchorElement.protocol],f=this._anchorElement.port!=c&&this._anchorElement.port!=="";return this._anchorElement.protocol+"//"+this._anchorElement.hostname+(f?":"+this._anchorElement.port:"")},enumerable:!0},password:{get:function(){return""},set:function(c){},enumerable:!0},username:{get:function(){return""},set:function(c){},enumerable:!0}}),i.createObjectURL=function(c){return o.createObjectURL.apply(o,arguments)},i.revokeObjectURL=function(c){return o.revokeObjectURL.apply(o,arguments)},e.URL=i};if(t()||r(),e.location!==void 0&&!("origin"in e.location)){var n=function(){return e.location.protocol+"//"+e.location.hostname+(e.location.port?":"+e.location.port:"")};try{Object.defineProperty(e.location,"origin",{get:n,enumerable:!0})}catch(o){setInterval(function(){e.location.origin=n()},100)}}})(typeof global!="undefined"?global:typeof window!="undefined"?window:typeof self!="undefined"?self:Or)});var Pn=yt((Ks,$t)=>{/*! ***************************************************************************** +Copyright (c) Microsoft Corporation. + +Permission to use, copy, modify, and/or distribute this software for any +purpose with or without fee is hereby granted. + +THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH +REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, +INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR +OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +PERFORMANCE OF THIS SOFTWARE. +***************************************************************************** */var dn,hn,bn,vn,gn,yn,xn,wn,En,Ht,_r,Sn,On,_n,rt,Tn,Mn,Ln,An,Cn,Rn,kn,Hn,Pt;(function(e){var t=typeof global=="object"?global:typeof self=="object"?self:typeof this=="object"?this:{};typeof define=="function"&&define.amd?define("tslib",["exports"],function(n){e(r(t,r(n)))}):typeof $t=="object"&&typeof $t.exports=="object"?e(r(t,r($t.exports))):e(r(t));function r(n,o){return n!==t&&(typeof Object.create=="function"?Object.defineProperty(n,"__esModule",{value:!0}):n.__esModule=!0),function(i,s){return n[i]=o?o(i,s):s}}})(function(e){var t=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(n,o){n.__proto__=o}||function(n,o){for(var i in o)Object.prototype.hasOwnProperty.call(o,i)&&(n[i]=o[i])};dn=function(n,o){if(typeof o!="function"&&o!==null)throw new TypeError("Class extends value "+String(o)+" is not a constructor or null");t(n,o);function i(){this.constructor=n}n.prototype=o===null?Object.create(o):(i.prototype=o.prototype,new i)},hn=Object.assign||function(n){for(var o,i=1,s=arguments.length;i=0;u--)(f=n[u])&&(c=(a<3?f(c):a>3?f(o,i,c):f(o,i))||c);return a>3&&c&&Object.defineProperty(o,i,c),c},gn=function(n,o){return function(i,s){o(i,s,n)}},yn=function(n,o){if(typeof Reflect=="object"&&typeof Reflect.metadata=="function")return Reflect.metadata(n,o)},xn=function(n,o,i,s){function a(c){return c instanceof i?c:new i(function(f){f(c)})}return new(i||(i=Promise))(function(c,f){function u(d){try{l(s.next(d))}catch(h){f(h)}}function p(d){try{l(s.throw(d))}catch(h){f(h)}}function l(d){d.done?c(d.value):a(d.value).then(u,p)}l((s=s.apply(n,o||[])).next())})},wn=function(n,o){var i={label:0,sent:function(){if(c[0]&1)throw c[1];return c[1]},trys:[],ops:[]},s,a,c,f;return f={next:u(0),throw:u(1),return:u(2)},typeof Symbol=="function"&&(f[Symbol.iterator]=function(){return this}),f;function u(l){return function(d){return p([l,d])}}function p(l){if(s)throw new TypeError("Generator is already executing.");for(;i;)try{if(s=1,a&&(c=l[0]&2?a.return:l[0]?a.throw||((c=a.return)&&c.call(a),0):a.next)&&!(c=c.call(a,l[1])).done)return c;switch(a=0,c&&(l=[l[0]&2,c.value]),l[0]){case 0:case 1:c=l;break;case 4:return i.label++,{value:l[1],done:!1};case 5:i.label++,a=l[1],l=[0];continue;case 7:l=i.ops.pop(),i.trys.pop();continue;default:if(c=i.trys,!(c=c.length>0&&c[c.length-1])&&(l[0]===6||l[0]===2)){i=0;continue}if(l[0]===3&&(!c||l[1]>c[0]&&l[1]=n.length&&(n=void 0),{value:n&&n[s++],done:!n}}};throw new TypeError(o?"Object is not iterable.":"Symbol.iterator is not defined.")},_r=function(n,o){var i=typeof Symbol=="function"&&n[Symbol.iterator];if(!i)return n;var s=i.call(n),a,c=[],f;try{for(;(o===void 0||o-- >0)&&!(a=s.next()).done;)c.push(a.value)}catch(u){f={error:u}}finally{try{a&&!a.done&&(i=s.return)&&i.call(s)}finally{if(f)throw f.error}}return c},Sn=function(){for(var n=[],o=0;o1||u(b,U)})})}function u(b,U){try{p(s[b](U))}catch(G){h(c[0][3],G)}}function p(b){b.value instanceof rt?Promise.resolve(b.value.v).then(l,d):h(c[0][2],b)}function l(b){u("next",b)}function d(b){u("throw",b)}function h(b,U){b(U),c.shift(),c.length&&u(c[0][0],c[0][1])}},Mn=function(n){var o,i;return 
o={},s("next"),s("throw",function(a){throw a}),s("return"),o[Symbol.iterator]=function(){return this},o;function s(a,c){o[a]=n[a]?function(f){return(i=!i)?{value:rt(n[a](f)),done:a==="return"}:c?c(f):f}:c}},Ln=function(n){if(!Symbol.asyncIterator)throw new TypeError("Symbol.asyncIterator is not defined.");var o=n[Symbol.asyncIterator],i;return o?o.call(n):(n=typeof Ht=="function"?Ht(n):n[Symbol.iterator](),i={},s("next"),s("throw"),s("return"),i[Symbol.asyncIterator]=function(){return this},i);function s(c){i[c]=n[c]&&function(f){return new Promise(function(u,p){f=n[c](f),a(u,p,f.done,f.value)})}}function a(c,f,u,p){Promise.resolve(p).then(function(l){c({value:l,done:u})},f)}},An=function(n,o){return Object.defineProperty?Object.defineProperty(n,"raw",{value:o}):n.raw=o,n};var r=Object.create?function(n,o){Object.defineProperty(n,"default",{enumerable:!0,value:o})}:function(n,o){n.default=o};Cn=function(n){if(n&&n.__esModule)return n;var o={};if(n!=null)for(var i in n)i!=="default"&&Object.prototype.hasOwnProperty.call(n,i)&&Pt(o,n,i);return r(o,n),o},Rn=function(n){return n&&n.__esModule?n:{default:n}},kn=function(n,o,i,s){if(i==="a"&&!s)throw new TypeError("Private accessor was defined without a getter");if(typeof o=="function"?n!==o||!s:!o.has(n))throw new TypeError("Cannot read private member from an object whose class did not declare it");return i==="m"?s:i==="a"?s.call(n):s?s.value:o.get(n)},Hn=function(n,o,i,s,a){if(s==="m")throw new TypeError("Private method is not writable");if(s==="a"&&!a)throw new TypeError("Private accessor was defined without a setter");if(typeof o=="function"?n!==o||!a:!o.has(n))throw new TypeError("Cannot write private member to an object whose class did not declare it");return s==="a"?a.call(n,i):a?a.value=i:o.set(n,i),i},e("__extends",dn),e("__assign",hn),e("__rest",bn),e("__decorate",vn),e("__param",gn),e("__metadata",yn),e("__awaiter",xn),e("__generator",wn),e("__exportStar",En),e("__createBinding",Pt),e("__values",Ht),e("__read",_r),e("__spread",Sn),e("__spreadArrays",On),e("__spreadArray",_n),e("__await",rt),e("__asyncGenerator",Tn),e("__asyncDelegator",Mn),e("__asyncValues",Ln),e("__makeTemplateObject",An),e("__importStar",Cn),e("__importDefault",Rn),e("__classPrivateFieldGet",kn),e("__classPrivateFieldSet",Hn)})});var Br=yt((At,Yr)=>{/*! 
+ * clipboard.js v2.0.11 + * https://clipboardjs.com/ + * + * Licensed MIT © Zeno Rocha + */(function(t,r){typeof At=="object"&&typeof Yr=="object"?Yr.exports=r():typeof define=="function"&&define.amd?define([],r):typeof At=="object"?At.ClipboardJS=r():t.ClipboardJS=r()})(At,function(){return function(){var e={686:function(n,o,i){"use strict";i.d(o,{default:function(){return ia}});var s=i(279),a=i.n(s),c=i(370),f=i.n(c),u=i(817),p=i.n(u);function l(j){try{return document.execCommand(j)}catch(T){return!1}}var d=function(T){var O=p()(T);return l("cut"),O},h=d;function b(j){var T=document.documentElement.getAttribute("dir")==="rtl",O=document.createElement("textarea");O.style.fontSize="12pt",O.style.border="0",O.style.padding="0",O.style.margin="0",O.style.position="absolute",O.style[T?"right":"left"]="-9999px";var k=window.pageYOffset||document.documentElement.scrollTop;return O.style.top="".concat(k,"px"),O.setAttribute("readonly",""),O.value=j,O}var U=function(T,O){var k=b(T);O.container.appendChild(k);var $=p()(k);return l("copy"),k.remove(),$},G=function(T){var O=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body},k="";return typeof T=="string"?k=U(T,O):T instanceof HTMLInputElement&&!["text","search","url","tel","password"].includes(T==null?void 0:T.type)?k=U(T.value,O):(k=p()(T),l("copy")),k},W=G;function _(j){return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?_=function(O){return typeof O}:_=function(O){return O&&typeof Symbol=="function"&&O.constructor===Symbol&&O!==Symbol.prototype?"symbol":typeof O},_(j)}var We=function(){var T=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{},O=T.action,k=O===void 0?"copy":O,$=T.container,q=T.target,Te=T.text;if(k!=="copy"&&k!=="cut")throw new Error('Invalid "action" value, use either "copy" or "cut"');if(q!==void 0)if(q&&_(q)==="object"&&q.nodeType===1){if(k==="copy"&&q.hasAttribute("disabled"))throw new Error('Invalid "target" attribute. Please use "readonly" instead of "disabled" attribute');if(k==="cut"&&(q.hasAttribute("readonly")||q.hasAttribute("disabled")))throw new Error(`Invalid "target" attribute. 
You can't cut text from elements with "readonly" or "disabled" attributes`)}else throw new Error('Invalid "target" value, use a valid Element');if(Te)return W(Te,{container:$});if(q)return k==="cut"?h(q):W(q,{container:$})},Fe=We;function Pe(j){return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?Pe=function(O){return typeof O}:Pe=function(O){return O&&typeof Symbol=="function"&&O.constructor===Symbol&&O!==Symbol.prototype?"symbol":typeof O},Pe(j)}function Ji(j,T){if(!(j instanceof T))throw new TypeError("Cannot call a class as a function")}function sn(j,T){for(var O=0;O0&&arguments[0]!==void 0?arguments[0]:{};this.action=typeof $.action=="function"?$.action:this.defaultAction,this.target=typeof $.target=="function"?$.target:this.defaultTarget,this.text=typeof $.text=="function"?$.text:this.defaultText,this.container=Pe($.container)==="object"?$.container:document.body}},{key:"listenClick",value:function($){var q=this;this.listener=f()($,"click",function(Te){return q.onClick(Te)})}},{key:"onClick",value:function($){var q=$.delegateTarget||$.currentTarget,Te=this.action(q)||"copy",Rt=Fe({action:Te,container:this.container,target:this.target(q),text:this.text(q)});this.emit(Rt?"success":"error",{action:Te,text:Rt,trigger:q,clearSelection:function(){q&&q.focus(),window.getSelection().removeAllRanges()}})}},{key:"defaultAction",value:function($){return xr("action",$)}},{key:"defaultTarget",value:function($){var q=xr("target",$);if(q)return document.querySelector(q)}},{key:"defaultText",value:function($){return xr("text",$)}},{key:"destroy",value:function(){this.listener.destroy()}}],[{key:"copy",value:function($){var q=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body};return W($,q)}},{key:"cut",value:function($){return h($)}},{key:"isSupported",value:function(){var $=arguments.length>0&&arguments[0]!==void 0?arguments[0]:["copy","cut"],q=typeof $=="string"?[$]:$,Te=!!document.queryCommandSupported;return q.forEach(function(Rt){Te=Te&&!!document.queryCommandSupported(Rt)}),Te}}]),O}(a()),ia=oa},828:function(n){var o=9;if(typeof Element!="undefined"&&!Element.prototype.matches){var i=Element.prototype;i.matches=i.matchesSelector||i.mozMatchesSelector||i.msMatchesSelector||i.oMatchesSelector||i.webkitMatchesSelector}function s(a,c){for(;a&&a.nodeType!==o;){if(typeof a.matches=="function"&&a.matches(c))return a;a=a.parentNode}}n.exports=s},438:function(n,o,i){var s=i(828);function a(u,p,l,d,h){var b=f.apply(this,arguments);return u.addEventListener(l,b,h),{destroy:function(){u.removeEventListener(l,b,h)}}}function c(u,p,l,d,h){return typeof u.addEventListener=="function"?a.apply(null,arguments):typeof l=="function"?a.bind(null,document).apply(null,arguments):(typeof u=="string"&&(u=document.querySelectorAll(u)),Array.prototype.map.call(u,function(b){return a(b,p,l,d,h)}))}function f(u,p,l,d){return function(h){h.delegateTarget=s(h.target,p),h.delegateTarget&&d.call(u,h)}}n.exports=c},879:function(n,o){o.node=function(i){return i!==void 0&&i instanceof HTMLElement&&i.nodeType===1},o.nodeList=function(i){var s=Object.prototype.toString.call(i);return i!==void 0&&(s==="[object NodeList]"||s==="[object HTMLCollection]")&&"length"in i&&(i.length===0||o.node(i[0]))},o.string=function(i){return typeof i=="string"||i instanceof String},o.fn=function(i){var s=Object.prototype.toString.call(i);return s==="[object Function]"}},370:function(n,o,i){var s=i(879),a=i(438);function c(l,d,h){if(!l&&!d&&!h)throw new Error("Missing required 
arguments");if(!s.string(d))throw new TypeError("Second argument must be a String");if(!s.fn(h))throw new TypeError("Third argument must be a Function");if(s.node(l))return f(l,d,h);if(s.nodeList(l))return u(l,d,h);if(s.string(l))return p(l,d,h);throw new TypeError("First argument must be a String, HTMLElement, HTMLCollection, or NodeList")}function f(l,d,h){return l.addEventListener(d,h),{destroy:function(){l.removeEventListener(d,h)}}}function u(l,d,h){return Array.prototype.forEach.call(l,function(b){b.addEventListener(d,h)}),{destroy:function(){Array.prototype.forEach.call(l,function(b){b.removeEventListener(d,h)})}}}function p(l,d,h){return a(document.body,l,d,h)}n.exports=c},817:function(n){function o(i){var s;if(i.nodeName==="SELECT")i.focus(),s=i.value;else if(i.nodeName==="INPUT"||i.nodeName==="TEXTAREA"){var a=i.hasAttribute("readonly");a||i.setAttribute("readonly",""),i.select(),i.setSelectionRange(0,i.value.length),a||i.removeAttribute("readonly"),s=i.value}else{i.hasAttribute("contenteditable")&&i.focus();var c=window.getSelection(),f=document.createRange();f.selectNodeContents(i),c.removeAllRanges(),c.addRange(f),s=c.toString()}return s}n.exports=o},279:function(n){function o(){}o.prototype={on:function(i,s,a){var c=this.e||(this.e={});return(c[i]||(c[i]=[])).push({fn:s,ctx:a}),this},once:function(i,s,a){var c=this;function f(){c.off(i,f),s.apply(a,arguments)}return f._=s,this.on(i,f,a)},emit:function(i){var s=[].slice.call(arguments,1),a=((this.e||(this.e={}))[i]||[]).slice(),c=0,f=a.length;for(c;c{"use strict";/*! + * escape-html + * Copyright(c) 2012-2013 TJ Holowaychuk + * Copyright(c) 2015 Andreas Lubbe + * Copyright(c) 2015 Tiancheng "Timothy" Gu + * MIT Licensed + */var Ms=/["'&<>]/;Si.exports=Ls;function Ls(e){var t=""+e,r=Ms.exec(t);if(!r)return t;var n,o="",i=0,s=0;for(i=r.index;i0},enumerable:!1,configurable:!0}),t.prototype._trySubscribe=function(r){return this._throwIfClosed(),e.prototype._trySubscribe.call(this,r)},t.prototype._subscribe=function(r){return this._throwIfClosed(),this._checkFinalizedStatuses(r),this._innerSubscribe(r)},t.prototype._innerSubscribe=function(r){var n=this,o=this,i=o.hasError,s=o.isStopped,a=o.observers;return i||s?Tr:(this.currentObservers=null,a.push(r),new $e(function(){n.currentObservers=null,Ue(a,r)}))},t.prototype._checkFinalizedStatuses=function(r){var n=this,o=n.hasError,i=n.thrownError,s=n.isStopped;o?r.error(i):s&&r.complete()},t.prototype.asObservable=function(){var r=new F;return r.source=this,r},t.create=function(r,n){return new Qn(r,n)},t}(F);var Qn=function(e){ne(t,e);function t(r,n){var o=e.call(this)||this;return o.destination=r,o.source=n,o}return t.prototype.next=function(r){var n,o;(o=(n=this.destination)===null||n===void 0?void 0:n.next)===null||o===void 0||o.call(n,r)},t.prototype.error=function(r){var n,o;(o=(n=this.destination)===null||n===void 0?void 0:n.error)===null||o===void 0||o.call(n,r)},t.prototype.complete=function(){var r,n;(n=(r=this.destination)===null||r===void 0?void 0:r.complete)===null||n===void 0||n.call(r)},t.prototype._subscribe=function(r){var n,o;return(o=(n=this.source)===null||n===void 0?void 0:n.subscribe(r))!==null&&o!==void 0?o:Tr},t}(E);var wt={now:function(){return(wt.delegate||Date).now()},delegate:void 0};var Et=function(e){ne(t,e);function t(r,n,o){r===void 0&&(r=1/0),n===void 0&&(n=1/0),o===void 0&&(o=wt);var i=e.call(this)||this;return 
i._bufferSize=r,i._windowTime=n,i._timestampProvider=o,i._buffer=[],i._infiniteTimeWindow=!0,i._infiniteTimeWindow=n===1/0,i._bufferSize=Math.max(1,r),i._windowTime=Math.max(1,n),i}return t.prototype.next=function(r){var n=this,o=n.isStopped,i=n._buffer,s=n._infiniteTimeWindow,a=n._timestampProvider,c=n._windowTime;o||(i.push(r),!s&&i.push(a.now()+c)),this._trimBuffer(),e.prototype.next.call(this,r)},t.prototype._subscribe=function(r){this._throwIfClosed(),this._trimBuffer();for(var n=this._innerSubscribe(r),o=this,i=o._infiniteTimeWindow,s=o._buffer,a=s.slice(),c=0;c0?e.prototype.requestAsyncId.call(this,r,n,o):(r.actions.push(this),r._scheduled||(r._scheduled=at.requestAnimationFrame(function(){return r.flush(void 0)})))},t.prototype.recycleAsyncId=function(r,n,o){var i;if(o===void 0&&(o=0),o!=null?o>0:this.delay>0)return e.prototype.recycleAsyncId.call(this,r,n,o);var s=r.actions;n!=null&&((i=s[s.length-1])===null||i===void 0?void 0:i.id)!==n&&(at.cancelAnimationFrame(n),r._scheduled=void 0)},t}(zt);var Gn=function(e){ne(t,e);function t(){return e!==null&&e.apply(this,arguments)||this}return t.prototype.flush=function(r){this._active=!0;var n=this._scheduled;this._scheduled=void 0;var o=this.actions,i;r=r||o.shift();do if(i=r.execute(r.state,r.delay))break;while((r=o[0])&&r.id===n&&o.shift());if(this._active=!1,i){for(;(r=o[0])&&r.id===n&&o.shift();)r.unsubscribe();throw i}},t}(Nt);var xe=new Gn(Bn);var R=new F(function(e){return e.complete()});function qt(e){return e&&L(e.schedule)}function Hr(e){return e[e.length-1]}function Ve(e){return L(Hr(e))?e.pop():void 0}function Ee(e){return qt(Hr(e))?e.pop():void 0}function Kt(e,t){return typeof Hr(e)=="number"?e.pop():t}var st=function(e){return e&&typeof e.length=="number"&&typeof e!="function"};function Qt(e){return L(e==null?void 0:e.then)}function Yt(e){return L(e[it])}function Bt(e){return Symbol.asyncIterator&&L(e==null?void 0:e[Symbol.asyncIterator])}function Gt(e){return new TypeError("You provided "+(e!==null&&typeof e=="object"?"an invalid object":"'"+e+"'")+" where a stream was expected. 
You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.")}function ya(){return typeof Symbol!="function"||!Symbol.iterator?"@@iterator":Symbol.iterator}var Jt=ya();function Xt(e){return L(e==null?void 0:e[Jt])}function Zt(e){return jn(this,arguments,function(){var r,n,o,i;return It(this,function(s){switch(s.label){case 0:r=e.getReader(),s.label=1;case 1:s.trys.push([1,,9,10]),s.label=2;case 2:return[4,jt(r.read())];case 3:return n=s.sent(),o=n.value,i=n.done,i?[4,jt(void 0)]:[3,5];case 4:return[2,s.sent()];case 5:return[4,jt(o)];case 6:return[4,s.sent()];case 7:return s.sent(),[3,2];case 8:return[3,10];case 9:return r.releaseLock(),[7];case 10:return[2]}})})}function er(e){return L(e==null?void 0:e.getReader)}function z(e){if(e instanceof F)return e;if(e!=null){if(Yt(e))return xa(e);if(st(e))return wa(e);if(Qt(e))return Ea(e);if(Bt(e))return Jn(e);if(Xt(e))return Sa(e);if(er(e))return Oa(e)}throw Gt(e)}function xa(e){return new F(function(t){var r=e[it]();if(L(r.subscribe))return r.subscribe(t);throw new TypeError("Provided object does not correctly implement Symbol.observable")})}function wa(e){return new F(function(t){for(var r=0;r=2,!0))}function ie(e){e===void 0&&(e={});var t=e.connector,r=t===void 0?function(){return new E}:t,n=e.resetOnError,o=n===void 0?!0:n,i=e.resetOnComplete,s=i===void 0?!0:i,a=e.resetOnRefCountZero,c=a===void 0?!0:a;return function(f){var u,p,l,d=0,h=!1,b=!1,U=function(){p==null||p.unsubscribe(),p=void 0},G=function(){U(),u=l=void 0,h=b=!1},W=function(){var _=u;G(),_==null||_.unsubscribe()};return g(function(_,We){d++,!b&&!h&&U();var Fe=l=l!=null?l:r();We.add(function(){d--,d===0&&!b&&!h&&(p=Dr(W,c))}),Fe.subscribe(We),!u&&d>0&&(u=new Ge({next:function(Pe){return Fe.next(Pe)},error:function(Pe){b=!0,U(),p=Dr(G,o,Pe),Fe.error(Pe)},complete:function(){h=!0,U(),p=Dr(G,s),Fe.complete()}}),z(_).subscribe(u))})(f)}}function Dr(e,t){for(var r=[],n=2;ne.next(document)),e}function Q(e,t=document){return Array.from(t.querySelectorAll(e))}function K(e,t=document){let r=pe(e,t);if(typeof r=="undefined")throw new ReferenceError(`Missing element: expected "${e}" to be present`);return r}function pe(e,t=document){return t.querySelector(e)||void 0}function Ie(){return document.activeElement instanceof HTMLElement&&document.activeElement||void 0}function nr(e){return A(v(document.body,"focusin"),v(document.body,"focusout")).pipe(Re(1),m(()=>{let t=Ie();return typeof t!="undefined"?e.contains(t):!1}),N(e===Ie()),B())}function qe(e){return{x:e.offsetLeft,y:e.offsetTop}}function yo(e){return A(v(window,"load"),v(window,"resize")).pipe(Ae(0,xe),m(()=>qe(e)),N(qe(e)))}function or(e){return{x:e.scrollLeft,y:e.scrollTop}}function pt(e){return A(v(e,"scroll"),v(window,"resize")).pipe(Ae(0,xe),m(()=>or(e)),N(or(e)))}var wo=function(){if(typeof Map!="undefined")return Map;function e(t,r){var n=-1;return t.some(function(o,i){return o[0]===r?(n=i,!0):!1}),n}return function(){function t(){this.__entries__=[]}return Object.defineProperty(t.prototype,"size",{get:function(){return this.__entries__.length},enumerable:!0,configurable:!0}),t.prototype.get=function(r){var n=e(this.__entries__,r),o=this.__entries__[n];return o&&o[1]},t.prototype.set=function(r,n){var o=e(this.__entries__,r);~o?this.__entries__[o][1]=n:this.__entries__.push([r,n])},t.prototype.delete=function(r){var 
n=this.__entries__,o=e(n,r);~o&&n.splice(o,1)},t.prototype.has=function(r){return!!~e(this.__entries__,r)},t.prototype.clear=function(){this.__entries__.splice(0)},t.prototype.forEach=function(r,n){n===void 0&&(n=null);for(var o=0,i=this.__entries__;o0},e.prototype.connect_=function(){!qr||this.connected_||(document.addEventListener("transitionend",this.onTransitionEnd_),window.addEventListener("resize",this.refresh),Ka?(this.mutationsObserver_=new MutationObserver(this.refresh),this.mutationsObserver_.observe(document,{attributes:!0,childList:!0,characterData:!0,subtree:!0})):(document.addEventListener("DOMSubtreeModified",this.refresh),this.mutationEventsAdded_=!0),this.connected_=!0)},e.prototype.disconnect_=function(){!qr||!this.connected_||(document.removeEventListener("transitionend",this.onTransitionEnd_),window.removeEventListener("resize",this.refresh),this.mutationsObserver_&&this.mutationsObserver_.disconnect(),this.mutationEventsAdded_&&document.removeEventListener("DOMSubtreeModified",this.refresh),this.mutationsObserver_=null,this.mutationEventsAdded_=!1,this.connected_=!1)},e.prototype.onTransitionEnd_=function(t){var r=t.propertyName,n=r===void 0?"":r,o=qa.some(function(i){return!!~n.indexOf(i)});o&&this.refresh()},e.getInstance=function(){return this.instance_||(this.instance_=new e),this.instance_},e.instance_=null,e}(),Eo=function(e,t){for(var r=0,n=Object.keys(t);r0},e}(),Oo=typeof WeakMap!="undefined"?new WeakMap:new wo,_o=function(){function e(t){if(!(this instanceof e))throw new TypeError("Cannot call a class as a function.");if(!arguments.length)throw new TypeError("1 argument required, but only 0 present.");var r=Qa.getInstance(),n=new ns(t,r,this);Oo.set(this,n)}return e}();["observe","unobserve","disconnect"].forEach(function(e){_o.prototype[e]=function(){var t;return(t=Oo.get(this))[e].apply(t,arguments)}});var os=function(){return typeof ir.ResizeObserver!="undefined"?ir.ResizeObserver:_o}(),To=os;var Mo=new E,is=P(()=>I(new To(e=>{for(let t of e)Mo.next(t)}))).pipe(S(e=>A(Se,I(e)).pipe(C(()=>e.disconnect()))),X(1));function he(e){return{width:e.offsetWidth,height:e.offsetHeight}}function ve(e){return is.pipe(w(t=>t.observe(e)),S(t=>Mo.pipe(x(({target:r})=>r===e),C(()=>t.unobserve(e)),m(()=>he(e)))),N(he(e)))}function mt(e){return{width:e.scrollWidth,height:e.scrollHeight}}function cr(e){let t=e.parentElement;for(;t&&(e.scrollWidth<=t.scrollWidth&&e.scrollHeight<=t.scrollHeight);)t=(e=t).parentElement;return t?e:void 0}var Lo=new E,as=P(()=>I(new IntersectionObserver(e=>{for(let t of e)Lo.next(t)},{threshold:0}))).pipe(S(e=>A(Se,I(e)).pipe(C(()=>e.disconnect()))),X(1));function fr(e){return as.pipe(w(t=>t.observe(e)),S(t=>Lo.pipe(x(({target:r})=>r===e),C(()=>t.unobserve(e)),m(({isIntersecting:r})=>r))))}function Ao(e,t=16){return pt(e).pipe(m(({y:r})=>{let n=he(e),o=mt(e);return r>=o.height-n.height-t}),B())}var ur={drawer:K("[data-md-toggle=drawer]"),search:K("[data-md-toggle=search]")};function Co(e){return ur[e].checked}function Ke(e,t){ur[e].checked!==t&&ur[e].click()}function dt(e){let t=ur[e];return v(t,"change").pipe(m(()=>t.checked),N(t.checked))}function ss(e,t){switch(e.constructor){case HTMLInputElement:return e.type==="radio"?/^Arrow/.test(t):!0;case HTMLSelectElement:case HTMLTextAreaElement:return!0;default:return e.isContentEditable}}function Ro(){return 
v(window,"keydown").pipe(x(e=>!(e.metaKey||e.ctrlKey)),m(e=>({mode:Co("search")?"search":"global",type:e.key,claim(){e.preventDefault(),e.stopPropagation()}})),x(({mode:e,type:t})=>{if(e==="global"){let r=Ie();if(typeof r!="undefined")return!ss(r,t)}return!0}),ie())}function Oe(){return new URL(location.href)}function pr(e){location.href=e.href}function ko(){return new E}function Ho(e,t){if(typeof t=="string"||typeof t=="number")e.innerHTML+=t.toString();else if(t instanceof Node)e.appendChild(t);else if(Array.isArray(t))for(let r of t)Ho(e,r)}function M(e,t,...r){let n=document.createElement(e);if(t)for(let o of Object.keys(t))typeof t[o]!="undefined"&&(typeof t[o]!="boolean"?n.setAttribute(o,t[o]):n.setAttribute(o,""));for(let o of r)Ho(n,o);return n}function Po(e,t){let r=t;if(e.length>r){for(;e[r]!==" "&&--r>0;);return`${e.substring(0,r)}...`}return e}function lr(e){if(e>999){let t=+((e-950)%1e3>99);return`${((e+1e-6)/1e3).toFixed(t)}k`}else return e.toString()}function $o(){return location.hash.substring(1)}function Io(e){let t=M("a",{href:e});t.addEventListener("click",r=>r.stopPropagation()),t.click()}function cs(){return v(window,"hashchange").pipe(m($o),N($o()),x(e=>e.length>0),X(1))}function jo(){return cs().pipe(m(e=>pe(`[id="${e}"]`)),x(e=>typeof e!="undefined"))}function Kr(e){let t=matchMedia(e);return rr(r=>t.addListener(()=>r(t.matches))).pipe(N(t.matches))}function Fo(){let e=matchMedia("print");return A(v(window,"beforeprint").pipe(m(()=>!0)),v(window,"afterprint").pipe(m(()=>!1))).pipe(N(e.matches))}function Qr(e,t){return e.pipe(S(r=>r?t():R))}function mr(e,t={credentials:"same-origin"}){return ue(fetch(`${e}`,t)).pipe(ce(()=>R),S(r=>r.status!==200?Ot(()=>new Error(r.statusText)):I(r)))}function je(e,t){return mr(e,t).pipe(S(r=>r.json()),X(1))}function Uo(e,t){let r=new DOMParser;return mr(e,t).pipe(S(n=>n.text()),m(n=>r.parseFromString(n,"text/xml")),X(1))}function Do(e){let t=M("script",{src:e});return P(()=>(document.head.appendChild(t),A(v(t,"load"),v(t,"error").pipe(S(()=>Ot(()=>new ReferenceError(`Invalid script: ${e}`))))).pipe(m(()=>{}),C(()=>document.head.removeChild(t)),oe(1))))}function Wo(){return{x:Math.max(0,scrollX),y:Math.max(0,scrollY)}}function Vo(){return A(v(window,"scroll",{passive:!0}),v(window,"resize",{passive:!0})).pipe(m(Wo),N(Wo()))}function zo(){return{width:innerWidth,height:innerHeight}}function No(){return v(window,"resize",{passive:!0}).pipe(m(zo),N(zo()))}function qo(){return Y([Vo(),No()]).pipe(m(([e,t])=>({offset:e,size:t})),X(1))}function dr(e,{viewport$:t,header$:r}){let n=t.pipe(J("size")),o=Y([n,r]).pipe(m(()=>qe(e)));return Y([r,t,o]).pipe(m(([{height:i},{offset:s,size:a},{x:c,y:f}])=>({offset:{x:s.x-c,y:s.y-f+i},size:a})))}function Ko(e,{tx$:t}){let r=v(e,"message").pipe(m(({data:n})=>n));return t.pipe(Lt(()=>r,{leading:!0,trailing:!0}),w(n=>e.postMessage(n)),S(()=>r),ie())}var fs=K("#__config"),ht=JSON.parse(fs.textContent);ht.base=`${new URL(ht.base,Oe())}`;function le(){return ht}function Z(e){return ht.features.includes(e)}function re(e,t){return typeof t!="undefined"?ht.translations[e].replace("#",t.toString()):ht.translations[e]}function _e(e,t=document){return K(`[data-md-component=${e}]`,t)}function te(e,t=document){return Q(`[data-md-component=${e}]`,t)}function us(e){let t=K(".md-typeset > :first-child",e);return v(t,"click",{once:!0}).pipe(m(()=>K(".md-typeset",e)),m(r=>({hash:__md_hash(r.innerHTML)})))}function Qo(e){return!Z("announce.dismiss")||!e.childElementCount?R:P(()=>{let t=new E;return 
t.pipe(N({hash:__md_get("__announce")})).subscribe(({hash:r})=>{var n;r&&r===((n=__md_get("__announce"))!=null?n:r)&&(e.hidden=!0,__md_set("__announce",r))}),us(e).pipe(w(r=>t.next(r)),C(()=>t.complete()),m(r=>H({ref:e},r)))})}function ps(e,{target$:t}){return t.pipe(m(r=>({hidden:r!==e})))}function Yo(e,t){let r=new E;return r.subscribe(({hidden:n})=>{e.hidden=n}),ps(e,t).pipe(w(n=>r.next(n)),C(()=>r.complete()),m(n=>H({ref:e},n)))}var ii=Ye(Br());function Gr(e){return M("div",{class:"md-tooltip",id:e},M("div",{class:"md-tooltip__inner md-typeset"}))}function Bo(e,t){if(t=t?`${t}_annotation_${e}`:void 0,t){let r=t?`#${t}`:void 0;return M("aside",{class:"md-annotation",tabIndex:0},Gr(t),M("a",{href:r,class:"md-annotation__index",tabIndex:-1},M("span",{"data-md-annotation-id":e})))}else return M("aside",{class:"md-annotation",tabIndex:0},Gr(t),M("span",{class:"md-annotation__index",tabIndex:-1},M("span",{"data-md-annotation-id":e})))}function Go(e){return M("button",{class:"md-clipboard md-icon",title:re("clipboard.copy"),"data-clipboard-target":`#${e} > code`})}function Jr(e,t){let r=t&2,n=t&1,o=Object.keys(e.terms).filter(a=>!e.terms[a]).reduce((a,c)=>[...a,M("del",null,c)," "],[]).slice(0,-1),i=new URL(e.location);Z("search.highlight")&&i.searchParams.set("h",Object.entries(e.terms).filter(([,a])=>a).reduce((a,[c])=>`${a} ${c}`.trim(),""));let{tags:s}=le();return M("a",{href:`${i}`,class:"md-search-result__link",tabIndex:-1},M("article",{class:["md-search-result__article",...r?["md-search-result__article--document"]:[]].join(" "),"data-md-score":e.score.toFixed(2)},r>0&&M("div",{class:"md-search-result__icon md-icon"}),M("h1",{class:"md-search-result__title"},e.title),n>0&&e.text.length>0&&M("p",{class:"md-search-result__teaser"},Po(e.text,320)),e.tags&&M("div",{class:"md-typeset"},e.tags.map(a=>{let c=a.replace(/<[^>]+>/g,""),f=s?c in s?`md-tag-icon md-tag-icon--${s[c]}`:"md-tag-icon":"";return M("span",{class:`md-tag ${f}`},a)})),n>0&&o.length>0&&M("p",{class:"md-search-result__terms"},re("search.result.term.missing"),": ",...o)))}function Jo(e){let t=e[0].score,r=[...e],n=r.findIndex(f=>!f.location.includes("#")),[o]=r.splice(n,1),i=r.findIndex(f=>f.scoreJr(f,1)),...a.length?[M("details",{class:"md-search-result__more"},M("summary",{tabIndex:-1},a.length>0&&a.length===1?re("search.result.more.one"):re("search.result.more.other",a.length)),...a.map(f=>Jr(f,1)))]:[]];return M("li",{class:"md-search-result__item"},c)}function Xo(e){return M("ul",{class:"md-source__facts"},Object.entries(e).map(([t,r])=>M("li",{class:`md-source__fact md-source__fact--${t}`},typeof r=="number"?lr(r):r)))}function Xr(e){let t=`tabbed-control tabbed-control--${e}`;return M("div",{class:t,hidden:!0},M("button",{class:"tabbed-button",tabIndex:-1}))}function Zo(e){return M("div",{class:"md-typeset__scrollwrap"},M("div",{class:"md-typeset__table"},e))}function ls(e){let t=le(),r=new URL(`../${e.version}/`,t.base);return M("li",{class:"md-version__item"},M("a",{href:`${r}`,class:"md-version__link"},e.title))}function ei(e,t){return M("div",{class:"md-version"},M("button",{class:"md-version__current","aria-label":re("select.version.title")},t.title),M("ul",{class:"md-version__list"},e.map(ls)))}function ms(e,t){let r=P(()=>Y([yo(e),pt(t)])).pipe(m(([{x:n,y:o},i])=>{let{width:s,height:a}=he(e);return{x:n-i.x+s/2,y:o-i.y+a/2}}));return nr(e).pipe(S(n=>r.pipe(m(o=>({active:n,offset:o})),oe(+!n||1/0))))}function ti(e,t,{target$:r}){let[n,o]=Array.from(e.children);return P(()=>{let i=new E,s=i.pipe(de(1));return 
i.subscribe({next({offset:a}){e.style.setProperty("--md-tooltip-x",`${a.x}px`),e.style.setProperty("--md-tooltip-y",`${a.y}px`)},complete(){e.style.removeProperty("--md-tooltip-x"),e.style.removeProperty("--md-tooltip-y")}}),fr(e).pipe(ee(s)).subscribe(a=>{e.toggleAttribute("data-md-visible",a)}),A(i.pipe(x(({active:a})=>a)),i.pipe(Re(250),x(({active:a})=>!a))).subscribe({next({active:a}){a?e.prepend(n):n.remove()},complete(){e.prepend(n)}}),i.pipe(Ae(16,xe)).subscribe(({active:a})=>{n.classList.toggle("md-tooltip--active",a)}),i.pipe(zr(125,xe),x(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:a})=>a)).subscribe({next(a){a?e.style.setProperty("--md-tooltip-0",`${-a}px`):e.style.removeProperty("--md-tooltip-0")},complete(){e.style.removeProperty("--md-tooltip-0")}}),v(o,"click").pipe(ee(s),x(a=>!(a.metaKey||a.ctrlKey))).subscribe(a=>a.preventDefault()),v(o,"mousedown").pipe(ee(s),ae(i)).subscribe(([a,{active:c}])=>{var f;if(a.button!==0||a.metaKey||a.ctrlKey)a.preventDefault();else if(c){a.preventDefault();let u=e.parentElement.closest(".md-annotation");u instanceof HTMLElement?u.focus():(f=Ie())==null||f.blur()}}),r.pipe(ee(s),x(a=>a===n),ke(125)).subscribe(()=>e.focus()),ms(e,t).pipe(w(a=>i.next(a)),C(()=>i.complete()),m(a=>H({ref:e},a)))})}function ds(e){let t=[];for(let r of Q(".c, .c1, .cm",e)){let n=[],o=document.createNodeIterator(r,NodeFilter.SHOW_TEXT);for(let i=o.nextNode();i;i=o.nextNode())n.push(i);for(let i of n){let s;for(;s=/(\(\d+\))(!)?/.exec(i.textContent);){let[,a,c]=s;if(typeof c=="undefined"){let f=i.splitText(s.index);i=f.splitText(a.length),t.push(f)}else{i.textContent=a,t.push(i);break}}}}return t}function ri(e,t){t.append(...Array.from(e.childNodes))}function ni(e,t,{target$:r,print$:n}){let o=t.closest("[id]"),i=o==null?void 0:o.id,s=new Map;for(let a of ds(t)){let[,c]=a.textContent.match(/\((\d+)\)/);pe(`li:nth-child(${c})`,e)&&(s.set(c,Bo(c,i)),a.replaceWith(s.get(c)))}return s.size===0?R:P(()=>{let a=new E,c=[];for(let[f,u]of s)c.push([K(".md-typeset",u),K(`li:nth-child(${f})`,e)]);return n.pipe(ee(a.pipe(de(1)))).subscribe(f=>{e.hidden=!f;for(let[u,p]of c)f?ri(u,p):ri(p,u)}),A(...[...s].map(([,f])=>ti(f,t,{target$:r}))).pipe(C(()=>a.complete()),ie())})}var hs=0;function ai(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return ai(t)}}function oi(e){return ve(e).pipe(m(({width:t})=>({scrollable:mt(e).width>t})),J("scrollable"))}function si(e,t){let{matches:r}=matchMedia("(hover)"),n=P(()=>{let o=new E;if(o.subscribe(({scrollable:s})=>{s&&r?e.setAttribute("tabindex","0"):e.removeAttribute("tabindex")}),ii.default.isSupported()){let s=e.closest("pre");s.id=`__code_${++hs}`,s.insertBefore(Go(s.id),e)}let i=e.closest(".highlight");if(i instanceof HTMLElement){let s=ai(i);if(typeof s!="undefined"&&(i.classList.contains("annotate")||Z("content.code.annotate"))){let a=ni(s,e,t);return oi(e).pipe(w(c=>o.next(c)),C(()=>o.complete()),m(c=>H({ref:e},c)),et(ve(i).pipe(m(({width:c,height:f})=>c&&f),B(),S(c=>c?a:R))))}}return oi(e).pipe(w(s=>o.next(s)),C(()=>o.complete()),m(s=>H({ref:e},s)))});return Z("content.lazy")?fr(e).pipe(x(o=>o),oe(1),S(()=>n)):n}var ci=".node circle,.node ellipse,.node path,.node polygon,.node rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}marker{fill:var(--md-mermaid-edge-color)!important}.edgeLabel .label rect{fill:#0000}.label{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.label 
foreignObject{line-height:normal;overflow:visible}.label div .edgeLabel{color:var(--md-mermaid-label-fg-color)}.edgeLabel,.edgeLabel rect,.label div .edgeLabel{background-color:var(--md-mermaid-label-bg-color)}.edgeLabel,.edgeLabel rect{fill:var(--md-mermaid-label-bg-color);color:var(--md-mermaid-edge-color)}.edgePath .path,.flowchart-link{stroke:var(--md-mermaid-edge-color)}.edgePath .arrowheadPath{fill:var(--md-mermaid-edge-color);stroke:none}.cluster rect{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}.cluster span{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}defs #flowchart-circleEnd,defs #flowchart-circleStart,defs #flowchart-crossEnd,defs #flowchart-crossStart,defs #flowchart-pointEnd,defs #flowchart-pointStart{stroke:none}g.classGroup line,g.classGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.classGroup text{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.classLabel .box{fill:var(--md-mermaid-label-bg-color);background-color:var(--md-mermaid-label-bg-color);opacity:1}.classLabel .label{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node .divider{stroke:var(--md-mermaid-node-fg-color)}.relation{stroke:var(--md-mermaid-edge-color)}.cardinality{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.cardinality text{fill:inherit!important}defs #classDiagram-compositionEnd,defs #classDiagram-compositionStart,defs #classDiagram-dependencyEnd,defs #classDiagram-dependencyStart,defs #classDiagram-extensionEnd,defs #classDiagram-extensionStart{fill:var(--md-mermaid-edge-color)!important;stroke:var(--md-mermaid-edge-color)!important}defs #classDiagram-aggregationEnd,defs #classDiagram-aggregationStart{fill:var(--md-mermaid-label-bg-color)!important;stroke:var(--md-mermaid-edge-color)!important}g.stateGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.stateGroup .state-title{fill:var(--md-mermaid-label-fg-color)!important;font-family:var(--md-mermaid-font-family)}g.stateGroup .composit{fill:var(--md-mermaid-label-bg-color)}.nodeLabel{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node circle.state-end,.node circle.state-start,.start-state{fill:var(--md-mermaid-edge-color);stroke:none}.end-state-inner,.end-state-outer{fill:var(--md-mermaid-edge-color)}.end-state-inner,.node circle.state-end{stroke:var(--md-mermaid-label-bg-color)}.transition{stroke:var(--md-mermaid-edge-color)}[id^=state-fork] rect,[id^=state-join] rect{fill:var(--md-mermaid-edge-color)!important;stroke:none!important}.statediagram-cluster.statediagram-cluster .inner{fill:var(--md-default-bg-color)}.statediagram-cluster rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.statediagram-state rect.divider{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}defs #statediagram-barbEnd{stroke:var(--md-mermaid-edge-color)}.entityBox{fill:var(--md-mermaid-label-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityLabel{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.relationshipLabelBox{fill:var(--md-mermaid-label-bg-color);fill-opacity:1;background-color:var(--md-mermaid-label-bg-color);opacity:1}.relationshipLabel{fill:var(--md-mermaid-label-fg-color)}.relationshipLine{stroke:var(--md-mermaid-edge-color)}defs #ONE_OR_MORE_END *,defs #ONE_OR_MORE_START *,defs 
#ONLY_ONE_END *,defs #ONLY_ONE_START *,defs #ZERO_OR_MORE_END *,defs #ZERO_OR_MORE_START *,defs #ZERO_OR_ONE_END *,defs #ZERO_OR_ONE_START *{stroke:var(--md-mermaid-edge-color)!important}.actor,defs #ZERO_OR_MORE_END circle,defs #ZERO_OR_MORE_START circle{fill:var(--md-mermaid-label-bg-color)}.actor{stroke:var(--md-mermaid-node-fg-color)}text.actor>tspan{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}line{stroke:var(--md-default-fg-color--lighter)}.messageLine0,.messageLine1{stroke:var(--md-mermaid-edge-color)}.loopText>tspan,.messageText,.noteText>tspan{fill:var(--md-mermaid-edge-color);stroke:none;font-family:var(--md-mermaid-font-family)!important}.noteText>tspan{fill:#000}#arrowhead path{fill:var(--md-mermaid-edge-color);stroke:none}.loopLine{stroke:var(--md-mermaid-node-fg-color)}.labelBox,.loopLine{fill:var(--md-mermaid-node-bg-color)}.labelBox{stroke:none}.labelText,.labelText>span{fill:var(--md-mermaid-node-fg-color);font-family:var(--md-mermaid-font-family)}";var Zr,vs=0;function gs(){return typeof mermaid=="undefined"||mermaid instanceof Element?Do("https://unpkg.com/mermaid@9.1.7/dist/mermaid.min.js"):I(void 0)}function fi(e){return e.classList.remove("mermaid"),Zr||(Zr=gs().pipe(w(()=>mermaid.initialize({startOnLoad:!1,themeCSS:ci,sequence:{actorFontSize:"16px",messageFontSize:"16px",noteFontSize:"16px"}})),m(()=>{}),X(1))),Zr.subscribe(()=>{e.classList.add("mermaid");let t=`__mermaid_${vs++}`,r=M("div",{class:"mermaid"});mermaid.mermaidAPI.render(t,e.textContent,n=>{let o=r.attachShadow({mode:"closed"});o.innerHTML=n,e.replaceWith(r)})}),Zr.pipe(m(()=>({ref:e})))}function ys(e,{target$:t,print$:r}){let n=!0;return A(t.pipe(m(o=>o.closest("details:not([open])")),x(o=>e===o),m(()=>({action:"open",reveal:!0}))),r.pipe(x(o=>o||!n),w(()=>n=e.open),m(o=>({action:o?"open":"close"}))))}function ui(e,t){return P(()=>{let r=new E;return r.subscribe(({action:n,reveal:o})=>{e.toggleAttribute("open",n==="open"),o&&e.scrollIntoView()}),ys(e,t).pipe(w(n=>r.next(n)),C(()=>r.complete()),m(n=>H({ref:e},n)))})}var pi=M("table");function li(e){return e.replaceWith(pi),pi.replaceWith(Zo(e)),I({ref:e})}function xs(e){let t=Q(":scope > input",e),r=t.find(n=>n.checked)||t[0];return A(...t.map(n=>v(n,"change").pipe(m(()=>K(`label[for="${n.id}"]`))))).pipe(N(K(`label[for="${r.id}"]`)),m(n=>({active:n})))}function mi(e,{viewport$:t}){let r=Xr("prev");e.append(r);let n=Xr("next");e.append(n);let o=K(".tabbed-labels",e);return P(()=>{let i=new E,s=i.pipe(de(1));return Y([i,ve(e)]).pipe(Ae(1,xe),ee(s)).subscribe({next([{active:a},c]){let f=qe(a),{width:u}=he(a);e.style.setProperty("--md-indicator-x",`${f.x}px`),e.style.setProperty("--md-indicator-width",`${u}px`);let p=or(o);(f.xp.x+c.width)&&o.scrollTo({left:Math.max(0,f.x-16),behavior:"smooth"})},complete(){e.style.removeProperty("--md-indicator-x"),e.style.removeProperty("--md-indicator-width")}}),Y([pt(o),ve(o)]).pipe(ee(s)).subscribe(([a,c])=>{let f=mt(o);r.hidden=a.x<16,n.hidden=a.x>f.width-c.width-16}),A(v(r,"click").pipe(m(()=>-1)),v(n,"click").pipe(m(()=>1))).pipe(ee(s)).subscribe(a=>{let{width:c}=he(o);o.scrollBy({left:c*a,behavior:"smooth"})}),Z("content.tabs.link")&&i.pipe(He(1),ae(t)).subscribe(([{active:a},{offset:c}])=>{let f=a.innerText.trim();if(a.hasAttribute("data-md-switching"))a.removeAttribute("data-md-switching");else{let u=e.offsetTop-c.y;for(let l of Q("[data-tabs]"))for(let d of Q(":scope > input",l)){let 
h=K(`label[for="${d.id}"]`);if(h!==a&&h.innerText.trim()===f){h.setAttribute("data-md-switching",""),d.click();break}}window.scrollTo({top:e.offsetTop-u});let p=__md_get("__tabs")||[];__md_set("__tabs",[...new Set([f,...p])])}}),xs(e).pipe(w(a=>i.next(a)),C(()=>i.complete()),m(a=>H({ref:e},a)))}).pipe(Je(fe))}function di(e,{viewport$:t,target$:r,print$:n}){return A(...Q("pre:not(.mermaid) > code",e).map(o=>si(o,{target$:r,print$:n})),...Q("pre.mermaid",e).map(o=>fi(o)),...Q("table:not([class])",e).map(o=>li(o)),...Q("details",e).map(o=>ui(o,{target$:r,print$:n})),...Q("[data-tabs]",e).map(o=>mi(o,{viewport$:t})))}function ws(e,{alert$:t}){return t.pipe(S(r=>A(I(!0),I(!1).pipe(ke(2e3))).pipe(m(n=>({message:r,active:n})))))}function hi(e,t){let r=K(".md-typeset",e);return P(()=>{let n=new E;return n.subscribe(({message:o,active:i})=>{e.classList.toggle("md-dialog--active",i),r.textContent=o}),ws(e,t).pipe(w(o=>n.next(o)),C(()=>n.complete()),m(o=>H({ref:e},o)))})}function Es({viewport$:e}){if(!Z("header.autohide"))return I(!1);let t=e.pipe(m(({offset:{y:o}})=>o),Ce(2,1),m(([o,i])=>[oMath.abs(i-o.y)>100),m(([,[o]])=>o),B()),n=dt("search");return Y([e,n]).pipe(m(([{offset:o},i])=>o.y>400&&!i),B(),S(o=>o?r:I(!1)),N(!1))}function bi(e,t){return P(()=>Y([ve(e),Es(t)])).pipe(m(([{height:r},n])=>({height:r,hidden:n})),B((r,n)=>r.height===n.height&&r.hidden===n.hidden),X(1))}function vi(e,{header$:t,main$:r}){return P(()=>{let n=new E,o=n.pipe(de(1));return n.pipe(J("active"),Ze(t)).subscribe(([{active:i},{hidden:s}])=>{e.classList.toggle("md-header--shadow",i&&!s),e.hidden=s}),r.subscribe(n),t.pipe(ee(o),m(i=>H({ref:e},i)))})}function Ss(e,{viewport$:t,header$:r}){return dr(e,{viewport$:t,header$:r}).pipe(m(({offset:{y:n}})=>{let{height:o}=he(e);return{active:n>=o}}),J("active"))}function gi(e,t){return P(()=>{let r=new E;r.subscribe(({active:o})=>{e.classList.toggle("md-header__title--active",o)});let n=pe("article h1");return typeof n=="undefined"?R:Ss(n,t).pipe(w(o=>r.next(o)),C(()=>r.complete()),m(o=>H({ref:e},o)))})}function yi(e,{viewport$:t,header$:r}){let n=r.pipe(m(({height:i})=>i),B()),o=n.pipe(S(()=>ve(e).pipe(m(({height:i})=>({top:e.offsetTop,bottom:e.offsetTop+i})),J("bottom"))));return Y([n,o,t]).pipe(m(([i,{top:s,bottom:a},{offset:{y:c},size:{height:f}}])=>(f=Math.max(0,f-Math.max(0,s-c,i)-Math.max(0,f+c-a)),{offset:s-i,height:f,active:s-i<=c})),B((i,s)=>i.offset===s.offset&&i.height===s.height&&i.active===s.active))}function Os(e){let t=__md_get("__palette")||{index:e.findIndex(r=>matchMedia(r.getAttribute("data-md-color-media")).matches)};return I(...e).pipe(se(r=>v(r,"change").pipe(m(()=>r))),N(e[Math.max(0,t.index)]),m(r=>({index:e.indexOf(r),color:{scheme:r.getAttribute("data-md-color-scheme"),primary:r.getAttribute("data-md-color-primary"),accent:r.getAttribute("data-md-color-accent")}})),X(1))}function xi(e){return P(()=>{let t=new E;t.subscribe(n=>{document.body.setAttribute("data-md-color-switching","");for(let[o,i]of Object.entries(n.color))document.body.setAttribute(`data-md-color-${o}`,i);for(let o=0;o{document.body.removeAttribute("data-md-color-switching")});let r=Q("input",e);return Os(r).pipe(w(n=>t.next(n)),C(()=>t.complete()),m(n=>H({ref:e},n)))})}var en=Ye(Br());function _s(e){e.setAttribute("data-md-copying","");let t=e.innerText;return e.removeAttribute("data-md-copying"),t}function wi({alert$:e}){en.default.isSupported()&&new F(t=>{new en.default("[data-clipboard-target], 
[data-clipboard-text]",{text:r=>r.getAttribute("data-clipboard-text")||_s(K(r.getAttribute("data-clipboard-target")))}).on("success",r=>t.next(r))}).pipe(w(t=>{t.trigger.focus()}),m(()=>re("clipboard.copied"))).subscribe(e)}function Ts(e){if(e.length<2)return[""];let[t,r]=[...e].sort((o,i)=>o.length-i.length).map(o=>o.replace(/[^/]+$/,"")),n=0;if(t===r)n=t.length;else for(;t.charCodeAt(n)===r.charCodeAt(n);)n++;return e.map(o=>o.replace(t.slice(0,n),""))}function hr(e){let t=__md_get("__sitemap",sessionStorage,e);if(t)return I(t);{let r=le();return Uo(new URL("sitemap.xml",e||r.base)).pipe(m(n=>Ts(Q("loc",n).map(o=>o.textContent))),ce(()=>R),De([]),w(n=>__md_set("__sitemap",n,sessionStorage,e)))}}function Ei({document$:e,location$:t,viewport$:r}){let n=le();if(location.protocol==="file:")return;"scrollRestoration"in history&&(history.scrollRestoration="manual",v(window,"beforeunload").subscribe(()=>{history.scrollRestoration="auto"}));let o=pe("link[rel=icon]");typeof o!="undefined"&&(o.href=o.href);let i=hr().pipe(m(f=>f.map(u=>`${new URL(u,n.base)}`)),S(f=>v(document.body,"click").pipe(x(u=>!u.metaKey&&!u.ctrlKey),S(u=>{if(u.target instanceof Element){let p=u.target.closest("a");if(p&&!p.target){let l=new URL(p.href);if(l.search="",l.hash="",l.pathname!==location.pathname&&f.includes(l.toString()))return u.preventDefault(),I({url:new URL(p.href)})}}return Se}))),ie()),s=v(window,"popstate").pipe(x(f=>f.state!==null),m(f=>({url:new URL(location.href),offset:f.state})),ie());A(i,s).pipe(B((f,u)=>f.url.href===u.url.href),m(({url:f})=>f)).subscribe(t);let a=t.pipe(J("pathname"),S(f=>mr(f.href).pipe(ce(()=>(pr(f),Se)))),ie());i.pipe(ut(a)).subscribe(({url:f})=>{history.pushState({},"",`${f}`)});let c=new DOMParser;a.pipe(S(f=>f.text()),m(f=>c.parseFromString(f,"text/html"))).subscribe(e),e.pipe(He(1)).subscribe(f=>{for(let u of["title","link[rel=canonical]","meta[name=author]","meta[name=description]","[data-md-component=announce]","[data-md-component=container]","[data-md-component=header-topic]","[data-md-component=outdated]","[data-md-component=logo]","[data-md-component=skip]",...Z("navigation.tabs.sticky")?["[data-md-component=tabs]"]:[]]){let p=pe(u),l=pe(u,f);typeof p!="undefined"&&typeof l!="undefined"&&p.replaceWith(l)}}),e.pipe(He(1),m(()=>_e("container")),S(f=>Q("script",f)),Ir(f=>{let u=M("script");if(f.src){for(let p of f.getAttributeNames())u.setAttribute(p,f.getAttribute(p));return f.replaceWith(u),new F(p=>{u.onload=()=>p.complete()})}else return u.textContent=f.textContent,f.replaceWith(u),R})).subscribe(),A(i,s).pipe(ut(e)).subscribe(({url:f,offset:u})=>{f.hash&&!u?Io(f.hash):window.scrollTo(0,(u==null?void 0:u.y)||0)}),r.pipe(Mt(i),Re(250),J("offset")).subscribe(({offset:f})=>{history.replaceState(f,"")}),A(i,s).pipe(Ce(2,1),x(([f,u])=>f.url.pathname===u.url.pathname),m(([,f])=>f)).subscribe(({offset:f})=>{window.scrollTo(0,(f==null?void 0:f.y)||0)})}var As=Ye(tn());var Oi=Ye(tn());function rn(e,t){let r=new RegExp(e.separator,"img"),n=(o,i,s)=>`${i}${s}`;return o=>{o=o.replace(/[\s*+\-:~^]+/g," ").trim();let i=new RegExp(`(^|${e.separator})(${o.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return s=>(t?(0,Oi.default)(s):s).replace(i,n).replace(/<\/mark>(\s+)]*>/img,"$1")}}function _i(e){return e.split(/"([^"]+)"/g).map((t,r)=>r&1?t.replace(/^\b|^(?![^\x00-\x7F]|$)|\s+/g," +"):t).join("").replace(/"|(?:^|\s+)[*+\-:^~]+(?=\s+|$)/g,"").trim()}function bt(e){return e.type===1}function Ti(e){return e.type===2}function vt(e){return e.type===3}function 
Rs({config:e,docs:t}){e.lang.length===1&&e.lang[0]==="en"&&(e.lang=[re("search.config.lang")]),e.separator==="[\\s\\-]+"&&(e.separator=re("search.config.separator"));let n={pipeline:re("search.config.pipeline").split(/\s*,\s*/).filter(Boolean),suggestions:Z("search.suggest")};return{config:e,docs:t,options:n}}function Mi(e,t){let r=le(),n=new Worker(e),o=new E,i=Ko(n,{tx$:o}).pipe(m(s=>{if(vt(s))for(let a of s.data.items)for(let c of a)c.location=`${new URL(c.location,r.base)}`;return s}),ie());return ue(t).pipe(m(s=>({type:0,data:Rs(s)}))).subscribe(o.next.bind(o)),{tx$:o,rx$:i}}function Li({document$:e}){let t=le(),r=je(new URL("../versions.json",t.base)).pipe(ce(()=>R)),n=r.pipe(m(o=>{let[,i]=t.base.match(/([^/]+)\/?$/);return o.find(({version:s,aliases:a})=>s===i||a.includes(i))||o[0]}));r.pipe(m(o=>new Map(o.map(i=>[`${new URL(`../${i.version}/`,t.base)}`,i]))),S(o=>v(document.body,"click").pipe(x(i=>!i.metaKey&&!i.ctrlKey),ae(n),S(([i,s])=>{if(i.target instanceof Element){let a=i.target.closest("a");if(a&&!a.target&&o.has(a.href)){let c=a.href;return!i.target.closest(".md-version")&&o.get(c)===s?R:(i.preventDefault(),I(c))}}return R}),S(i=>{let{version:s}=o.get(i);return hr(new URL(i)).pipe(m(a=>{let f=Oe().href.replace(t.base,"");return a.includes(f.split("#")[0])?new URL(`../${s}/${f}`,t.base):new URL(i)}))})))).subscribe(o=>pr(o)),Y([r,n]).subscribe(([o,i])=>{K(".md-header__topic").appendChild(ei(o,i))}),e.pipe(S(()=>n)).subscribe(o=>{var s;let i=__md_get("__outdated",sessionStorage);if(i===null){let a=((s=t.version)==null?void 0:s.default)||"latest";i=!o.aliases.includes(a),__md_set("__outdated",i,sessionStorage)}if(i)for(let a of te("outdated"))a.hidden=!1})}function ks(e,{rx$:t}){let r=(__search==null?void 0:__search.transform)||_i,{searchParams:n}=Oe();n.has("q")&&Ke("search",!0);let o=t.pipe(x(bt),oe(1),m(()=>n.get("q")||""));dt("search").pipe(x(a=>!a),oe(1)).subscribe(()=>{let a=new URL(location.href);a.searchParams.delete("q"),history.replaceState({},"",`${a}`)}),o.subscribe(a=>{a&&(e.value=a,e.focus())});let i=nr(e),s=A(v(e,"keyup"),v(e,"focus").pipe(ke(1)),o).pipe(m(()=>r(e.value)),N(""),B());return Y([s,i]).pipe(m(([a,c])=>({value:a,focus:c})),X(1))}function Ai(e,{tx$:t,rx$:r}){let n=new E,o=n.pipe(de(1));return n.pipe(J("value"),m(({value:i})=>({type:2,data:i}))).subscribe(t.next.bind(t)),n.pipe(J("focus")).subscribe(({focus:i})=>{i?(Ke("search",i),e.placeholder=""):e.placeholder=re("search.placeholder")}),v(e.form,"reset").pipe(ee(o)).subscribe(()=>e.focus()),ks(e,{tx$:t,rx$:r}).pipe(w(i=>n.next(i)),C(()=>n.complete()),m(i=>H({ref:e},i)),ie())}function Ci(e,{rx$:t},{query$:r}){let n=new E,o=Ao(e.parentElement).pipe(x(Boolean)),i=K(":scope > :first-child",e),s=K(":scope > :last-child",e),a=t.pipe(x(bt),oe(1));return n.pipe(ae(r),Mt(a)).subscribe(([{items:f},{value:u}])=>{if(u)switch(f.length){case 0:i.textContent=re("search.result.none");break;case 1:i.textContent=re("search.result.one");break;default:i.textContent=re("search.result.other",lr(f.length))}else i.textContent=re("search.result.placeholder")}),n.pipe(w(()=>s.innerHTML=""),S(({items:f})=>A(I(...f.slice(0,10)),I(...f.slice(10)).pipe(Ce(4),Nr(o),S(([u])=>u))))).subscribe(f=>s.appendChild(Jo(f))),t.pipe(x(vt),m(({data:f})=>f)).pipe(w(f=>n.next(f)),C(()=>n.complete()),m(f=>H({ref:e},f)))}function Hs(e,{query$:t}){return t.pipe(m(({value:r})=>{let n=Oe();return n.hash="",n.searchParams.delete("h"),n.searchParams.set("q",r),{url:n}}))}function Ri(e,t){let r=new E;return 
r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),v(e,"click").subscribe(n=>n.preventDefault()),Hs(e,t).pipe(w(n=>r.next(n)),C(()=>r.complete()),m(n=>H({ref:e},n)))}function ki(e,{rx$:t},{keyboard$:r}){let n=new E,o=_e("search-query"),i=A(v(o,"keydown"),v(o,"focus")).pipe(Le(fe),m(()=>o.value),B());return n.pipe(Ze(i),m(([{suggestions:a},c])=>{let f=c.split(/([\s-]+)/);if((a==null?void 0:a.length)&&f[f.length-1]){let u=a[a.length-1];u.startsWith(f[f.length-1])&&(f[f.length-1]=u)}else f.length=0;return f})).subscribe(a=>e.innerHTML=a.join("").replace(/\s/g," ")),r.pipe(x(({mode:a})=>a==="search")).subscribe(a=>{switch(a.type){case"ArrowRight":e.innerText.length&&o.selectionStart===o.value.length&&(o.value=e.innerText);break}}),t.pipe(x(vt),m(({data:a})=>a)).pipe(w(a=>n.next(a)),C(()=>n.complete()),m(()=>({ref:e})))}function Hi(e,{index$:t,keyboard$:r}){let n=le();try{let o=(__search==null?void 0:__search.worker)||n.search,i=Mi(o,t),s=_e("search-query",e),a=_e("search-result",e),{tx$:c,rx$:f}=i;c.pipe(x(Ti),ut(f.pipe(x(bt))),oe(1)).subscribe(c.next.bind(c)),r.pipe(x(({mode:l})=>l==="search")).subscribe(l=>{let d=Ie();switch(l.type){case"Enter":if(d===s){let h=new Map;for(let b of Q(":first-child [href]",a)){let U=b.firstElementChild;h.set(b,parseFloat(U.getAttribute("data-md-score")))}if(h.size){let[[b]]=[...h].sort(([,U],[,G])=>G-U);b.click()}l.claim()}break;case"Escape":case"Tab":Ke("search",!1),s.blur();break;case"ArrowUp":case"ArrowDown":if(typeof d=="undefined")s.focus();else{let h=[s,...Q(":not(details) > [href], summary, details[open] [href]",a)],b=Math.max(0,(Math.max(0,h.indexOf(d))+h.length+(l.type==="ArrowUp"?-1:1))%h.length);h[b].focus()}l.claim();break;default:s!==Ie()&&s.focus()}}),r.pipe(x(({mode:l})=>l==="global")).subscribe(l=>{switch(l.type){case"f":case"s":case"/":s.focus(),s.select(),l.claim();break}});let u=Ai(s,i),p=Ci(a,i,{query$:u});return A(u,p).pipe(et(...te("search-share",e).map(l=>Ri(l,{query$:u})),...te("search-suggest",e).map(l=>ki(l,i,{keyboard$:r}))))}catch(o){return e.hidden=!0,Se}}function Pi(e,{index$:t,location$:r}){return Y([t,r.pipe(N(Oe()),x(n=>!!n.searchParams.get("h")))]).pipe(m(([n,o])=>rn(n.config,!0)(o.searchParams.get("h"))),m(n=>{var s;let o=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let a=i.nextNode();a;a=i.nextNode())if((s=a.parentElement)!=null&&s.offsetHeight){let c=a.textContent,f=n(c);f.length>c.length&&o.set(a,f)}for(let[a,c]of o){let{childNodes:f}=M("span",null,c);a.replaceWith(...Array.from(f))}return{ref:e,nodes:o}}))}function Ps(e,{viewport$:t,main$:r}){let n=e.parentElement,o=n.offsetTop-n.parentElement.offsetTop;return Y([r,t]).pipe(m(([{offset:i,height:s},{offset:{y:a}}])=>(s=s+Math.min(o,Math.max(0,a-i))-o,{height:s,locked:a>=i+o})),B((i,s)=>i.height===s.height&&i.locked===s.locked))}function nn(e,n){var o=n,{header$:t}=o,r=un(o,["header$"]);let i=K(".md-sidebar__scrollwrap",e),{y:s}=qe(i);return P(()=>{let a=new E;return a.pipe(Ae(0,xe),ae(t)).subscribe({next([{height:c},{height:f}]){i.style.height=`${c-2*s}px`,e.style.top=`${f}px`},complete(){i.style.height="",e.style.top=""}}),a.pipe(Le(xe),oe(1)).subscribe(()=>{for(let c of Q(".md-nav__link--active[href]",e)){let f=cr(c);if(typeof f!="undefined"){let u=c.offsetTop-f.offsetTop,{height:p}=he(f);f.scrollTo({top:u-p/2})}}}),Ps(e,r).pipe(w(c=>a.next(c)),C(()=>a.complete()),m(c=>H({ref:e},c)))})}function $i(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return 
_t(je(`${r}/releases/latest`).pipe(ce(()=>R),m(n=>({version:n.tag_name})),De({})),je(r).pipe(ce(()=>R),m(n=>({stars:n.stargazers_count,forks:n.forks_count})),De({}))).pipe(m(([n,o])=>H(H({},n),o)))}else{let r=`https://api.github.com/users/${e}`;return je(r).pipe(m(n=>({repositories:n.public_repos})),De({}))}}function Ii(e,t){let r=`https://${e}/api/v4/projects/${encodeURIComponent(t)}`;return je(r).pipe(ce(()=>R),m(({star_count:n,forks_count:o})=>({stars:n,forks:o})),De({}))}function ji(e){let t=e.match(/^.+github\.com\/([^/]+)\/?([^/]+)?/i);if(t){let[,r,n]=t;return $i(r,n)}if(t=e.match(/^.+?([^/]*gitlab[^/]+)\/(.+?)\/?$/i),t){let[,r,n]=t;return Ii(r,n)}return R}var $s;function Is(e){return $s||($s=P(()=>{let t=__md_get("__source",sessionStorage);if(t)return I(t);if(te("consent").length){let n=__md_get("__consent");if(!(n&&n.github))return R}return ji(e.href).pipe(w(n=>__md_set("__source",n,sessionStorage)))}).pipe(ce(()=>R),x(t=>Object.keys(t).length>0),m(t=>({facts:t})),X(1)))}function Fi(e){let t=K(":scope > :last-child",e);return P(()=>{let r=new E;return r.subscribe(({facts:n})=>{t.appendChild(Xo(n)),t.classList.add("md-source__repository--active")}),Is(e).pipe(w(n=>r.next(n)),C(()=>r.complete()),m(n=>H({ref:e},n)))})}function js(e,{viewport$:t,header$:r}){return ve(document.body).pipe(S(()=>dr(e,{header$:r,viewport$:t})),m(({offset:{y:n}})=>({hidden:n>=10})),J("hidden"))}function Ui(e,t){return P(()=>{let r=new E;return r.subscribe({next({hidden:n}){e.hidden=n},complete(){e.hidden=!1}}),(Z("navigation.tabs.sticky")?I({hidden:!1}):js(e,t)).pipe(w(n=>r.next(n)),C(()=>r.complete()),m(n=>H({ref:e},n)))})}function Fs(e,{viewport$:t,header$:r}){let n=new Map,o=Q("[href^=\\#]",e);for(let a of o){let c=decodeURIComponent(a.hash.substring(1)),f=pe(`[id="${c}"]`);typeof f!="undefined"&&n.set(a,f)}let i=r.pipe(J("height"),m(({height:a})=>{let c=_e("main"),f=K(":scope > :first-child",c);return a+.8*(f.offsetTop-c.offsetTop)}),ie());return ve(document.body).pipe(J("height"),S(a=>P(()=>{let c=[];return I([...n].reduce((f,[u,p])=>{for(;c.length&&n.get(c[c.length-1]).tagName>=p.tagName;)c.pop();let l=p.offsetTop;for(;!l&&p.parentElement;)p=p.parentElement,l=p.offsetTop;return f.set([...c=[...c,u]].reverse(),l)},new Map))}).pipe(m(c=>new Map([...c].sort(([,f],[,u])=>f-u))),Ze(i),S(([c,f])=>t.pipe(Ur(([u,p],{offset:{y:l},size:d})=>{let h=l+d.height>=Math.floor(a.height);for(;p.length;){let[,b]=p[0];if(b-f=l&&!h)p=[u.pop(),...p];else break}return[u,p]},[[],[...c]]),B((u,p)=>u[0]===p[0]&&u[1]===p[1])))))).pipe(m(([a,c])=>({prev:a.map(([f])=>f),next:c.map(([f])=>f)})),N({prev:[],next:[]}),Ce(2,1),m(([a,c])=>a.prev.length{let o=new E,i=o.pipe(de(1));if(o.subscribe(({prev:s,next:a})=>{for(let[c]of a)c.classList.remove("md-nav__link--passed"),c.classList.remove("md-nav__link--active");for(let[c,[f]]of s.entries())f.classList.add("md-nav__link--passed"),f.classList.toggle("md-nav__link--active",c===s.length-1)}),Z("toc.follow")){let s=A(t.pipe(Re(1),m(()=>{})),t.pipe(Re(250),m(()=>"smooth")));o.pipe(x(({prev:a})=>a.length>0),ae(s)).subscribe(([{prev:a},c])=>{let[f]=a[a.length-1];if(f.offsetHeight){let u=cr(f);if(typeof u!="undefined"){let p=f.offsetTop-u.offsetTop,{height:l}=he(u);u.scrollTo({top:p-l/2,behavior:c})}}})}return Z("navigation.tracking")&&t.pipe(ee(i),J("offset"),Re(250),He(1),ee(n.pipe(He(1))),Tt({delay:250}),ae(o)).subscribe(([,{prev:s}])=>{let a=Oe(),c=s[s.length-1];if(c&&c.length){let[f]=c,{hash:u}=new URL(f.href);a.hash!==u&&(a.hash=u,history.replaceState({},"",`${a}`))}else 
a.hash="",history.replaceState({},"",`${a}`)}),Fs(e,{viewport$:t,header$:r}).pipe(w(s=>o.next(s)),C(()=>o.complete()),m(s=>H({ref:e},s)))})}function Us(e,{viewport$:t,main$:r,target$:n}){let o=t.pipe(m(({offset:{y:s}})=>s),Ce(2,1),m(([s,a])=>s>a&&a>0),B()),i=r.pipe(m(({active:s})=>s));return Y([i,o]).pipe(m(([s,a])=>!(s&&a)),B(),ee(n.pipe(He(1))),Fr(!0),Tt({delay:250}),m(s=>({hidden:s})))}function Wi(e,{viewport$:t,header$:r,main$:n,target$:o}){let i=new E,s=i.pipe(de(1));return i.subscribe({next({hidden:a}){e.hidden=a,a?(e.setAttribute("tabindex","-1"),e.blur()):e.removeAttribute("tabindex")},complete(){e.style.top="",e.hidden=!0,e.removeAttribute("tabindex")}}),r.pipe(ee(s),J("height")).subscribe(({height:a})=>{e.style.top=`${a+16}px`}),Us(e,{viewport$:t,main$:n,target$:o}).pipe(w(a=>i.next(a)),C(()=>i.complete()),m(a=>H({ref:e},a)))}function Vi({document$:e,tablet$:t}){e.pipe(S(()=>Q(".md-toggle--indeterminate, [data-md-state=indeterminate]")),w(r=>{r.indeterminate=!0,r.checked=!1}),se(r=>v(r,"change").pipe(Wr(()=>r.classList.contains("md-toggle--indeterminate")),m(()=>r))),ae(t)).subscribe(([r,n])=>{r.classList.remove("md-toggle--indeterminate"),n&&(r.checked=!1)})}function Ds(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function zi({document$:e}){e.pipe(S(()=>Q("[data-md-scrollfix]")),w(t=>t.removeAttribute("data-md-scrollfix")),x(Ds),se(t=>v(t,"touchstart").pipe(m(()=>t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function Ni({viewport$:e,tablet$:t}){Y([dt("search"),t]).pipe(m(([r,n])=>r&&!n),S(r=>I(r).pipe(ke(r?400:100))),ae(e)).subscribe(([r,{offset:{y:n}}])=>{if(r)document.body.setAttribute("data-md-scrolllock",""),document.body.style.top=`-${n}px`;else{let o=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-scrolllock"),document.body.style.top="",o&&window.scrollTo(0,o)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of Object.keys(e))t.push([r,e[r]]);return t});Object.values||(Object.values=function(e){let t=[];for(let r of Object.keys(e))t.push(e[r]);return t});typeof Element!="undefined"&&(Element.prototype.scrollTo||(Element.prototype.scrollTo=function(e,t){typeof e=="object"?(this.scrollLeft=e.left,this.scrollTop=e.top):(this.scrollLeft=e,this.scrollTop=t)}),Element.prototype.replaceWith||(Element.prototype.replaceWith=function(...e){let t=this.parentNode;if(t){e.length===0&&t.removeChild(this);for(let r=e.length-1;r>=0;r--){let n=e[r];typeof n=="string"?n=document.createTextNode(n):n.parentNode&&n.parentNode.removeChild(n),r?t.insertBefore(this.previousSibling,n):t.replaceChild(n,this)}}}));document.documentElement.classList.remove("no-js");document.documentElement.classList.add("js");var tt=go(),vr=ko(),gt=jo(),on=Ro(),we=qo(),gr=Kr("(min-width: 960px)"),Ki=Kr("(min-width: 1220px)"),Qi=Fo(),Yi=le(),Bi=document.forms.namedItem("search")?(__search==null?void 0:__search.index)||je(new URL("search/search_index.json",Yi.base)):Se,an=new E;wi({alert$:an});Z("navigation.instant")&&Ei({document$:tt,location$:vr,viewport$:we});var qi;((qi=Yi.version)==null?void 0:qi.provider)==="mike"&&Li({document$:tt});A(vr,gt).pipe(ke(125)).subscribe(()=>{Ke("drawer",!1),Ke("search",!1)});on.pipe(x(({mode:e})=>e==="global")).subscribe(e=>{switch(e.type){case"p":case",":let t=pe("[href][rel=prev]");typeof t!="undefined"&&t.click();break;case"n":case".":let r=pe("[href][rel=next]");typeof 
r!="undefined"&&r.click();break}});Vi({document$:tt,tablet$:gr});zi({document$:tt});Ni({viewport$:we,tablet$:gr});var Qe=bi(_e("header"),{viewport$:we}),br=tt.pipe(m(()=>_e("main")),S(e=>yi(e,{viewport$:we,header$:Qe})),X(1)),Ws=A(...te("consent").map(e=>Yo(e,{target$:gt})),...te("dialog").map(e=>hi(e,{alert$:an})),...te("header").map(e=>vi(e,{viewport$:we,header$:Qe,main$:br})),...te("palette").map(e=>xi(e)),...te("search").map(e=>Hi(e,{index$:Bi,keyboard$:on})),...te("source").map(e=>Fi(e))),Vs=P(()=>A(...te("announce").map(e=>Qo(e)),...te("content").map(e=>di(e,{viewport$:we,target$:gt,print$:Qi})),...te("content").map(e=>Z("search.highlight")?Pi(e,{index$:Bi,location$:vr}):R),...te("header-title").map(e=>gi(e,{viewport$:we,header$:Qe})),...te("sidebar").map(e=>e.getAttribute("data-md-type")==="navigation"?Qr(Ki,()=>nn(e,{viewport$:we,header$:Qe,main$:br})):Qr(gr,()=>nn(e,{viewport$:we,header$:Qe,main$:br}))),...te("tabs").map(e=>Ui(e,{viewport$:we,header$:Qe})),...te("toc").map(e=>Di(e,{viewport$:we,header$:Qe,target$:gt})),...te("top").map(e=>Wi(e,{viewport$:we,header$:Qe,main$:br,target$:gt})))),Gi=tt.pipe(S(()=>Vs),et(Ws),X(1));Gi.subscribe();window.document$=tt;window.location$=vr;window.target$=gt;window.keyboard$=on;window.viewport$=we;window.tablet$=gr;window.screen$=Ki;window.print$=Qi;window.alert$=an;window.component$=Gi;})(); +//# sourceMappingURL=bundle.5a2dcb6a.min.js.map + diff --git a/assets/javascripts/bundle.5a2dcb6a.min.js.map b/assets/javascripts/bundle.5a2dcb6a.min.js.map new file mode 100644 index 00000000..34e26a3a --- /dev/null +++ b/assets/javascripts/bundle.5a2dcb6a.min.js.map @@ -0,0 +1,8 @@ +{ + "version": 3, + "sources": ["node_modules/focus-visible/dist/focus-visible.js", "node_modules/url-polyfill/url-polyfill.js", "node_modules/rxjs/node_modules/tslib/tslib.js", "node_modules/clipboard/dist/clipboard.js", "node_modules/escape-html/index.js", "node_modules/array-flat-polyfill/index.mjs", "src/assets/javascripts/bundle.ts", "node_modules/unfetch/polyfill/index.js", "node_modules/rxjs/node_modules/tslib/modules/index.js", "node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", "node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", "node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", "node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/scheduler/animationFrameProvider.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", "node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", 
"node_modules/rxjs/src/internal/Scheduler.ts", "node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameAction.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameScheduler.ts", "node_modules/rxjs/src/internal/scheduler/animationFrame.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", "node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", "node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", "node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", "node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", "node_modules/rxjs/src/internal/observable/of.ts", "node_modules/rxjs/src/internal/observable/throwError.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/util/argsArgArrayOrObject.ts", "node_modules/rxjs/src/internal/util/createObject.ts", "node_modules/rxjs/src/internal/observable/combineLatest.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", "node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", "node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/defer.ts", "node_modules/rxjs/src/internal/observable/fromEvent.ts", "node_modules/rxjs/src/internal/observable/fromEventPattern.ts", "node_modules/rxjs/src/internal/observable/timer.ts", "node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/util/argsOrArgArray.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/observable/zip.ts", "node_modules/rxjs/src/internal/operators/audit.ts", "node_modules/rxjs/src/internal/operators/auditTime.ts", "node_modules/rxjs/src/internal/operators/bufferCount.ts", "node_modules/rxjs/src/internal/operators/catchError.ts", "node_modules/rxjs/src/internal/operators/scanInternals.ts", "node_modules/rxjs/src/internal/operators/combineLatest.ts", "node_modules/rxjs/src/internal/operators/combineLatestWith.ts", "node_modules/rxjs/src/internal/operators/concatMap.ts", "node_modules/rxjs/src/internal/operators/debounceTime.ts", "node_modules/rxjs/src/internal/operators/defaultIfEmpty.ts", 
"node_modules/rxjs/src/internal/operators/take.ts", "node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", "node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/distinctUntilKeyChanged.ts", "node_modules/rxjs/src/internal/operators/endWith.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/takeLast.ts", "node_modules/rxjs/src/internal/operators/merge.ts", "node_modules/rxjs/src/internal/operators/mergeWith.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/sample.ts", "node_modules/rxjs/src/internal/operators/scan.ts", "node_modules/rxjs/src/internal/operators/share.ts", "node_modules/rxjs/src/internal/operators/shareReplay.ts", "node_modules/rxjs/src/internal/operators/skip.ts", "node_modules/rxjs/src/internal/operators/skipUntil.ts", "node_modules/rxjs/src/internal/operators/startWith.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/takeWhile.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/throttle.ts", "node_modules/rxjs/src/internal/operators/throttleTime.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "node_modules/rxjs/src/internal/operators/zip.ts", "node_modules/rxjs/src/internal/operators/zipWith.ts", "src/assets/javascripts/browser/document/index.ts", "src/assets/javascripts/browser/element/_/index.ts", "src/assets/javascripts/browser/element/focus/index.ts", "src/assets/javascripts/browser/element/offset/_/index.ts", "src/assets/javascripts/browser/element/offset/content/index.ts", "node_modules/resize-observer-polyfill/dist/ResizeObserver.es.js", "src/assets/javascripts/browser/element/size/_/index.ts", "src/assets/javascripts/browser/element/size/content/index.ts", "src/assets/javascripts/browser/element/visibility/index.ts", "src/assets/javascripts/browser/toggle/index.ts", "src/assets/javascripts/browser/keyboard/index.ts", "src/assets/javascripts/browser/location/_/index.ts", "src/assets/javascripts/utilities/h/index.ts", "src/assets/javascripts/utilities/string/index.ts", "src/assets/javascripts/browser/location/hash/index.ts", "src/assets/javascripts/browser/media/index.ts", "src/assets/javascripts/browser/request/index.ts", "src/assets/javascripts/browser/script/index.ts", "src/assets/javascripts/browser/viewport/offset/index.ts", "src/assets/javascripts/browser/viewport/size/index.ts", "src/assets/javascripts/browser/viewport/_/index.ts", "src/assets/javascripts/browser/viewport/at/index.ts", "src/assets/javascripts/browser/worker/index.ts", "src/assets/javascripts/_/index.ts", "src/assets/javascripts/components/_/index.ts", "src/assets/javascripts/components/announce/index.ts", "src/assets/javascripts/components/consent/index.ts", "src/assets/javascripts/components/content/code/_/index.ts", "src/assets/javascripts/templates/tooltip/index.tsx", "src/assets/javascripts/templates/annotation/index.tsx", "src/assets/javascripts/templates/clipboard/index.tsx", "src/assets/javascripts/templates/search/index.tsx", "src/assets/javascripts/templates/source/index.tsx", "src/assets/javascripts/templates/tabbed/index.tsx", "src/assets/javascripts/templates/table/index.tsx", 
"src/assets/javascripts/templates/version/index.tsx", "src/assets/javascripts/components/content/annotation/_/index.ts", "src/assets/javascripts/components/content/annotation/list/index.ts", "src/assets/javascripts/components/content/code/mermaid/index.ts", "src/assets/javascripts/components/content/details/index.ts", "src/assets/javascripts/components/content/table/index.ts", "src/assets/javascripts/components/content/tabs/index.ts", "src/assets/javascripts/components/content/_/index.ts", "src/assets/javascripts/components/dialog/index.ts", "src/assets/javascripts/components/header/_/index.ts", "src/assets/javascripts/components/header/title/index.ts", "src/assets/javascripts/components/main/index.ts", "src/assets/javascripts/components/palette/index.ts", "src/assets/javascripts/integrations/clipboard/index.ts", "src/assets/javascripts/integrations/sitemap/index.ts", "src/assets/javascripts/integrations/instant/index.ts", "src/assets/javascripts/integrations/search/document/index.ts", "src/assets/javascripts/integrations/search/highlighter/index.ts", "src/assets/javascripts/integrations/search/query/transform/index.ts", "src/assets/javascripts/integrations/search/worker/message/index.ts", "src/assets/javascripts/integrations/search/worker/_/index.ts", "src/assets/javascripts/integrations/version/index.ts", "src/assets/javascripts/components/search/query/index.ts", "src/assets/javascripts/components/search/result/index.ts", "src/assets/javascripts/components/search/share/index.ts", "src/assets/javascripts/components/search/suggest/index.ts", "src/assets/javascripts/components/search/_/index.ts", "src/assets/javascripts/components/search/highlight/index.ts", "src/assets/javascripts/components/sidebar/index.ts", "src/assets/javascripts/components/source/facts/github/index.ts", "src/assets/javascripts/components/source/facts/gitlab/index.ts", "src/assets/javascripts/components/source/facts/_/index.ts", "src/assets/javascripts/components/source/_/index.ts", "src/assets/javascripts/components/tabs/index.ts", "src/assets/javascripts/components/toc/index.ts", "src/assets/javascripts/components/top/index.ts", "src/assets/javascripts/patches/indeterminate/index.ts", "src/assets/javascripts/patches/scrollfix/index.ts", "src/assets/javascripts/patches/scrolllock/index.ts", "src/assets/javascripts/polyfills/index.ts"], + "sourceRoot": "../../../..", + "sourcesContent": ["(function (global, factory) {\n typeof exports === 'object' && typeof module !== 'undefined' ? factory() :\n typeof define === 'function' && define.amd ? 
define(factory) :\n (factory());\n}(this, (function () { 'use strict';\n\n /**\n * Applies the :focus-visible polyfill at the given scope.\n * A scope in this case is either the top-level Document or a Shadow Root.\n *\n * @param {(Document|ShadowRoot)} scope\n * @see https://github.com/WICG/focus-visible\n */\n function applyFocusVisiblePolyfill(scope) {\n var hadKeyboardEvent = true;\n var hadFocusVisibleRecently = false;\n var hadFocusVisibleRecentlyTimeout = null;\n\n var inputTypesAllowlist = {\n text: true,\n search: true,\n url: true,\n tel: true,\n email: true,\n password: true,\n number: true,\n date: true,\n month: true,\n week: true,\n time: true,\n datetime: true,\n 'datetime-local': true\n };\n\n /**\n * Helper function for legacy browsers and iframes which sometimes focus\n * elements like document, body, and non-interactive SVG.\n * @param {Element} el\n */\n function isValidFocusTarget(el) {\n if (\n el &&\n el !== document &&\n el.nodeName !== 'HTML' &&\n el.nodeName !== 'BODY' &&\n 'classList' in el &&\n 'contains' in el.classList\n ) {\n return true;\n }\n return false;\n }\n\n /**\n * Computes whether the given element should automatically trigger the\n * `focus-visible` class being added, i.e. whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. 
Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n 
document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. 
This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "(function(global) {\r\n /**\r\n * Polyfill URLSearchParams\r\n *\r\n * Inspired from : https://github.com/WebReflection/url-search-params/blob/master/src/url-search-params.js\r\n */\r\n\r\n var checkIfIteratorIsSupported = function() {\r\n try {\r\n return !!Symbol.iterator;\r\n } catch (error) {\r\n return false;\r\n }\r\n };\r\n\r\n\r\n var iteratorSupported = checkIfIteratorIsSupported();\r\n\r\n var createIterator = function(items) {\r\n var iterator = {\r\n next: function() {\r\n var value = items.shift();\r\n return { done: value === void 0, value: value };\r\n }\r\n };\r\n\r\n if (iteratorSupported) {\r\n iterator[Symbol.iterator] = function() {\r\n return iterator;\r\n };\r\n }\r\n\r\n return iterator;\r\n };\r\n\r\n /**\r\n * Search param name and values should be encoded according to https://url.spec.whatwg.org/#urlencoded-serializing\r\n * encodeURIComponent() produces the same result except encoding spaces as `%20` instead of `+`.\r\n */\r\n var serializeParam = function(value) {\r\n return encodeURIComponent(value).replace(/%20/g, '+');\r\n };\r\n\r\n var deserializeParam = function(value) {\r\n return decodeURIComponent(String(value).replace(/\\+/g, ' '));\r\n };\r\n\r\n var polyfillURLSearchParams = function() {\r\n\r\n var URLSearchParams = function(searchString) {\r\n Object.defineProperty(this, '_entries', { writable: true, value: {} });\r\n var typeofSearchString = typeof searchString;\r\n\r\n if (typeofSearchString === 'undefined') {\r\n // do nothing\r\n } else if (typeofSearchString === 'string') {\r\n if (searchString !== '') {\r\n this._fromString(searchString);\r\n }\r\n } else if (searchString instanceof URLSearchParams) {\r\n var _this = this;\r\n searchString.forEach(function(value, name) {\r\n _this.append(name, value);\r\n });\r\n } else if ((searchString !== null) && (typeofSearchString === 'object')) {\r\n if (Object.prototype.toString.call(searchString) === '[object Array]') {\r\n for (var i = 0; i < searchString.length; i++) {\r\n var entry = searchString[i];\r\n if ((Object.prototype.toString.call(entry) === '[object Array]') || (entry.length !== 2)) {\r\n this.append(entry[0], entry[1]);\r\n } else {\r\n throw new TypeError('Expected [string, any] as entry at index ' + i + ' of URLSearchParams\\'s input');\r\n }\r\n }\r\n } else {\r\n for (var key in searchString) {\r\n if (searchString.hasOwnProperty(key)) {\r\n this.append(key, searchString[key]);\r\n }\r\n }\r\n }\r\n } else {\r\n throw new TypeError('Unsupported input\\'s type for URLSearchParams');\r\n }\r\n };\r\n\r\n var proto = URLSearchParams.prototype;\r\n\r\n proto.append = function(name, value) 
{\r\n if (name in this._entries) {\r\n this._entries[name].push(String(value));\r\n } else {\r\n this._entries[name] = [String(value)];\r\n }\r\n };\r\n\r\n proto.delete = function(name) {\r\n delete this._entries[name];\r\n };\r\n\r\n proto.get = function(name) {\r\n return (name in this._entries) ? this._entries[name][0] : null;\r\n };\r\n\r\n proto.getAll = function(name) {\r\n return (name in this._entries) ? this._entries[name].slice(0) : [];\r\n };\r\n\r\n proto.has = function(name) {\r\n return (name in this._entries);\r\n };\r\n\r\n proto.set = function(name, value) {\r\n this._entries[name] = [String(value)];\r\n };\r\n\r\n proto.forEach = function(callback, thisArg) {\r\n var entries;\r\n for (var name in this._entries) {\r\n if (this._entries.hasOwnProperty(name)) {\r\n entries = this._entries[name];\r\n for (var i = 0; i < entries.length; i++) {\r\n callback.call(thisArg, entries[i], name, this);\r\n }\r\n }\r\n }\r\n };\r\n\r\n proto.keys = function() {\r\n var items = [];\r\n this.forEach(function(value, name) {\r\n items.push(name);\r\n });\r\n return createIterator(items);\r\n };\r\n\r\n proto.values = function() {\r\n var items = [];\r\n this.forEach(function(value) {\r\n items.push(value);\r\n });\r\n return createIterator(items);\r\n };\r\n\r\n proto.entries = function() {\r\n var items = [];\r\n this.forEach(function(value, name) {\r\n items.push([name, value]);\r\n });\r\n return createIterator(items);\r\n };\r\n\r\n if (iteratorSupported) {\r\n proto[Symbol.iterator] = proto.entries;\r\n }\r\n\r\n proto.toString = function() {\r\n var searchArray = [];\r\n this.forEach(function(value, name) {\r\n searchArray.push(serializeParam(name) + '=' + serializeParam(value));\r\n });\r\n return searchArray.join('&');\r\n };\r\n\r\n\r\n global.URLSearchParams = URLSearchParams;\r\n };\r\n\r\n var checkIfURLSearchParamsSupported = function() {\r\n try {\r\n var URLSearchParams = global.URLSearchParams;\r\n\r\n return (\r\n (new URLSearchParams('?a=1').toString() === 'a=1') &&\r\n (typeof URLSearchParams.prototype.set === 'function') &&\r\n (typeof URLSearchParams.prototype.entries === 'function')\r\n );\r\n } catch (e) {\r\n return false;\r\n }\r\n };\r\n\r\n if (!checkIfURLSearchParamsSupported()) {\r\n polyfillURLSearchParams();\r\n }\r\n\r\n var proto = global.URLSearchParams.prototype;\r\n\r\n if (typeof proto.sort !== 'function') {\r\n proto.sort = function() {\r\n var _this = this;\r\n var items = [];\r\n this.forEach(function(value, name) {\r\n items.push([name, value]);\r\n if (!_this._entries) {\r\n _this.delete(name);\r\n }\r\n });\r\n items.sort(function(a, b) {\r\n if (a[0] < b[0]) {\r\n return -1;\r\n } else if (a[0] > b[0]) {\r\n return +1;\r\n } else {\r\n return 0;\r\n }\r\n });\r\n if (_this._entries) { // force reset because IE keeps keys index\r\n _this._entries = {};\r\n }\r\n for (var i = 0; i < items.length; i++) {\r\n this.append(items[i][0], items[i][1]);\r\n }\r\n };\r\n }\r\n\r\n if (typeof proto._fromString !== 'function') {\r\n Object.defineProperty(proto, '_fromString', {\r\n enumerable: false,\r\n configurable: false,\r\n writable: false,\r\n value: function(searchString) {\r\n if (this._entries) {\r\n this._entries = {};\r\n } else {\r\n var keys = [];\r\n this.forEach(function(value, name) {\r\n keys.push(name);\r\n });\r\n for (var i = 0; i < keys.length; i++) {\r\n this.delete(keys[i]);\r\n }\r\n }\r\n\r\n searchString = searchString.replace(/^\\?/, '');\r\n var attributes = searchString.split('&');\r\n var attribute;\r\n for (var i = 0; i < 
attributes.length; i++) {\r\n attribute = attributes[i].split('=');\r\n this.append(\r\n deserializeParam(attribute[0]),\r\n (attribute.length > 1) ? deserializeParam(attribute[1]) : ''\r\n );\r\n }\r\n }\r\n });\r\n }\r\n\r\n // HTMLAnchorElement\r\n\r\n})(\r\n (typeof global !== 'undefined') ? global\r\n : ((typeof window !== 'undefined') ? window\r\n : ((typeof self !== 'undefined') ? self : this))\r\n);\r\n\r\n(function(global) {\r\n /**\r\n * Polyfill URL\r\n *\r\n * Inspired from : https://github.com/arv/DOM-URL-Polyfill/blob/master/src/url.js\r\n */\r\n\r\n var checkIfURLIsSupported = function() {\r\n try {\r\n var u = new global.URL('b', 'http://a');\r\n u.pathname = 'c d';\r\n return (u.href === 'http://a/c%20d') && u.searchParams;\r\n } catch (e) {\r\n return false;\r\n }\r\n };\r\n\r\n\r\n var polyfillURL = function() {\r\n var _URL = global.URL;\r\n\r\n var URL = function(url, base) {\r\n if (typeof url !== 'string') url = String(url);\r\n if (base && typeof base !== 'string') base = String(base);\r\n\r\n // Only create another document if the base is different from current location.\r\n var doc = document, baseElement;\r\n if (base && (global.location === void 0 || base !== global.location.href)) {\r\n base = base.toLowerCase();\r\n doc = document.implementation.createHTMLDocument('');\r\n baseElement = doc.createElement('base');\r\n baseElement.href = base;\r\n doc.head.appendChild(baseElement);\r\n try {\r\n if (baseElement.href.indexOf(base) !== 0) throw new Error(baseElement.href);\r\n } catch (err) {\r\n throw new Error('URL unable to set base ' + base + ' due to ' + err);\r\n }\r\n }\r\n\r\n var anchorElement = doc.createElement('a');\r\n anchorElement.href = url;\r\n if (baseElement) {\r\n doc.body.appendChild(anchorElement);\r\n anchorElement.href = anchorElement.href; // force href to refresh\r\n }\r\n\r\n var inputElement = doc.createElement('input');\r\n inputElement.type = 'url';\r\n inputElement.value = url;\r\n\r\n if (anchorElement.protocol === ':' || !/:/.test(anchorElement.href) || (!inputElement.checkValidity() && !base)) {\r\n throw new TypeError('Invalid URL');\r\n }\r\n\r\n Object.defineProperty(this, '_anchorElement', {\r\n value: anchorElement\r\n });\r\n\r\n\r\n // create a linked searchParams which reflect its changes on URL\r\n var searchParams = new global.URLSearchParams(this.search);\r\n var enableSearchUpdate = true;\r\n var enableSearchParamsUpdate = true;\r\n var _this = this;\r\n ['append', 'delete', 'set'].forEach(function(methodName) {\r\n var method = searchParams[methodName];\r\n searchParams[methodName] = function() {\r\n method.apply(searchParams, arguments);\r\n if (enableSearchUpdate) {\r\n enableSearchParamsUpdate = false;\r\n _this.search = searchParams.toString();\r\n enableSearchParamsUpdate = true;\r\n }\r\n };\r\n });\r\n\r\n Object.defineProperty(this, 'searchParams', {\r\n value: searchParams,\r\n enumerable: true\r\n });\r\n\r\n var search = void 0;\r\n Object.defineProperty(this, '_updateSearchParams', {\r\n enumerable: false,\r\n configurable: false,\r\n writable: false,\r\n value: function() {\r\n if (this.search !== search) {\r\n search = this.search;\r\n if (enableSearchParamsUpdate) {\r\n enableSearchUpdate = false;\r\n this.searchParams._fromString(this.search);\r\n enableSearchUpdate = true;\r\n }\r\n }\r\n }\r\n });\r\n };\r\n\r\n var proto = URL.prototype;\r\n\r\n var linkURLWithAnchorAttribute = function(attributeName) {\r\n Object.defineProperty(proto, attributeName, {\r\n get: function() {\r\n return 
this._anchorElement[attributeName];\r\n },\r\n set: function(value) {\r\n this._anchorElement[attributeName] = value;\r\n },\r\n enumerable: true\r\n });\r\n };\r\n\r\n ['hash', 'host', 'hostname', 'port', 'protocol']\r\n .forEach(function(attributeName) {\r\n linkURLWithAnchorAttribute(attributeName);\r\n });\r\n\r\n Object.defineProperty(proto, 'search', {\r\n get: function() {\r\n return this._anchorElement['search'];\r\n },\r\n set: function(value) {\r\n this._anchorElement['search'] = value;\r\n this._updateSearchParams();\r\n },\r\n enumerable: true\r\n });\r\n\r\n Object.defineProperties(proto, {\r\n\r\n 'toString': {\r\n get: function() {\r\n var _this = this;\r\n return function() {\r\n return _this.href;\r\n };\r\n }\r\n },\r\n\r\n 'href': {\r\n get: function() {\r\n return this._anchorElement.href.replace(/\\?$/, '');\r\n },\r\n set: function(value) {\r\n this._anchorElement.href = value;\r\n this._updateSearchParams();\r\n },\r\n enumerable: true\r\n },\r\n\r\n 'pathname': {\r\n get: function() {\r\n return this._anchorElement.pathname.replace(/(^\\/?)/, '/');\r\n },\r\n set: function(value) {\r\n this._anchorElement.pathname = value;\r\n },\r\n enumerable: true\r\n },\r\n\r\n 'origin': {\r\n get: function() {\r\n // get expected port from protocol\r\n var expectedPort = { 'http:': 80, 'https:': 443, 'ftp:': 21 }[this._anchorElement.protocol];\r\n // add port to origin if, expected port is different than actual port\r\n // and it is not empty f.e http://foo:8080\r\n // 8080 != 80 && 8080 != ''\r\n var addPortToOrigin = this._anchorElement.port != expectedPort &&\r\n this._anchorElement.port !== '';\r\n\r\n return this._anchorElement.protocol +\r\n '//' +\r\n this._anchorElement.hostname +\r\n (addPortToOrigin ? (':' + this._anchorElement.port) : '');\r\n },\r\n enumerable: true\r\n },\r\n\r\n 'password': { // TODO\r\n get: function() {\r\n return '';\r\n },\r\n set: function(value) {\r\n },\r\n enumerable: true\r\n },\r\n\r\n 'username': { // TODO\r\n get: function() {\r\n return '';\r\n },\r\n set: function(value) {\r\n },\r\n enumerable: true\r\n },\r\n });\r\n\r\n URL.createObjectURL = function(blob) {\r\n return _URL.createObjectURL.apply(_URL, arguments);\r\n };\r\n\r\n URL.revokeObjectURL = function(url) {\r\n return _URL.revokeObjectURL.apply(_URL, arguments);\r\n };\r\n\r\n global.URL = URL;\r\n\r\n };\r\n\r\n if (!checkIfURLIsSupported()) {\r\n polyfillURL();\r\n }\r\n\r\n if ((global.location !== void 0) && !('origin' in global.location)) {\r\n var getOrigin = function() {\r\n return global.location.protocol + '//' + global.location.hostname + (global.location.port ? (':' + global.location.port) : '');\r\n };\r\n\r\n try {\r\n Object.defineProperty(global.location, 'origin', {\r\n get: getOrigin,\r\n enumerable: true\r\n });\r\n } catch (e) {\r\n setInterval(function() {\r\n global.location.origin = getOrigin();\r\n }, 100);\r\n }\r\n }\r\n\r\n})(\r\n (typeof global !== 'undefined') ? global\r\n : ((typeof window !== 'undefined') ? window\r\n : ((typeof self !== 'undefined') ? self : this))\r\n);\r\n", "/*! *****************************************************************************\r\nCopyright (c) Microsoft Corporation.\r\n\r\nPermission to use, copy, modify, and/or distribute this software for any\r\npurpose with or without fee is hereby granted.\r\n\r\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\r\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\r\nAND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\r\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\r\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\r\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\r\nPERFORMANCE OF THIS SOFTWARE.\r\n***************************************************************************** */\r\n/* global global, define, System, Reflect, Promise */\r\nvar __extends;\r\nvar __assign;\r\nvar __rest;\r\nvar __decorate;\r\nvar __param;\r\nvar __metadata;\r\nvar __awaiter;\r\nvar __generator;\r\nvar __exportStar;\r\nvar __values;\r\nvar __read;\r\nvar __spread;\r\nvar __spreadArrays;\r\nvar __spreadArray;\r\nvar __await;\r\nvar __asyncGenerator;\r\nvar __asyncDelegator;\r\nvar __asyncValues;\r\nvar __makeTemplateObject;\r\nvar __importStar;\r\nvar __importDefault;\r\nvar __classPrivateFieldGet;\r\nvar __classPrivateFieldSet;\r\nvar __createBinding;\r\n(function (factory) {\r\n var root = typeof global === \"object\" ? global : typeof self === \"object\" ? self : typeof this === \"object\" ? this : {};\r\n if (typeof define === \"function\" && define.amd) {\r\n define(\"tslib\", [\"exports\"], function (exports) { factory(createExporter(root, createExporter(exports))); });\r\n }\r\n else if (typeof module === \"object\" && typeof module.exports === \"object\") {\r\n factory(createExporter(root, createExporter(module.exports)));\r\n }\r\n else {\r\n factory(createExporter(root));\r\n }\r\n function createExporter(exports, previous) {\r\n if (exports !== root) {\r\n if (typeof Object.create === \"function\") {\r\n Object.defineProperty(exports, \"__esModule\", { value: true });\r\n }\r\n else {\r\n exports.__esModule = true;\r\n }\r\n }\r\n return function (id, v) { return exports[id] = previous ? previous(id, v) : v; };\r\n }\r\n})\r\n(function (exporter) {\r\n var extendStatics = Object.setPrototypeOf ||\r\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\r\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\r\n\r\n __extends = function (d, b) {\r\n if (typeof b !== \"function\" && b !== null)\r\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\r\n extendStatics(d, b);\r\n function __() { this.constructor = d; }\r\n d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __());\r\n };\r\n\r\n __assign = Object.assign || function (t) {\r\n for (var s, i = 1, n = arguments.length; i < n; i++) {\r\n s = arguments[i];\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\r\n }\r\n return t;\r\n };\r\n\r\n __rest = function (s, e) {\r\n var t = {};\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\r\n t[p] = s[p];\r\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\r\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\r\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\r\n t[p[i]] = s[p[i]];\r\n }\r\n return t;\r\n };\r\n\r\n __decorate = function (decorators, target, key, desc) {\r\n var c = arguments.length, r = c < 3 ? target : desc === null ? 
desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\r\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\r\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\r\n return c > 3 && r && Object.defineProperty(target, key, r), r;\r\n };\r\n\r\n __param = function (paramIndex, decorator) {\r\n return function (target, key) { decorator(target, key, paramIndex); }\r\n };\r\n\r\n __metadata = function (metadataKey, metadataValue) {\r\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\r\n };\r\n\r\n __awaiter = function (thisArg, _arguments, P, generator) {\r\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\r\n return new (P || (P = Promise))(function (resolve, reject) {\r\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\r\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\r\n function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\r\n step((generator = generator.apply(thisArg, _arguments || [])).next());\r\n });\r\n };\r\n\r\n __generator = function (thisArg, body) {\r\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;\r\n return g = { next: verb(0), \"throw\": verb(1), \"return\": verb(2) }, typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\r\n function verb(n) { return function (v) { return step([n, v]); }; }\r\n function step(op) {\r\n if (f) throw new TypeError(\"Generator is already executing.\");\r\n while (_) try {\r\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\r\n if (y = 0, t) op = [op[0] & 2, t.value];\r\n switch (op[0]) {\r\n case 0: case 1: t = op; break;\r\n case 4: _.label++; return { value: op[1], done: false };\r\n case 5: _.label++; y = op[1]; op = [0]; continue;\r\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\r\n default:\r\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\r\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\r\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\r\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\r\n if (t[2]) _.ops.pop();\r\n _.trys.pop(); continue;\r\n }\r\n op = body.call(thisArg, _);\r\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\r\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\r\n }\r\n };\r\n\r\n __exportStar = function(m, o) {\r\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\r\n };\r\n\r\n __createBinding = Object.create ? 
(function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });\r\n }) : (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n o[k2] = m[k];\r\n });\r\n\r\n __values = function (o) {\r\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\r\n if (m) return m.call(o);\r\n if (o && typeof o.length === \"number\") return {\r\n next: function () {\r\n if (o && i >= o.length) o = void 0;\r\n return { value: o && o[i++], done: !o };\r\n }\r\n };\r\n throw new TypeError(s ? \"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\r\n };\r\n\r\n __read = function (o, n) {\r\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\r\n if (!m) return o;\r\n var i = m.call(o), r, ar = [], e;\r\n try {\r\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\r\n }\r\n catch (error) { e = { error: error }; }\r\n finally {\r\n try {\r\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\r\n }\r\n finally { if (e) throw e.error; }\r\n }\r\n return ar;\r\n };\r\n\r\n /** @deprecated */\r\n __spread = function () {\r\n for (var ar = [], i = 0; i < arguments.length; i++)\r\n ar = ar.concat(__read(arguments[i]));\r\n return ar;\r\n };\r\n\r\n /** @deprecated */\r\n __spreadArrays = function () {\r\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\r\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\r\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\r\n r[k] = a[j];\r\n return r;\r\n };\r\n\r\n __spreadArray = function (to, from, pack) {\r\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\r\n if (ar || !(i in from)) {\r\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\r\n ar[i] = from[i];\r\n }\r\n }\r\n return to.concat(ar || Array.prototype.slice.call(from));\r\n };\r\n\r\n __await = function (v) {\r\n return this instanceof __await ? (this.v = v, this) : new __await(v);\r\n };\r\n\r\n __asyncGenerator = function (thisArg, _arguments, generator) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\r\n return i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i;\r\n function verb(n) { if (g[n]) i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; }\r\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\r\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\r\n function fulfill(value) { resume(\"next\", value); }\r\n function reject(value) { resume(\"throw\", value); }\r\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\r\n };\r\n\r\n __asyncDelegator = function (o) {\r\n var i, p;\r\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\r\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? { value: __await(o[n](v)), done: n === \"return\" } : f ? f(v) : v; } : f; }\r\n };\r\n\r\n __asyncValues = function (o) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var m = o[Symbol.asyncIterator], i;\r\n return m ? 
m.call(o) : (o = typeof __values === \"function\" ? __values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\r\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\r\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\r\n };\r\n\r\n __makeTemplateObject = function (cooked, raw) {\r\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\r\n return cooked;\r\n };\r\n\r\n var __setModuleDefault = Object.create ? (function(o, v) {\r\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\r\n }) : function(o, v) {\r\n o[\"default\"] = v;\r\n };\r\n\r\n __importStar = function (mod) {\r\n if (mod && mod.__esModule) return mod;\r\n var result = {};\r\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\r\n __setModuleDefault(result, mod);\r\n return result;\r\n };\r\n\r\n __importDefault = function (mod) {\r\n return (mod && mod.__esModule) ? mod : { \"default\": mod };\r\n };\r\n\r\n __classPrivateFieldGet = function (receiver, state, kind, f) {\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\r\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? f.value : state.get(receiver);\r\n };\r\n\r\n __classPrivateFieldSet = function (receiver, state, value, kind, f) {\r\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\r\n return (kind === \"a\" ? f.call(receiver, value) : f ? 
f.value = value : state.set(receiver, value)), value;\r\n };\r\n\r\n exporter(\"__extends\", __extends);\r\n exporter(\"__assign\", __assign);\r\n exporter(\"__rest\", __rest);\r\n exporter(\"__decorate\", __decorate);\r\n exporter(\"__param\", __param);\r\n exporter(\"__metadata\", __metadata);\r\n exporter(\"__awaiter\", __awaiter);\r\n exporter(\"__generator\", __generator);\r\n exporter(\"__exportStar\", __exportStar);\r\n exporter(\"__createBinding\", __createBinding);\r\n exporter(\"__values\", __values);\r\n exporter(\"__read\", __read);\r\n exporter(\"__spread\", __spread);\r\n exporter(\"__spreadArrays\", __spreadArrays);\r\n exporter(\"__spreadArray\", __spreadArray);\r\n exporter(\"__await\", __await);\r\n exporter(\"__asyncGenerator\", __asyncGenerator);\r\n exporter(\"__asyncDelegator\", __asyncDelegator);\r\n exporter(\"__asyncValues\", __asyncValues);\r\n exporter(\"__makeTemplateObject\", __makeTemplateObject);\r\n exporter(\"__importStar\", __importStar);\r\n exporter(\"__importDefault\", __importDefault);\r\n exporter(\"__classPrivateFieldGet\", __classPrivateFieldGet);\r\n exporter(\"__classPrivateFieldSet\", __classPrivateFieldSet);\r\n});\r\n", "/*!\n * clipboard.js v2.0.11\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = /*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n 
fakeElement.style.fontSize = '12pt'; // Reset box model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Create fake copy action wrapper using a fake element.\n * @param {String} target\n * @param {Object} options\n * @return {String}\n */\n\nvar fakeCopyAction = function fakeCopyAction(value, options) {\n var fakeElement = createFakeElement(value);\n options.container.appendChild(fakeElement);\n var selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n return selectedText;\n};\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n selectedText = fakeCopyAction(target, options);\n } else if (target instanceof HTMLInputElement && !['text', 'search', 'url', 'tel', 'password'].includes(target === null || target === void 0 ? void 0 : target.type)) {\n // If input type doesn't support `setSelectionRange`. Simulate it. https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/setSelectionRange\n selectedText = fakeCopyAction(target.value, options);\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 
'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? 
\"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if (self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? 
Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && 
value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName 
=== 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) {\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n 
var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n", "Array.prototype.flat||Object.defineProperty(Array.prototype,\"flat\",{configurable:!0,value:function r(){var t=isNaN(arguments[0])?1:Number(arguments[0]);return t?Array.prototype.reduce.call(this,function(a,e){return Array.isArray(e)?a.push.apply(a,r.call(e,t-1)):a.push(e),a},[]):Array.prototype.slice.call(this)},writable:!0}),Array.prototype.flatMap||Object.defineProperty(Array.prototype,\"flatMap\",{configurable:!0,value:function(r){return Array.prototype.map.apply(this,arguments).flat()},writable:!0})\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"array-flat-polyfill\"\nimport \"focus-visible\"\nimport \"unfetch/polyfill\"\nimport \"url-polyfill\"\n\nimport {\n EMPTY,\n NEVER,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getOptionalElement,\n requestJSON,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountSearch,\n mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantLoading,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Application\n * ------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget()\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 960px)\")\nconst screen$ = watchMedia(\"(min-width: 1220px)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? 
__search?.index || requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up instant loading, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantLoading({ document$, location$, viewport$ })\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"[href][rel=prev]\")\n if (typeof prev !== \"undefined\")\n prev.click()\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"[href][rel=next]\")\n if (typeof next !== \"undefined\")\n next.click()\n break\n }\n })\n\n/* Set up patches */\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog */\n ...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Announcement bar */\n ...getComponentElements(\"announce\")\n .map(el => mountAnnounce(el)),\n\n /* Content */\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { viewport$, target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? 
at(screen$, () => mountSidebar(el, { viewport$, header$, main$ }))\n : at(tablet$, () => mountSidebar(el, { viewport$, header$, main$ }))\n ),\n\n /* Navigation tabs */\n ...getComponentElements(\"tabs\")\n .map(el => mountTabs(el, { viewport$, header$ })),\n\n /* Table of contents */\n ...getComponentElements(\"toc\")\n .map(el => mountTableOfContents(el, { viewport$, header$, target$ })),\n\n /* Back-to-top button */\n ...getComponentElements(\"top\")\n .map(el => mountBackToTop(el, { viewport$, header$, main$, target$ }))\n))\n\n/* Set up component observables */\nconst component$ = document$\n .pipe(\n switchMap(() => content$),\n mergeWith(control$),\n shareReplay(1)\n )\n\n/* Subscribe to all components */\ncomponent$.subscribe()\n\n/* ----------------------------------------------------------------------------\n * Exports\n * ------------------------------------------------------------------------- */\n\nwindow.document$ = document$ /* Document observable */\nwindow.location$ = location$ /* Location subject */\nwindow.target$ = target$ /* Location target observable */\nwindow.keyboard$ = keyboard$ /* Keyboard observable */\nwindow.viewport$ = viewport$ /* Viewport observable */\nwindow.tablet$ = tablet$ /* Media tablet observable */\nwindow.screen$ = screen$ /* Media screen observable */\nwindow.print$ = print$ /* Media print observable */\nwindow.alert$ = alert$ /* Alert subject */\nwindow.component$ = component$ /* Component observable */\n", "self.fetch||(self.fetch=function(e,n){return n=n||{},new Promise(function(t,s){var r=new XMLHttpRequest,o=[],u=[],i={},a=function(){return{ok:2==(r.status/100|0),statusText:r.statusText,status:r.status,url:r.responseURL,text:function(){return Promise.resolve(r.responseText)},json:function(){return Promise.resolve(r.responseText).then(JSON.parse)},blob:function(){return Promise.resolve(new Blob([r.response]))},clone:a,headers:{keys:function(){return o},entries:function(){return u},get:function(e){return i[e.toLowerCase()]},has:function(e){return e.toLowerCase()in i}}}};for(var c in r.open(n.method||\"get\",e,!0),r.onload=function(){r.getAllResponseHeaders().replace(/^(.*?):[^\\S\\n]*([\\s\\S]*?)$/gm,function(e,n,t){o.push(n=n.toLowerCase()),u.push([n,t]),i[n]=i[n]?i[n]+\",\"+t:t}),t(a())},r.onerror=s,r.withCredentials=\"include\"==n.credentials,n.headers)r.setRequestHeader(c,n.headers[c]);r.send(n.body||null)})});\n", "import tslib from '../tslib.js';\r\nconst {\r\n __extends,\r\n __assign,\r\n __rest,\r\n __decorate,\r\n __param,\r\n __metadata,\r\n __awaiter,\r\n __generator,\r\n __exportStar,\r\n __createBinding,\r\n __values,\r\n __read,\r\n __spread,\r\n __spreadArrays,\r\n __spreadArray,\r\n __await,\r\n __asyncGenerator,\r\n __asyncDelegator,\r\n __asyncValues,\r\n __makeTemplateObject,\r\n __importStar,\r\n __importDefault,\r\n __classPrivateFieldGet,\r\n __classPrivateFieldSet,\r\n} = tslib;\r\nexport {\r\n __extends,\r\n __assign,\r\n __rest,\r\n __decorate,\r\n __param,\r\n __metadata,\r\n __awaiter,\r\n __generator,\r\n __exportStar,\r\n __createBinding,\r\n __values,\r\n __read,\r\n __spread,\r\n __spreadArrays,\r\n __spreadArray,\r\n __await,\r\n __asyncGenerator,\r\n __asyncDelegator,\r\n __asyncValues,\r\n __makeTemplateObject,\r\n __importStar,\r\n __importDefault,\r\n __classPrivateFieldGet,\r\n __classPrivateFieldSet,\r\n};\r\n", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, 
null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n ReplaySubject,\n Subject,\n fromEvent\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch document\n *\n * Documents are implemented as subjects, so all downstream observables are\n * automatically updated when a new document is emitted.\n *\n * @returns Document subject\n */\nexport function watchDocument(): Subject {\n const document$ = new ReplaySubject(1)\n fromEvent(document, \"DOMContentLoaded\", { once: true })\n .subscribe(() => document$.next(document))\n\n /* Return document */\n return document$\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n fromEvent,\n fromEventPattern,\n map,\n merge,\n startWith,\n switchMap\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch media query\n *\n * Note that although `MediaQueryList.addListener` is deprecated we have to\n * use it, because it's the only way to ensure proper downward compatibility.\n *\n * @see https://bit.ly/3dUBH2m - GitHub issue\n *\n * @param query - Media query\n *\n * @returns Media observable\n */\nexport function watchMedia(query: string): Observable {\n const media = matchMedia(query)\n return fromEventPattern(next => (\n media.addListener(() => next(media.matches))\n ))\n .pipe(\n startWith(media.matches)\n )\n}\n\n/**\n * Watch print mode\n *\n * @returns Print observable\n */\nexport function watchPrint(): Observable {\n const media = matchMedia(\"print\")\n return merge(\n fromEvent(window, \"beforeprint\").pipe(map(() => true)),\n fromEvent(window, \"afterprint\").pipe(map(() => false))\n )\n .pipe(\n startWith(media.matches)\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Toggle an observable with a media observable\n *\n * @template T - Data type\n *\n * @param query$ - Media observable\n * @param factory - Observable factory\n *\n * @returns Toggled observable\n */\nexport function at(\n query$: Observable, factory: () => Observable\n): Observable {\n return query$\n .pipe(\n switchMap(active => active ? factory() : EMPTY)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n catchError,\n from,\n map,\n of,\n shareReplay,\n switchMap,\n throwError\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch the given URL\n *\n * If the request fails (e.g. 
when dispatched from `file://` locations), the\n * observable will complete without emitting a value.\n *\n * @param url - Request URL\n * @param options - Options\n *\n * @returns Response observable\n */\nexport function request(\n url: URL | string, options: RequestInit = { credentials: \"same-origin\" }\n): Observable {\n return from(fetch(`${url}`, options))\n .pipe(\n catchError(() => EMPTY),\n switchMap(res => res.status !== 200\n ? throwError(() => new Error(res.statusText))\n : of(res)\n )\n )\n}\n\n/**\n * Fetch JSON from the given URL\n *\n * @template T - Data type\n *\n * @param url - Request URL\n * @param options - Options\n *\n * @returns Data observable\n */\nexport function requestJSON(\n url: URL | string, options?: RequestInit\n): Observable {\n return request(url, options)\n .pipe(\n switchMap(res => res.json()),\n shareReplay(1)\n )\n}\n\n/**\n * Fetch XML from the given URL\n *\n * @param url - Request URL\n * @param options - Options\n *\n * @returns Data observable\n */\nexport function requestXML(\n url: URL | string, options?: RequestInit\n): Observable {\n const dom = new DOMParser()\n return request(url, options)\n .pipe(\n switchMap(res => res.text()),\n map(res => dom.parseFromString(res, \"text/xml\")),\n shareReplay(1)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n defer,\n finalize,\n fromEvent,\n map,\n merge,\n switchMap,\n take,\n throwError\n} from \"rxjs\"\n\nimport { h } from \"~/utilities\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create and load a `script` element\n *\n * This function returns an observable that will emit when the script was\n * successfully loaded, or throw an error if it didn't.\n *\n * @param src - Script URL\n *\n * @returns Script observable\n */\nexport function watchScript(src: string): Observable {\n const script = h(\"script\", { src })\n return defer(() => {\n document.head.appendChild(script)\n return merge(\n fromEvent(script, \"load\"),\n fromEvent(script, \"error\")\n .pipe(\n switchMap(() => (\n throwError(() => new ReferenceError(`Invalid script: ${src}`))\n ))\n )\n )\n .pipe(\n map(() => undefined),\n finalize(() => document.head.removeChild(script)),\n take(1)\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n fromEvent,\n map,\n merge,\n startWith\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Viewport offset\n */\nexport interface ViewportOffset {\n x: number /* Horizontal offset */\n y: number /* Vertical offset */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve viewport offset\n *\n * On iOS Safari, viewport offset can be negative due to overflow scrolling.\n * As this may induce strange behaviors downstream, we'll just limit it to 0.\n *\n * @returns Viewport offset\n */\nexport function getViewportOffset(): ViewportOffset {\n return {\n x: Math.max(0, scrollX),\n y: Math.max(0, scrollY)\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport offset\n *\n * @returns Viewport offset observable\n */\nexport function watchViewportOffset(): Observable {\n return merge(\n fromEvent(window, \"scroll\", { passive: true }),\n fromEvent(window, \"resize\", { passive: true })\n )\n .pipe(\n map(getViewportOffset),\n startWith(getViewportOffset())\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n fromEvent,\n map,\n startWith\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Viewport size\n */\nexport interface ViewportSize {\n width: number /* Viewport width */\n height: number /* Viewport height */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve viewport size\n *\n * @returns Viewport size\n */\nexport function getViewportSize(): ViewportSize {\n return {\n width: innerWidth,\n height: innerHeight\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport size\n *\n * @returns Viewport size observable\n */\nexport function watchViewportSize(): Observable {\n return fromEvent(window, \"resize\", { passive: true })\n .pipe(\n map(getViewportSize),\n startWith(getViewportSize())\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n combineLatest,\n map,\n shareReplay\n} from \"rxjs\"\n\nimport {\n ViewportOffset,\n watchViewportOffset\n} from \"../offset\"\nimport {\n ViewportSize,\n watchViewportSize\n} from \"../size\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Viewport\n */\nexport interface Viewport {\n offset: ViewportOffset /* Viewport offset */\n size: ViewportSize /* Viewport size */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport\n *\n * @returns Viewport observable\n */\nexport function watchViewport(): Observable {\n return combineLatest([\n watchViewportOffset(),\n watchViewportSize()\n ])\n .pipe(\n map(([offset, size]) => ({ offset, size })),\n shareReplay(1)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n combineLatest,\n distinctUntilKeyChanged,\n map\n} from \"rxjs\"\n\nimport { Header } from \"~/components\"\n\nimport { getElementOffset } from \"../../element\"\nimport { Viewport } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
/* Header observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport relative to element\n *\n * @param el - Element\n * @param options - Options\n *\n * @returns Viewport observable\n */\nexport function watchViewportAt(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable {\n const size$ = viewport$\n .pipe(\n distinctUntilKeyChanged(\"size\")\n )\n\n /* Compute element offset */\n const offset$ = combineLatest([size$, header$])\n .pipe(\n map(() => getElementOffset(el))\n )\n\n /* Compute relative viewport, return hot observable */\n return combineLatest([header$, viewport$, offset$])\n .pipe(\n map(([{ height }, { offset, size }, { x, y }]) => ({\n offset: {\n x: offset.x - x,\n y: offset.y - y + height\n },\n size\n }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n fromEvent,\n map,\n share,\n switchMap,\n tap,\n throttle\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Worker message\n */\nexport interface WorkerMessage {\n type: unknown /* Message type */\n data?: unknown /* Message data */\n}\n\n/**\n * Worker handler\n *\n * @template T - Message type\n */\nexport interface WorkerHandler<\n T extends WorkerMessage\n> {\n tx$: Subject /* Message transmission subject */\n rx$: Observable /* Message receive observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n *\n * @template T - Worker message type\n */\ninterface WatchOptions {\n tx$: Observable /* Message transmission observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch a web worker\n *\n * This function returns an observable that sends all values emitted by the\n * message observable to the web worker. Web worker communication is expected\n * to be bidirectional (request-response) and synchronous. 
Messages that are\n * emitted during a pending request are throttled, the last one is emitted.\n *\n * @param worker - Web worker\n * @param options - Options\n *\n * @returns Worker message observable\n */\nexport function watchWorker(\n worker: Worker, { tx$ }: WatchOptions\n): Observable {\n\n /* Intercept messages from worker-like objects */\n const rx$ = fromEvent(worker, \"message\")\n .pipe(\n map(({ data }) => data as T)\n )\n\n /* Send and receive messages, return hot observable */\n return tx$\n .pipe(\n throttle(() => rx$, { leading: true, trailing: true }),\n tap(message => worker.postMessage(message)),\n switchMap(() => rx$),\n share()\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { getElement, getLocation } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Feature flag\n */\nexport type Flag =\n | \"announce.dismiss\" /* Dismissable announcement bar */\n | \"content.code.annotate\" /* Code annotations */\n | \"content.lazy\" /* Lazy content elements */\n | \"content.tabs.link\" /* Link content tabs */\n | \"header.autohide\" /* Hide header */\n | \"navigation.expand\" /* Automatic expansion */\n | \"navigation.indexes\" /* Section pages */\n | \"navigation.instant\" /* Instant loading */\n | \"navigation.sections\" /* Section navigation */\n | \"navigation.tabs\" /* Tabs navigation */\n | \"navigation.tabs.sticky\" /* Tabs navigation (sticky) */\n | \"navigation.top\" /* Back-to-top button */\n | \"navigation.tracking\" /* Anchor tracking */\n | \"search.highlight\" /* Search highlighting */\n | \"search.share\" /* Search sharing */\n | \"search.suggest\" /* Search suggestions */\n | \"toc.follow\" /* Following table of contents */\n | \"toc.integrate\" /* Integrated table of contents */\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Translation\n */\nexport type Translation =\n | \"clipboard.copy\" /* Copy to clipboard */\n | \"clipboard.copied\" /* Copied to clipboard */\n | \"search.config.lang\" /* Search language */\n | \"search.config.pipeline\" /* Search pipeline */\n | \"search.config.separator\" /* Search separator */\n | \"search.placeholder\" /* Search */\n | \"search.result.placeholder\" /* Type to start searching */\n | \"search.result.none\" /* No matching 
documents */\n | \"search.result.one\" /* 1 matching document */\n | \"search.result.other\" /* # matching documents */\n | \"search.result.more.one\" /* 1 more on this page */\n | \"search.result.more.other\" /* # more on this page */\n | \"search.result.term.missing\" /* Missing */\n | \"select.version.title\" /* Version selector */\n\n/**\n * Translations\n */\nexport type Translations = Record\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Versioning\n */\nexport interface Versioning {\n provider: \"mike\" /* Version provider */\n default?: string /* Default version */\n}\n\n/**\n * Configuration\n */\nexport interface Config {\n base: string /* Base URL */\n features: Flag[] /* Feature flags */\n translations: Translations /* Translations */\n search: string /* Search worker URL */\n tags?: Record /* Tags mapping */\n version?: Versioning /* Versioning */\n}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve global configuration and make base URL absolute\n */\nconst script = getElement(\"#__config\")\nconst config: Config = JSON.parse(script.textContent!)\nconfig.base = `${new URL(config.base, getLocation())}`\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve global configuration\n *\n * @returns Global configuration\n */\nexport function configuration(): Config {\n return config\n}\n\n/**\n * Check whether a feature flag is enabled\n *\n * @param flag - Feature flag\n *\n * @returns Test result\n */\nexport function feature(flag: Flag): boolean {\n return config.features.includes(flag)\n}\n\n/**\n * Retrieve the translation for the given key\n *\n * @param key - Key to be translated\n * @param value - Positional value, if any\n *\n * @returns Translation\n */\nexport function translation(\n key: Translation, value?: string | number\n): string {\n return typeof value !== \"undefined\"\n ? config.translations[key].replace(\"#\", value.toString())\n : config.translations[key]\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
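The surviving doc comments describe the viewport model used throughout the bundle: a hot observable that combines a clamped scroll offset with the window size. The following is a minimal sketch of that pattern, reconstructed from those comments rather than copied from the shipped code, so names and operator choices are illustrative:

```ts
import {
  Observable, combineLatest, fromEvent,
  map, merge, shareReplay, startWith
} from "rxjs"

interface ViewportOffset { x: number; y: number }
interface ViewportSize { width: number; height: number }
interface Viewport { offset: ViewportOffset; size: ViewportSize }

/* Clamp to 0: iOS overflow scrolling can report negative offsets */
function getViewportOffset(): ViewportOffset {
  return { x: Math.max(0, scrollX), y: Math.max(0, scrollY) }
}

function getViewportSize(): ViewportSize {
  return { width: innerWidth, height: innerHeight }
}

/* Emit the current viewport on every scroll or resize; shareReplay(1)
   keeps the observable hot so late subscribers get the latest value */
export function watchViewport(): Observable<Viewport> {
  const offset$ = merge(
    fromEvent(window, "scroll", { passive: true }),
    fromEvent(window, "resize", { passive: true })
  ).pipe(map(getViewportOffset), startWith(getViewportOffset()))

  const size$ = fromEvent(window, "resize", { passive: true })
    .pipe(map(getViewportSize), startWith(getViewportSize()))

  return combineLatest([offset$, size$])
    .pipe(map(([offset, size]) => ({ offset, size })), shareReplay(1))
}
```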
[Continuation of the same embedded sources, likewise damaged by extraction: the component layer. It contained the getComponentElement/getComponentElements lookups keyed by data-md-component, plus the announcement bar (watchAnnounce/mountAnnounce, persisting the dismissal hash via __md_set("__announce", ...)), consent (watchConsent/mountConsent) and code-block (watchCodeBlock/mountCodeBlock, with Clipboard.js buttons, overflow detection and optional code annotations, mounted lazily when the "content.lazy" feature flag is set) components.]
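The component mounts summarised above share one pattern, visible in the surviving watchConsent/mountConsent code: the component is set up on subscription, a watch* observable produces state, an internal Subject mirrors that state into the DOM, and finalize() tears everything down. A minimal sketch of that pattern, with a simplified stand-in for watchConsent (the shipped implementation derives the state from a target$ observable passed in via options):

```ts
import {
  Observable, Subject, defer, finalize, fromEvent, map, tap
} from "rxjs"

/* Component shape used throughout the bundle: state plus a DOM reference */
type Component<T, U extends HTMLElement = HTMLElement> = T & { ref: U }

interface Consent { hidden: boolean }

/* Simplified stand-in: hide the consent element once it is clicked */
function watchConsent(el: HTMLElement): Observable<Consent> {
  return fromEvent(el, "click").pipe(map(() => ({ hidden: true })))
}

/* Mount on subscription: defer() delays setup, the internal subject applies
   state to the DOM, finalize() completes it when the subscriber goes away */
export function mountConsent(el: HTMLElement): Observable<Component<Consent>> {
  return defer(() => {
    const internal$ = new Subject<Consent>()
    internal$.subscribe(({ hidden }) => { el.hidden = hidden })

    return watchConsent(el).pipe(
      tap(state => internal$.next(state)),
      finalize(() => internal$.complete()),
      map(state => ({ ref: el, ...state }))
    )
  })
}
```

Returning the element reference alongside the state is what lets callers compose components without re-querying the DOM.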
[Continuation of the same embedded sources: the template renderers built on the h() factory — renderTooltip, renderAnnotation, renderClipboardButton, renderSearchDocument/renderSearchResultItem, renderSourceFacts, renderTabbedControl and renderTable. Their JSX markup was dropped by the extraction, leaving only stray expression fragments, so the original element structure cannot be recovered in this span.]
    \n
    \n {table}\n
    \n
    \n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { configuration, translation } from \"~/_\"\nimport { h } from \"~/utilities\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Version\n */\nexport interface Version {\n version: string /* Version identifier */\n title: string /* Version title */\n aliases: string[] /* Version aliases */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render a version\n *\n * @param version - Version\n *\n * @returns Element\n */\nfunction renderVersion(version: Version): HTMLElement {\n const config = configuration()\n\n /* Ensure trailing slash - see https://bit.ly/3rL5u3f */\n const url = new URL(`../${version.version}/`, config.base)\n return (\n
  • \n \n {version.title}\n \n
  • \n )\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render a version selector\n *\n * @param versions - Versions\n * @param active - Active version\n *\n * @returns Element\n */\nexport function renderVersionSelector(\n versions: Version[], active: Version\n): HTMLElement {\n return (\n
    \n \n {active.title}\n \n
      \n {versions.map(renderVersion)}\n
    \n
    \n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n animationFrameScheduler,\n auditTime,\n combineLatest,\n debounceTime,\n defer,\n delay,\n filter,\n finalize,\n fromEvent,\n map,\n merge,\n switchMap,\n take,\n takeLast,\n takeUntil,\n tap,\n throttleTime,\n withLatestFrom\n} from \"rxjs\"\n\nimport {\n ElementOffset,\n getActiveElement,\n getElementSize,\n watchElementContentOffset,\n watchElementFocus,\n watchElementOffset,\n watchElementVisibility\n} from \"~/browser\"\n\nimport { Component } from \"../../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Annotation\n */\nexport interface Annotation {\n active: boolean /* Annotation is active */\n offset: ElementOffset /* Annotation offset */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n target$: Observable /* Location target observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch annotation\n *\n * @param el - Annotation element\n * @param container - Containing element\n *\n * @returns Annotation observable\n */\nexport function watchAnnotation(\n el: HTMLElement, container: HTMLElement\n): Observable {\n const offset$ = defer(() => combineLatest([\n watchElementOffset(el),\n watchElementContentOffset(container)\n ]))\n .pipe(\n map(([{ x, y }, scroll]): ElementOffset => {\n const { width, height } = getElementSize(el)\n return ({\n x: x - scroll.x + width / 2,\n y: y - scroll.y + height / 2\n })\n })\n )\n\n /* Actively watch annotation on focus */\n return watchElementFocus(el)\n .pipe(\n switchMap(active => offset$\n .pipe(\n map(offset => ({ active, offset })),\n take(+!active || Infinity)\n )\n )\n )\n}\n\n/**\n * Mount annotation\n *\n * @param el - Annotation element\n * @param container - Containing element\n * @param options - Options\n *\n * @returns Annotation component observable\n */\nexport function mountAnnotation(\n el: HTMLElement, container: HTMLElement, { target$ }: MountOptions\n): Observable> {\n 
const [tooltip, index] = Array.from(el.children)\n\n /* Mount component on subscription */\n return defer(() => {\n const push$ = new Subject()\n const done$ = push$.pipe(takeLast(1))\n push$.subscribe({\n\n /* Handle emission */\n next({ offset }) {\n el.style.setProperty(\"--md-tooltip-x\", `${offset.x}px`)\n el.style.setProperty(\"--md-tooltip-y\", `${offset.y}px`)\n },\n\n /* Handle complete */\n complete() {\n el.style.removeProperty(\"--md-tooltip-x\")\n el.style.removeProperty(\"--md-tooltip-y\")\n }\n })\n\n /* Start animation only when annotation is visible */\n watchElementVisibility(el)\n .pipe(\n takeUntil(done$)\n )\n .subscribe(visible => {\n el.toggleAttribute(\"data-md-visible\", visible)\n })\n\n /* Toggle tooltip presence to mitigate empty lines when copying */\n merge(\n push$.pipe(filter(({ active }) => active)),\n push$.pipe(debounceTime(250), filter(({ active }) => !active))\n )\n .subscribe({\n\n /* Handle emission */\n next({ active }) {\n if (active)\n el.prepend(tooltip)\n else\n tooltip.remove()\n },\n\n /* Handle complete */\n complete() {\n el.prepend(tooltip)\n }\n })\n\n /* Toggle tooltip visibility */\n push$\n .pipe(\n auditTime(16, animationFrameScheduler)\n )\n .subscribe(({ active }) => {\n tooltip.classList.toggle(\"md-tooltip--active\", active)\n })\n\n /* Track relative origin of tooltip */\n push$\n .pipe(\n throttleTime(125, animationFrameScheduler),\n filter(() => !!el.offsetParent),\n map(() => el.offsetParent!.getBoundingClientRect()),\n map(({ x }) => x)\n )\n .subscribe({\n\n /* Handle emission */\n next(origin) {\n if (origin)\n el.style.setProperty(\"--md-tooltip-0\", `${-origin}px`)\n else\n el.style.removeProperty(\"--md-tooltip-0\")\n },\n\n /* Handle complete */\n complete() {\n el.style.removeProperty(\"--md-tooltip-0\")\n }\n })\n\n /* Allow to copy link without scrolling to anchor */\n fromEvent(index, \"click\")\n .pipe(\n takeUntil(done$),\n filter(ev => !(ev.metaKey || ev.ctrlKey))\n )\n .subscribe(ev => ev.preventDefault())\n\n /* Allow to open link in new tab or blur on close */\n fromEvent(index, \"mousedown\")\n .pipe(\n takeUntil(done$),\n withLatestFrom(push$)\n )\n .subscribe(([ev, { active }]) => {\n\n /* Open in new tab */\n if (ev.button !== 0 || ev.metaKey || ev.ctrlKey) {\n ev.preventDefault()\n\n /* Close annotation */\n } else if (active) {\n ev.preventDefault()\n\n /* Focus parent annotation, if any */\n const parent = el.parentElement!.closest(\".md-annotation\")\n if (parent instanceof HTMLElement)\n parent.focus()\n else\n getActiveElement()?.blur()\n }\n })\n\n /* Open and focus annotation on location target */\n target$\n .pipe(\n takeUntil(done$),\n filter(target => target === tooltip),\n delay(125)\n )\n .subscribe(() => el.focus())\n\n /* Create and return component */\n return watchAnnotation(el, container)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall 
be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n Subject,\n defer,\n finalize,\n merge,\n share,\n takeLast,\n takeUntil\n} from \"rxjs\"\n\nimport {\n getElement,\n getElements,\n getOptionalElement\n} from \"~/browser\"\nimport { renderAnnotation } from \"~/templates\"\n\nimport { Component } from \"../../../_\"\nimport {\n Annotation,\n mountAnnotation\n} from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n target$: Observable /* Location target observable */\n print$: Observable /* Media print observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Find all annotation markers in the given code block\n *\n * @param container - Containing element\n *\n * @returns Annotation markers\n */\nfunction findAnnotationMarkers(container: HTMLElement): Text[] {\n const markers: Text[] = []\n for (const el of getElements(\".c, .c1, .cm\", container)) {\n const nodes: Text[] = []\n\n /* Find all text nodes in current element */\n const it = document.createNodeIterator(el, NodeFilter.SHOW_TEXT)\n for (let node = it.nextNode(); node; node = it.nextNode())\n nodes.push(node as Text)\n\n /* Find all markers in each text node */\n for (let text of nodes) {\n let match: RegExpExecArray | null\n\n /* Split text at marker and add to list */\n while ((match = /(\\(\\d+\\))(!)?/.exec(text.textContent!))) {\n const [, id, force] = match\n if (typeof force === \"undefined\") {\n const marker = text.splitText(match.index)\n text = marker.splitText(id.length)\n markers.push(marker)\n\n /* Replace entire text with marker */\n } else {\n text.textContent = id\n markers.push(text)\n break\n }\n }\n }\n }\n return markers\n}\n\n/**\n * Swap the child nodes of two elements\n *\n * @param source - Source element\n * @param target - Target element\n */\nfunction swap(source: HTMLElement, target: HTMLElement): void {\n target.append(...Array.from(source.childNodes))\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount annotation list\n *\n * This function analyzes the containing code block and checks for markers\n * referring to elements in the given annotation list. If no markers are found,\n * the list is left untouched. 
Otherwise, list elements are rendered as\n * annotations inside the code block.\n *\n * @param el - Annotation list element\n * @param container - Containing element\n * @param options - Options\n *\n * @returns Annotation component observable\n */\nexport function mountAnnotationList(\n el: HTMLElement, container: HTMLElement, { target$, print$ }: MountOptions\n): Observable> {\n\n /* Compute prefix for tooltip anchors */\n const parent = container.closest(\"[id]\")\n const prefix = parent?.id\n\n /* Find and replace all markers with empty annotations */\n const annotations = new Map()\n for (const marker of findAnnotationMarkers(container)) {\n const [, id] = marker.textContent!.match(/\\((\\d+)\\)/)!\n if (getOptionalElement(`li:nth-child(${id})`, el)) {\n annotations.set(id, renderAnnotation(id, prefix))\n marker.replaceWith(annotations.get(id)!)\n }\n }\n\n /* Keep list if there are no annotations to render */\n if (annotations.size === 0)\n return EMPTY\n\n /* Mount component on subscription */\n return defer(() => {\n const done$ = new Subject()\n\n /* Retrieve container pairs for swapping */\n const pairs: [HTMLElement, HTMLElement][] = []\n for (const [id, annotation] of annotations)\n pairs.push([\n getElement(\".md-typeset\", annotation),\n getElement(`li:nth-child(${id})`, el)\n ])\n\n /* Handle print mode - see https://bit.ly/3rgPdpt */\n print$\n .pipe(\n takeUntil(done$.pipe(takeLast(1)))\n )\n .subscribe(active => {\n el.hidden = !active\n\n /* Show annotations in code block or list (print) */\n for (const [inner, child] of pairs)\n if (!active)\n swap(child, inner)\n else\n swap(inner, child)\n })\n\n /* Create and return component */\n return merge(...[...annotations]\n .map(([, annotation]) => (\n mountAnnotation(annotation, container, { target$ })\n ))\n )\n .pipe(\n finalize(() => done$.complete()),\n share()\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n map,\n of,\n shareReplay,\n tap\n} from \"rxjs\"\n\nimport { watchScript } from \"~/browser\"\nimport { h } from \"~/utilities\"\n\nimport { Component } from \"../../../_\"\n\nimport themeCSS from \"./index.css\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mermaid diagram\n */\nexport interface Mermaid {}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Mermaid instance observable\n */\nlet mermaid$: Observable\n\n/**\n * Global sequence number for diagrams\n */\nlet sequence = 0\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch Mermaid script\n *\n * @returns Mermaid scripts observable\n */\nfunction fetchScripts(): Observable {\n return typeof mermaid === \"undefined\" || mermaid instanceof Element\n ? watchScript(\"https://unpkg.com/mermaid@9.1.7/dist/mermaid.min.js\")\n : of(undefined)\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount Mermaid diagram\n *\n * @param el - Code block element\n *\n * @returns Mermaid diagram component observable\n */\nexport function mountMermaid(\n el: HTMLElement\n): Observable> {\n el.classList.remove(\"mermaid\") // Hack: mitigate https://bit.ly/3CiN6Du\n mermaid$ ||= fetchScripts()\n .pipe(\n tap(() => mermaid.initialize({\n startOnLoad: false,\n themeCSS,\n sequence: {\n actorFontSize: \"16px\", // Hack: mitigate https://bit.ly/3y0NEi3\n messageFontSize: \"16px\",\n noteFontSize: \"16px\"\n }\n })),\n map(() => undefined),\n shareReplay(1)\n )\n\n /* Render diagram */\n mermaid$.subscribe(() => {\n el.classList.add(\"mermaid\") // Hack: mitigate https://bit.ly/3CiN6Du\n const id = `__mermaid_${sequence++}`\n const host = h(\"div\", { class: \"mermaid\" })\n mermaid.mermaidAPI.render(id, el.textContent, (svg: string) => {\n\n /* Create a shadow root and inject diagram */\n const shadow = host.attachShadow({ mode: \"closed\" })\n shadow.innerHTML = svg\n\n /* Replace code block with diagram */\n el.replaceWith(host)\n })\n })\n\n /* Create and return component */\n return mermaid$\n .pipe(\n map(() => ({ ref: el }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE 
IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n defer,\n filter,\n finalize,\n map,\n merge,\n tap\n} from \"rxjs\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Details\n */\nexport interface Details {\n action: \"open\" | \"close\" /* Details state */\n reveal?: boolean /* Details is revealed */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n target$: Observable /* Location target observable */\n print$: Observable /* Media print observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n target$: Observable /* Location target observable */\n print$: Observable /* Media print observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch details\n *\n * @param el - Details element\n * @param options - Options\n *\n * @returns Details observable\n */\nexport function watchDetails(\n el: HTMLDetailsElement, { target$, print$ }: WatchOptions\n): Observable
    {\n let open = true\n return merge(\n\n /* Open and focus details on location target */\n target$\n .pipe(\n map(target => target.closest(\"details:not([open])\")!),\n filter(details => el === details),\n map(() => ({\n action: \"open\", reveal: true\n }) as Details)\n ),\n\n /* Open details on print and close afterwards */\n print$\n .pipe(\n filter(active => active || !open),\n tap(() => open = el.open),\n map(active => ({\n action: active ? \"open\" : \"close\"\n }) as Details)\n )\n )\n}\n\n/**\n * Mount details\n *\n * This function ensures that `details` tags are opened on anchor jumps and\n * prior to printing, so the whole content of the page is visible.\n *\n * @param el - Details element\n * @param options - Options\n *\n * @returns Details component observable\n */\nexport function mountDetails(\n el: HTMLDetailsElement, options: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject
    ()\n push$.subscribe(({ action, reveal }) => {\n el.toggleAttribute(\"open\", action === \"open\")\n if (reveal)\n el.scrollIntoView()\n })\n\n /* Create and return component */\n return watchDetails(el, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, of } from \"rxjs\"\n\nimport { renderTable } from \"~/templates\"\nimport { h } from \"~/utilities\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Data table\n */\nexport interface DataTable {}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Sentinel for replacement\n */\nconst sentinel = h(\"table\")\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount data table\n *\n * This function wraps a data table in another scrollable container, so it can\n * be smoothly scrolled on smaller screen sizes and won't break the layout.\n *\n * @param el - Data table element\n *\n * @returns Data table component observable\n */\nexport function mountDataTable(\n el: HTMLElement\n): Observable> {\n el.replaceWith(sentinel)\n sentinel.replaceWith(renderTable(el))\n\n /* Create and return component */\n return of({ ref: el })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * 
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n animationFrameScheduler,\n asyncScheduler,\n auditTime,\n combineLatest,\n defer,\n finalize,\n fromEvent,\n map,\n merge,\n skip,\n startWith,\n subscribeOn,\n takeLast,\n takeUntil,\n tap,\n withLatestFrom\n} from \"rxjs\"\n\nimport { feature } from \"~/_\"\nimport {\n Viewport,\n getElement,\n getElementContentOffset,\n getElementContentSize,\n getElementOffset,\n getElementSize,\n getElements,\n watchElementContentOffset,\n watchElementSize\n} from \"~/browser\"\nimport { renderTabbedControl } from \"~/templates\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Content tabs\n */\nexport interface ContentTabs {\n active: HTMLLabelElement /* Active tab label */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch content tabs\n *\n * @param el - Content tabs element\n *\n * @returns Content tabs observable\n */\nexport function watchContentTabs(\n el: HTMLElement\n): Observable {\n const inputs = getElements(\":scope > input\", el)\n const initial = inputs.find(input => input.checked) || inputs[0]\n return merge(...inputs.map(input => fromEvent(input, \"change\")\n .pipe(\n map(() => getElement(`label[for=\"${input.id}\"]`))\n )\n ))\n .pipe(\n startWith(getElement(`label[for=\"${initial.id}\"]`)),\n map(active => ({ active }))\n )\n}\n\n/**\n * Mount content tabs\n *\n * This function scrolls the active tab into view. While this functionality is\n * provided by browsers as part of `scrollInfoView`, browsers will always also\n * scroll the vertical axis, which we do not want. 
Thus, we decided to provide\n * this functionality ourselves.\n *\n * @param el - Content tabs element\n * @param options - Options\n *\n * @returns Content tabs component observable\n */\nexport function mountContentTabs(\n el: HTMLElement, { viewport$ }: MountOptions\n): Observable> {\n\n /* Render content tab previous button for pagination */\n const prev = renderTabbedControl(\"prev\")\n el.append(prev)\n\n /* Render content tab next button for pagination */\n const next = renderTabbedControl(\"next\")\n el.append(next)\n\n /* Mount component on subscription */\n const container = getElement(\".tabbed-labels\", el)\n return defer(() => {\n const push$ = new Subject()\n const done$ = push$.pipe(takeLast(1))\n combineLatest([push$, watchElementSize(el)])\n .pipe(\n auditTime(1, animationFrameScheduler),\n takeUntil(done$)\n )\n .subscribe({\n\n /* Handle emission */\n next([{ active }, size]) {\n const offset = getElementOffset(active)\n const { width } = getElementSize(active)\n\n /* Set tab indicator offset and width */\n el.style.setProperty(\"--md-indicator-x\", `${offset.x}px`)\n el.style.setProperty(\"--md-indicator-width\", `${width}px`)\n\n /* Scroll container to active content tab */\n const content = getElementContentOffset(container)\n if (\n offset.x < content.x ||\n offset.x + width > content.x + size.width\n )\n container.scrollTo({\n left: Math.max(0, offset.x - 16),\n behavior: \"smooth\"\n })\n },\n\n /* Handle complete */\n complete() {\n el.style.removeProperty(\"--md-indicator-x\")\n el.style.removeProperty(\"--md-indicator-width\")\n }\n })\n\n /* Hide content tab buttons on borders */\n combineLatest([\n watchElementContentOffset(container),\n watchElementSize(container)\n ])\n .pipe(\n takeUntil(done$)\n )\n .subscribe(([offset, size]) => {\n const content = getElementContentSize(container)\n prev.hidden = offset.x < 16\n next.hidden = offset.x > content.width - size.width - 16\n })\n\n /* Paginate content tab container on click */\n merge(\n fromEvent(prev, \"click\").pipe(map(() => -1)),\n fromEvent(next, \"click\").pipe(map(() => +1))\n )\n .pipe(\n takeUntil(done$)\n )\n .subscribe(direction => {\n const { width } = getElementSize(container)\n container.scrollBy({\n left: width * direction,\n behavior: \"smooth\"\n })\n })\n\n /* Set up linking of content tabs, if enabled */\n if (feature(\"content.tabs.link\"))\n push$.pipe(\n skip(1),\n withLatestFrom(viewport$)\n )\n .subscribe(([{ active }, { offset }]) => {\n const tab = active.innerText.trim()\n if (active.hasAttribute(\"data-md-switching\")) {\n active.removeAttribute(\"data-md-switching\")\n\n /* Determine viewport offset of active tab */\n } else {\n const y = el.offsetTop - offset.y\n\n /* Passively activate other tabs */\n for (const set of getElements(\"[data-tabs]\"))\n for (const input of getElements(\n \":scope > input\", set\n )) {\n const label = getElement(`label[for=\"${input.id}\"]`)\n if (\n label !== active &&\n label.innerText.trim() === tab\n ) {\n label.setAttribute(\"data-md-switching\", \"\")\n input.click()\n break\n }\n }\n\n /* Bring active tab into view */\n window.scrollTo({\n top: el.offsetTop - y\n })\n\n /* Persist active tabs in local storage */\n const tabs = __md_get(\"__tabs\") || []\n __md_set(\"__tabs\", [...new Set([tab, ...tabs])])\n }\n })\n\n /* Create and return component */\n return watchContentTabs(el)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n .pipe(\n 
subscribeOn(asyncScheduler)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, merge } from \"rxjs\"\n\nimport { Viewport, getElements } from \"~/browser\"\n\nimport { Component } from \"../../_\"\nimport { Annotation } from \"../annotation\"\nimport {\n CodeBlock,\n Mermaid,\n mountCodeBlock,\n mountMermaid\n} from \"../code\"\nimport {\n Details,\n mountDetails\n} from \"../details\"\nimport {\n DataTable,\n mountDataTable\n} from \"../table\"\nimport {\n ContentTabs,\n mountContentTabs\n} from \"../tabs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Content\n */\nexport type Content =\n | Annotation\n | ContentTabs\n | CodeBlock\n | Mermaid\n | DataTable\n | Details\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n target$: Observable /* Location target observable */\n print$: Observable /* Media print observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount content\n *\n * This function mounts all components that are found in the content of the\n * actual article, including code blocks, data tables and details.\n *\n * @param el - Content element\n * @param options - Options\n *\n * @returns Content component observable\n */\nexport function mountContent(\n el: HTMLElement, { viewport$, target$, print$ }: MountOptions\n): Observable> {\n return merge(\n\n /* Code blocks */\n ...getElements(\"pre:not(.mermaid) > code\", el)\n .map(child => mountCodeBlock(child, { target$, print$ })),\n\n /* Mermaid diagrams */\n ...getElements(\"pre.mermaid\", el)\n .map(child => mountMermaid(child)),\n\n /* Data tables */\n ...getElements(\"table:not([class])\", el)\n .map(child => mountDataTable(child)),\n\n /* Details */\n ...getElements(\"details\", el)\n .map(child => mountDetails(child, { target$, print$ })),\n\n /* Content tabs */\n ...getElements(\"[data-tabs]\", el)\n .map(child => mountContentTabs(child, { viewport$ }))\n )\n}\n", "/*\n * Copyright (c) 
2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n defer,\n delay,\n finalize,\n map,\n merge,\n of,\n switchMap,\n tap\n} from \"rxjs\"\n\nimport { getElement } from \"~/browser\"\n\nimport { Component } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Dialog\n */\nexport interface Dialog {\n message: string /* Dialog message */\n active: boolean /* Dialog is active */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n alert$: Subject /* Alert subject */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n alert$: Subject /* Alert subject */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch dialog\n *\n * @param _el - Dialog element\n * @param options - Options\n *\n * @returns Dialog observable\n */\nexport function watchDialog(\n _el: HTMLElement, { alert$ }: WatchOptions\n): Observable {\n return alert$\n .pipe(\n switchMap(message => merge(\n of(true),\n of(false).pipe(delay(2000))\n )\n .pipe(\n map(active => ({ message, active }))\n )\n )\n )\n}\n\n/**\n * Mount dialog\n *\n * This function reveals the dialog in the right corner when a new alert is\n * emitted through the subject that is passed as part of the options.\n *\n * @param el - Dialog element\n * @param options - Options\n *\n * @returns Dialog component observable\n */\nexport function mountDialog(\n el: HTMLElement, options: MountOptions\n): Observable> {\n const inner = getElement(\".md-typeset\", el)\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe(({ message, active }) => {\n el.classList.toggle(\"md-dialog--active\", active)\n inner.textContent = message\n })\n\n /* Create and return component */\n return watchDialog(el, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software 
and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n bufferCount,\n combineLatest,\n combineLatestWith,\n defer,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n filter,\n map,\n of,\n shareReplay,\n startWith,\n switchMap,\n takeLast,\n takeUntil\n} from \"rxjs\"\n\nimport { feature } from \"~/_\"\nimport {\n Viewport,\n watchElementSize,\n watchToggle\n} from \"~/browser\"\n\nimport { Component } from \"../../_\"\nimport { Main } from \"../../main\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Header\n */\nexport interface Header {\n height: number /* Header visible height */\n hidden: boolean /* Header is hidden */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n main$: Observable
    /* Main area observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Compute whether the header is hidden\n *\n * If the user scrolls past a certain threshold, the header can be hidden when\n * scrolling down, and shown when scrolling up.\n *\n * @param options - Options\n *\n * @returns Toggle observable\n */\nfunction isHidden({ viewport$ }: WatchOptions): Observable {\n if (!feature(\"header.autohide\"))\n return of(false)\n\n /* Compute direction and turning point */\n const direction$ = viewport$\n .pipe(\n map(({ offset: { y } }) => y),\n bufferCount(2, 1),\n map(([a, b]) => [a < b, b] as const),\n distinctUntilKeyChanged(0)\n )\n\n /* Compute whether header should be hidden */\n const hidden$ = combineLatest([viewport$, direction$])\n .pipe(\n filter(([{ offset }, [, y]]) => Math.abs(y - offset.y) > 100),\n map(([, [direction]]) => direction),\n distinctUntilChanged()\n )\n\n /* Compute threshold for hiding */\n const search$ = watchToggle(\"search\")\n return combineLatest([viewport$, search$])\n .pipe(\n map(([{ offset }, search]) => offset.y > 400 && !search),\n distinctUntilChanged(),\n switchMap(active => active ? hidden$ : of(false)),\n startWith(false)\n )\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch header\n *\n * @param el - Header element\n * @param options - Options\n *\n * @returns Header observable\n */\nexport function watchHeader(\n el: HTMLElement, options: WatchOptions\n): Observable
    {\n return defer(() => combineLatest([\n watchElementSize(el),\n isHidden(options)\n ]))\n .pipe(\n map(([{ height }, hidden]) => ({\n height,\n hidden\n })),\n distinctUntilChanged((a, b) => (\n a.height === b.height &&\n a.hidden === b.hidden\n )),\n shareReplay(1)\n )\n}\n\n/**\n * Mount header\n *\n * This function manages the different states of the header, i.e. whether it's\n * hidden or rendered with a shadow. This depends heavily on the main area.\n *\n * @param el - Header element\n * @param options - Options\n *\n * @returns Header component observable\n */\nexport function mountHeader(\n el: HTMLElement, { header$, main$ }: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject
    ()\n const done$ = push$.pipe(takeLast(1))\n push$\n .pipe(\n distinctUntilKeyChanged(\"active\"),\n combineLatestWith(header$)\n )\n .subscribe(([{ active }, { hidden }]) => {\n el.classList.toggle(\"md-header--shadow\", active && !hidden)\n el.hidden = hidden\n })\n\n /* Link to main area */\n main$.subscribe(push$)\n\n /* Create and return component */\n return header$\n .pipe(\n takeUntil(done$),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n Subject,\n defer,\n distinctUntilKeyChanged,\n finalize,\n map,\n tap\n} from \"rxjs\"\n\nimport {\n Viewport,\n getElementSize,\n getOptionalElement,\n watchViewportAt\n} from \"~/browser\"\n\nimport { Component } from \"../../_\"\nimport { Header } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Header\n */\nexport interface HeaderTitle {\n active: boolean /* Header title is active */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch header title\n *\n * @param el - Heading element\n * @param options - Options\n *\n * @returns Header title observable\n */\nexport function watchHeaderTitle(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable {\n return watchViewportAt(el, { viewport$, header$ })\n .pipe(\n map(({ offset: { y } }) => {\n const { height } = getElementSize(el)\n return {\n active: y >= height\n }\n }),\n distinctUntilKeyChanged(\"active\")\n )\n}\n\n/**\n * Mount header title\n *\n * This function swaps the header title from the site title to the title of the\n * current page when the user scrolls past the first headline.\n *\n * @param el - Header title element\n * @param options - Options\n *\n * @returns Header title component observable\n */\nexport function mountHeaderTitle(\n el: HTMLElement, options: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe(({ active }) => {\n el.classList.toggle(\"md-header__title--active\", active)\n })\n\n /* Obtain headline, if any */\n const heading = getOptionalElement(\"article h1\")\n if (typeof heading === \"undefined\")\n return EMPTY\n\n /* Create and return component */\n return watchHeaderTitle(heading, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n combineLatest,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n map,\n switchMap\n} from \"rxjs\"\n\nimport {\n Viewport,\n watchElementSize\n} from \"~/browser\"\n\nimport { Header } from \"../header\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Main area\n */\nexport interface Main {\n offset: number /* Main area top offset */\n height: number /* Main area visible height */\n active: boolean /* Main area is active */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch main area\n *\n * This function returns an observable that computes the visual parameters of\n * the main area which depends on the viewport vertical offset and height, as\n * well as the height of the header element, if the header is fixed.\n *\n * @param el - Main area element\n * @param options - Options\n *\n * @returns Main area observable\n */\nexport function watchMain(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable
    {\n\n /* Compute necessary adjustment for header */\n const adjust$ = header$\n .pipe(\n map(({ height }) => height),\n distinctUntilChanged()\n )\n\n /* Compute the main area's top and bottom borders */\n const border$ = adjust$\n .pipe(\n switchMap(() => watchElementSize(el)\n .pipe(\n map(({ height }) => ({\n top: el.offsetTop,\n bottom: el.offsetTop + height\n })),\n distinctUntilKeyChanged(\"bottom\")\n )\n )\n )\n\n /* Compute the main area's offset, visible height and if we scrolled past */\n return combineLatest([adjust$, border$, viewport$])\n .pipe(\n map(([header, { top, bottom }, { offset: { y }, size: { height } }]) => {\n height = Math.max(0, height\n - Math.max(0, top - y, header)\n - Math.max(0, height + y - bottom)\n )\n return {\n offset: top - header,\n height,\n active: top - header <= y\n }\n }),\n distinctUntilChanged((a, b) => (\n a.offset === b.offset &&\n a.height === b.height &&\n a.active === b.active\n ))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n asyncScheduler,\n defer,\n finalize,\n fromEvent,\n map,\n mergeMap,\n observeOn,\n of,\n shareReplay,\n startWith,\n tap\n} from \"rxjs\"\n\nimport { getElements } from \"~/browser\"\n\nimport { Component } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Palette colors\n */\nexport interface PaletteColor {\n scheme?: string /* Color scheme */\n primary?: string /* Primary color */\n accent?: string /* Accent color */\n}\n\n/**\n * Palette\n */\nexport interface Palette {\n index: number /* Palette index */\n color: PaletteColor /* Palette colors */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch color palette\n *\n * @param inputs - Color palette element\n *\n * @returns Color palette observable\n */\nexport function watchPalette(\n inputs: HTMLInputElement[]\n): Observable {\n const current = __md_get(\"__palette\") || {\n index: inputs.findIndex(input => matchMedia(\n input.getAttribute(\"data-md-color-media\")!\n ).matches)\n }\n\n /* Emit changes in color palette */\n return of(...inputs)\n .pipe(\n mergeMap(input => fromEvent(input, \"change\")\n .pipe(\n map(() => input)\n )\n ),\n startWith(inputs[Math.max(0, current.index)]),\n map(input => ({\n index: inputs.indexOf(input),\n color: {\n scheme: input.getAttribute(\"data-md-color-scheme\"),\n primary: input.getAttribute(\"data-md-color-primary\"),\n accent: input.getAttribute(\"data-md-color-accent\")\n }\n } as Palette)),\n shareReplay(1)\n )\n}\n\n/**\n * Mount color palette\n *\n * @param el - Color palette element\n *\n * @returns Color palette component observable\n */\nexport function mountPalette(\n el: HTMLElement\n): Observable> {\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe(palette => {\n document.body.setAttribute(\"data-md-color-switching\", \"\")\n\n /* Set color palette */\n for (const [key, value] of Object.entries(palette.color))\n document.body.setAttribute(`data-md-color-${key}`, value)\n\n /* Toggle visibility */\n for (let index = 0; index < inputs.length; index++) {\n const label = inputs[index].nextElementSibling\n if (label instanceof HTMLElement)\n label.hidden = palette.index !== index\n }\n\n /* Persist preference in local storage */\n __md_set(\"__palette\", palette)\n })\n\n /* Revert transition durations after color switch */\n push$.pipe(observeOn(asyncScheduler))\n .subscribe(() => {\n document.body.removeAttribute(\"data-md-color-switching\")\n })\n\n /* Create and return component */\n const inputs = getElements(\"input\", el)\n return watchPalette(inputs)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without 
limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport ClipboardJS from \"clipboard\"\nimport {\n Observable,\n Subject,\n map,\n tap\n} from \"rxjs\"\n\nimport { translation } from \"~/_\"\nimport { getElement } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Setup options\n */\ninterface SetupOptions {\n alert$: Subject /* Alert subject */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Extract text to copy\n *\n * @param el - HTML element\n *\n * @returns Extracted text\n */\nfunction extract(el: HTMLElement): string {\n el.setAttribute(\"data-md-copying\", \"\")\n const text = el.innerText\n el.removeAttribute(\"data-md-copying\")\n return text\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up Clipboard.js integration\n *\n * @param options - Options\n */\nexport function setupClipboardJS(\n { alert$ }: SetupOptions\n): void {\n if (ClipboardJS.isSupported()) {\n new Observable(subscriber => {\n new ClipboardJS(\"[data-clipboard-target], [data-clipboard-text]\", {\n text: el => (\n el.getAttribute(\"data-clipboard-text\")! ||\n extract(getElement(\n el.getAttribute(\"data-clipboard-target\")!\n ))\n )\n })\n .on(\"success\", ev => subscriber.next(ev))\n })\n .pipe(\n tap(ev => {\n const trigger = ev.trigger as HTMLElement\n trigger.focus()\n }),\n map(() => translation(\"clipboard.copied\"))\n )\n .subscribe(alert$)\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n catchError,\n defaultIfEmpty,\n map,\n of,\n tap\n} from \"rxjs\"\n\nimport { configuration } from \"~/_\"\nimport { getElements, requestXML } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Sitemap, i.e. a list of URLs\n */\nexport type Sitemap = string[]\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Preprocess a list of URLs\n *\n * This function replaces the `site_url` in the sitemap with the actual base\n * URL, to allow instant loading to work in occasions like Netlify previews.\n *\n * @param urls - URLs\n *\n * @returns URL path parts\n */\nfunction preprocess(urls: Sitemap): Sitemap {\n if (urls.length < 2)\n return [\"\"]\n\n /* Take the first two URLs and remove everything after the last slash */\n const [root, next] = [...urls]\n .sort((a, b) => a.length - b.length)\n .map(url => url.replace(/[^/]+$/, \"\"))\n\n /* Compute common prefix */\n let index = 0\n if (root === next)\n index = root.length\n else\n while (root.charCodeAt(index) === next.charCodeAt(index))\n index++\n\n /* Remove common prefix and return in original order */\n return urls.map(url => url.replace(root.slice(0, index), \"\"))\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch the sitemap for the given base URL\n *\n * @param base - Base URL\n *\n * @returns Sitemap observable\n */\nexport function fetchSitemap(base?: URL): Observable {\n const cached = __md_get(\"__sitemap\", sessionStorage, base)\n if (cached) {\n return of(cached)\n } else {\n const config = configuration()\n return requestXML(new URL(\"sitemap.xml\", base || config.base))\n .pipe(\n map(sitemap => preprocess(getElements(\"loc\", sitemap)\n .map(node => node.textContent!)\n )),\n catchError(() => EMPTY), // @todo refactor instant loading\n defaultIfEmpty([]),\n tap(sitemap => __md_set(\"__sitemap\", sitemap, sessionStorage, base))\n )\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n bufferCount,\n catchError,\n concatMap,\n debounceTime,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n filter,\n fromEvent,\n map,\n merge,\n of,\n sample,\n share,\n skip,\n skipUntil,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"~/_\"\nimport {\n Viewport,\n ViewportOffset,\n getElements,\n getOptionalElement,\n request,\n setLocation,\n setLocationHash\n} from \"~/browser\"\nimport { getComponentElement } from \"~/components\"\nimport { h } from \"~/utilities\"\n\nimport { fetchSitemap } from \"../sitemap\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * History state\n */\nexport interface HistoryState {\n url: URL /* State URL */\n offset?: ViewportOffset /* State viewport offset */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Setup options\n */\ninterface SetupOptions {\n document$: Subject /* Document subject */\n location$: Subject /* Location subject */\n viewport$: Observable /* Viewport observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up instant loading\n *\n * When fetching, theoretically, we could use `responseType: \"document\"`, but\n * since all MkDocs links are relative, we need to make sure that the current\n * location matches the document we just loaded. Otherwise any relative links\n * in the document could use the old location.\n *\n * This is the reason why we need to synchronize history events and the process\n * of fetching the document for navigation changes (except `popstate` events):\n *\n * 1. Fetch document via `XMLHTTPRequest`\n * 2. Set new location via `history.pushState`\n * 3. Parse and emit fetched document\n *\n * For `popstate` events, we must not use `history.pushState`, or the forward\n * history will be irreversibly overwritten. 
In case the request fails, the\n * location change is dispatched regularly.\n *\n * @param options - Options\n */\nexport function setupInstantLoading(\n { document$, location$, viewport$ }: SetupOptions\n): void {\n const config = configuration()\n if (location.protocol === \"file:\")\n return\n\n /* Disable automatic scroll restoration */\n if (\"scrollRestoration\" in history) {\n history.scrollRestoration = \"manual\"\n\n /* Hack: ensure that reloads restore viewport offset */\n fromEvent(window, \"beforeunload\")\n .subscribe(() => {\n history.scrollRestoration = \"auto\"\n })\n }\n\n /* Hack: ensure absolute favicon link to omit 404s when switching */\n const favicon = getOptionalElement(\"link[rel=icon]\")\n if (typeof favicon !== \"undefined\")\n favicon.href = favicon.href\n\n /* Intercept internal navigation */\n const push$ = fetchSitemap()\n .pipe(\n map(paths => paths.map(path => `${new URL(path, config.base)}`)),\n switchMap(urls => fromEvent(document.body, \"click\")\n .pipe(\n filter(ev => !ev.metaKey && !ev.ctrlKey),\n switchMap(ev => {\n if (ev.target instanceof Element) {\n const el = ev.target.closest(\"a\")\n if (el && !el.target) {\n const url = new URL(el.href)\n\n /* Canonicalize URL */\n url.search = \"\"\n url.hash = \"\"\n\n /* Check if URL should be intercepted */\n if (\n url.pathname !== location.pathname &&\n urls.includes(url.toString())\n ) {\n ev.preventDefault()\n return of({\n url: new URL(el.href)\n })\n }\n }\n }\n return NEVER\n })\n )\n ),\n share()\n )\n\n /* Intercept history back and forward */\n const pop$ = fromEvent(window, \"popstate\")\n .pipe(\n filter(ev => ev.state !== null),\n map(ev => ({\n url: new URL(location.href),\n offset: ev.state\n })),\n share()\n )\n\n /* Emit location change */\n merge(push$, pop$)\n .pipe(\n distinctUntilChanged((a, b) => a.url.href === b.url.href),\n map(({ url }) => url)\n )\n .subscribe(location$)\n\n /* Fetch document via `XMLHTTPRequest` */\n const response$ = location$\n .pipe(\n distinctUntilKeyChanged(\"pathname\"),\n switchMap(url => request(url.href)\n .pipe(\n catchError(() => {\n setLocation(url)\n return NEVER\n })\n )\n ),\n share()\n )\n\n /* Set new location via `history.pushState` */\n push$\n .pipe(\n sample(response$)\n )\n .subscribe(({ url }) => {\n history.pushState({}, \"\", `${url}`)\n })\n\n /* Parse and emit fetched document */\n const dom = new DOMParser()\n response$\n .pipe(\n switchMap(res => res.text()),\n map(res => dom.parseFromString(res, \"text/html\"))\n )\n .subscribe(document$)\n\n /* Replace meta tags and components */\n document$\n .pipe(\n skip(1)\n )\n .subscribe(replacement => {\n for (const selector of [\n\n /* Meta tags */\n \"title\",\n \"link[rel=canonical]\",\n \"meta[name=author]\",\n \"meta[name=description]\",\n\n /* Components */\n \"[data-md-component=announce]\",\n \"[data-md-component=container]\",\n \"[data-md-component=header-topic]\",\n \"[data-md-component=outdated]\",\n \"[data-md-component=logo]\",\n \"[data-md-component=skip]\",\n ...feature(\"navigation.tabs.sticky\")\n ? 
[\"[data-md-component=tabs]\"]\n : []\n ]) {\n const source = getOptionalElement(selector)\n const target = getOptionalElement(selector, replacement)\n if (\n typeof source !== \"undefined\" &&\n typeof target !== \"undefined\"\n ) {\n source.replaceWith(target)\n }\n }\n })\n\n /* Re-evaluate scripts */\n document$\n .pipe(\n skip(1),\n map(() => getComponentElement(\"container\")),\n switchMap(el => getElements(\"script\", el)),\n concatMap(el => {\n const script = h(\"script\")\n if (el.src) {\n for (const name of el.getAttributeNames())\n script.setAttribute(name, el.getAttribute(name)!)\n el.replaceWith(script)\n\n /* Complete when script is loaded */\n return new Observable(observer => {\n script.onload = () => observer.complete()\n })\n\n /* Complete immediately */\n } else {\n script.textContent = el.textContent\n el.replaceWith(script)\n return EMPTY\n }\n })\n )\n .subscribe()\n\n /* Emit history state change */\n merge(push$, pop$)\n .pipe(\n sample(document$)\n )\n .subscribe(({ url, offset }) => {\n if (url.hash && !offset) {\n setLocationHash(url.hash)\n } else {\n window.scrollTo(0, offset?.y || 0)\n }\n })\n\n /* Debounce update of viewport offset */\n viewport$\n .pipe(\n skipUntil(push$),\n debounceTime(250),\n distinctUntilKeyChanged(\"offset\")\n )\n .subscribe(({ offset }) => {\n history.replaceState(offset, \"\")\n })\n\n /* Set viewport offset from history */\n merge(push$, pop$)\n .pipe(\n bufferCount(2, 1),\n filter(([a, b]) => a.url.pathname === b.url.pathname),\n map(([, state]) => state)\n )\n .subscribe(({ offset }) => {\n window.scrollTo(0, offset?.y || 0)\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport escapeHTML from \"escape-html\"\n\nimport { SearchIndexDocument } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search document\n */\nexport interface SearchDocument extends SearchIndexDocument {\n parent?: SearchIndexDocument /* Parent article */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search document mapping\n */\nexport type SearchDocumentMap = Map\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search document mapping\n *\n * @param docs - Search index documents\n *\n * @returns Search document map\n */\nexport function setupSearchDocumentMap(\n docs: SearchIndexDocument[]\n): SearchDocumentMap {\n const documents = new Map()\n const parents = new Set()\n for (const doc of docs) {\n const [path, hash] = doc.location.split(\"#\")\n\n /* Extract location, title and tags */\n const location = doc.location\n const title = doc.title\n const tags = doc.tags\n\n /* Escape and cleanup text */\n const text = escapeHTML(doc.text)\n .replace(/\\s+(?=[,.:;!?])/g, \"\")\n .replace(/\\s+/g, \" \")\n\n /* Handle section */\n if (hash) {\n const parent = documents.get(path)!\n\n /* Ignore first section, override article */\n if (!parents.has(parent)) {\n parent.title = doc.title\n parent.text = text\n\n /* Remember that we processed the article */\n parents.add(parent)\n\n /* Add subsequent section */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n parent\n })\n }\n\n /* Add article */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n ...tags && { tags }\n })\n }\n }\n return documents\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport escapeHTML from \"escape-html\"\n\nimport { SearchIndexConfig } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search highlight function\n *\n * @param value - Value\n *\n * @returns Highlighted value\n */\nexport type SearchHighlightFn = (value: string) => string\n\n/**\n * Search highlight factory function\n *\n * @param query - Query value\n *\n * @returns Search highlight function\n */\nexport type SearchHighlightFactoryFn = (query: string) => SearchHighlightFn\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search highlighter\n *\n * @param config - Search index configuration\n * @param escape - Whether to escape HTML\n *\n * @returns Search highlight factory function\n */\nexport function setupSearchHighlighter(\n config: SearchIndexConfig, escape: boolean\n): SearchHighlightFactoryFn {\n const separator = new RegExp(config.separator, \"img\")\n const highlight = (_: unknown, data: string, term: string) => {\n return `${data}${term}`\n }\n\n /* Return factory function */\n return (query: string) => {\n query = query\n .replace(/[\\s*+\\-:~^]+/g, \" \")\n .trim()\n\n /* Create search term match expression */\n const match = new RegExp(`(^|${config.separator})(${\n query\n .replace(/[|\\\\{}()[\\]^$+*?.-]/g, \"\\\\$&\")\n .replace(separator, \"|\")\n })`, \"img\")\n\n /* Highlight string value */\n return value => (\n escape\n ? escapeHTML(value)\n : value\n )\n .replace(match, highlight)\n .replace(/<\\/mark>(\\s+)]*>/img, \"$1\")\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search transformation function\n *\n * @param value - Query value\n *\n * @returns Transformed query value\n */\nexport type SearchTransformFn = (value: string) => string\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Default transformation function\n *\n * 1. Search for terms in quotation marks and prepend a `+` modifier to denote\n * that the resulting document must contain all terms, converting the query\n * to an `AND` query (as opposed to the default `OR` behavior). While users\n * may expect terms enclosed in quotation marks to map to span queries, i.e.\n * for which order is important, Lunr.js doesn't support them, so the best\n * we can do is to convert the terms to an `AND` query.\n *\n * 2. Replace control characters which are not located at the beginning of the\n * query or preceded by white space, or are not followed by a non-whitespace\n * character or are at the end of the query string. Furthermore, filter\n * unmatched quotation marks.\n *\n * 3. Trim excess whitespace from left and right.\n *\n * @param query - Query value\n *\n * @returns Transformed query value\n */\nexport function defaultTransform(query: string): string {\n return query\n .split(/\"([^\"]+)\"/g) /* => 1 */\n .map((terms, index) => index & 1\n ? terms.replace(/^\\b|^(?![^\\x00-\\x7F]|$)|\\s+/g, \" +\")\n : terms\n )\n .join(\"\")\n .replace(/\"|(?:^|\\s+)[*+\\-:^~]+(?=\\s+|$)/g, \"\") /* => 2 */\n .trim() /* => 3 */\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A RTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { SearchIndex, SearchResult } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search message type\n */\nexport const enum SearchMessageType {\n SETUP, /* Search index setup */\n READY, /* Search index ready */\n QUERY, /* Search query */\n RESULT /* Search results */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Message containing the data necessary to setup the search index\n */\nexport interface SearchSetupMessage {\n type: SearchMessageType.SETUP /* Message type */\n data: SearchIndex /* Message data */\n}\n\n/**\n * Message indicating the search index is ready\n */\nexport interface SearchReadyMessage {\n type: SearchMessageType.READY /* Message type */\n}\n\n/**\n * Message containing a search query\n */\nexport interface SearchQueryMessage {\n type: SearchMessageType.QUERY /* Message type */\n data: string /* Message data */\n}\n\n/**\n * Message containing results for a search query\n */\nexport interface SearchResultMessage {\n type: SearchMessageType.RESULT /* Message type */\n data: SearchResult /* Message data */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Message exchanged with the search worker\n */\nexport type SearchMessage =\n | SearchSetupMessage\n | SearchReadyMessage\n | SearchQueryMessage\n | SearchResultMessage\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Type guard for search setup messages\n *\n * @param message - Search worker message\n *\n * @returns Test result\n */\nexport function isSearchSetupMessage(\n message: SearchMessage\n): message is SearchSetupMessage {\n return message.type === SearchMessageType.SETUP\n}\n\n/**\n * Type guard for search ready messages\n *\n * @param message - Search worker message\n *\n * @returns Test result\n */\nexport function isSearchReadyMessage(\n message: SearchMessage\n): message is SearchReadyMessage {\n return message.type === SearchMessageType.READY\n}\n\n/**\n * Type guard for search query messages\n *\n * @param message - Search worker message\n *\n * @returns Test result\n */\nexport function isSearchQueryMessage(\n message: SearchMessage\n): message is SearchQueryMessage {\n return message.type === SearchMessageType.QUERY\n}\n\n/**\n * Type guard for search result messages\n *\n * @param message - Search worker message\n *\n * @returns Test result\n */\nexport function isSearchResultMessage(\n message: SearchMessage\n): message is SearchResultMessage {\n return message.type === SearchMessageType.RESULT\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the 
Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A RTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n ObservableInput,\n Subject,\n from,\n map,\n share\n} from \"rxjs\"\n\nimport { configuration, feature, translation } from \"~/_\"\nimport { WorkerHandler, watchWorker } from \"~/browser\"\n\nimport { SearchIndex } from \"../../_\"\nimport {\n SearchOptions,\n SearchPipeline\n} from \"../../options\"\nimport {\n SearchMessage,\n SearchMessageType,\n SearchSetupMessage,\n isSearchResultMessage\n} from \"../message\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search worker\n */\nexport type SearchWorker = WorkerHandler\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up search index\n *\n * @param data - Search index\n *\n * @returns Search index\n */\nfunction setupSearchIndex({ config, docs }: SearchIndex): SearchIndex {\n\n /* Override default language with value from translation */\n if (config.lang.length === 1 && config.lang[0] === \"en\")\n config.lang = [\n translation(\"search.config.lang\")\n ]\n\n /* Override default separator with value from translation */\n if (config.separator === \"[\\\\s\\\\-]+\")\n config.separator = translation(\"search.config.separator\")\n\n /* Set pipeline from translation */\n const pipeline = translation(\"search.config.pipeline\")\n .split(/\\s*,\\s*/)\n .filter(Boolean) as SearchPipeline\n\n /* Determine search options */\n const options: SearchOptions = {\n pipeline,\n suggestions: feature(\"search.suggest\")\n }\n\n /* Return search index after defaulting */\n return { config, docs, options }\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up search worker\n *\n * This function creates a web worker to set up and query the search index,\n * which is done using Lunr.js. 
The index must be passed as an observable to\n * enable hacks like _localsearch_ via search index embedding as JSON.\n *\n * @param url - Worker URL\n * @param index - Search index observable input\n *\n * @returns Search worker\n */\nexport function setupSearchWorker(\n url: string, index: ObservableInput\n): SearchWorker {\n const config = configuration()\n const worker = new Worker(url)\n\n /* Create communication channels and resolve relative links */\n const tx$ = new Subject()\n const rx$ = watchWorker(worker, { tx$ })\n .pipe(\n map(message => {\n if (isSearchResultMessage(message)) {\n for (const result of message.data.items)\n for (const document of result)\n document.location = `${new URL(document.location, config.base)}`\n }\n return message\n }),\n share()\n )\n\n /* Set up search index */\n from(index)\n .pipe(\n map(data => ({\n type: SearchMessageType.SETUP,\n data: setupSearchIndex(data)\n } as SearchSetupMessage))\n )\n .subscribe(tx$.next.bind(tx$))\n\n /* Return search worker */\n return { tx$, rx$ }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Subject,\n catchError,\n combineLatest,\n filter,\n fromEvent,\n map,\n of,\n switchMap,\n withLatestFrom\n} from \"rxjs\"\n\nimport { configuration } from \"~/_\"\nimport {\n getElement,\n getLocation,\n requestJSON,\n setLocation\n} from \"~/browser\"\nimport { getComponentElements } from \"~/components\"\nimport {\n Version,\n renderVersionSelector\n} from \"~/templates\"\n\nimport { fetchSitemap } from \"../sitemap\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Setup options\n */\ninterface SetupOptions {\n document$: Subject /* Document subject */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up version selector\n *\n * @param options - Options\n */\nexport function setupVersionSelector(\n { document$ }: SetupOptions\n): void {\n const config = configuration()\n const versions$ = requestJSON(\n new URL(\"../versions.json\", config.base)\n )\n .pipe(\n catchError(() => EMPTY) // @todo refactor instant loading\n )\n\n /* Determine current version */\n const current$ = versions$\n .pipe(\n map(versions => {\n const [, current] = config.base.match(/([^/]+)\\/?$/)!\n return versions.find(({ version, aliases }) => (\n version === current || aliases.includes(current)\n )) || versions[0]\n })\n )\n\n /* Intercept inter-version navigation */\n versions$\n .pipe(\n map(versions => new Map(versions.map(version => [\n `${new URL(`../${version.version}/`, config.base)}`,\n version\n ]))),\n switchMap(urls => fromEvent(document.body, \"click\")\n .pipe(\n filter(ev => !ev.metaKey && !ev.ctrlKey),\n withLatestFrom(current$),\n switchMap(([ev, current]) => {\n if (ev.target instanceof Element) {\n const el = ev.target.closest(\"a\")\n if (el && !el.target && urls.has(el.href)) {\n const url = el.href\n // This is a temporary hack to detect if a version inside the\n // version selector or on another part of the site was clicked.\n // If we're inside the version selector, we definitely want to\n // find the same page, as we might have different deployments\n // due to aliases. However, if we're outside the version\n // selector, we must abort here, because we might otherwise\n // interfere with instant loading. We need to refactor this\n // at some point together with instant loading.\n //\n // See https://github.com/squidfunk/mkdocs-material/issues/4012\n if (!ev.target.closest(\".md-version\")) {\n const version = urls.get(url)!\n if (version === current)\n return EMPTY\n }\n ev.preventDefault()\n return of(url)\n }\n }\n return EMPTY\n }),\n switchMap(url => {\n const { version } = urls.get(url)!\n return fetchSitemap(new URL(url))\n .pipe(\n map(sitemap => {\n const location = getLocation()\n const path = location.href.replace(config.base, \"\")\n return sitemap.includes(path.split(\"#\")[0])\n ? 
new URL(`../${version}/${path}`, config.base)\n : new URL(url)\n })\n )\n })\n )\n )\n )\n .subscribe(url => setLocation(url))\n\n /* Render version selector and warning */\n combineLatest([versions$, current$])\n .subscribe(([versions, current]) => {\n const topic = getElement(\".md-header__topic\")\n topic.appendChild(renderVersionSelector(versions, current))\n })\n\n /* Integrate outdated version banner with instant loading */\n document$.pipe(switchMap(() => current$))\n .subscribe(current => {\n\n /* Check if version state was already determined */\n let outdated = __md_get(\"__outdated\", sessionStorage)\n if (outdated === null) {\n const latest = config.version?.default || \"latest\"\n outdated = !current.aliases.includes(latest)\n\n /* Persist version state in session storage */\n __md_set(\"__outdated\", outdated, sessionStorage)\n }\n\n /* Unhide outdated version banner */\n if (outdated)\n for (const warning of getComponentElements(\"outdated\"))\n warning.hidden = false\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n combineLatest,\n delay,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n filter,\n finalize,\n fromEvent,\n map,\n merge,\n share,\n shareReplay,\n startWith,\n take,\n takeLast,\n takeUntil,\n tap\n} from \"rxjs\"\n\nimport { translation } from \"~/_\"\nimport {\n getLocation,\n setToggle,\n watchElementFocus,\n watchToggle\n} from \"~/browser\"\nimport {\n SearchMessageType,\n SearchQueryMessage,\n SearchWorker,\n defaultTransform,\n isSearchReadyMessage\n} from \"~/integrations\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search query\n */\nexport interface SearchQuery {\n value: string /* Query value */\n focus: boolean /* Query focus */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch search query\n *\n * Note that the focus event which triggers re-reading the current query value\n * is delayed by `1ms` so the input's empty state is allowed to propagate.\n *\n * @param el - Search query element\n * @param worker - Search worker\n *\n * @returns Search query observable\n */\nexport function watchSearchQuery(\n el: HTMLInputElement, { rx$ }: SearchWorker\n): Observable {\n const fn = __search?.transform || defaultTransform\n\n /* Immediately show search dialog */\n const { searchParams } = getLocation()\n if (searchParams.has(\"q\"))\n setToggle(\"search\", true)\n\n /* Intercept query parameter (deep link) */\n const param$ = rx$\n .pipe(\n filter(isSearchReadyMessage),\n take(1),\n map(() => searchParams.get(\"q\") || \"\")\n )\n\n /* Remove query parameter when search is closed */\n watchToggle(\"search\")\n .pipe(\n filter(active => !active),\n take(1)\n )\n .subscribe(() => {\n const url = new URL(location.href)\n url.searchParams.delete(\"q\")\n history.replaceState({}, \"\", `${url}`)\n })\n\n /* Set query from parameter */\n param$.subscribe(value => { // TODO: not ideal - find a better way\n if (value) {\n el.value = value\n el.focus()\n }\n })\n\n /* Intercept focus and input events */\n const focus$ = watchElementFocus(el)\n const value$ = merge(\n fromEvent(el, \"keyup\"),\n fromEvent(el, \"focus\").pipe(delay(1)),\n param$\n )\n .pipe(\n map(() => fn(el.value)),\n startWith(\"\"),\n distinctUntilChanged(),\n )\n\n /* Combine into single observable */\n return combineLatest([value$, focus$])\n .pipe(\n map(([value, focus]) => ({ value, focus })),\n shareReplay(1)\n )\n}\n\n/**\n * Mount search query\n *\n * @param el - Search query element\n * @param worker - Search worker\n *\n * @returns Search query component observable\n */\nexport function mountSearchQuery(\n el: HTMLInputElement, { tx$, rx$ }: SearchWorker\n): Observable> {\n const push$ = new Subject()\n const done$ = push$.pipe(takeLast(1))\n\n /* Handle value changes */\n push$\n .pipe(\n distinctUntilKeyChanged(\"value\"),\n map(({ value }): SearchQueryMessage => ({\n type: SearchMessageType.QUERY,\n data: value\n }))\n )\n .subscribe(tx$.next.bind(tx$))\n\n /* Handle 
focus changes */\n push$\n .pipe(\n distinctUntilKeyChanged(\"focus\")\n )\n .subscribe(({ focus }) => {\n if (focus) {\n setToggle(\"search\", focus)\n el.placeholder = \"\"\n } else {\n el.placeholder = translation(\"search.placeholder\")\n }\n })\n\n /* Handle reset */\n fromEvent(el.form!, \"reset\")\n .pipe(\n takeUntil(done$)\n )\n .subscribe(() => el.focus())\n\n /* Create and return component */\n return watchSearchQuery(el, { tx$, rx$ })\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state })),\n share()\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n bufferCount,\n filter,\n finalize,\n map,\n merge,\n of,\n skipUntil,\n switchMap,\n take,\n tap,\n withLatestFrom,\n zipWith\n} from \"rxjs\"\n\nimport { translation } from \"~/_\"\nimport {\n getElement,\n watchElementBoundary\n} from \"~/browser\"\nimport {\n SearchResult,\n SearchWorker,\n isSearchReadyMessage,\n isSearchResultMessage\n} from \"~/integrations\"\nimport { renderSearchResultItem } from \"~/templates\"\nimport { round } from \"~/utilities\"\n\nimport { Component } from \"../../_\"\nimport { SearchQuery } from \"../query\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n query$: Observable /* Search query observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search result list\n *\n * This function performs a lazy rendering of the search results, depending on\n * the vertical offset of the search result container.\n *\n * @param el - Search result list element\n * @param worker - Search worker\n * @param options - Options\n *\n * @returns Search result list component observable\n */\nexport function mountSearchResult(\n el: HTMLElement, { rx$ }: SearchWorker, { query$ }: MountOptions\n): Observable> {\n const push$ = new Subject()\n const boundary$ = watchElementBoundary(el.parentElement!)\n .pipe(\n filter(Boolean)\n )\n\n /* Retrieve nested components */\n const meta = getElement(\":scope > :first-child\", el)\n const list = getElement(\":scope > :last-child\", el)\n\n /* Wait until 
search is ready */\n const ready$ = rx$\n .pipe(\n filter(isSearchReadyMessage),\n take(1)\n )\n\n /* Update search result metadata */\n push$\n .pipe(\n withLatestFrom(query$),\n skipUntil(ready$)\n )\n .subscribe(([{ items }, { value }]) => {\n if (value) {\n switch (items.length) {\n\n /* No results */\n case 0:\n meta.textContent = translation(\"search.result.none\")\n break\n\n /* One result */\n case 1:\n meta.textContent = translation(\"search.result.one\")\n break\n\n /* Multiple result */\n default:\n meta.textContent = translation(\n \"search.result.other\",\n round(items.length)\n )\n }\n } else {\n meta.textContent = translation(\"search.result.placeholder\")\n }\n })\n\n /* Update search result list */\n push$\n .pipe(\n tap(() => list.innerHTML = \"\"),\n switchMap(({ items }) => merge(\n of(...items.slice(0, 10)),\n of(...items.slice(10))\n .pipe(\n bufferCount(4),\n zipWith(boundary$),\n switchMap(([chunk]) => chunk)\n )\n ))\n )\n .subscribe(result => list.appendChild(\n renderSearchResultItem(result)\n ))\n\n /* Filter search result message */\n const result$ = rx$\n .pipe(\n filter(isSearchResultMessage),\n map(({ data }) => data)\n )\n\n /* Create and return component */\n return result$\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n finalize,\n fromEvent,\n map,\n tap\n} from \"rxjs\"\n\nimport { getLocation } from \"~/browser\"\n\nimport { Component } from \"../../_\"\nimport { SearchQuery } from \"../query\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search sharing\n */\nexport interface SearchShare {\n url: URL /* Deep link for sharing */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n query$: Observable /* Search query observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n query$: Observable /* Search query observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search sharing\n *\n * @param _el - Search sharing element\n * @param options - Options\n *\n * @returns Search sharing observable\n */\nexport function watchSearchShare(\n _el: HTMLElement, { query$ }: WatchOptions\n): Observable {\n return query$\n .pipe(\n map(({ value }) => {\n const url = getLocation()\n url.hash = \"\"\n url.searchParams.delete(\"h\")\n url.searchParams.set(\"q\", value)\n return { url }\n })\n )\n}\n\n/**\n * Mount search sharing\n *\n * @param el - Search sharing element\n * @param options - Options\n *\n * @returns Search sharing component observable\n */\nexport function mountSearchShare(\n el: HTMLAnchorElement, options: MountOptions\n): Observable> {\n const push$ = new Subject()\n push$.subscribe(({ url }) => {\n el.setAttribute(\"data-clipboard-text\", el.href)\n el.href = `${url}`\n })\n\n /* Prevent following of link */\n fromEvent(el, \"click\")\n .subscribe(ev => ev.preventDefault())\n\n /* Create and return component */\n return watchSearchShare(el, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n asyncScheduler,\n combineLatestWith,\n distinctUntilChanged,\n filter,\n finalize,\n fromEvent,\n map,\n merge,\n observeOn,\n tap\n} from \"rxjs\"\n\nimport { Keyboard } from \"~/browser\"\nimport {\n SearchResult,\n SearchWorker,\n isSearchResultMessage\n} from \"~/integrations\"\n\nimport { Component, getComponentElement } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search suggestions\n */\nexport interface SearchSuggest {}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n keyboard$: Observable /* Keyboard observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search suggestions\n *\n * This function will perform a lazy rendering of the search results, depending\n * on the vertical offset of the search result container.\n *\n * @param el - Search result list element\n * @param worker - Search worker\n * @param options - Options\n *\n * @returns Search result list component observable\n */\nexport function mountSearchSuggest(\n el: HTMLElement, { rx$ }: SearchWorker, { keyboard$ }: MountOptions\n): Observable> {\n const push$ = new Subject()\n\n /* Retrieve query component and track all changes */\n const query = getComponentElement(\"search-query\")\n const query$ = merge(\n fromEvent(query, \"keydown\"),\n fromEvent(query, \"focus\")\n )\n .pipe(\n observeOn(asyncScheduler),\n map(() => query.value),\n distinctUntilChanged(),\n )\n\n /* Update search suggestions */\n push$\n .pipe(\n combineLatestWith(query$),\n map(([{ suggestions }, value]) => {\n const words = value.split(/([\\s-]+)/)\n if (suggestions?.length && words[words.length - 1]) {\n const last = suggestions[suggestions.length - 1]\n if (last.startsWith(words[words.length - 1]))\n words[words.length - 1] = last\n } else {\n words.length = 0\n }\n return words\n })\n )\n .subscribe(words => el.innerHTML = words\n .join(\"\")\n .replace(/\\s/g, \" \")\n )\n\n /* Set up search keyboard handlers */\n keyboard$\n .pipe(\n filter(({ mode }) => mode === \"search\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Right arrow: accept current suggestion */\n case \"ArrowRight\":\n if (\n el.innerText.length &&\n query.selectionStart === query.value.length\n )\n query.value = el.innerText\n break\n }\n })\n\n /* Filter search result message */\n const result$ = rx$\n .pipe(\n filter(isSearchResultMessage),\n map(({ data }) => data)\n )\n\n /* Create and return component */\n return result$\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(() => ({ ref: el }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal 
in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n NEVER,\n Observable,\n ObservableInput,\n filter,\n merge,\n mergeWith,\n sample,\n take\n} from \"rxjs\"\n\nimport { configuration } from \"~/_\"\nimport {\n Keyboard,\n getActiveElement,\n getElements,\n setToggle\n} from \"~/browser\"\nimport {\n SearchIndex,\n SearchResult,\n isSearchQueryMessage,\n isSearchReadyMessage,\n setupSearchWorker\n} from \"~/integrations\"\n\nimport {\n Component,\n getComponentElement,\n getComponentElements\n} from \"../../_\"\nimport {\n SearchQuery,\n mountSearchQuery\n} from \"../query\"\nimport { mountSearchResult } from \"../result\"\nimport {\n SearchShare,\n mountSearchShare\n} from \"../share\"\nimport {\n SearchSuggest,\n mountSearchSuggest\n} from \"../suggest\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search\n */\nexport type Search =\n | SearchQuery\n | SearchResult\n | SearchShare\n | SearchSuggest\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n index$: ObservableInput /* Search index observable */\n keyboard$: Observable /* Keyboard observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search\n *\n * This function sets up the search functionality, including the underlying\n * web worker and all keyboard bindings.\n *\n * @param el - Search element\n * @param options - Options\n *\n * @returns Search component observable\n */\nexport function mountSearch(\n el: HTMLElement, { index$, keyboard$ }: MountOptions\n): Observable> {\n const config = configuration()\n try {\n const url = __search?.worker || config.search\n const worker = setupSearchWorker(url, index$)\n\n /* Retrieve query and result components */\n const query = getComponentElement(\"search-query\", el)\n const result = getComponentElement(\"search-result\", el)\n\n /* Re-emit query when search is ready */\n const { tx$, rx$ } = worker\n tx$\n .pipe(\n filter(isSearchQueryMessage),\n sample(rx$.pipe(filter(isSearchReadyMessage))),\n take(1)\n )\n .subscribe(tx$.next.bind(tx$))\n\n /* Set up search keyboard handlers */\n keyboard$\n .pipe(\n filter(({ mode }) => mode === \"search\")\n )\n .subscribe(key => {\n const active = getActiveElement()\n switch (key.type) {\n\n /* 
Enter: go to first (best) result */\n case \"Enter\":\n if (active === query) {\n const anchors = new Map()\n for (const anchor of getElements(\n \":first-child [href]\", result\n )) {\n const article = anchor.firstElementChild!\n anchors.set(anchor, parseFloat(\n article.getAttribute(\"data-md-score\")!\n ))\n }\n\n /* Go to result with highest score, if any */\n if (anchors.size) {\n const [[best]] = [...anchors].sort(([, a], [, b]) => b - a)\n best.click()\n }\n\n /* Otherwise omit form submission */\n key.claim()\n }\n break\n\n /* Escape or Tab: close search */\n case \"Escape\":\n case \"Tab\":\n setToggle(\"search\", false)\n query.blur()\n break\n\n /* Vertical arrows: select previous or next search result */\n case \"ArrowUp\":\n case \"ArrowDown\":\n if (typeof active === \"undefined\") {\n query.focus()\n } else {\n const els = [query, ...getElements(\n \":not(details) > [href], summary, details[open] [href]\",\n result\n )]\n const i = Math.max(0, (\n Math.max(0, els.indexOf(active)) + els.length + (\n key.type === \"ArrowUp\" ? -1 : +1\n )\n ) % els.length)\n els[i].focus()\n }\n\n /* Prevent scrolling of page */\n key.claim()\n break\n\n /* All other keys: hand to search query */\n default:\n if (query !== getActiveElement())\n query.focus()\n }\n })\n\n /* Set up global keyboard handlers */\n keyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\"),\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Open search and select query */\n case \"f\":\n case \"s\":\n case \"/\":\n query.focus()\n query.select()\n\n /* Prevent scrolling of page */\n key.claim()\n break\n }\n })\n\n /* Create and return component */\n const query$ = mountSearchQuery(query, worker)\n const result$ = mountSearchResult(result, worker, { query$ })\n return merge(query$, result$)\n .pipe(\n mergeWith(\n\n /* Search sharing */\n ...getComponentElements(\"search-share\", el)\n .map(child => mountSearchShare(child, { query$ })),\n\n /* Search suggestions */\n ...getComponentElements(\"search-suggest\", el)\n .map(child => mountSearchSuggest(child, worker, { keyboard$ }))\n )\n )\n\n /* Gracefully handle broken search */\n } catch (err) {\n el.hidden = true\n return NEVER\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n ObservableInput,\n combineLatest,\n filter,\n map,\n startWith\n} from \"rxjs\"\n\nimport { getLocation } from \"~/browser\"\nimport {\n SearchIndex,\n setupSearchHighlighter\n} from \"~/integrations\"\nimport { h } from \"~/utilities\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search highlighting\n */\nexport interface SearchHighlight {\n nodes: Map /* Map of replacements */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n index$: ObservableInput /* Search index observable */\n location$: Observable /* Location observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search highlighting\n *\n * @param el - Content element\n * @param options - Options\n *\n * @returns Search highlighting component observable\n */\nexport function mountSearchHiglight(\n el: HTMLElement, { index$, location$ }: MountOptions\n): Observable> {\n return combineLatest([\n index$,\n location$\n .pipe(\n startWith(getLocation()),\n filter(url => !!url.searchParams.get(\"h\"))\n )\n ])\n .pipe(\n map(([index, url]) => setupSearchHighlighter(index.config, true)(\n url.searchParams.get(\"h\")!\n )),\n map(fn => {\n const nodes = new Map()\n\n /* Traverse text nodes and collect matches */\n const it = document.createNodeIterator(el, NodeFilter.SHOW_TEXT)\n for (let node = it.nextNode(); node; node = it.nextNode()) {\n if (node.parentElement?.offsetHeight) {\n const original = node.textContent!\n const replaced = fn(original)\n if (replaced.length > original.length)\n nodes.set(node as ChildNode, replaced)\n }\n }\n\n /* Replace original nodes with matches */\n for (const [node, text] of nodes) {\n const { childNodes } = h(\"span\", null, text)\n node.replaceWith(...Array.from(childNodes))\n }\n\n /* Return component */\n return { ref: el, nodes }\n })\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n animationFrameScheduler,\n auditTime,\n combineLatest,\n defer,\n distinctUntilChanged,\n finalize,\n map,\n observeOn,\n take,\n tap,\n withLatestFrom\n} from \"rxjs\"\n\nimport {\n Viewport,\n getElement,\n getElementContainer,\n getElementOffset,\n getElementSize,\n getElements\n} from \"~/browser\"\n\nimport { Component } from \"../_\"\nimport { Header } from \"../header\"\nimport { Main } from \"../main\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Sidebar\n */\nexport interface Sidebar {\n height: number /* Sidebar height */\n locked: boolean /* Sidebar is locked */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n main$: Observable
    /* Main area observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n main$: Observable
    /* Main area observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch sidebar\n *\n * This function returns an observable that computes the visual parameters of\n * the sidebar which depends on the vertical viewport offset, as well as the\n * height of the main area. When the page is scrolled beyond the header, the\n * sidebar is locked and fills the remaining space.\n *\n * @param el - Sidebar element\n * @param options - Options\n *\n * @returns Sidebar observable\n */\nexport function watchSidebar(\n el: HTMLElement, { viewport$, main$ }: WatchOptions\n): Observable {\n const parent = el.parentElement!\n const adjust =\n parent.offsetTop -\n parent.parentElement!.offsetTop\n\n /* Compute the sidebar's available height and if it should be locked */\n return combineLatest([main$, viewport$])\n .pipe(\n map(([{ offset, height }, { offset: { y } }]) => {\n height = height\n + Math.min(adjust, Math.max(0, y - offset))\n - adjust\n return {\n height,\n locked: y >= offset + adjust\n }\n }),\n distinctUntilChanged((a, b) => (\n a.height === b.height &&\n a.locked === b.locked\n ))\n )\n}\n\n/**\n * Mount sidebar\n *\n * This function doesn't set the height of the actual sidebar, but of its first\n * child \u2013 the `.md-sidebar__scrollwrap` element in order to mitigiate jittery\n * sidebars when the footer is scrolled into view. At some point we switched\n * from `absolute` / `fixed` positioning to `sticky` positioning, significantly\n * reducing jitter in some browsers (respectively Firefox and Safari) when\n * scrolling from the top. However, top-aligned sticky positioning means that\n * the sidebar snaps to the bottom when the end of the container is reached.\n * This is what leads to the mentioned jitter, as the sidebar's height may be\n * updated too slowly.\n *\n * This behaviour can be mitigiated by setting the height of the sidebar to `0`\n * while preserving the padding, and the height on its first element.\n *\n * @param el - Sidebar element\n * @param options - Options\n *\n * @returns Sidebar component observable\n */\nexport function mountSidebar(\n el: HTMLElement, { header$, ...options }: MountOptions\n): Observable> {\n const inner = getElement(\".md-sidebar__scrollwrap\", el)\n const { y } = getElementOffset(inner)\n return defer(() => {\n const push$ = new Subject()\n push$\n .pipe(\n auditTime(0, animationFrameScheduler),\n withLatestFrom(header$)\n )\n .subscribe({\n\n /* Handle emission */\n next([{ height }, { height: offset }]) {\n inner.style.height = `${height - 2 * y}px`\n el.style.top = `${offset}px`\n },\n\n /* Handle complete */\n complete() {\n inner.style.height = \"\"\n el.style.top = \"\"\n }\n })\n\n /* Bring active item into view on initial load */\n push$\n .pipe(\n observeOn(animationFrameScheduler),\n take(1)\n )\n .subscribe(() => {\n for (const item of getElements(\".md-nav__link--active[href]\", el)) {\n const container = getElementContainer(item)\n if (typeof container !== \"undefined\") {\n const offset = item.offsetTop - container.offsetTop\n const { height } = getElementSize(container)\n container.scrollTo({\n top: offset - height / 2\n })\n }\n }\n })\n\n /* Create and return component */\n return watchSidebar(el, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 
Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Repo, User } from \"github-types\"\nimport {\n EMPTY,\n Observable,\n catchError,\n defaultIfEmpty,\n map,\n zip\n} from \"rxjs\"\n\nimport { requestJSON } from \"~/browser\"\n\nimport { SourceFacts } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * GitHub release (partial)\n */\ninterface Release {\n tag_name: string /* Tag name */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch GitHub repository facts\n *\n * @param user - GitHub user or organization\n * @param repo - GitHub repository\n *\n * @returns Repository facts observable\n */\nexport function fetchSourceFactsFromGitHub(\n user: string, repo?: string\n): Observable {\n if (typeof repo !== \"undefined\") {\n const url = `https://api.github.com/repos/${user}/${repo}`\n return zip(\n\n /* Fetch version */\n requestJSON(`${url}/releases/latest`)\n .pipe(\n catchError(() => EMPTY), // @todo refactor instant loading\n map(release => ({\n version: release.tag_name\n })),\n defaultIfEmpty({})\n ),\n\n /* Fetch stars and forks */\n requestJSON(url)\n .pipe(\n catchError(() => EMPTY), // @todo refactor instant loading\n map(info => ({\n stars: info.stargazers_count,\n forks: info.forks_count\n })),\n defaultIfEmpty({})\n )\n )\n .pipe(\n map(([release, info]) => ({ ...release, ...info }))\n )\n\n /* User or organization */\n } else {\n const url = `https://api.github.com/users/${user}`\n return requestJSON(url)\n .pipe(\n map(info => ({\n repositories: info.public_repos\n })),\n defaultIfEmpty({})\n )\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial 
portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { ProjectSchema } from \"gitlab\"\nimport {\n EMPTY,\n Observable,\n catchError,\n defaultIfEmpty,\n map\n} from \"rxjs\"\n\nimport { requestJSON } from \"~/browser\"\n\nimport { SourceFacts } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch GitLab repository facts\n *\n * @param base - GitLab base\n * @param project - GitLab project\n *\n * @returns Repository facts observable\n */\nexport function fetchSourceFactsFromGitLab(\n base: string, project: string\n): Observable {\n const url = `https://${base}/api/v4/projects/${encodeURIComponent(project)}`\n return requestJSON(url)\n .pipe(\n catchError(() => EMPTY), // @todo refactor instant loading\n map(({ star_count, forks_count }) => ({\n stars: star_count,\n forks: forks_count\n })),\n defaultIfEmpty({})\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { EMPTY, Observable } from \"rxjs\"\n\nimport { fetchSourceFactsFromGitHub } from \"../github\"\nimport { fetchSourceFactsFromGitLab } from \"../gitlab\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Repository facts for repositories\n */\nexport interface RepositoryFacts {\n stars?: number /* Number of stars */\n forks?: number /* Number of forks */\n version?: string /* Latest version */\n}\n\n/**\n * Repository facts for organizations\n */\nexport interface OrganizationFacts {\n repositories?: number /* Number of repositories */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Repository facts\n */\nexport type SourceFacts =\n | RepositoryFacts\n | OrganizationFacts\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch repository facts\n *\n * @param url - Repository URL\n *\n * @returns Repository facts observable\n */\nexport function fetchSourceFacts(\n url: string\n): Observable {\n\n /* Try to match GitHub repository */\n let match = url.match(/^.+github\\.com\\/([^/]+)\\/?([^/]+)?/i)\n if (match) {\n const [, user, repo] = match\n return fetchSourceFactsFromGitHub(user, repo)\n }\n\n /* Try to match GitLab repository */\n match = url.match(/^.+?([^/]*gitlab[^/]+)\\/(.+?)\\/?$/i)\n if (match) {\n const [, base, slug] = match\n return fetchSourceFactsFromGitLab(base, slug)\n }\n\n /* Fallback */\n return EMPTY\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n Subject,\n catchError,\n defer,\n filter,\n finalize,\n map,\n of,\n shareReplay,\n tap\n} from \"rxjs\"\n\nimport { getElement } from \"~/browser\"\nimport { ConsentDefaults } from \"~/components/consent\"\nimport { renderSourceFacts } from \"~/templates\"\n\nimport {\n Component,\n getComponentElements\n} from \"../../_\"\nimport {\n SourceFacts,\n fetchSourceFacts\n} from \"../facts\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Repository information\n */\nexport interface Source {\n facts: SourceFacts /* Repository facts */\n}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Repository information observable\n */\nlet fetch$: Observable\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch repository information\n *\n * This function tries to read the repository facts from session storage, and\n * if unsuccessful, fetches them from the underlying provider.\n *\n * @param el - Repository information element\n *\n * @returns Repository information observable\n */\nexport function watchSource(\n el: HTMLAnchorElement\n): Observable {\n return fetch$ ||= defer(() => {\n const cached = __md_get(\"__source\", sessionStorage)\n if (cached) {\n return of(cached)\n } else {\n\n /* Check if consent is configured and was given */\n const els = getComponentElements(\"consent\")\n if (els.length) {\n const consent = __md_get(\"__consent\")\n if (!(consent && consent.github))\n return EMPTY\n }\n\n /* Fetch repository facts */\n return fetchSourceFacts(el.href)\n .pipe(\n tap(facts => __md_set(\"__source\", facts, sessionStorage))\n )\n }\n })\n .pipe(\n catchError(() => EMPTY),\n filter(facts => Object.keys(facts).length > 0),\n map(facts => ({ facts })),\n shareReplay(1)\n )\n}\n\n/**\n * Mount repository information\n *\n * @param el - Repository information element\n *\n * @returns Repository information component observable\n */\nexport function mountSource(\n el: HTMLAnchorElement\n): Observable> {\n const inner = getElement(\":scope > :last-child\", el)\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe(({ facts }) => {\n inner.appendChild(renderSourceFacts(facts))\n inner.classList.add(\"md-source__repository--active\")\n })\n\n /* Create and return component */\n return watchSource(el)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the 
Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n defer,\n distinctUntilKeyChanged,\n finalize,\n map,\n of,\n switchMap,\n tap\n} from \"rxjs\"\n\nimport { feature } from \"~/_\"\nimport {\n Viewport,\n watchElementSize,\n watchViewportAt\n} from \"~/browser\"\n\nimport { Component } from \"../_\"\nimport { Header } from \"../header\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Navigation tabs\n */\nexport interface Tabs {\n hidden: boolean /* Navigation tabs are hidden */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch navigation tabs\n *\n * @param el - Navigation tabs element\n * @param options - Options\n *\n * @returns Navigation tabs observable\n */\nexport function watchTabs(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable {\n return watchElementSize(document.body)\n .pipe(\n switchMap(() => watchViewportAt(el, { header$, viewport$ })),\n map(({ offset: { y } }) => {\n return {\n hidden: y >= 10\n }\n }),\n distinctUntilKeyChanged(\"hidden\")\n )\n}\n\n/**\n * Mount navigation tabs\n *\n * This function hides the navigation tabs when scrolling past the threshold\n * and makes them reappear in a nice CSS animation when scrolling back up.\n *\n * @param el - Navigation tabs element\n * @param options - Options\n *\n * @returns Navigation tabs component observable\n */\nexport function mountTabs(\n el: HTMLElement, options: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe({\n\n /* Handle emission */\n next({ hidden }) {\n el.hidden = hidden\n },\n\n /* Handle complete */\n complete() {\n el.hidden = false\n }\n })\n\n /* Create and return component */\n return (\n feature(\"navigation.tabs.sticky\")\n ? of({ hidden: false })\n : watchTabs(el, options)\n )\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n bufferCount,\n combineLatestWith,\n debounceTime,\n defer,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n filter,\n finalize,\n map,\n merge,\n of,\n repeat,\n scan,\n share,\n skip,\n startWith,\n switchMap,\n takeLast,\n takeUntil,\n tap,\n withLatestFrom\n} from \"rxjs\"\n\nimport { feature } from \"~/_\"\nimport {\n Viewport,\n getElement,\n getElementContainer,\n getElementSize,\n getElements,\n getLocation,\n getOptionalElement,\n watchElementSize\n} from \"~/browser\"\n\nimport {\n Component,\n getComponentElement\n} from \"../_\"\nimport { Header } from \"../header\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Table of contents\n */\nexport interface TableOfContents {\n prev: HTMLAnchorElement[][] /* Anchors (previous) */\n next: HTMLAnchorElement[][] /* Anchors (next) */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n target$: Observable /* Location target observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch table of contents\n *\n * This is effectively a scroll spy implementation which will account for the\n * fixed header and automatically re-calculate anchor offsets when the viewport\n * is resized. The returned observable will only emit if the table of contents\n * needs to be repainted.\n *\n * This implementation tracks an anchor element's entire path starting from its\n * level up to the top-most anchor element, e.g. `[h3, h2, h1]`. Although the\n * Material theme currently doesn't make use of this information, it enables\n * the styling of the entire hierarchy through customization.\n *\n * Note that the current anchor is the last item of the `prev` anchor list.\n *\n * @param el - Table of contents element\n * @param options - Options\n *\n * @returns Table of contents observable\n */\nexport function watchTableOfContents(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable {\n const table = new Map()\n\n /* Compute anchor-to-target mapping */\n const anchors = getElements(\"[href^=\\\\#]\", el)\n for (const anchor of anchors) {\n const id = decodeURIComponent(anchor.hash.substring(1))\n const target = getOptionalElement(`[id=\"${id}\"]`)\n if (typeof target !== \"undefined\")\n table.set(anchor, target)\n }\n\n /* Compute necessary adjustment for header */\n const adjust$ = header$\n .pipe(\n distinctUntilKeyChanged(\"height\"),\n map(({ height }) => {\n const main = getComponentElement(\"main\")\n const grid = getElement(\":scope > :first-child\", main)\n return height + 0.8 * (\n grid.offsetTop -\n main.offsetTop\n )\n }),\n share()\n )\n\n /* Compute partition of previous and next anchors */\n const partition$ = watchElementSize(document.body)\n .pipe(\n distinctUntilKeyChanged(\"height\"),\n\n /* Build index to map anchor paths to vertical offsets */\n switchMap(body => defer(() => {\n let path: HTMLAnchorElement[] = []\n return of([...table].reduce((index, [anchor, target]) => {\n while (path.length) {\n const last = table.get(path[path.length - 1])!\n if (last.tagName >= target.tagName) {\n path.pop()\n } else {\n break\n }\n }\n\n /* If the current anchor is hidden, continue with its parent */\n let offset = target.offsetTop\n while (!offset && target.parentElement) {\n target = target.parentElement\n offset = target.offsetTop\n }\n\n /* Map reversed anchor path to vertical offset */\n return index.set(\n [...path = [...path, anchor]].reverse(),\n offset\n )\n }, new Map()))\n })\n .pipe(\n\n /* Sort index by vertical offset (see https://bit.ly/30z6QSO) */\n map(index => new Map([...index].sort(([, a], [, b]) => a - b))),\n combineLatestWith(adjust$),\n\n /* Re-compute partition when viewport offset changes */\n switchMap(([index, adjust]) => viewport$\n .pipe(\n scan(([prev, next], { offset: { y }, size }) => {\n const last = y + size.height >= Math.floor(body.height)\n\n /* Look forward */\n while (next.length) {\n const [, offset] = next[0]\n if (offset - adjust < y || last) {\n prev = [...prev, next.shift()!]\n } else {\n break\n }\n }\n\n /* Look backward */\n while (prev.length) {\n const [, offset] = prev[prev.length - 1]\n if (offset - adjust >= y && !last) {\n next = [prev.pop()!, ...next]\n } else {\n break\n }\n }\n\n /* Return partition */\n return [prev, next]\n }, [[], 
[...index]]),\n distinctUntilChanged((a, b) => (\n a[0] === b[0] &&\n a[1] === b[1]\n ))\n )\n )\n )\n )\n )\n\n /* Compute and return anchor list migrations */\n return partition$\n .pipe(\n map(([prev, next]) => ({\n prev: prev.map(([path]) => path),\n next: next.map(([path]) => path)\n })),\n\n /* Extract anchor list migrations */\n startWith({ prev: [], next: [] }),\n bufferCount(2, 1),\n map(([a, b]) => {\n\n /* Moving down */\n if (a.prev.length < b.prev.length) {\n return {\n prev: b.prev.slice(Math.max(0, a.prev.length - 1), b.prev.length),\n next: []\n }\n\n /* Moving up */\n } else {\n return {\n prev: b.prev.slice(-1),\n next: b.next.slice(0, b.next.length - a.next.length)\n }\n }\n })\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Mount table of contents\n *\n * @param el - Table of contents element\n * @param options - Options\n *\n * @returns Table of contents component observable\n */\nexport function mountTableOfContents(\n el: HTMLElement, { viewport$, header$, target$ }: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject()\n const done$ = push$.pipe(takeLast(1))\n push$.subscribe(({ prev, next }) => {\n\n /* Look forward */\n for (const [anchor] of next) {\n anchor.classList.remove(\"md-nav__link--passed\")\n anchor.classList.remove(\"md-nav__link--active\")\n }\n\n /* Look backward */\n for (const [index, [anchor]] of prev.entries()) {\n anchor.classList.add(\"md-nav__link--passed\")\n anchor.classList.toggle(\n \"md-nav__link--active\",\n index === prev.length - 1\n )\n }\n })\n\n /* Set up following, if enabled */\n if (feature(\"toc.follow\")) {\n\n /* Toggle smooth scrolling only for anchor clicks */\n const smooth$ = merge(\n viewport$.pipe(debounceTime(1), map(() => undefined)),\n viewport$.pipe(debounceTime(250), map(() => \"smooth\" as const))\n )\n\n /* Bring active anchor into view */\n push$\n .pipe(\n filter(({ prev }) => prev.length > 0),\n withLatestFrom(smooth$)\n )\n .subscribe(([{ prev }, behavior]) => {\n const [anchor] = prev[prev.length - 1]\n if (anchor.offsetHeight) {\n\n /* Retrieve overflowing container and scroll */\n const container = getElementContainer(anchor)\n if (typeof container !== \"undefined\") {\n const offset = anchor.offsetTop - container.offsetTop\n const { height } = getElementSize(container)\n container.scrollTo({\n top: offset - height / 2,\n behavior\n })\n }\n }\n })\n }\n\n /* Set up anchor tracking, if enabled */\n if (feature(\"navigation.tracking\"))\n viewport$\n .pipe(\n takeUntil(done$),\n distinctUntilKeyChanged(\"offset\"),\n debounceTime(250),\n skip(1),\n takeUntil(target$.pipe(skip(1))),\n repeat({ delay: 250 }),\n withLatestFrom(push$)\n )\n .subscribe(([, { prev }]) => {\n const url = getLocation()\n\n /* Set hash fragment to active anchor */\n const anchor = prev[prev.length - 1]\n if (anchor && anchor.length) {\n const [active] = anchor\n const { hash } = new URL(active.href)\n if (url.hash !== hash) {\n url.hash = hash\n history.replaceState({}, \"\", `${url}`)\n }\n\n /* Reset anchor when at the top */\n } else {\n url.hash = \"\"\n history.replaceState({}, \"\", `${url}`)\n }\n })\n\n /* Create and return component */\n return watchTableOfContents(el, { viewport$, header$ })\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person 
obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n bufferCount,\n combineLatest,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n endWith,\n finalize,\n map,\n repeat,\n skip,\n takeLast,\n takeUntil,\n tap\n} from \"rxjs\"\n\nimport { Viewport } from \"~/browser\"\n\nimport { Component } from \"../_\"\nimport { Header } from \"../header\"\nimport { Main } from \"../main\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Back-to-top button\n */\nexport interface BackToTop {\n hidden: boolean /* Back-to-top button is hidden */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n main$: Observable
    /* Main area observable */\n target$: Observable /* Location target observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n main$: Observable
    /* Main area observable */\n target$: Observable /* Location target observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch back-to-top\n *\n * @param _el - Back-to-top element\n * @param options - Options\n *\n * @returns Back-to-top observable\n */\nexport function watchBackToTop(\n _el: HTMLElement, { viewport$, main$, target$ }: WatchOptions\n): Observable {\n\n /* Compute direction */\n const direction$ = viewport$\n .pipe(\n map(({ offset: { y } }) => y),\n bufferCount(2, 1),\n map(([a, b]) => a > b && b > 0),\n distinctUntilChanged()\n )\n\n /* Compute whether main area is active */\n const active$ = main$\n .pipe(\n map(({ active }) => active)\n )\n\n /* Compute threshold for hiding */\n return combineLatest([active$, direction$])\n .pipe(\n map(([active, direction]) => !(active && direction)),\n distinctUntilChanged(),\n takeUntil(target$.pipe(skip(1))),\n endWith(true),\n repeat({ delay: 250 }),\n map(hidden => ({ hidden }))\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Mount back-to-top\n *\n * @param el - Back-to-top element\n * @param options - Options\n *\n * @returns Back-to-top component observable\n */\nexport function mountBackToTop(\n el: HTMLElement, { viewport$, header$, main$, target$ }: MountOptions\n): Observable> {\n const push$ = new Subject()\n const done$ = push$.pipe(takeLast(1))\n push$.subscribe({\n\n /* Handle emission */\n next({ hidden }) {\n el.hidden = hidden\n if (hidden) {\n el.setAttribute(\"tabindex\", \"-1\")\n el.blur()\n } else {\n el.removeAttribute(\"tabindex\")\n }\n },\n\n /* Handle complete */\n complete() {\n el.style.top = \"\"\n el.hidden = true\n el.removeAttribute(\"tabindex\")\n }\n })\n\n /* Watch header height */\n header$\n .pipe(\n takeUntil(done$),\n distinctUntilKeyChanged(\"height\")\n )\n .subscribe(({ height }) => {\n el.style.top = `${height + 16}px`\n })\n\n /* Create and return component */\n return watchBackToTop(el, { viewport$, main$, target$ })\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n fromEvent,\n map,\n mergeMap,\n switchMap,\n takeWhile,\n tap,\n withLatestFrom\n} from \"rxjs\"\n\nimport { getElements } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n document$: Observable /* Document observable */\n tablet$: Observable /* Media tablet observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch indeterminate checkboxes\n *\n * This function replaces the indeterminate \"pseudo state\" with the actual\n * indeterminate state, which is used to keep navigation always expanded.\n *\n * @param options - Options\n */\nexport function patchIndeterminate(\n { document$, tablet$ }: PatchOptions\n): void {\n document$\n .pipe(\n switchMap(() => getElements(\n // @todo `data-md-state` is deprecated and removed in v9\n \".md-toggle--indeterminate, [data-md-state=indeterminate]\"\n )),\n tap(el => {\n el.indeterminate = true\n el.checked = false\n }),\n mergeMap(el => fromEvent(el, \"change\")\n .pipe(\n takeWhile(() => el.classList.contains(\"md-toggle--indeterminate\")),\n map(() => el)\n )\n ),\n withLatestFrom(tablet$)\n )\n .subscribe(([el, tablet]) => {\n el.classList.remove(\"md-toggle--indeterminate\")\n if (tablet)\n el.checked = false\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n filter,\n fromEvent,\n map,\n mergeMap,\n switchMap,\n tap\n} from \"rxjs\"\n\nimport { getElements } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n document$: Observable /* Document observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Check whether the given device is an Apple device\n *\n * @returns Test result\n */\nfunction isAppleDevice(): boolean {\n return /(iPad|iPhone|iPod)/.test(navigator.userAgent)\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch all elements with `data-md-scrollfix` attributes\n *\n * This is a year-old patch which ensures that overflow scrolling works at the\n * top and bottom of containers on iOS by ensuring a `1px` scroll offset upon\n * the start of a touch event.\n *\n * @see https://bit.ly/2SCtAOO - Original source\n *\n * @param options - Options\n */\nexport function patchScrollfix(\n { document$ }: PatchOptions\n): void {\n document$\n .pipe(\n switchMap(() => getElements(\"[data-md-scrollfix]\")),\n tap(el => el.removeAttribute(\"data-md-scrollfix\")),\n filter(isAppleDevice),\n mergeMap(el => fromEvent(el, \"touchstart\")\n .pipe(\n map(() => el)\n )\n )\n )\n .subscribe(el => {\n const top = el.scrollTop\n\n /* We're at the top of the container */\n if (top === 0) {\n el.scrollTop = 1\n\n /* We're at the bottom of the container */\n } else if (top + el.offsetHeight === el.scrollHeight) {\n el.scrollTop = top - 1\n }\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n combineLatest,\n delay,\n map,\n of,\n switchMap,\n withLatestFrom\n} from \"rxjs\"\n\nimport {\n Viewport,\n watchToggle\n} from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n viewport$: Observable /* Viewport observable */\n tablet$: Observable /* Media tablet observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch the document body to lock when search is open\n *\n * For mobile and tablet viewports, the search is rendered full screen, which\n * leads to scroll leaking when at the top or bottom of the search result. This\n * function locks the body when the search is in full screen mode, and restores\n * the scroll position when leaving.\n *\n * @param options - Options\n */\nexport function patchScrolllock(\n { viewport$, tablet$ }: PatchOptions\n): void {\n combineLatest([watchToggle(\"search\"), tablet$])\n .pipe(\n map(([active, tablet]) => active && !tablet),\n switchMap(active => of(active)\n .pipe(\n delay(active ? 400 : 100)\n )\n ),\n withLatestFrom(viewport$)\n )\n .subscribe(([active, { offset: { y }}]) => {\n if (active) {\n document.body.setAttribute(\"data-md-scrolllock\", \"\")\n document.body.style.top = `-${y}px`\n } else {\n const value = -1 * parseInt(document.body.style.top, 10)\n document.body.removeAttribute(\"data-md-scrolllock\")\n document.body.style.top = \"\"\n if (value)\n window.scrollTo(0, value)\n }\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Polyfills\n * ------------------------------------------------------------------------- */\n\n/* Polyfill `Object.entries` */\nif (!Object.entries)\n Object.entries = function (obj: object) {\n const data: [string, string][] = []\n for (const key of Object.keys(obj))\n // @ts-expect-error - ignore property access warning\n data.push([key, obj[key]])\n\n /* Return entries */\n return data\n }\n\n/* Polyfill `Object.values` */\nif (!Object.values)\n Object.values = function (obj: object) {\n const data: string[] = []\n for (const key of Object.keys(obj))\n // @ts-expect-error - ignore property access warning\n data.push(obj[key])\n\n /* Return values */\n return data\n }\n\n/* ------------------------------------------------------------------------- */\n\n/* Polyfills for `Element` */\nif (typeof Element !== \"undefined\") {\n\n /* Polyfill `Element.scrollTo` */\n if (!Element.prototype.scrollTo)\n Element.prototype.scrollTo = function (\n x?: ScrollToOptions | number, y?: number\n ): void {\n if (typeof x === \"object\") {\n this.scrollLeft = x.left!\n this.scrollTop = x.top!\n } else {\n this.scrollLeft = x!\n this.scrollTop = y!\n }\n }\n\n /* Polyfill `Element.replaceWith` */\n if (!Element.prototype.replaceWith)\n Element.prototype.replaceWith = function (\n ...nodes: Array\n ): void {\n const parent = this.parentNode\n if (parent) {\n if (nodes.length === 0)\n parent.removeChild(this)\n\n /* Replace children and create text nodes */\n for (let i = nodes.length - 1; i >= 0; i--) {\n let node = nodes[i]\n if (typeof node === \"string\")\n node = document.createTextNode(node)\n else if (node.parentNode)\n node.parentNode.removeChild(node)\n\n /* Replace child or insert before previous sibling */\n if (!i)\n parent.replaceChild(node, this)\n else\n parent.insertBefore(this.previousSibling!, node)\n }\n }\n }\n}\n"], + "mappings": 
"6+BAAA,IAAAA,GAAAC,GAAA,CAAAC,GAAAC,KAAA,EAAC,SAAUC,EAAQC,EAAS,CAC1B,OAAOH,IAAY,UAAY,OAAOC,IAAW,YAAcE,EAAQ,EACvE,OAAO,QAAW,YAAc,OAAO,IAAM,OAAOA,CAAO,EAC1DA,EAAQ,CACX,GAAEH,GAAO,UAAY,CAAE,aASrB,SAASI,EAA0BC,EAAO,CACxC,IAAIC,EAAmB,GACnBC,EAA0B,GAC1BC,EAAiC,KAEjCC,EAAsB,CACxB,KAAM,GACN,OAAQ,GACR,IAAK,GACL,IAAK,GACL,MAAO,GACP,SAAU,GACV,OAAQ,GACR,KAAM,GACN,MAAO,GACP,KAAM,GACN,KAAM,GACN,SAAU,GACV,iBAAkB,EACpB,EAOA,SAASC,EAAmBC,EAAI,CAC9B,MACE,GAAAA,GACAA,IAAO,UACPA,EAAG,WAAa,QAChBA,EAAG,WAAa,QAChB,cAAeA,GACf,aAAcA,EAAG,UAKrB,CASA,SAASC,EAA8BD,EAAI,CACzC,IAAIE,GAAOF,EAAG,KACVG,GAAUH,EAAG,QAUjB,MARI,GAAAG,KAAY,SAAWL,EAAoBI,KAAS,CAACF,EAAG,UAIxDG,KAAY,YAAc,CAACH,EAAG,UAI9BA,EAAG,kBAKT,CAOA,SAASI,EAAqBJ,EAAI,CAC5BA,EAAG,UAAU,SAAS,eAAe,IAGzCA,EAAG,UAAU,IAAI,eAAe,EAChCA,EAAG,aAAa,2BAA4B,EAAE,EAChD,CAOA,SAASK,EAAwBL,EAAI,CAC/B,CAACA,EAAG,aAAa,0BAA0B,IAG/CA,EAAG,UAAU,OAAO,eAAe,EACnCA,EAAG,gBAAgB,0BAA0B,EAC/C,CAUA,SAASM,EAAUC,EAAG,CAChBA,EAAE,SAAWA,EAAE,QAAUA,EAAE,UAI3BR,EAAmBL,EAAM,aAAa,GACxCU,EAAqBV,EAAM,aAAa,EAG1CC,EAAmB,GACrB,CAUA,SAASa,EAAcD,EAAG,CACxBZ,EAAmB,EACrB,CASA,SAASc,EAAQF,EAAG,CAEd,CAACR,EAAmBQ,EAAE,MAAM,IAI5BZ,GAAoBM,EAA8BM,EAAE,MAAM,IAC5DH,EAAqBG,EAAE,MAAM,CAEjC,CAMA,SAASG,EAAOH,EAAG,CACb,CAACR,EAAmBQ,EAAE,MAAM,IAK9BA,EAAE,OAAO,UAAU,SAAS,eAAe,GAC3CA,EAAE,OAAO,aAAa,0BAA0B,KAMhDX,EAA0B,GAC1B,OAAO,aAAaC,CAA8B,EAClDA,EAAiC,OAAO,WAAW,UAAW,CAC5DD,EAA0B,EAC5B,EAAG,GAAG,EACNS,EAAwBE,EAAE,MAAM,EAEpC,CAOA,SAASI,EAAmBJ,EAAG,CACzB,SAAS,kBAAoB,WAK3BX,IACFD,EAAmB,IAErBiB,EAA+B,EAEnC,CAQA,SAASA,GAAiC,CACxC,SAAS,iBAAiB,YAAaC,CAAoB,EAC3D,SAAS,iBAAiB,YAAaA,CAAoB,EAC3D,SAAS,iBAAiB,UAAWA,CAAoB,EACzD,SAAS,iBAAiB,cAAeA,CAAoB,EAC7D,SAAS,iBAAiB,cAAeA,CAAoB,EAC7D,SAAS,iBAAiB,YAAaA,CAAoB,EAC3D,SAAS,iBAAiB,YAAaA,CAAoB,EAC3D,SAAS,iBAAiB,aAAcA,CAAoB,EAC5D,SAAS,iBAAiB,WAAYA,CAAoB,CAC5D,CAEA,SAASC,GAAoC,CAC3C,SAAS,oBAAoB,YAAaD,CAAoB,EAC9D,SAAS,oBAAoB,YAAaA,CAAoB,EAC9D,SAAS,oBAAoB,UAAWA,CAAoB,EAC5D,SAAS,oBAAoB,cAAeA,CAAoB,EAChE,SAAS,oBAAoB,cAAeA,CAAoB,EAChE,SAAS,oBAAoB,YAAaA,CAAoB,EAC9D,SAAS,oBAAoB,YAAaA,CAAoB,EAC9D,SAAS,oBAAoB,aAAcA,CAAoB,EAC/D,SAAS,oBAAoB,WAAYA,CAAoB,CAC/D,CASA,SAASA,EAAqBN,EAAG,CAG3BA,EAAE,OAAO,UAAYA,EAAE,OAAO,SAAS,YAAY,IAAM,SAI7DZ,EAAmB,GACnBmB,EAAkC,EACpC,CAKA,SAAS,iBAAiB,UAAWR,EAAW,EAAI,EACpD,SAAS,iBAAiB,YAAaE,EAAe,EAAI,EAC1D,SAAS,iBAAiB,cAAeA,EAAe,EAAI,EAC5D,SAAS,iBAAiB,aAAcA,EAAe,EAAI,EAC3D,SAAS,iBAAiB,mBAAoBG,EAAoB,EAAI,EAEtEC,EAA+B,EAM/BlB,EAAM,iBAAiB,QAASe,EAAS,EAAI,EAC7Cf,EAAM,iBAAiB,OAAQgB,EAAQ,EAAI,EAOvChB,EAAM,WAAa,KAAK,wBAA0BA,EAAM,KAI1DA,EAAM,KAAK,aAAa,wBAAyB,EAAE,EAC1CA,EAAM,WAAa,KAAK,gBACjC,SAAS,gBAAgB,UAAU,IAAI,kBAAkB,EACzD,SAAS,gBAAgB,aAAa,wBAAyB,EAAE,EAErE,CAKA,GAAI,OAAO,QAAW,aAAe,OAAO,UAAa,YAAa,CAIpE,OAAO,0BAA4BD,EAInC,IAAIsB,EAEJ,GAAI,CACFA,EAAQ,IAAI,YAAY,8BAA8B,CACxD,OAASC,EAAP,CAEAD,EAAQ,SAAS,YAAY,aAAa,EAC1CA,EAAM,gBAAgB,+BAAgC,GAAO,GAAO,CAAC,CAAC,CACxE,CAEA,OAAO,cAAcA,CAAK,CAC5B,CAEI,OAAO,UAAa,aAGtBtB,EAA0B,QAAQ,CAGtC,CAAE,ICvTF,IAAAwB,GAAAC,GAAAC,IAAA,EAAC,SAASC,EAAQ,CAOhB,IAAIC,EAA6B,UAAW,CAC1C,GAAI,CACF,MAAO,CAAC,CAAC,OAAO,QAClB,OAASC,EAAP,CACA,MAAO,EACT,CACF,EAGIC,EAAoBF,EAA2B,EAE/CG,EAAiB,SAASC,EAAO,CACnC,IAAIC,EAAW,CACb,KAAM,UAAW,CACf,IAAIC,EAAQF,EAAM,MAAM,EACxB,MAAO,CAAE,KAAME,IAAU,OAAQ,MAAOA,CAAM,CAChD,CACF,EAEA,OAAIJ,IACFG,EAAS,OAAO,UAAY,UAAW,CACrC,OAAOA,CACT,GAGKA,CACT,EAMIE,EAAiB,SAASD,EAAO,CACnC,OAAO,mBAAmBA,CAAK,EAAE,QAAQ,OAAQ,GAAG,CACtD,EAEIE,EAAmB,SAASF,EAAO,CACrC,OAAO,mBAAmB,OAAOA,CAAK,EAAE,QAAQ,MAAO,GAAG,CAAC,CAC7D,EAEIG,EAA0B,UAAW,CAEvC,IAAIC,EAAkB,SAASC,EAAc,CAC3C,OAAO,eAAe,KAAM,WAAY,CAAE,SAAU,GAAM,MAAO,CAAC,CAAE,CAAC,EACrE,IAAIC,EAAqB,OAAOD,EAEhC,GAAIC,IAA
uB,YAEpB,GAAIA,IAAuB,SAC5BD,IAAiB,IACnB,KAAK,YAAYA,CAAY,UAEtBA,aAAwBD,EAAiB,CAClD,IAAIG,EAAQ,KACZF,EAAa,QAAQ,SAASL,EAAOQ,EAAM,CACzCD,EAAM,OAAOC,EAAMR,CAAK,CAC1B,CAAC,CACH,SAAYK,IAAiB,MAAUC,IAAuB,SAC5D,GAAI,OAAO,UAAU,SAAS,KAAKD,CAAY,IAAM,iBACnD,QAASI,EAAI,EAAGA,EAAIJ,EAAa,OAAQI,IAAK,CAC5C,IAAIC,EAAQL,EAAaI,GACzB,GAAK,OAAO,UAAU,SAAS,KAAKC,CAAK,IAAM,kBAAsBA,EAAM,SAAW,EACpF,KAAK,OAAOA,EAAM,GAAIA,EAAM,EAAE,MAE9B,OAAM,IAAI,UAAU,4CAA8CD,EAAI,6BAA8B,CAExG,KAEA,SAASE,KAAON,EACVA,EAAa,eAAeM,CAAG,GACjC,KAAK,OAAOA,EAAKN,EAAaM,EAAI,MAKxC,OAAM,IAAI,UAAU,8CAA+C,CAEvE,EAEIC,EAAQR,EAAgB,UAE5BQ,EAAM,OAAS,SAASJ,EAAMR,EAAO,CAC/BQ,KAAQ,KAAK,SACf,KAAK,SAASA,GAAM,KAAK,OAAOR,CAAK,CAAC,EAEtC,KAAK,SAASQ,GAAQ,CAAC,OAAOR,CAAK,CAAC,CAExC,EAEAY,EAAM,OAAS,SAASJ,EAAM,CAC5B,OAAO,KAAK,SAASA,EACvB,EAEAI,EAAM,IAAM,SAASJ,EAAM,CACzB,OAAQA,KAAQ,KAAK,SAAY,KAAK,SAASA,GAAM,GAAK,IAC5D,EAEAI,EAAM,OAAS,SAASJ,EAAM,CAC5B,OAAQA,KAAQ,KAAK,SAAY,KAAK,SAASA,GAAM,MAAM,CAAC,EAAI,CAAC,CACnE,EAEAI,EAAM,IAAM,SAASJ,EAAM,CACzB,OAAQA,KAAQ,KAAK,QACvB,EAEAI,EAAM,IAAM,SAASJ,EAAMR,EAAO,CAChC,KAAK,SAASQ,GAAQ,CAAC,OAAOR,CAAK,CAAC,CACtC,EAEAY,EAAM,QAAU,SAASC,EAAUC,EAAS,CAC1C,IAAIC,EACJ,QAASP,KAAQ,KAAK,SACpB,GAAI,KAAK,SAAS,eAAeA,CAAI,EAAG,CACtCO,EAAU,KAAK,SAASP,GACxB,QAASC,EAAI,EAAGA,EAAIM,EAAQ,OAAQN,IAClCI,EAAS,KAAKC,EAASC,EAAQN,GAAID,EAAM,IAAI,CAEjD,CAEJ,EAEAI,EAAM,KAAO,UAAW,CACtB,IAAId,EAAQ,CAAC,EACb,YAAK,QAAQ,SAASE,EAAOQ,EAAM,CACjCV,EAAM,KAAKU,CAAI,CACjB,CAAC,EACMX,EAAeC,CAAK,CAC7B,EAEAc,EAAM,OAAS,UAAW,CACxB,IAAId,EAAQ,CAAC,EACb,YAAK,QAAQ,SAASE,EAAO,CAC3BF,EAAM,KAAKE,CAAK,CAClB,CAAC,EACMH,EAAeC,CAAK,CAC7B,EAEAc,EAAM,QAAU,UAAW,CACzB,IAAId,EAAQ,CAAC,EACb,YAAK,QAAQ,SAASE,EAAOQ,EAAM,CACjCV,EAAM,KAAK,CAACU,EAAMR,CAAK,CAAC,CAC1B,CAAC,EACMH,EAAeC,CAAK,CAC7B,EAEIF,IACFgB,EAAM,OAAO,UAAYA,EAAM,SAGjCA,EAAM,SAAW,UAAW,CAC1B,IAAII,EAAc,CAAC,EACnB,YAAK,QAAQ,SAAShB,EAAOQ,EAAM,CACjCQ,EAAY,KAAKf,EAAeO,CAAI,EAAI,IAAMP,EAAeD,CAAK,CAAC,CACrE,CAAC,EACMgB,EAAY,KAAK,GAAG,CAC7B,EAGAvB,EAAO,gBAAkBW,CAC3B,EAEIa,EAAkC,UAAW,CAC/C,GAAI,CACF,IAAIb,EAAkBX,EAAO,gBAE7B,OACG,IAAIW,EAAgB,MAAM,EAAE,SAAS,IAAM,OAC3C,OAAOA,EAAgB,UAAU,KAAQ,YACzC,OAAOA,EAAgB,UAAU,SAAY,UAElD,OAASc,EAAP,CACA,MAAO,EACT,CACF,EAEKD,EAAgC,GACnCd,EAAwB,EAG1B,IAAIS,EAAQnB,EAAO,gBAAgB,UAE/B,OAAOmB,EAAM,MAAS,aACxBA,EAAM,KAAO,UAAW,CACtB,IAAIL,EAAQ,KACRT,EAAQ,CAAC,EACb,KAAK,QAAQ,SAASE,EAAOQ,EAAM,CACjCV,EAAM,KAAK,CAACU,EAAMR,CAAK,CAAC,EACnBO,EAAM,UACTA,EAAM,OAAOC,CAAI,CAErB,CAAC,EACDV,EAAM,KAAK,SAASqB,EAAGC,EAAG,CACxB,OAAID,EAAE,GAAKC,EAAE,GACJ,GACED,EAAE,GAAKC,EAAE,GACX,EAEA,CAEX,CAAC,EACGb,EAAM,WACRA,EAAM,SAAW,CAAC,GAEpB,QAASE,EAAI,EAAGA,EAAIX,EAAM,OAAQW,IAChC,KAAK,OAAOX,EAAMW,GAAG,GAAIX,EAAMW,GAAG,EAAE,CAExC,GAGE,OAAOG,EAAM,aAAgB,YAC/B,OAAO,eAAeA,EAAO,cAAe,CAC1C,WAAY,GACZ,aAAc,GACd,SAAU,GACV,MAAO,SAASP,EAAc,CAC5B,GAAI,KAAK,SACP,KAAK,SAAW,CAAC,MACZ,CACL,IAAIgB,EAAO,CAAC,EACZ,KAAK,QAAQ,SAASrB,EAAOQ,EAAM,CACjCa,EAAK,KAAKb,CAAI,CAChB,CAAC,EACD,QAASC,EAAI,EAAGA,EAAIY,EAAK,OAAQZ,IAC/B,KAAK,OAAOY,EAAKZ,EAAE,CAEvB,CAEAJ,EAAeA,EAAa,QAAQ,MAAO,EAAE,EAG7C,QAFIiB,EAAajB,EAAa,MAAM,GAAG,EACnCkB,EACKd,EAAI,EAAGA,EAAIa,EAAW,OAAQb,IACrCc,EAAYD,EAAWb,GAAG,MAAM,GAAG,EACnC,KAAK,OACHP,EAAiBqB,EAAU,EAAE,EAC5BA,EAAU,OAAS,EAAKrB,EAAiBqB,EAAU,EAAE,EAAI,EAC5D,CAEJ,CACF,CAAC,CAKL,GACG,OAAO,QAAW,YAAe,OAC5B,OAAO,QAAW,YAAe,OACjC,OAAO,MAAS,YAAe,KAAO/B,EAC9C,GAEC,SAASC,EAAQ,CAOhB,IAAI+B,EAAwB,UAAW,CACrC,GAAI,CACF,IAAIC,EAAI,IAAIhC,EAAO,IAAI,IAAK,UAAU,EACtC,OAAAgC,EAAE,SAAW,MACLA,EAAE,OAAS,kBAAqBA,EAAE,YAC5C,OAASP,EAAP,CACA,MAAO,EACT,CACF,EAGIQ,EAAc,UAAW,CAC3B,IAAIC,EAAOlC,EAAO,IAEdmC,EAAM,SAASC,EAAKC,EAAM,CACxB,OAAOD,GAAQ,WAAUA,EAAM,OAAOA,CAAG,GACzCC,GA
AQ,OAAOA,GAAS,WAAUA,EAAO,OAAOA,CAAI,GAGxD,IAAIC,EAAM,SAAUC,EACpB,GAAIF,IAASrC,EAAO,WAAa,QAAUqC,IAASrC,EAAO,SAAS,MAAO,CACzEqC,EAAOA,EAAK,YAAY,EACxBC,EAAM,SAAS,eAAe,mBAAmB,EAAE,EACnDC,EAAcD,EAAI,cAAc,MAAM,EACtCC,EAAY,KAAOF,EACnBC,EAAI,KAAK,YAAYC,CAAW,EAChC,GAAI,CACF,GAAIA,EAAY,KAAK,QAAQF,CAAI,IAAM,EAAG,MAAM,IAAI,MAAME,EAAY,IAAI,CAC5E,OAASC,EAAP,CACA,MAAM,IAAI,MAAM,0BAA4BH,EAAO,WAAaG,CAAG,CACrE,CACF,CAEA,IAAIC,EAAgBH,EAAI,cAAc,GAAG,EACzCG,EAAc,KAAOL,EACjBG,IACFD,EAAI,KAAK,YAAYG,CAAa,EAClCA,EAAc,KAAOA,EAAc,MAGrC,IAAIC,EAAeJ,EAAI,cAAc,OAAO,EAI5C,GAHAI,EAAa,KAAO,MACpBA,EAAa,MAAQN,EAEjBK,EAAc,WAAa,KAAO,CAAC,IAAI,KAAKA,EAAc,IAAI,GAAM,CAACC,EAAa,cAAc,GAAK,CAACL,EACxG,MAAM,IAAI,UAAU,aAAa,EAGnC,OAAO,eAAe,KAAM,iBAAkB,CAC5C,MAAOI,CACT,CAAC,EAID,IAAIE,EAAe,IAAI3C,EAAO,gBAAgB,KAAK,MAAM,EACrD4C,EAAqB,GACrBC,EAA2B,GAC3B/B,EAAQ,KACZ,CAAC,SAAU,SAAU,KAAK,EAAE,QAAQ,SAASgC,EAAY,CACvD,IAAIC,GAASJ,EAAaG,GAC1BH,EAAaG,GAAc,UAAW,CACpCC,GAAO,MAAMJ,EAAc,SAAS,EAChCC,IACFC,EAA2B,GAC3B/B,EAAM,OAAS6B,EAAa,SAAS,EACrCE,EAA2B,GAE/B,CACF,CAAC,EAED,OAAO,eAAe,KAAM,eAAgB,CAC1C,MAAOF,EACP,WAAY,EACd,CAAC,EAED,IAAIK,EAAS,OACb,OAAO,eAAe,KAAM,sBAAuB,CACjD,WAAY,GACZ,aAAc,GACd,SAAU,GACV,MAAO,UAAW,CACZ,KAAK,SAAWA,IAClBA,EAAS,KAAK,OACVH,IACFD,EAAqB,GACrB,KAAK,aAAa,YAAY,KAAK,MAAM,EACzCA,EAAqB,IAG3B,CACF,CAAC,CACH,EAEIzB,EAAQgB,EAAI,UAEZc,EAA6B,SAASC,EAAe,CACvD,OAAO,eAAe/B,EAAO+B,EAAe,CAC1C,IAAK,UAAW,CACd,OAAO,KAAK,eAAeA,EAC7B,EACA,IAAK,SAAS3C,EAAO,CACnB,KAAK,eAAe2C,GAAiB3C,CACvC,EACA,WAAY,EACd,CAAC,CACH,EAEA,CAAC,OAAQ,OAAQ,WAAY,OAAQ,UAAU,EAC5C,QAAQ,SAAS2C,EAAe,CAC/BD,EAA2BC,CAAa,CAC1C,CAAC,EAEH,OAAO,eAAe/B,EAAO,SAAU,CACrC,IAAK,UAAW,CACd,OAAO,KAAK,eAAe,MAC7B,EACA,IAAK,SAASZ,EAAO,CACnB,KAAK,eAAe,OAAYA,EAChC,KAAK,oBAAoB,CAC3B,EACA,WAAY,EACd,CAAC,EAED,OAAO,iBAAiBY,EAAO,CAE7B,SAAY,CACV,IAAK,UAAW,CACd,IAAIL,EAAQ,KACZ,OAAO,UAAW,CAChB,OAAOA,EAAM,IACf,CACF,CACF,EAEA,KAAQ,CACN,IAAK,UAAW,CACd,OAAO,KAAK,eAAe,KAAK,QAAQ,MAAO,EAAE,CACnD,EACA,IAAK,SAASP,EAAO,CACnB,KAAK,eAAe,KAAOA,EAC3B,KAAK,oBAAoB,CAC3B,EACA,WAAY,EACd,EAEA,SAAY,CACV,IAAK,UAAW,CACd,OAAO,KAAK,eAAe,SAAS,QAAQ,SAAU,GAAG,CAC3D,EACA,IAAK,SAASA,EAAO,CACnB,KAAK,eAAe,SAAWA,CACjC,EACA,WAAY,EACd,EAEA,OAAU,CACR,IAAK,UAAW,CAEd,IAAI4C,EAAe,CAAE,QAAS,GAAI,SAAU,IAAK,OAAQ,EAAG,EAAE,KAAK,eAAe,UAI9EC,EAAkB,KAAK,eAAe,MAAQD,GAChD,KAAK,eAAe,OAAS,GAE/B,OAAO,KAAK,eAAe,SACzB,KACA,KAAK,eAAe,UACnBC,EAAmB,IAAM,KAAK,eAAe,KAAQ,GAC1D,EACA,WAAY,EACd,EAEA,SAAY,CACV,IAAK,UAAW,CACd,MAAO,EACT,EACA,IAAK,SAAS7C,EAAO,CACrB,EACA,WAAY,EACd,EAEA,SAAY,CACV,IAAK,UAAW,CACd,MAAO,EACT,EACA,IAAK,SAASA,EAAO,CACrB,EACA,WAAY,EACd,CACF,CAAC,EAED4B,EAAI,gBAAkB,SAASkB,EAAM,CACnC,OAAOnB,EAAK,gBAAgB,MAAMA,EAAM,SAAS,CACnD,EAEAC,EAAI,gBAAkB,SAASC,EAAK,CAClC,OAAOF,EAAK,gBAAgB,MAAMA,EAAM,SAAS,CACnD,EAEAlC,EAAO,IAAMmC,CAEf,EAMA,GAJKJ,EAAsB,GACzBE,EAAY,EAGTjC,EAAO,WAAa,QAAW,EAAE,WAAYA,EAAO,UAAW,CAClE,IAAIsD,EAAY,UAAW,CACzB,OAAOtD,EAAO,SAAS,SAAW,KAAOA,EAAO,SAAS,UAAYA,EAAO,SAAS,KAAQ,IAAMA,EAAO,SAAS,KAAQ,GAC7H,EAEA,GAAI,CACF,OAAO,eAAeA,EAAO,SAAU,SAAU,CAC/C,IAAKsD,EACL,WAAY,EACd,CAAC,CACH,OAAS7B,EAAP,CACA,YAAY,UAAW,CACrBzB,EAAO,SAAS,OAASsD,EAAU,CACrC,EAAG,GAAG,CACR,CACF,CAEF,GACG,OAAO,QAAW,YAAe,OAC5B,OAAO,QAAW,YAAe,OACjC,OAAO,MAAS,YAAe,KAAOvD,EAC9C,IC5eA,IAAAwD,GAAAC,GAAA,CAAAC,GAAAC,KAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,gFAeA,IAAIC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,IACH,SAAUC,EAAS,CAChB,IAAIC,EAAO,OAAO,QAAW,SAAW,OAAS,OAAO,MAAS,SAAW,KAAO,OAAO,MAAS,SAAW,KAAO,CAAC,EAClH,OAAO,QAAW,YAAc,OAAO,IACvC,OAAO,QAAS,CAAC,SAAS,EAAG,SA
AU3B,EAAS,CAAE0B,EAAQE,EAAeD,EAAMC,EAAe5B,CAAO,CAAC,CAAC,CAAG,CAAC,EAEtG,OAAOC,IAAW,UAAY,OAAOA,GAAO,SAAY,SAC7DyB,EAAQE,EAAeD,EAAMC,EAAe3B,GAAO,OAAO,CAAC,CAAC,EAG5DyB,EAAQE,EAAeD,CAAI,CAAC,EAEhC,SAASC,EAAe5B,EAAS6B,EAAU,CACvC,OAAI7B,IAAY2B,IACR,OAAO,OAAO,QAAW,WACzB,OAAO,eAAe3B,EAAS,aAAc,CAAE,MAAO,EAAK,CAAC,EAG5DA,EAAQ,WAAa,IAGtB,SAAU8B,EAAIC,EAAG,CAAE,OAAO/B,EAAQ8B,GAAMD,EAAWA,EAASC,EAAIC,CAAC,EAAIA,CAAG,CACnF,CACJ,GACC,SAAUC,EAAU,CACjB,IAAIC,EAAgB,OAAO,gBACtB,CAAE,UAAW,CAAC,CAAE,YAAa,OAAS,SAAUC,EAAGC,EAAG,CAAED,EAAE,UAAYC,CAAG,GAC1E,SAAUD,EAAGC,EAAG,CAAE,QAASC,KAAKD,EAAO,OAAO,UAAU,eAAe,KAAKA,EAAGC,CAAC,IAAGF,EAAEE,GAAKD,EAAEC,GAAI,EAEpGlC,GAAY,SAAUgC,EAAGC,EAAG,CACxB,GAAI,OAAOA,GAAM,YAAcA,IAAM,KACjC,MAAM,IAAI,UAAU,uBAAyB,OAAOA,CAAC,EAAI,+BAA+B,EAC5FF,EAAcC,EAAGC,CAAC,EAClB,SAASE,GAAK,CAAE,KAAK,YAAcH,CAAG,CACtCA,EAAE,UAAYC,IAAM,KAAO,OAAO,OAAOA,CAAC,GAAKE,EAAG,UAAYF,EAAE,UAAW,IAAIE,EACnF,EAEAlC,GAAW,OAAO,QAAU,SAAUmC,EAAG,CACrC,QAASC,EAAG,EAAI,EAAGC,EAAI,UAAU,OAAQ,EAAIA,EAAG,IAAK,CACjDD,EAAI,UAAU,GACd,QAASH,KAAKG,EAAO,OAAO,UAAU,eAAe,KAAKA,EAAGH,CAAC,IAAGE,EAAEF,GAAKG,EAAEH,GAC9E,CACA,OAAOE,CACX,EAEAlC,GAAS,SAAUmC,EAAGE,EAAG,CACrB,IAAIH,EAAI,CAAC,EACT,QAASF,KAAKG,EAAO,OAAO,UAAU,eAAe,KAAKA,EAAGH,CAAC,GAAKK,EAAE,QAAQL,CAAC,EAAI,IAC9EE,EAAEF,GAAKG,EAAEH,IACb,GAAIG,GAAK,MAAQ,OAAO,OAAO,uBAA0B,WACrD,QAASG,EAAI,EAAGN,EAAI,OAAO,sBAAsBG,CAAC,EAAGG,EAAIN,EAAE,OAAQM,IAC3DD,EAAE,QAAQL,EAAEM,EAAE,EAAI,GAAK,OAAO,UAAU,qBAAqB,KAAKH,EAAGH,EAAEM,EAAE,IACzEJ,EAAEF,EAAEM,IAAMH,EAAEH,EAAEM,KAE1B,OAAOJ,CACX,EAEAjC,GAAa,SAAUsC,EAAYC,EAAQC,EAAKC,EAAM,CAClD,IAAIC,EAAI,UAAU,OAAQC,EAAID,EAAI,EAAIH,EAASE,IAAS,KAAOA,EAAO,OAAO,yBAAyBF,EAAQC,CAAG,EAAIC,EAAMZ,EAC3H,GAAI,OAAO,SAAY,UAAY,OAAO,QAAQ,UAAa,WAAYc,EAAI,QAAQ,SAASL,EAAYC,EAAQC,EAAKC,CAAI,MACxH,SAASJ,EAAIC,EAAW,OAAS,EAAGD,GAAK,EAAGA,KAASR,EAAIS,EAAWD,MAAIM,GAAKD,EAAI,EAAIb,EAAEc,CAAC,EAAID,EAAI,EAAIb,EAAEU,EAAQC,EAAKG,CAAC,EAAId,EAAEU,EAAQC,CAAG,IAAMG,GAChJ,OAAOD,EAAI,GAAKC,GAAK,OAAO,eAAeJ,EAAQC,EAAKG,CAAC,EAAGA,CAChE,EAEA1C,GAAU,SAAU2C,EAAYC,EAAW,CACvC,OAAO,SAAUN,EAAQC,EAAK,CAAEK,EAAUN,EAAQC,EAAKI,CAAU,CAAG,CACxE,EAEA1C,GAAa,SAAU4C,EAAaC,EAAe,CAC/C,GAAI,OAAO,SAAY,UAAY,OAAO,QAAQ,UAAa,WAAY,OAAO,QAAQ,SAASD,EAAaC,CAAa,CACjI,EAEA5C,GAAY,SAAU6C,EAASC,EAAYC,EAAGC,EAAW,CACrD,SAASC,EAAMC,EAAO,CAAE,OAAOA,aAAiBH,EAAIG,EAAQ,IAAIH,EAAE,SAAUI,EAAS,CAAEA,EAAQD,CAAK,CAAG,CAAC,CAAG,CAC3G,OAAO,IAAKH,IAAMA,EAAI,UAAU,SAAUI,EAASC,EAAQ,CACvD,SAASC,EAAUH,EAAO,CAAE,GAAI,CAAEI,EAAKN,EAAU,KAAKE,CAAK,CAAC,CAAG,OAASjB,EAAP,CAAYmB,EAAOnB,CAAC,CAAG,CAAE,CAC1F,SAASsB,EAASL,EAAO,CAAE,GAAI,CAAEI,EAAKN,EAAU,MAASE,CAAK,CAAC,CAAG,OAASjB,EAAP,CAAYmB,EAAOnB,CAAC,CAAG,CAAE,CAC7F,SAASqB,EAAKE,EAAQ,CAAEA,EAAO,KAAOL,EAAQK,EAAO,KAAK,EAAIP,EAAMO,EAAO,KAAK,EAAE,KAAKH,EAAWE,CAAQ,CAAG,CAC7GD,GAAMN,EAAYA,EAAU,MAAMH,EAASC,GAAc,CAAC,CAAC,GAAG,KAAK,CAAC,CACxE,CAAC,CACL,EAEA7C,GAAc,SAAU4C,EAASY,EAAM,CACnC,IAAIC,EAAI,CAAE,MAAO,EAAG,KAAM,UAAW,CAAE,GAAI5B,EAAE,GAAK,EAAG,MAAMA,EAAE,GAAI,OAAOA,EAAE,EAAI,EAAG,KAAM,CAAC,EAAG,IAAK,CAAC,CAAE,EAAG6B,EAAGC,EAAG9B,EAAG+B,EAC/G,OAAOA,EAAI,CAAE,KAAMC,EAAK,CAAC,EAAG,MAASA,EAAK,CAAC,EAAG,OAAUA,EAAK,CAAC,CAAE,EAAG,OAAO,QAAW,aAAeD,EAAE,OAAO,UAAY,UAAW,CAAE,OAAO,IAAM,GAAIA,EACvJ,SAASC,EAAK9B,EAAG,CAAE,OAAO,SAAUT,EAAG,CAAE,OAAO+B,EAAK,CAACtB,EAAGT,CAAC,CAAC,CAAG,CAAG,CACjE,SAAS+B,EAAKS,EAAI,CACd,GAAIJ,EAAG,MAAM,IAAI,UAAU,iCAAiC,EAC5D,KAAOD,GAAG,GAAI,CACV,GAAIC,EAAI,EAAGC,IAAM9B,EAAIiC,EAAG,GAAK,EAAIH,EAAE,OAAYG,EAAG,GAAKH,EAAE,SAAc9B,EAAI8B,EAAE,SAAc9B,EAAE,KAAK8B,CAAC,EAAG,GAAKA,EAAE,OAAS,EAAE9B,EAAIA,EAAE,KAAK8B,EAAGG,EAAG,EAAE,GAAG,KAAM,OAAOjC,EAE3J,OADI8B,EAAI,EAAG9B,IAAGiC,EAAK,CAACA,EA
AG,GAAK,EAAGjC,EAAE,KAAK,GAC9BiC,EAAG,GAAI,CACX,IAAK,GAAG,IAAK,GAAGjC,EAAIiC,EAAI,MACxB,IAAK,GAAG,OAAAL,EAAE,QAAgB,CAAE,MAAOK,EAAG,GAAI,KAAM,EAAM,EACtD,IAAK,GAAGL,EAAE,QAASE,EAAIG,EAAG,GAAIA,EAAK,CAAC,CAAC,EAAG,SACxC,IAAK,GAAGA,EAAKL,EAAE,IAAI,IAAI,EAAGA,EAAE,KAAK,IAAI,EAAG,SACxC,QACI,GAAM5B,EAAI4B,EAAE,KAAM,EAAA5B,EAAIA,EAAE,OAAS,GAAKA,EAAEA,EAAE,OAAS,MAAQiC,EAAG,KAAO,GAAKA,EAAG,KAAO,GAAI,CAAEL,EAAI,EAAG,QAAU,CAC3G,GAAIK,EAAG,KAAO,IAAM,CAACjC,GAAMiC,EAAG,GAAKjC,EAAE,IAAMiC,EAAG,GAAKjC,EAAE,IAAM,CAAE4B,EAAE,MAAQK,EAAG,GAAI,KAAO,CACrF,GAAIA,EAAG,KAAO,GAAKL,EAAE,MAAQ5B,EAAE,GAAI,CAAE4B,EAAE,MAAQ5B,EAAE,GAAIA,EAAIiC,EAAI,KAAO,CACpE,GAAIjC,GAAK4B,EAAE,MAAQ5B,EAAE,GAAI,CAAE4B,EAAE,MAAQ5B,EAAE,GAAI4B,EAAE,IAAI,KAAKK,CAAE,EAAG,KAAO,CAC9DjC,EAAE,IAAI4B,EAAE,IAAI,IAAI,EACpBA,EAAE,KAAK,IAAI,EAAG,QACtB,CACAK,EAAKN,EAAK,KAAKZ,EAASa,CAAC,CAC7B,OAASzB,EAAP,CAAY8B,EAAK,CAAC,EAAG9B,CAAC,EAAG2B,EAAI,CAAG,QAAE,CAAUD,EAAI7B,EAAI,CAAG,CACzD,GAAIiC,EAAG,GAAK,EAAG,MAAMA,EAAG,GAAI,MAAO,CAAE,MAAOA,EAAG,GAAKA,EAAG,GAAK,OAAQ,KAAM,EAAK,CACnF,CACJ,EAEA7D,GAAe,SAAS8D,EAAG,EAAG,CAC1B,QAASpC,KAAKoC,EAAOpC,IAAM,WAAa,CAAC,OAAO,UAAU,eAAe,KAAK,EAAGA,CAAC,GAAGX,GAAgB,EAAG+C,EAAGpC,CAAC,CAChH,EAEAX,GAAkB,OAAO,OAAU,SAASgD,EAAGD,EAAGE,EAAGC,EAAI,CACjDA,IAAO,SAAWA,EAAKD,GAC3B,OAAO,eAAeD,EAAGE,EAAI,CAAE,WAAY,GAAM,IAAK,UAAW,CAAE,OAAOH,EAAEE,EAAI,CAAE,CAAC,CACvF,EAAM,SAASD,EAAGD,EAAGE,EAAGC,EAAI,CACpBA,IAAO,SAAWA,EAAKD,GAC3BD,EAAEE,GAAMH,EAAEE,EACd,EAEA/D,GAAW,SAAU8D,EAAG,CACpB,IAAIlC,EAAI,OAAO,QAAW,YAAc,OAAO,SAAUiC,EAAIjC,GAAKkC,EAAElC,GAAIG,EAAI,EAC5E,GAAI8B,EAAG,OAAOA,EAAE,KAAKC,CAAC,EACtB,GAAIA,GAAK,OAAOA,EAAE,QAAW,SAAU,MAAO,CAC1C,KAAM,UAAY,CACd,OAAIA,GAAK/B,GAAK+B,EAAE,SAAQA,EAAI,QACrB,CAAE,MAAOA,GAAKA,EAAE/B,KAAM,KAAM,CAAC+B,CAAE,CAC1C,CACJ,EACA,MAAM,IAAI,UAAUlC,EAAI,0BAA4B,iCAAiC,CACzF,EAEA3B,GAAS,SAAU6D,EAAGjC,EAAG,CACrB,IAAIgC,EAAI,OAAO,QAAW,YAAcC,EAAE,OAAO,UACjD,GAAI,CAACD,EAAG,OAAOC,EACf,IAAI/B,EAAI8B,EAAE,KAAKC,CAAC,EAAGzB,EAAG4B,EAAK,CAAC,EAAGnC,EAC/B,GAAI,CACA,MAAQD,IAAM,QAAUA,KAAM,IAAM,EAAEQ,EAAIN,EAAE,KAAK,GAAG,MAAMkC,EAAG,KAAK5B,EAAE,KAAK,CAC7E,OACO6B,EAAP,CAAgBpC,EAAI,CAAE,MAAOoC,CAAM,CAAG,QACtC,CACI,GAAI,CACI7B,GAAK,CAACA,EAAE,OAASwB,EAAI9B,EAAE,SAAY8B,EAAE,KAAK9B,CAAC,CACnD,QACA,CAAU,GAAID,EAAG,MAAMA,EAAE,KAAO,CACpC,CACA,OAAOmC,CACX,EAGA/D,GAAW,UAAY,CACnB,QAAS+D,EAAK,CAAC,EAAGlC,EAAI,EAAGA,EAAI,UAAU,OAAQA,IAC3CkC,EAAKA,EAAG,OAAOhE,GAAO,UAAU8B,EAAE,CAAC,EACvC,OAAOkC,CACX,EAGA9D,GAAiB,UAAY,CACzB,QAASyB,EAAI,EAAGG,EAAI,EAAGoC,EAAK,UAAU,OAAQpC,EAAIoC,EAAIpC,IAAKH,GAAK,UAAUG,GAAG,OAC7E,QAASM,EAAI,MAAMT,CAAC,EAAGmC,EAAI,EAAGhC,EAAI,EAAGA,EAAIoC,EAAIpC,IACzC,QAASqC,EAAI,UAAUrC,GAAIsC,EAAI,EAAGC,EAAKF,EAAE,OAAQC,EAAIC,EAAID,IAAKN,IAC1D1B,EAAE0B,GAAKK,EAAEC,GACjB,OAAOhC,CACX,EAEAjC,GAAgB,SAAUmE,EAAIC,EAAMC,EAAM,CACtC,GAAIA,GAAQ,UAAU,SAAW,EAAG,QAAS1C,EAAI,EAAG2C,EAAIF,EAAK,OAAQP,EAAIlC,EAAI2C,EAAG3C,KACxEkC,GAAM,EAAElC,KAAKyC,MACRP,IAAIA,EAAK,MAAM,UAAU,MAAM,KAAKO,EAAM,EAAGzC,CAAC,GACnDkC,EAAGlC,GAAKyC,EAAKzC,IAGrB,OAAOwC,EAAG,OAAON,GAAM,MAAM,UAAU,MAAM,KAAKO,CAAI,CAAC,CAC3D,EAEAnE,GAAU,SAAUe,EAAG,CACnB,OAAO,gBAAgBf,IAAW,KAAK,EAAIe,EAAG,MAAQ,IAAIf,GAAQe,CAAC,CACvE,EAEAd,GAAmB,SAAUoC,EAASC,EAAYE,EAAW,CACzD,GAAI,CAAC,OAAO,cAAe,MAAM,IAAI,UAAU,sCAAsC,EACrF,IAAIa,EAAIb,EAAU,MAAMH,EAASC,GAAc,CAAC,CAAC,EAAGZ,EAAG4C,EAAI,CAAC,EAC5D,OAAO5C,EAAI,CAAC,EAAG4B,EAAK,MAAM,EAAGA,EAAK,OAAO,EAAGA,EAAK,QAAQ,EAAG5B,EAAE,OAAO,eAAiB,UAAY,CAAE,OAAO,IAAM,EAAGA,EACpH,SAAS4B,EAAK9B,EAAG,CAAM6B,EAAE7B,KAAIE,EAAEF,GAAK,SAAUT,EAAG,CAAE,OAAO,IAAI,QAAQ,SAAUgD,EAAG5C,EAAG,CAAEmD,EAAE,KAAK,CAAC9C,EAAGT,EAAGgD,EAAG5C,CAAC,CAAC,EAAI,GAAKoD,EAAO/C,EAAGT,CAAC,C
AAG,CAAC,CAAG,EAAG,CACzI,SAASwD,EAAO/C,EAAGT,EAAG,CAAE,GAAI,CAAE+B,EAAKO,EAAE7B,GAAGT,CAAC,CAAC,CAAG,OAASU,EAAP,CAAY+C,EAAOF,EAAE,GAAG,GAAI7C,CAAC,CAAG,CAAE,CACjF,SAASqB,EAAKd,EAAG,CAAEA,EAAE,iBAAiBhC,GAAU,QAAQ,QAAQgC,EAAE,MAAM,CAAC,EAAE,KAAKyC,EAAS7B,CAAM,EAAI4B,EAAOF,EAAE,GAAG,GAAItC,CAAC,CAAI,CACxH,SAASyC,EAAQ/B,EAAO,CAAE6B,EAAO,OAAQ7B,CAAK,CAAG,CACjD,SAASE,EAAOF,EAAO,CAAE6B,EAAO,QAAS7B,CAAK,CAAG,CACjD,SAAS8B,EAAOrB,EAAGpC,EAAG,CAAMoC,EAAEpC,CAAC,EAAGuD,EAAE,MAAM,EAAGA,EAAE,QAAQC,EAAOD,EAAE,GAAG,GAAIA,EAAE,GAAG,EAAE,CAAG,CACrF,EAEApE,GAAmB,SAAUuD,EAAG,CAC5B,IAAI/B,EAAGN,EACP,OAAOM,EAAI,CAAC,EAAG4B,EAAK,MAAM,EAAGA,EAAK,QAAS,SAAU7B,EAAG,CAAE,MAAMA,CAAG,CAAC,EAAG6B,EAAK,QAAQ,EAAG5B,EAAE,OAAO,UAAY,UAAY,CAAE,OAAO,IAAM,EAAGA,EAC1I,SAAS4B,EAAK9B,EAAG2B,EAAG,CAAEzB,EAAEF,GAAKiC,EAAEjC,GAAK,SAAUT,EAAG,CAAE,OAAQK,EAAI,CAACA,GAAK,CAAE,MAAOpB,GAAQyD,EAAEjC,GAAGT,CAAC,CAAC,EAAG,KAAMS,IAAM,QAAS,EAAI2B,EAAIA,EAAEpC,CAAC,EAAIA,CAAG,EAAIoC,CAAG,CAClJ,EAEAhD,GAAgB,SAAUsD,EAAG,CACzB,GAAI,CAAC,OAAO,cAAe,MAAM,IAAI,UAAU,sCAAsC,EACrF,IAAID,EAAIC,EAAE,OAAO,eAAgB,EACjC,OAAOD,EAAIA,EAAE,KAAKC,CAAC,GAAKA,EAAI,OAAO9D,IAAa,WAAaA,GAAS8D,CAAC,EAAIA,EAAE,OAAO,UAAU,EAAG,EAAI,CAAC,EAAGH,EAAK,MAAM,EAAGA,EAAK,OAAO,EAAGA,EAAK,QAAQ,EAAG,EAAE,OAAO,eAAiB,UAAY,CAAE,OAAO,IAAM,EAAG,GAC9M,SAASA,EAAK9B,EAAG,CAAE,EAAEA,GAAKiC,EAAEjC,IAAM,SAAUT,EAAG,CAAE,OAAO,IAAI,QAAQ,SAAU4B,EAASC,EAAQ,CAAE7B,EAAI0C,EAAEjC,GAAGT,CAAC,EAAGyD,EAAO7B,EAASC,EAAQ7B,EAAE,KAAMA,EAAE,KAAK,CAAG,CAAC,CAAG,CAAG,CAC/J,SAASyD,EAAO7B,EAASC,EAAQ1B,EAAGH,EAAG,CAAE,QAAQ,QAAQA,CAAC,EAAE,KAAK,SAASA,EAAG,CAAE4B,EAAQ,CAAE,MAAO5B,EAAG,KAAMG,CAAE,CAAC,CAAG,EAAG0B,CAAM,CAAG,CAC/H,EAEAxC,GAAuB,SAAUsE,EAAQC,EAAK,CAC1C,OAAI,OAAO,eAAkB,OAAO,eAAeD,EAAQ,MAAO,CAAE,MAAOC,CAAI,CAAC,EAAYD,EAAO,IAAMC,EAClGD,CACX,EAEA,IAAIE,EAAqB,OAAO,OAAU,SAASnB,EAAG1C,EAAG,CACrD,OAAO,eAAe0C,EAAG,UAAW,CAAE,WAAY,GAAM,MAAO1C,CAAE,CAAC,CACtE,EAAK,SAAS0C,EAAG1C,EAAG,CAChB0C,EAAE,QAAa1C,CACnB,EAEAV,GAAe,SAAUwE,EAAK,CAC1B,GAAIA,GAAOA,EAAI,WAAY,OAAOA,EAClC,IAAI7B,EAAS,CAAC,EACd,GAAI6B,GAAO,KAAM,QAASnB,KAAKmB,EAASnB,IAAM,WAAa,OAAO,UAAU,eAAe,KAAKmB,EAAKnB,CAAC,GAAGjD,GAAgBuC,EAAQ6B,EAAKnB,CAAC,EACvI,OAAAkB,EAAmB5B,EAAQ6B,CAAG,EACvB7B,CACX,EAEA1C,GAAkB,SAAUuE,EAAK,CAC7B,OAAQA,GAAOA,EAAI,WAAcA,EAAM,CAAE,QAAWA,CAAI,CAC5D,EAEAtE,GAAyB,SAAUuE,EAAUC,EAAOC,EAAM7B,EAAG,CACzD,GAAI6B,IAAS,KAAO,CAAC7B,EAAG,MAAM,IAAI,UAAU,+CAA+C,EAC3F,GAAI,OAAO4B,GAAU,WAAaD,IAAaC,GAAS,CAAC5B,EAAI,CAAC4B,EAAM,IAAID,CAAQ,EAAG,MAAM,IAAI,UAAU,0EAA0E,EACjL,OAAOE,IAAS,IAAM7B,EAAI6B,IAAS,IAAM7B,EAAE,KAAK2B,CAAQ,EAAI3B,EAAIA,EAAE,MAAQ4B,EAAM,IAAID,CAAQ,CAChG,EAEAtE,GAAyB,SAAUsE,EAAUC,EAAOrC,EAAOsC,EAAM7B,EAAG,CAChE,GAAI6B,IAAS,IAAK,MAAM,IAAI,UAAU,gCAAgC,EACtE,GAAIA,IAAS,KAAO,CAAC7B,EAAG,MAAM,IAAI,UAAU,+CAA+C,EAC3F,GAAI,OAAO4B,GAAU,WAAaD,IAAaC,GAAS,CAAC5B,EAAI,CAAC4B,EAAM,IAAID,CAAQ,EAAG,MAAM,IAAI,UAAU,yEAAyE,EAChL,OAAQE,IAAS,IAAM7B,EAAE,KAAK2B,EAAUpC,CAAK,EAAIS,EAAIA,EAAE,MAAQT,EAAQqC,EAAM,IAAID,EAAUpC,CAAK,EAAIA,CACxG,EAEA1B,EAAS,YAAa9B,EAAS,EAC/B8B,EAAS,WAAY7B,EAAQ,EAC7B6B,EAAS,SAAU5B,EAAM,EACzB4B,EAAS,aAAc3B,EAAU,EACjC2B,EAAS,UAAW1B,EAAO,EAC3B0B,EAAS,aAAczB,EAAU,EACjCyB,EAAS,YAAaxB,EAAS,EAC/BwB,EAAS,cAAevB,EAAW,EACnCuB,EAAS,eAAgBtB,EAAY,EACrCsB,EAAS,kBAAmBP,EAAe,EAC3CO,EAAS,WAAYrB,EAAQ,EAC7BqB,EAAS,SAAUpB,EAAM,EACzBoB,EAAS,WAAYnB,EAAQ,EAC7BmB,EAAS,iBAAkBlB,EAAc,EACzCkB,EAAS,gBAAiBjB,EAAa,EACvCiB,EAAS,UAAWhB,EAAO,EAC3BgB,EAAS,mBAAoBf,EAAgB,EAC7Ce,EAAS,mBAAoBd,EAAgB,EAC7Cc,EAAS,gBAAiBb,EAAa,EACvCa,EAAS,uBAAwBZ,EAAoB,EACrDY,EAAS,eAAgBX,EAAY,EACrCW,EAAS,kBAAmBV,EAAe,EAC3CU,EAAS,yBAA0BT,EAAsB,EACzDS,EAAS,yBAA0BR,EAAsB,CAC7D,CAAC,ICjTD,IAAAyE,GAAAC
,GAAA,CAAAC,GAAAC,KAAA;AAAA;AAAA;AAAA;AAAA;AAAA,IAMC,SAA0CC,EAAMC,EAAS,CACtD,OAAOH,IAAY,UAAY,OAAOC,IAAW,SACnDA,GAAO,QAAUE,EAAQ,EAClB,OAAO,QAAW,YAAc,OAAO,IAC9C,OAAO,CAAC,EAAGA,CAAO,EACX,OAAOH,IAAY,SAC1BA,GAAQ,YAAiBG,EAAQ,EAEjCD,EAAK,YAAiBC,EAAQ,CAChC,GAAGH,GAAM,UAAW,CACpB,OAAiB,UAAW,CAClB,IAAII,EAAuB,CAE/B,IACC,SAASC,EAAyBC,EAAqBC,EAAqB,CAEnF,aAGAA,EAAoB,EAAED,EAAqB,CACzC,QAAW,UAAW,CAAE,OAAqBE,EAAW,CAC1D,CAAC,EAGD,IAAIC,EAAeF,EAAoB,GAAG,EACtCG,EAAoCH,EAAoB,EAAEE,CAAY,EAEtEE,EAASJ,EAAoB,GAAG,EAChCK,EAA8BL,EAAoB,EAAEI,CAAM,EAE1DE,EAAaN,EAAoB,GAAG,EACpCO,EAA8BP,EAAoB,EAAEM,CAAU,EAOlE,SAASE,EAAQC,EAAM,CACrB,GAAI,CACF,OAAO,SAAS,YAAYA,CAAI,CAClC,OAASC,EAAP,CACA,MAAO,EACT,CACF,CAUA,IAAIC,EAAqB,SAA4BC,EAAQ,CAC3D,IAAIC,EAAeN,EAAe,EAAEK,CAAM,EAC1C,OAAAJ,EAAQ,KAAK,EACNK,CACT,EAEiCC,EAAeH,EAOhD,SAASI,EAAkBC,EAAO,CAChC,IAAIC,EAAQ,SAAS,gBAAgB,aAAa,KAAK,IAAM,MACzDC,EAAc,SAAS,cAAc,UAAU,EAEnDA,EAAY,MAAM,SAAW,OAE7BA,EAAY,MAAM,OAAS,IAC3BA,EAAY,MAAM,QAAU,IAC5BA,EAAY,MAAM,OAAS,IAE3BA,EAAY,MAAM,SAAW,WAC7BA,EAAY,MAAMD,EAAQ,QAAU,QAAU,UAE9C,IAAIE,EAAY,OAAO,aAAe,SAAS,gBAAgB,UAC/D,OAAAD,EAAY,MAAM,IAAM,GAAG,OAAOC,EAAW,IAAI,EACjDD,EAAY,aAAa,WAAY,EAAE,EACvCA,EAAY,MAAQF,EACbE,CACT,CAYA,IAAIE,EAAiB,SAAwBJ,EAAOK,EAAS,CAC3D,IAAIH,EAAcH,EAAkBC,CAAK,EACzCK,EAAQ,UAAU,YAAYH,CAAW,EACzC,IAAIL,EAAeN,EAAe,EAAEW,CAAW,EAC/C,OAAAV,EAAQ,MAAM,EACdU,EAAY,OAAO,EACZL,CACT,EASIS,EAAsB,SAA6BV,EAAQ,CAC7D,IAAIS,EAAU,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAChF,UAAW,SAAS,IACtB,EACIR,EAAe,GAEnB,OAAI,OAAOD,GAAW,SACpBC,EAAeO,EAAeR,EAAQS,CAAO,EACpCT,aAAkB,kBAAoB,CAAC,CAAC,OAAQ,SAAU,MAAO,MAAO,UAAU,EAAE,SAASA,GAAW,KAA4B,OAASA,EAAO,IAAI,EAEjKC,EAAeO,EAAeR,EAAO,MAAOS,CAAO,GAEnDR,EAAeN,EAAe,EAAEK,CAAM,EACtCJ,EAAQ,MAAM,GAGTK,CACT,EAEiCU,EAAgBD,EAEjD,SAASE,EAAQC,EAAK,CAA6B,OAAI,OAAO,QAAW,YAAc,OAAO,OAAO,UAAa,SAAYD,EAAU,SAAiBC,EAAK,CAAE,OAAO,OAAOA,CAAK,EAAYD,EAAU,SAAiBC,EAAK,CAAE,OAAOA,GAAO,OAAO,QAAW,YAAcA,EAAI,cAAgB,QAAUA,IAAQ,OAAO,UAAY,SAAW,OAAOA,CAAK,EAAYD,EAAQC,CAAG,CAAG,CAUzX,IAAIC,GAAyB,UAAkC,CAC7D,IAAIL,EAAU,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAAC,EAE/EM,EAAkBN,EAAQ,OAC1BO,EAASD,IAAoB,OAAS,OAASA,EAC/CE,EAAYR,EAAQ,UACpBT,EAASS,EAAQ,OACjBS,GAAOT,EAAQ,KAEnB,GAAIO,IAAW,QAAUA,IAAW,MAClC,MAAM,IAAI,MAAM,oDAAoD,EAItE,GAAIhB,IAAW,OACb,GAAIA,GAAUY,EAAQZ,CAAM,IAAM,UAAYA,EAAO,WAAa,EAAG,CACnE,GAAIgB,IAAW,QAAUhB,EAAO,aAAa,UAAU,EACrD,MAAM,IAAI,MAAM,mFAAmF,EAGrG,GAAIgB,IAAW,QAAUhB,EAAO,aAAa,UAAU,GAAKA,EAAO,aAAa,UAAU,GACxF,MAAM,IAAI,MAAM,uGAAwG,CAE5H,KACE,OAAM,IAAI,MAAM,6CAA6C,EAKjE,GAAIkB,GACF,OAAOP,EAAaO,GAAM,CACxB,UAAWD,CACb,CAAC,EAIH,GAAIjB,EACF,OAAOgB,IAAW,MAAQd,EAAYF,CAAM,EAAIW,EAAaX,EAAQ,CACnE,UAAWiB,CACb,CAAC,CAEL,EAEiCE,GAAmBL,GAEpD,SAASM,GAAiBP,EAAK,CAA6B,OAAI,OAAO,QAAW,YAAc,OAAO,OAAO,UAAa,SAAYO,GAAmB,SAAiBP,EAAK,CAAE,OAAO,OAAOA,CAAK,EAAYO,GAAmB,SAAiBP,EAAK,CAAE,OAAOA,GAAO,OAAO,QAAW,YAAcA,EAAI,cAAgB,QAAUA,IAAQ,OAAO,UAAY,SAAW,OAAOA,CAAK,EAAYO,GAAiBP,CAAG,CAAG,CAE7Z,SAASQ,GAAgBC,EAAUC,EAAa,CAAE,GAAI,EAAED,aAAoBC,GAAgB,MAAM,IAAI,UAAU,mCAAmC,CAAK,CAExJ,SAASC,GAAkBxB,EAAQyB,EAAO,CAAE,QAASC,EAAI,EAAGA,EAAID,EAAM,OAAQC,IAAK,CAAE,IAAIC,EAAaF,EAAMC,GAAIC,EAAW,WAAaA,EAAW,YAAc,GAAOA,EAAW,aAAe,GAAU,UAAWA,IAAYA,EAAW,SAAW,IAAM,OAAO,eAAe3B,EAAQ2B,EAAW,IAAKA,CAAU,CAAG,CAAE,CAE5T,SAASC,GAAaL,EAAaM,EAAYC,EAAa,CAAE,OAAID,GAAYL,GAAkBD,EAAY,UAAWM,CAAU,EAAOC,GAAaN,GAAkBD,EAAaO,CAAW,EAAUP,CAAa,CAEtN,SAASQ,GAAUC,EAAUC,EAAY,CAAE,GAAI,OAAOA,GAAe,YAAcA,IAAe,KAAQ,MAAM,IAAI,UAAU,oDAAoD,EAAKD,EAAS,UAAY,OAAO,OAAOC,GAAcA,EAAW,UAAW,CAAE,YAAa,CAAE,MAAOD,EAAU,SAAU,GAAM,aAAc,EAAK,CAAE,CAAC,EAAOC,GAAYC,GAAgBF,EAAUC,CAAU,CAAG,CAEhY,SAASC,GAAgBC,EAAGC,EAAG,CAAE,OAAAF,GAAkB,OAAO,gBAAkB,SAAyB
C,EAAGC,EAAG,CAAE,OAAAD,EAAE,UAAYC,EAAUD,CAAG,EAAUD,GAAgBC,EAAGC,CAAC,CAAG,CAEzK,SAASC,GAAaC,EAAS,CAAE,IAAIC,EAA4BC,GAA0B,EAAG,OAAO,UAAgC,CAAE,IAAIC,EAAQC,GAAgBJ,CAAO,EAAGK,EAAQ,GAAIJ,EAA2B,CAAE,IAAIK,EAAYF,GAAgB,IAAI,EAAE,YAAaC,EAAS,QAAQ,UAAUF,EAAO,UAAWG,CAAS,CAAG,MAASD,EAASF,EAAM,MAAM,KAAM,SAAS,EAAK,OAAOI,GAA2B,KAAMF,CAAM,CAAG,CAAG,CAExa,SAASE,GAA2BC,EAAMC,EAAM,CAAE,OAAIA,IAAS3B,GAAiB2B,CAAI,IAAM,UAAY,OAAOA,GAAS,YAAsBA,EAAeC,GAAuBF,CAAI,CAAG,CAEzL,SAASE,GAAuBF,EAAM,CAAE,GAAIA,IAAS,OAAU,MAAM,IAAI,eAAe,2DAA2D,EAAK,OAAOA,CAAM,CAErK,SAASN,IAA4B,CAA0E,GAApE,OAAO,SAAY,aAAe,CAAC,QAAQ,WAA6B,QAAQ,UAAU,KAAM,MAAO,GAAO,GAAI,OAAO,OAAU,WAAY,MAAO,GAAM,GAAI,CAAE,YAAK,UAAU,SAAS,KAAK,QAAQ,UAAU,KAAM,CAAC,EAAG,UAAY,CAAC,CAAC,CAAC,EAAU,EAAM,OAASS,EAAP,CAAY,MAAO,EAAO,CAAE,CAEnU,SAASP,GAAgBP,EAAG,CAAE,OAAAO,GAAkB,OAAO,eAAiB,OAAO,eAAiB,SAAyBP,EAAG,CAAE,OAAOA,EAAE,WAAa,OAAO,eAAeA,CAAC,CAAG,EAAUO,GAAgBP,CAAC,CAAG,CAa5M,SAASe,GAAkBC,EAAQC,EAAS,CAC1C,IAAIC,EAAY,kBAAkB,OAAOF,CAAM,EAE/C,GAAI,EAACC,EAAQ,aAAaC,CAAS,EAInC,OAAOD,EAAQ,aAAaC,CAAS,CACvC,CAOA,IAAIC,GAAyB,SAAUC,EAAU,CAC/CxB,GAAUuB,EAAWC,CAAQ,EAE7B,IAAIC,EAASnB,GAAaiB,CAAS,EAMnC,SAASA,EAAUG,EAAShD,EAAS,CACnC,IAAIiD,EAEJ,OAAArC,GAAgB,KAAMiC,CAAS,EAE/BI,EAAQF,EAAO,KAAK,IAAI,EAExBE,EAAM,eAAejD,CAAO,EAE5BiD,EAAM,YAAYD,CAAO,EAElBC,CACT,CAQA,OAAA9B,GAAa0B,EAAW,CAAC,CACvB,IAAK,iBACL,MAAO,UAA0B,CAC/B,IAAI7C,EAAU,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAAC,EACnF,KAAK,OAAS,OAAOA,EAAQ,QAAW,WAAaA,EAAQ,OAAS,KAAK,cAC3E,KAAK,OAAS,OAAOA,EAAQ,QAAW,WAAaA,EAAQ,OAAS,KAAK,cAC3E,KAAK,KAAO,OAAOA,EAAQ,MAAS,WAAaA,EAAQ,KAAO,KAAK,YACrE,KAAK,UAAYW,GAAiBX,EAAQ,SAAS,IAAM,SAAWA,EAAQ,UAAY,SAAS,IACnG,CAMF,EAAG,CACD,IAAK,cACL,MAAO,SAAqBgD,EAAS,CACnC,IAAIE,EAAS,KAEb,KAAK,SAAWlE,EAAe,EAAEgE,EAAS,QAAS,SAAUR,GAAG,CAC9D,OAAOU,EAAO,QAAQV,EAAC,CACzB,CAAC,CACH,CAMF,EAAG,CACD,IAAK,UACL,MAAO,SAAiBA,EAAG,CACzB,IAAIQ,EAAUR,EAAE,gBAAkBA,EAAE,cAChCjC,GAAS,KAAK,OAAOyC,CAAO,GAAK,OACjCvC,GAAOC,GAAgB,CACzB,OAAQH,GACR,UAAW,KAAK,UAChB,OAAQ,KAAK,OAAOyC,CAAO,EAC3B,KAAM,KAAK,KAAKA,CAAO,CACzB,CAAC,EAED,KAAK,KAAKvC,GAAO,UAAY,QAAS,CACpC,OAAQF,GACR,KAAME,GACN,QAASuC,EACT,eAAgB,UAA0B,CACpCA,GACFA,EAAQ,MAAM,EAGhB,OAAO,aAAa,EAAE,gBAAgB,CACxC,CACF,CAAC,CACH,CAMF,EAAG,CACD,IAAK,gBACL,MAAO,SAAuBA,EAAS,CACrC,OAAOP,GAAkB,SAAUO,CAAO,CAC5C,CAMF,EAAG,CACD,IAAK,gBACL,MAAO,SAAuBA,EAAS,CACrC,IAAIG,EAAWV,GAAkB,SAAUO,CAAO,EAElD,GAAIG,EACF,OAAO,SAAS,cAAcA,CAAQ,CAE1C,CAQF,EAAG,CACD,IAAK,cAML,MAAO,SAAqBH,EAAS,CACnC,OAAOP,GAAkB,OAAQO,CAAO,CAC1C,CAKF,EAAG,CACD,IAAK,UACL,MAAO,UAAmB,CACxB,KAAK,SAAS,QAAQ,CACxB,CACF,CAAC,EAAG,CAAC,CACH,IAAK,OACL,MAAO,SAAczD,EAAQ,CAC3B,IAAIS,EAAU,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAChF,UAAW,SAAS,IACtB,EACA,OAAOE,EAAaX,EAAQS,CAAO,CACrC,CAOF,EAAG,CACD,IAAK,MACL,MAAO,SAAaT,EAAQ,CAC1B,OAAOE,EAAYF,CAAM,CAC3B,CAOF,EAAG,CACD,IAAK,cACL,MAAO,UAAuB,CAC5B,IAAIgB,EAAS,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAAC,OAAQ,KAAK,EAC3F6C,EAAU,OAAO7C,GAAW,SAAW,CAACA,CAAM,EAAIA,EAClD8C,GAAU,CAAC,CAAC,SAAS,sBACzB,OAAAD,EAAQ,QAAQ,SAAU7C,GAAQ,CAChC8C,GAAUA,IAAW,CAAC,CAAC,SAAS,sBAAsB9C,EAAM,CAC9D,CAAC,EACM8C,EACT,CACF,CAAC,CAAC,EAEKR,CACT,EAAG/D,EAAqB,CAAE,EAEOF,GAAaiE,EAExC,EAEA,IACC,SAASxE,EAAQ,CAExB,IAAIiF,EAAqB,EAKzB,GAAI,OAAO,SAAY,aAAe,CAAC,QAAQ,UAAU,QAAS,CAC9D,IAAIC,EAAQ,QAAQ,UAEpBA,EAAM,QAAUA,EAAM,iBACNA,EAAM,oBACNA,EAAM,mBACNA,EAAM,kBACNA,EAAM,qBAC1B,CASA,SAASC,EAASb,EAASQ,EAAU,CACjC,KAAOR,GAAWA,EAAQ,WAAaW,GAAoB,CACvD,GAAI,OAAOX,EAAQ,SAAY,YAC3BA,EAAQ,QAAQQ,CAAQ,EAC1B,OAAOR,EAETA,EAAUA,EAAQ,UACtB,CACJ,CAEAtE,EAAO,QAAUmF,CAGX,EAEA,IACC,SAASnF,EAAQoF,EAA0B9E,EAAqB,CAEvE,IAAI6E,EAAU7E,EAAoB,GAAG,EAYrC,SAAS+E,EAAUf,EAASQ,EAAU/D,
EAAMuE,EAAUC,EAAY,CAC9D,IAAIC,EAAaC,EAAS,MAAM,KAAM,SAAS,EAE/C,OAAAnB,EAAQ,iBAAiBvD,EAAMyE,EAAYD,CAAU,EAE9C,CACH,QAAS,UAAW,CAChBjB,EAAQ,oBAAoBvD,EAAMyE,EAAYD,CAAU,CAC5D,CACJ,CACJ,CAYA,SAASG,EAASC,EAAUb,EAAU/D,EAAMuE,EAAUC,EAAY,CAE9D,OAAI,OAAOI,EAAS,kBAAqB,WAC9BN,EAAU,MAAM,KAAM,SAAS,EAItC,OAAOtE,GAAS,WAGTsE,EAAU,KAAK,KAAM,QAAQ,EAAE,MAAM,KAAM,SAAS,GAI3D,OAAOM,GAAa,WACpBA,EAAW,SAAS,iBAAiBA,CAAQ,GAI1C,MAAM,UAAU,IAAI,KAAKA,EAAU,SAAUrB,EAAS,CACzD,OAAOe,EAAUf,EAASQ,EAAU/D,EAAMuE,EAAUC,CAAU,CAClE,CAAC,EACL,CAWA,SAASE,EAASnB,EAASQ,EAAU/D,EAAMuE,EAAU,CACjD,OAAO,SAASnB,EAAG,CACfA,EAAE,eAAiBgB,EAAQhB,EAAE,OAAQW,CAAQ,EAEzCX,EAAE,gBACFmB,EAAS,KAAKhB,EAASH,CAAC,CAEhC,CACJ,CAEAnE,EAAO,QAAU0F,CAGX,EAEA,IACC,SAAStF,EAAyBL,EAAS,CAQlDA,EAAQ,KAAO,SAASuB,EAAO,CAC3B,OAAOA,IAAU,QACVA,aAAiB,aACjBA,EAAM,WAAa,CAC9B,EAQAvB,EAAQ,SAAW,SAASuB,EAAO,CAC/B,IAAIP,EAAO,OAAO,UAAU,SAAS,KAAKO,CAAK,EAE/C,OAAOA,IAAU,SACTP,IAAS,qBAAuBA,IAAS,4BACzC,WAAYO,IACZA,EAAM,SAAW,GAAKvB,EAAQ,KAAKuB,EAAM,EAAE,EACvD,EAQAvB,EAAQ,OAAS,SAASuB,EAAO,CAC7B,OAAO,OAAOA,GAAU,UACjBA,aAAiB,MAC5B,EAQAvB,EAAQ,GAAK,SAASuB,EAAO,CACzB,IAAIP,EAAO,OAAO,UAAU,SAAS,KAAKO,CAAK,EAE/C,OAAOP,IAAS,mBACpB,CAGM,EAEA,IACC,SAASf,EAAQoF,EAA0B9E,EAAqB,CAEvE,IAAIsF,EAAKtF,EAAoB,GAAG,EAC5BoF,EAAWpF,EAAoB,GAAG,EAWtC,SAASI,EAAOQ,EAAQH,EAAMuE,EAAU,CACpC,GAAI,CAACpE,GAAU,CAACH,GAAQ,CAACuE,EACrB,MAAM,IAAI,MAAM,4BAA4B,EAGhD,GAAI,CAACM,EAAG,OAAO7E,CAAI,EACf,MAAM,IAAI,UAAU,kCAAkC,EAG1D,GAAI,CAAC6E,EAAG,GAAGN,CAAQ,EACf,MAAM,IAAI,UAAU,mCAAmC,EAG3D,GAAIM,EAAG,KAAK1E,CAAM,EACd,OAAO2E,EAAW3E,EAAQH,EAAMuE,CAAQ,EAEvC,GAAIM,EAAG,SAAS1E,CAAM,EACvB,OAAO4E,EAAe5E,EAAQH,EAAMuE,CAAQ,EAE3C,GAAIM,EAAG,OAAO1E,CAAM,EACrB,OAAO6E,EAAe7E,EAAQH,EAAMuE,CAAQ,EAG5C,MAAM,IAAI,UAAU,2EAA2E,CAEvG,CAWA,SAASO,EAAWG,EAAMjF,EAAMuE,EAAU,CACtC,OAAAU,EAAK,iBAAiBjF,EAAMuE,CAAQ,EAE7B,CACH,QAAS,UAAW,CAChBU,EAAK,oBAAoBjF,EAAMuE,CAAQ,CAC3C,CACJ,CACJ,CAWA,SAASQ,EAAeG,EAAUlF,EAAMuE,EAAU,CAC9C,aAAM,UAAU,QAAQ,KAAKW,EAAU,SAASD,EAAM,CAClDA,EAAK,iBAAiBjF,EAAMuE,CAAQ,CACxC,CAAC,EAEM,CACH,QAAS,UAAW,CAChB,MAAM,UAAU,QAAQ,KAAKW,EAAU,SAASD,EAAM,CAClDA,EAAK,oBAAoBjF,EAAMuE,CAAQ,CAC3C,CAAC,CACL,CACJ,CACJ,CAWA,SAASS,EAAejB,EAAU/D,EAAMuE,EAAU,CAC9C,OAAOI,EAAS,SAAS,KAAMZ,EAAU/D,EAAMuE,CAAQ,CAC3D,CAEAtF,EAAO,QAAUU,CAGX,EAEA,IACC,SAASV,EAAQ,CAExB,SAASkG,EAAO5B,EAAS,CACrB,IAAInD,EAEJ,GAAImD,EAAQ,WAAa,SACrBA,EAAQ,MAAM,EAEdnD,EAAemD,EAAQ,cAElBA,EAAQ,WAAa,SAAWA,EAAQ,WAAa,WAAY,CACtE,IAAI6B,EAAa7B,EAAQ,aAAa,UAAU,EAE3C6B,GACD7B,EAAQ,aAAa,WAAY,EAAE,EAGvCA,EAAQ,OAAO,EACfA,EAAQ,kBAAkB,EAAGA,EAAQ,MAAM,MAAM,EAE5C6B,GACD7B,EAAQ,gBAAgB,UAAU,EAGtCnD,EAAemD,EAAQ,KAC3B,KACK,CACGA,EAAQ,aAAa,iBAAiB,GACtCA,EAAQ,MAAM,EAGlB,IAAI8B,EAAY,OAAO,aAAa,EAChCC,EAAQ,SAAS,YAAY,EAEjCA,EAAM,mBAAmB/B,CAAO,EAChC8B,EAAU,gBAAgB,EAC1BA,EAAU,SAASC,CAAK,EAExBlF,EAAeiF,EAAU,SAAS,CACtC,CAEA,OAAOjF,CACX,CAEAnB,EAAO,QAAUkG,CAGX,EAEA,IACC,SAASlG,EAAQ,CAExB,SAASsG,GAAK,CAGd,CAEAA,EAAE,UAAY,CACZ,GAAI,SAAUC,EAAMjB,EAAUkB,EAAK,CACjC,IAAIrC,EAAI,KAAK,IAAM,KAAK,EAAI,CAAC,GAE7B,OAACA,EAAEoC,KAAUpC,EAAEoC,GAAQ,CAAC,IAAI,KAAK,CAC/B,GAAIjB,EACJ,IAAKkB,CACP,CAAC,EAEM,IACT,EAEA,KAAM,SAAUD,EAAMjB,EAAUkB,EAAK,CACnC,IAAIxC,EAAO,KACX,SAASyB,GAAY,CACnBzB,EAAK,IAAIuC,EAAMd,CAAQ,EACvBH,EAAS,MAAMkB,EAAK,SAAS,CAC/B,CAEA,OAAAf,EAAS,EAAIH,EACN,KAAK,GAAGiB,EAAMd,EAAUe,CAAG,CACpC,EAEA,KAAM,SAAUD,EAAM,CACpB,IAAIE,EAAO,CAAC,EAAE,MAAM,KAAK,UAAW,CAAC,EACjCC,IAAW,KAAK,IAAM,KAAK,EAAI,CAAC,IAAIH,IAAS,CAAC,GAAG,MAAM,EACvD3D,EAAI,EACJ+D,EAAMD,EAAO,OAEjB,IAAK9D,EAAGA,EAAI+D,EAAK/D,IACf8D,EAAO9D,GAAG,GAAG,MAAM8D,EAAO9D,GAAG,IAAK6D,CAAI,EAGxC,OAAO,IACT,EAEA,IAAK,SAAUF,EAAMjB,EAAU,CAC7B,IAAInB,EAAI,KAAK,IAAM,KAAK,EAAI,CAAC,GACz
ByC,EAAOzC,EAAEoC,GACTM,EAAa,CAAC,EAElB,GAAID,GAAQtB,EACV,QAAS1C,EAAI,EAAG+D,EAAMC,EAAK,OAAQhE,EAAI+D,EAAK/D,IACtCgE,EAAKhE,GAAG,KAAO0C,GAAYsB,EAAKhE,GAAG,GAAG,IAAM0C,GAC9CuB,EAAW,KAAKD,EAAKhE,EAAE,EAQ7B,OAACiE,EAAW,OACR1C,EAAEoC,GAAQM,EACV,OAAO1C,EAAEoC,GAEN,IACT,CACF,EAEAvG,EAAO,QAAUsG,EACjBtG,EAAO,QAAQ,YAAcsG,CAGvB,CAEI,EAGIQ,EAA2B,CAAC,EAGhC,SAASxG,EAAoByG,EAAU,CAEtC,GAAGD,EAAyBC,GAC3B,OAAOD,EAAyBC,GAAU,QAG3C,IAAI/G,EAAS8G,EAAyBC,GAAY,CAGjD,QAAS,CAAC,CACX,EAGA,OAAA5G,EAAoB4G,GAAU/G,EAAQA,EAAO,QAASM,CAAmB,EAGlEN,EAAO,OACf,CAIA,OAAC,UAAW,CAEXM,EAAoB,EAAI,SAASN,EAAQ,CACxC,IAAIgH,EAAShH,GAAUA,EAAO,WAC7B,UAAW,CAAE,OAAOA,EAAO,OAAY,EACvC,UAAW,CAAE,OAAOA,CAAQ,EAC7B,OAAAM,EAAoB,EAAE0G,EAAQ,CAAE,EAAGA,CAAO,CAAC,EACpCA,CACR,CACD,EAAE,EAGD,UAAW,CAEX1G,EAAoB,EAAI,SAASP,EAASkH,EAAY,CACrD,QAAQC,KAAOD,EACX3G,EAAoB,EAAE2G,EAAYC,CAAG,GAAK,CAAC5G,EAAoB,EAAEP,EAASmH,CAAG,GAC/E,OAAO,eAAenH,EAASmH,EAAK,CAAE,WAAY,GAAM,IAAKD,EAAWC,EAAK,CAAC,CAGjF,CACD,EAAE,EAGD,UAAW,CACX5G,EAAoB,EAAI,SAASyB,EAAKoF,EAAM,CAAE,OAAO,OAAO,UAAU,eAAe,KAAKpF,EAAKoF,CAAI,CAAG,CACvG,EAAE,EAMK7G,EAAoB,GAAG,CAC/B,EAAG,EACX,OACD,CAAC,ICz3BD,IAAA8G,GAAAC,GAAA,CAAAC,GAAAC,KAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,GAeA,IAAIC,GAAkB,UAOtBD,GAAO,QAAUE,GAUjB,SAASA,GAAWC,EAAQ,CAC1B,IAAIC,EAAM,GAAKD,EACXE,EAAQJ,GAAgB,KAAKG,CAAG,EAEpC,GAAI,CAACC,EACH,OAAOD,EAGT,IAAIE,EACAC,EAAO,GACPC,EAAQ,EACRC,EAAY,EAEhB,IAAKD,EAAQH,EAAM,MAAOG,EAAQJ,EAAI,OAAQI,IAAS,CACrD,OAAQJ,EAAI,WAAWI,CAAK,EAAG,CAC7B,IAAK,IACHF,EAAS,SACT,MACF,IAAK,IACHA,EAAS,QACT,MACF,IAAK,IACHA,EAAS,QACT,MACF,IAAK,IACHA,EAAS,OACT,MACF,IAAK,IACHA,EAAS,OACT,MACF,QACE,QACJ,CAEIG,IAAcD,IAChBD,GAAQH,EAAI,UAAUK,EAAWD,CAAK,GAGxCC,EAAYD,EAAQ,EACpBD,GAAQD,CACV,CAEA,OAAOG,IAAcD,EACjBD,EAAOH,EAAI,UAAUK,EAAWD,CAAK,EACrCD,CACN,IC7EA,MAAM,UAAU,MAAM,OAAO,eAAe,MAAM,UAAU,OAAO,CAAC,aAAa,GAAG,MAAM,SAASG,GAAG,CAAC,IAAI,EAAE,MAAM,UAAU,EAAE,EAAE,EAAE,OAAO,UAAU,EAAE,EAAE,OAAO,EAAE,MAAM,UAAU,OAAO,KAAK,KAAK,SAASC,EAAEC,EAAE,CAAC,OAAO,MAAM,QAAQA,CAAC,EAAED,EAAE,KAAK,MAAMA,EAAED,EAAE,KAAKE,EAAE,EAAE,CAAC,CAAC,EAAED,EAAE,KAAKC,CAAC,EAAED,CAAC,EAAE,CAAC,CAAC,EAAE,MAAM,UAAU,MAAM,KAAK,IAAI,CAAC,EAAE,SAAS,EAAE,CAAC,EAAE,MAAM,UAAU,SAAS,OAAO,eAAe,MAAM,UAAU,UAAU,CAAC,aAAa,GAAG,MAAM,SAASD,EAAE,CAAC,OAAO,MAAM,UAAU,IAAI,MAAM,KAAK,SAAS,EAAE,KAAK,CAAC,EAAE,SAAS,EAAE,CAAC,ECuBxf,IAAAG,GAAO,SCvBP,KAAK,QAAQ,KAAK,MAAM,SAAS,EAAEC,EAAE,CAAC,OAAOA,EAAEA,GAAG,CAAC,EAAE,IAAI,QAAQ,SAASC,EAAEC,EAAE,CAAC,IAAIC,EAAE,IAAI,eAAeC,EAAE,CAAC,EAAEC,EAAE,CAAC,EAAEC,EAAE,CAAC,EAAEC,EAAE,UAAU,CAAC,MAAM,CAAC,IAAOJ,EAAE,OAAO,IAAI,IAAjB,EAAoB,WAAWA,EAAE,WAAW,OAAOA,EAAE,OAAO,IAAIA,EAAE,YAAY,KAAK,UAAU,CAAC,OAAO,QAAQ,QAAQA,EAAE,YAAY,CAAC,EAAE,KAAK,UAAU,CAAC,OAAO,QAAQ,QAAQA,EAAE,YAAY,EAAE,KAAK,KAAK,KAAK,CAAC,EAAE,KAAK,UAAU,CAAC,OAAO,QAAQ,QAAQ,IAAI,KAAK,CAACA,EAAE,QAAQ,CAAC,CAAC,CAAC,EAAE,MAAMI,EAAE,QAAQ,CAAC,KAAK,UAAU,CAAC,OAAOH,CAAC,EAAE,QAAQ,UAAU,CAAC,OAAOC,CAAC,EAAE,IAAI,SAASG,EAAE,CAAC,OAAOF,EAAEE,EAAE,YAAY,EAAE,EAAE,IAAI,SAASA,EAAE,CAAC,OAAOA,EAAE,YAAY,IAAIF,CAAC,CAAC,CAAC,CAAC,EAAE,QAAQG,KAAKN,EAAE,KAAKH,EAAE,QAAQ,MAAM,EAAE,EAAE,EAAEG,EAAE,OAAO,UAAU,CAACA,EAAE,sBAAsB,EAAE,QAAQ,+BAA+B,SAASK,EAAER,EAAEC,EAAE,CAACG,EAAE,KAAKJ,EAAEA,EAAE,YAAY,CAAC,EAAEK,EAAE,KAAK,CAACL,EAAEC,CAAC,CAAC,EAAEK,EAAEN,GAAGM,EAAEN,GAAGM,EAAEN,GAAG,IAAIC,EAAEA,CAAC,CAAC,EAAEA,EAAEM,EAAE,CAAC,CAAC,EAAEJ,EAAE,QAAQD,EAAEC,EAAE,gBAA2BH,EAAE,aAAb,UAAyBA,EAAE,QAAQG,EAAE,iBAAiBM,EAAET,EAAE,QAAQS,EAAE,EAAEN,EAAE,KAAKH,EAAE,MAAM,IAAI,CAAC,CAAC,CAAC,GDyBj5B,IAAAU,GAAO,SEzBP,IAAAC,GAAkB,WACZ,CACF,UAAAC,GACA,SAAAC,GACA,OAAAC,GACA,WAAAC,GACA,QAAAC,GACA,WAAAC,GACA,UAAAC,GACA,YAAAC,GACA,aAAAC,GACA,gBAAAC,
GACA,SAAAC,GACA,OAAAC,EACA,SAAAC,GACA,eAAAC,GACA,cAAAC,EACA,QAAAC,GACA,iBAAAC,GACA,iBAAAC,GACA,cAAAC,GACA,qBAAAC,GACA,aAAAC,GACA,gBAAAC,GACA,uBAAAC,GACA,uBAAAC,EACJ,EAAI,GAAAC,QCtBE,SAAUC,EAAWC,EAAU,CACnC,OAAO,OAAOA,GAAU,UAC1B,CCGM,SAAUC,GAAoBC,EAAgC,CAClE,IAAMC,EAAS,SAACC,EAAa,CAC3B,MAAM,KAAKA,CAAQ,EACnBA,EAAS,MAAQ,IAAI,MAAK,EAAG,KAC/B,EAEMC,EAAWH,EAAWC,CAAM,EAClC,OAAAE,EAAS,UAAY,OAAO,OAAO,MAAM,SAAS,EAClDA,EAAS,UAAU,YAAcA,EAC1BA,CACT,CCDO,IAAMC,GAA+CC,GAC1D,SAACC,EAAM,CACL,OAAA,SAA4CC,EAA0B,CACpED,EAAO,IAAI,EACX,KAAK,QAAUC,EACRA,EAAO,OAAM;EACxBA,EAAO,IAAI,SAACC,EAAKC,EAAC,CAAK,OAAGA,EAAI,EAAC,KAAKD,EAAI,SAAQ,CAAzB,CAA6B,EAAE,KAAK;GAAM,EACzD,GACJ,KAAK,KAAO,sBACZ,KAAK,OAASD,CAChB,CARA,CAQC,ECvBC,SAAUG,GAAaC,EAA6BC,EAAO,CAC/D,GAAID,EAAK,CACP,IAAME,EAAQF,EAAI,QAAQC,CAAI,EAC9B,GAAKC,GAASF,EAAI,OAAOE,EAAO,CAAC,EAErC,CCOA,IAAAC,GAAA,UAAA,CAyBE,SAAAA,EAAoBC,EAA4B,CAA5B,KAAA,gBAAAA,EAdb,KAAA,OAAS,GAER,KAAA,WAAmD,KAMnD,KAAA,YAAqD,IAMV,CAQnD,OAAAD,EAAA,UAAA,YAAA,UAAA,aACME,EAEJ,GAAI,CAAC,KAAK,OAAQ,CAChB,KAAK,OAAS,GAGN,IAAAC,EAAe,KAAI,WAC3B,GAAIA,EAEF,GADA,KAAK,WAAa,KACd,MAAM,QAAQA,CAAU,MAC1B,QAAqBC,EAAAC,GAAAF,CAAU,EAAAG,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAA5B,IAAMG,EAAMD,EAAA,MACfC,EAAO,OAAO,IAAI,yGAGpBJ,EAAW,OAAO,IAAI,EAIlB,IAAiBK,EAAqB,KAAI,gBAClD,GAAIC,EAAWD,CAAgB,EAC7B,GAAI,CACFA,EAAgB,QACTE,EAAP,CACAR,EAASQ,aAAaC,GAAsBD,EAAE,OAAS,CAACA,CAAC,EAIrD,IAAAE,EAAgB,KAAI,YAC5B,GAAIA,EAAa,CACf,KAAK,YAAc,SACnB,QAAwBC,EAAAR,GAAAO,CAAW,EAAAE,EAAAD,EAAA,KAAA,EAAA,CAAAC,EAAA,KAAAA,EAAAD,EAAA,KAAA,EAAE,CAAhC,IAAME,EAASD,EAAA,MAClB,GAAI,CACFE,GAAcD,CAAS,QAChBE,EAAP,CACAf,EAASA,GAAM,KAANA,EAAU,CAAA,EACfe,aAAeN,GACjBT,EAAMgB,EAAAA,EAAA,CAAA,EAAAC,EAAOjB,CAAM,CAAA,EAAAiB,EAAKF,EAAI,MAAM,CAAA,EAElCf,EAAO,KAAKe,CAAG,sGAMvB,GAAIf,EACF,MAAM,IAAIS,GAAoBT,CAAM,EAG1C,EAoBAF,EAAA,UAAA,IAAA,SAAIoB,EAAuB,OAGzB,GAAIA,GAAYA,IAAa,KAC3B,GAAI,KAAK,OAGPJ,GAAcI,CAAQ,MACjB,CACL,GAAIA,aAAoBpB,EAAc,CAGpC,GAAIoB,EAAS,QAAUA,EAAS,WAAW,IAAI,EAC7C,OAEFA,EAAS,WAAW,IAAI,GAEzB,KAAK,aAAcC,EAAA,KAAK,eAAW,MAAAA,IAAA,OAAAA,EAAI,CAAA,GAAI,KAAKD,CAAQ,EAG/D,EAOQpB,EAAA,UAAA,WAAR,SAAmBsB,EAAoB,CAC7B,IAAAnB,EAAe,KAAI,WAC3B,OAAOA,IAAemB,GAAW,MAAM,QAAQnB,CAAU,GAAKA,EAAW,SAASmB,CAAM,CAC1F,EASQtB,EAAA,UAAA,WAAR,SAAmBsB,EAAoB,CAC7B,IAAAnB,EAAe,KAAI,WAC3B,KAAK,WAAa,MAAM,QAAQA,CAAU,GAAKA,EAAW,KAAKmB,CAAM,EAAGnB,GAAcA,EAAa,CAACA,EAAYmB,CAAM,EAAIA,CAC5H,EAMQtB,EAAA,UAAA,cAAR,SAAsBsB,EAAoB,CAChC,IAAAnB,EAAe,KAAI,WACvBA,IAAemB,EACjB,KAAK,WAAa,KACT,MAAM,QAAQnB,CAAU,GACjCoB,GAAUpB,EAAYmB,CAAM,CAEhC,EAgBAtB,EAAA,UAAA,OAAA,SAAOoB,EAAsC,CACnC,IAAAR,EAAgB,KAAI,YAC5BA,GAAeW,GAAUX,EAAaQ,CAAQ,EAE1CA,aAAoBpB,GACtBoB,EAAS,cAAc,IAAI,CAE/B,EAlLcpB,EAAA,MAAS,UAAA,CACrB,IAAMwB,EAAQ,IAAIxB,EAClB,OAAAwB,EAAM,OAAS,GACRA,CACT,EAAE,EA+KJxB,GArLA,EAuLO,IAAMyB,GAAqBC,GAAa,MAEzC,SAAUC,GAAeC,EAAU,CACvC,OACEA,aAAiBF,IAChBE,GAAS,WAAYA,GAASC,EAAWD,EAAM,MAAM,GAAKC,EAAWD,EAAM,GAAG,GAAKC,EAAWD,EAAM,WAAW,CAEpH,CAEA,SAASE,GAAcC,EAAwC,CACzDF,EAAWE,CAAS,EACtBA,EAAS,EAETA,EAAU,YAAW,CAEzB,CChNO,IAAMC,GAAuB,CAClC,iBAAkB,KAClB,sBAAuB,KACvB,QAAS,OACT,sCAAuC,GACvC,yBAA0B,ICGrB,IAAMC,GAAmC,CAG9C,WAAA,SAAWC,EAAqBC,EAAgB,SAAEC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,EAAA,GAAA,UAAAA,GACxC,IAAAC,EAAaL,GAAe,SACpC,OAAIK,GAAQ,MAARA,EAAU,WACLA,EAAS,WAAU,MAAnBA,EAAQC,EAAA,CAAYL,EAASC,CAAO,EAAAK,EAAKJ,CAAI,CAAA,CAAA,EAE/C,WAAU,MAAA,OAAAG,EAAA,CAACL,EAASC,CAAO,EAAAK,EAAKJ,CAAI,CAAA,CAAA,CAC7C,EACA,aAAA,SAAaK,EAAM,CACT,IAAAH,EAAaL,GAAe,SACpC,QAAQK,GAAQ,KAAA,OAARA,EAAU,eAAgB,cAAcG,CAAa,CAC/D,EACA,SAAU,QCjBN,SAAUC,GAAqBC,EAAQ,CAC3CC,GAAgB,WAAW,UAAA,CACjB,IAAAC,EAAqBC,GAAM,iB
ACnC,GAAID,EAEFA,EAAiBF,CAAG,MAGpB,OAAMA,CAEV,CAAC,CACH,CCtBM,SAAUI,IAAI,CAAK,CCMlB,IAAMC,GAAyB,UAAA,CAAM,OAAAC,GAAmB,IAAK,OAAW,MAAS,CAA5C,EAAsE,EAO5G,SAAUC,GAAkBC,EAAU,CAC1C,OAAOF,GAAmB,IAAK,OAAWE,CAAK,CACjD,CAOM,SAAUC,GAAoBC,EAAQ,CAC1C,OAAOJ,GAAmB,IAAKI,EAAO,MAAS,CACjD,CAQM,SAAUJ,GAAmBK,EAAuBD,EAAYF,EAAU,CAC9E,MAAO,CACL,KAAIG,EACJ,MAAKD,EACL,MAAKF,EAET,CCrCA,IAAII,GAAuD,KASrD,SAAUC,GAAaC,EAAc,CACzC,GAAIC,GAAO,sCAAuC,CAChD,IAAMC,EAAS,CAACJ,GAKhB,GAJII,IACFJ,GAAU,CAAE,YAAa,GAAO,MAAO,IAAI,GAE7CE,EAAE,EACEE,EAAQ,CACJ,IAAAC,EAAyBL,GAAvBM,EAAWD,EAAA,YAAEE,EAAKF,EAAA,MAE1B,GADAL,GAAU,KACNM,EACF,MAAMC,QAMVL,EAAE,CAEN,CAMM,SAAUM,GAAaC,EAAQ,CAC/BN,GAAO,uCAAyCH,KAClDA,GAAQ,YAAc,GACtBA,GAAQ,MAAQS,EAEpB,CCrBA,IAAAC,GAAA,SAAAC,EAAA,CAAmCC,GAAAF,EAAAC,CAAA,EA6BjC,SAAAD,EAAYG,EAA6C,CAAzD,IAAAC,EACEH,EAAA,KAAA,IAAA,GAAO,KATC,OAAAG,EAAA,UAAqB,GAUzBD,GACFC,EAAK,YAAcD,EAGfE,GAAeF,CAAW,GAC5BA,EAAY,IAAIC,CAAI,GAGtBA,EAAK,YAAcE,IAEvB,CAzBO,OAAAN,EAAA,OAAP,SAAiBO,EAAwBC,EAA2BC,EAAqB,CACvF,OAAO,IAAIC,GAAeH,EAAMC,EAAOC,CAAQ,CACjD,EAgCAT,EAAA,UAAA,KAAA,SAAKW,EAAS,CACR,KAAK,UACPC,GAA0BC,GAAiBF,CAAK,EAAG,IAAI,EAEvD,KAAK,MAAMA,CAAM,CAErB,EASAX,EAAA,UAAA,MAAA,SAAMc,EAAS,CACT,KAAK,UACPF,GAA0BG,GAAkBD,CAAG,EAAG,IAAI,GAEtD,KAAK,UAAY,GACjB,KAAK,OAAOA,CAAG,EAEnB,EAQAd,EAAA,UAAA,SAAA,UAAA,CACM,KAAK,UACPY,GAA0BI,GAAuB,IAAI,GAErD,KAAK,UAAY,GACjB,KAAK,UAAS,EAElB,EAEAhB,EAAA,UAAA,YAAA,UAAA,CACO,KAAK,SACR,KAAK,UAAY,GACjBC,EAAA,UAAM,YAAW,KAAA,IAAA,EACjB,KAAK,YAAc,KAEvB,EAEUD,EAAA,UAAA,MAAV,SAAgBW,EAAQ,CACtB,KAAK,YAAY,KAAKA,CAAK,CAC7B,EAEUX,EAAA,UAAA,OAAV,SAAiBc,EAAQ,CACvB,GAAI,CACF,KAAK,YAAY,MAAMA,CAAG,UAE1B,KAAK,YAAW,EAEpB,EAEUd,EAAA,UAAA,UAAV,UAAA,CACE,GAAI,CACF,KAAK,YAAY,SAAQ,UAEzB,KAAK,YAAW,EAEpB,EACFA,CAAA,EApHmCiB,EAAY,EA2H/C,IAAMC,GAAQ,SAAS,UAAU,KAEjC,SAASC,GAAyCC,EAAQC,EAAY,CACpE,OAAOH,GAAM,KAAKE,EAAIC,CAAO,CAC/B,CAMA,IAAAC,GAAA,UAAA,CACE,SAAAA,EAAoBC,EAAqC,CAArC,KAAA,gBAAAA,CAAwC,CAE5D,OAAAD,EAAA,UAAA,KAAA,SAAKE,EAAQ,CACH,IAAAD,EAAoB,KAAI,gBAChC,GAAIA,EAAgB,KAClB,GAAI,CACFA,EAAgB,KAAKC,CAAK,QACnBC,EAAP,CACAC,GAAqBD,CAAK,EAGhC,EAEAH,EAAA,UAAA,MAAA,SAAMK,EAAQ,CACJ,IAAAJ,EAAoB,KAAI,gBAChC,GAAIA,EAAgB,MAClB,GAAI,CACFA,EAAgB,MAAMI,CAAG,QAClBF,EAAP,CACAC,GAAqBD,CAAK,OAG5BC,GAAqBC,CAAG,CAE5B,EAEAL,EAAA,UAAA,SAAA,UAAA,CACU,IAAAC,EAAoB,KAAI,gBAChC,GAAIA,EAAgB,SAClB,GAAI,CACFA,EAAgB,SAAQ,QACjBE,EAAP,CACAC,GAAqBD,CAAK,EAGhC,EACFH,CAAA,EArCA,EAuCAM,GAAA,SAAAC,EAAA,CAAuCC,GAAAF,EAAAC,CAAA,EACrC,SAAAD,EACEG,EACAN,EACAO,EAA8B,CAHhC,IAAAC,EAKEJ,EAAA,KAAA,IAAA,GAAO,KAEHN,EACJ,GAAIW,EAAWH,CAAc,GAAK,CAACA,EAGjCR,EAAkB,CAChB,KAAOQ,GAAc,KAAdA,EAAkB,OACzB,MAAON,GAAK,KAALA,EAAS,OAChB,SAAUO,GAAQ,KAARA,EAAY,YAEnB,CAEL,IAAIG,EACAF,GAAQG,GAAO,0BAIjBD,EAAU,OAAO,OAAOJ,CAAc,EACtCI,EAAQ,YAAc,UAAA,CAAM,OAAAF,EAAK,YAAW,CAAhB,EAC5BV,EAAkB,CAChB,KAAMQ,EAAe,MAAQZ,GAAKY,EAAe,KAAMI,CAAO,EAC9D,MAAOJ,EAAe,OAASZ,GAAKY,EAAe,MAAOI,CAAO,EACjE,SAAUJ,EAAe,UAAYZ,GAAKY,EAAe,SAAUI,CAAO,IAI5EZ,EAAkBQ,EAMtB,OAAAE,EAAK,YAAc,IAAIX,GAAiBC,CAAe,GACzD,CACF,OAAAK,CAAA,EAzCuCS,EAAU,EA2CjD,SAASC,GAAqBC,EAAU,CAClCC,GAAO,sCACTC,GAAaF,CAAK,EAIlBG,GAAqBH,CAAK,CAE9B,CAQA,SAASI,GAAoBC,EAAQ,CACnC,MAAMA,CACR,CAOA,SAASC,GAA0BC,EAA2CC,EAA2B,CAC/F,IAAAC,EAA0BR,GAAM,sBACxCQ,GAAyBC,GAAgB,WAAW,UAAA,CAAM,OAAAD,EAAsBF,EAAcC,CAAU,CAA9C,CAA+C,CAC3G,CAOO,IAAMG,GAA6D,CACxE,OAAQ,GACR,KAAMC,GACN,MAAOR,GACP,SAAUQ,ICjRL,IAAMC,GAA+B,UAAA,CAAM,OAAC,OAAO,QAAW,YAAc,OAAO,YAAe,cAAvD,EAAsE,ECyClH,SAAUC,GAAYC,EAAI,CAC9B,OAAOA,CACT,CCiCM,SAAUC,IAAI,SAACC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACnB,OAAOC,GAAcF,CAAG,CAC1B,CAGM,SAAUE,GAAoBF,EAA+B,CACjE,OAAIA,EAAI,SAAW,EACVG,GAGLH,EAAI,SA
AW,EACVA,EAAI,GAGN,SAAeI,EAAQ,CAC5B,OAAOJ,EAAI,OAAO,SAACK,EAAWC,EAAuB,CAAK,OAAAA,EAAGD,CAAI,CAAP,EAAUD,CAAY,CAClF,CACF,CC9EA,IAAAG,EAAA,UAAA,CAkBE,SAAAA,EAAYC,EAA6E,CACnFA,IACF,KAAK,WAAaA,EAEtB,CA4BA,OAAAD,EAAA,UAAA,KAAA,SAAQE,EAAyB,CAC/B,IAAMC,EAAa,IAAIH,EACvB,OAAAG,EAAW,OAAS,KACpBA,EAAW,SAAWD,EACfC,CACT,EA8IAH,EAAA,UAAA,UAAA,SACEI,EACAC,EACAC,EAA8B,CAHhC,IAAAC,EAAA,KAKQC,EAAaC,GAAaL,CAAc,EAAIA,EAAiB,IAAIM,GAAeN,EAAgBC,EAAOC,CAAQ,EAErH,OAAAK,GAAa,UAAA,CACL,IAAAC,EAAuBL,EAArBL,EAAQU,EAAA,SAAEC,EAAMD,EAAA,OACxBJ,EAAW,IACTN,EAGIA,EAAS,KAAKM,EAAYK,CAAM,EAChCA,EAIAN,EAAK,WAAWC,CAAU,EAG1BD,EAAK,cAAcC,CAAU,CAAC,CAEtC,CAAC,EAEMA,CACT,EAGUR,EAAA,UAAA,cAAV,SAAwBc,EAAmB,CACzC,GAAI,CACF,OAAO,KAAK,WAAWA,CAAI,QACpBC,EAAP,CAIAD,EAAK,MAAMC,CAAG,EAElB,EA6DAf,EAAA,UAAA,QAAA,SAAQgB,EAA0BC,EAAoC,CAAtE,IAAAV,EAAA,KACE,OAAAU,EAAcC,GAAeD,CAAW,EAEjC,IAAIA,EAAkB,SAACE,EAASC,EAAM,CAC3C,IAAMZ,EAAa,IAAIE,GAAkB,CACvC,KAAM,SAACW,EAAK,CACV,GAAI,CACFL,EAAKK,CAAK,QACHN,EAAP,CACAK,EAAOL,CAAG,EACVP,EAAW,YAAW,EAE1B,EACA,MAAOY,EACP,SAAUD,EACX,EACDZ,EAAK,UAAUC,CAAU,CAC3B,CAAC,CACH,EAGUR,EAAA,UAAA,WAAV,SAAqBQ,EAA2B,OAC9C,OAAOI,EAAA,KAAK,UAAM,MAAAA,IAAA,OAAA,OAAAA,EAAE,UAAUJ,CAAU,CAC1C,EAOAR,EAAA,UAACG,IAAD,UAAA,CACE,OAAO,IACT,EA4FAH,EAAA,UAAA,KAAA,UAAA,SAAKsB,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACH,OAAOC,GAAcF,CAAU,EAAE,IAAI,CACvC,EA6BAtB,EAAA,UAAA,UAAA,SAAUiB,EAAoC,CAA9C,IAAAV,EAAA,KACE,OAAAU,EAAcC,GAAeD,CAAW,EAEjC,IAAIA,EAAY,SAACE,EAASC,EAAM,CACrC,IAAIC,EACJd,EAAK,UACH,SAACkB,EAAI,CAAK,OAACJ,EAAQI,CAAT,EACV,SAACV,EAAQ,CAAK,OAAAK,EAAOL,CAAG,CAAV,EACd,UAAA,CAAM,OAAAI,EAAQE,CAAK,CAAb,CAAc,CAExB,CAAC,CACH,EA3aOrB,EAAA,OAAkC,SAAIC,EAAwD,CACnG,OAAO,IAAID,EAAcC,CAAS,CACpC,EA0aFD,GA/cA,EAwdA,SAAS0B,GAAeC,EAA+C,OACrE,OAAOC,EAAAD,GAAW,KAAXA,EAAeE,GAAO,WAAO,MAAAD,IAAA,OAAAA,EAAI,OAC1C,CAEA,SAASE,GAAcC,EAAU,CAC/B,OAAOA,GAASC,EAAWD,EAAM,IAAI,GAAKC,EAAWD,EAAM,KAAK,GAAKC,EAAWD,EAAM,QAAQ,CAChG,CAEA,SAASE,GAAgBF,EAAU,CACjC,OAAQA,GAASA,aAAiBG,IAAgBJ,GAAWC,CAAK,GAAKI,GAAeJ,CAAK,CAC7F,CC1eM,SAAUK,GAAQC,EAAW,CACjC,OAAOC,EAAWD,GAAM,KAAA,OAANA,EAAQ,IAAI,CAChC,CAMM,SAAUE,EACdC,EAAqF,CAErF,OAAO,SAACH,EAAqB,CAC3B,GAAID,GAAQC,CAAM,EAChB,OAAOA,EAAO,KAAK,SAA+BI,EAA2B,CAC3E,GAAI,CACF,OAAOD,EAAKC,EAAc,IAAI,QACvBC,EAAP,CACA,KAAK,MAAMA,CAAG,EAElB,CAAC,EAEH,MAAM,IAAI,UAAU,wCAAwC,CAC9D,CACF,CCjBM,SAAUC,EACdC,EACAC,EACAC,EACAC,EACAC,EAAuB,CAEvB,OAAO,IAAIC,GAAmBL,EAAaC,EAAQC,EAAYC,EAASC,CAAU,CACpF,CAMA,IAAAC,GAAA,SAAAC,EAAA,CAA2CC,GAAAF,EAAAC,CAAA,EAiBzC,SAAAD,EACEL,EACAC,EACAC,EACAC,EACQC,EACAI,EAAiC,CAN3C,IAAAC,EAoBEH,EAAA,KAAA,KAAMN,CAAW,GAAC,KAfV,OAAAS,EAAA,WAAAL,EACAK,EAAA,kBAAAD,EAeRC,EAAK,MAAQR,EACT,SAAuCS,EAAQ,CAC7C,GAAI,CACFT,EAAOS,CAAK,QACLC,EAAP,CACAX,EAAY,MAAMW,CAAG,EAEzB,EACAL,EAAA,UAAM,MACVG,EAAK,OAASN,EACV,SAAuCQ,EAAQ,CAC7C,GAAI,CACFR,EAAQQ,CAAG,QACJA,EAAP,CAEAX,EAAY,MAAMW,CAAG,UAGrB,KAAK,YAAW,EAEpB,EACAL,EAAA,UAAM,OACVG,EAAK,UAAYP,EACb,UAAA,CACE,GAAI,CACFA,EAAU,QACHS,EAAP,CAEAX,EAAY,MAAMW,CAAG,UAGrB,KAAK,YAAW,EAEpB,EACAL,EAAA,UAAM,WACZ,CAEA,OAAAD,EAAA,UAAA,YAAA,UAAA,OACE,GAAI,CAAC,KAAK,mBAAqB,KAAK,kBAAiB,EAAI,CAC/C,IAAAO,EAAW,KAAI,OACvBN,EAAA,UAAM,YAAW,KAAA,IAAA,EAEjB,CAACM,KAAUC,EAAA,KAAK,cAAU,MAAAA,IAAA,QAAAA,EAAA,KAAf,IAAI,GAEnB,EACFR,CAAA,EAnF2CS,EAAU,ECd9C,IAAMC,GAAiD,CAG5D,SAAA,SAASC,EAAQ,CACf,IAAIC,EAAU,sBACVC,EAAkD,qBAC9CC,EAAaJ,GAAsB,SACvCI,IACFF,EAAUE,EAAS,sBACnBD,EAASC,EAAS,sBAEpB,IAAMC,EAASH,EAAQ,SAACI,EAAS,CAI/BH,EAAS,OACTF,EAASK,CAAS,CACpB,CAAC,EACD,OAAO,IAAIC,GAAa,UAAA,CAAM,OAAAJ,GAAM,KAAA,OAANA,EAASE,CAAM,CAAf,CAAgB,CAChD,EACA,sBAAqB,UAAA,SAACG,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,G
AAA,UAAAA,GACZ,IAAAL,EAAaJ,GAAsB,SAC3C,QAAQI,GAAQ,KAAA,OAARA,EAAU,wBAAyB,uBAAsB,MAAA,OAAAM,EAAA,CAAA,EAAAC,EAAIH,CAAI,CAAA,CAAA,CAC3E,EACA,qBAAoB,UAAA,SAACA,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACX,IAAAL,EAAaJ,GAAsB,SAC3C,QAAQI,GAAQ,KAAA,OAARA,EAAU,uBAAwB,sBAAqB,MAAA,OAAAM,EAAA,CAAA,EAAAC,EAAIH,CAAI,CAAA,CAAA,CACzE,EACA,SAAU,QCrBL,IAAMI,GAAuDC,GAClE,SAACC,EAAM,CACL,OAAA,UAAoC,CAClCA,EAAO,IAAI,EACX,KAAK,KAAO,0BACZ,KAAK,QAAU,qBACjB,CAJA,CAIC,ECXL,IAAAC,EAAA,SAAAC,EAAA,CAAgCC,GAAAF,EAAAC,CAAA,EAwB9B,SAAAD,GAAA,CAAA,IAAAG,EAEEF,EAAA,KAAA,IAAA,GAAO,KAzBT,OAAAE,EAAA,OAAS,GAEDA,EAAA,iBAAyC,KAGjDA,EAAA,UAA2B,CAAA,EAE3BA,EAAA,UAAY,GAEZA,EAAA,SAAW,GAEXA,EAAA,YAAmB,MAenB,CAGA,OAAAH,EAAA,UAAA,KAAA,SAAQI,EAAwB,CAC9B,IAAMC,EAAU,IAAIC,GAAiB,KAAM,IAAI,EAC/C,OAAAD,EAAQ,SAAWD,EACZC,CACT,EAGUL,EAAA,UAAA,eAAV,UAAA,CACE,GAAI,KAAK,OACP,MAAM,IAAIO,EAEd,EAEAP,EAAA,UAAA,KAAA,SAAKQ,EAAQ,CAAb,IAAAL,EAAA,KACEM,GAAa,UAAA,SAEX,GADAN,EAAK,eAAc,EACf,CAACA,EAAK,UAAW,CACdA,EAAK,mBACRA,EAAK,iBAAmB,MAAM,KAAKA,EAAK,SAAS,OAEnD,QAAuBO,EAAAC,GAAAR,EAAK,gBAAgB,EAAAS,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAAzC,IAAMG,EAAQD,EAAA,MACjBC,EAAS,KAAKL,CAAK,qGAGzB,CAAC,CACH,EAEAR,EAAA,UAAA,MAAA,SAAMc,EAAQ,CAAd,IAAAX,EAAA,KACEM,GAAa,UAAA,CAEX,GADAN,EAAK,eAAc,EACf,CAACA,EAAK,UAAW,CACnBA,EAAK,SAAWA,EAAK,UAAY,GACjCA,EAAK,YAAcW,EAEnB,QADQC,EAAcZ,EAAI,UACnBY,EAAU,QACfA,EAAU,MAAK,EAAI,MAAMD,CAAG,EAGlC,CAAC,CACH,EAEAd,EAAA,UAAA,SAAA,UAAA,CAAA,IAAAG,EAAA,KACEM,GAAa,UAAA,CAEX,GADAN,EAAK,eAAc,EACf,CAACA,EAAK,UAAW,CACnBA,EAAK,UAAY,GAEjB,QADQY,EAAcZ,EAAI,UACnBY,EAAU,QACfA,EAAU,MAAK,EAAI,SAAQ,EAGjC,CAAC,CACH,EAEAf,EAAA,UAAA,YAAA,UAAA,CACE,KAAK,UAAY,KAAK,OAAS,GAC/B,KAAK,UAAY,KAAK,iBAAmB,IAC3C,EAEA,OAAA,eAAIA,EAAA,UAAA,WAAQ,KAAZ,UAAA,OACE,QAAOgB,EAAA,KAAK,aAAS,MAAAA,IAAA,OAAA,OAAAA,EAAE,QAAS,CAClC,kCAGUhB,EAAA,UAAA,cAAV,SAAwBiB,EAAyB,CAC/C,YAAK,eAAc,EACZhB,EAAA,UAAM,cAAa,KAAA,KAACgB,CAAU,CACvC,EAGUjB,EAAA,UAAA,WAAV,SAAqBiB,EAAyB,CAC5C,YAAK,eAAc,EACnB,KAAK,wBAAwBA,CAAU,EAChC,KAAK,gBAAgBA,CAAU,CACxC,EAGUjB,EAAA,UAAA,gBAAV,SAA0BiB,EAA2B,CAArD,IAAAd,EAAA,KACQa,EAAqC,KAAnCE,EAAQF,EAAA,SAAEG,EAASH,EAAA,UAAED,EAASC,EAAA,UACtC,OAAIE,GAAYC,EACPC,IAET,KAAK,iBAAmB,KACxBL,EAAU,KAAKE,CAAU,EAClB,IAAII,GAAa,UAAA,CACtBlB,EAAK,iBAAmB,KACxBmB,GAAUP,EAAWE,CAAU,CACjC,CAAC,EACH,EAGUjB,EAAA,UAAA,wBAAV,SAAkCiB,EAA2B,CACrD,IAAAD,EAAuC,KAArCE,EAAQF,EAAA,SAAEO,EAAWP,EAAA,YAAEG,EAASH,EAAA,UACpCE,EACFD,EAAW,MAAMM,CAAW,EACnBJ,GACTF,EAAW,SAAQ,CAEvB,EAQAjB,EAAA,UAAA,aAAA,UAAA,CACE,IAAMwB,EAAkB,IAAIC,EAC5B,OAAAD,EAAW,OAAS,KACbA,CACT,EAxHOxB,EAAA,OAAkC,SAAI0B,EAA0BC,EAAqB,CAC1F,OAAO,IAAIrB,GAAoBoB,EAAaC,CAAM,CACpD,EAuHF3B,GA7IgCyB,CAAU,EAkJ1C,IAAAG,GAAA,SAAAC,EAAA,CAAyCC,GAAAF,EAAAC,CAAA,EACvC,SAAAD,EAESG,EACPC,EAAsB,CAHxB,IAAAC,EAKEJ,EAAA,KAAA,IAAA,GAAO,KAHA,OAAAI,EAAA,YAAAF,EAIPE,EAAK,OAASD,GAChB,CAEA,OAAAJ,EAAA,UAAA,KAAA,SAAKM,EAAQ,UACXC,GAAAC,EAAA,KAAK,eAAW,MAAAA,IAAA,OAAA,OAAAA,EAAE,QAAI,MAAAD,IAAA,QAAAA,EAAA,KAAAC,EAAGF,CAAK,CAChC,EAEAN,EAAA,UAAA,MAAA,SAAMS,EAAQ,UACZF,GAAAC,EAAA,KAAK,eAAW,MAAAA,IAAA,OAAA,OAAAA,EAAE,SAAK,MAAAD,IAAA,QAAAA,EAAA,KAAAC,EAAGC,CAAG,CAC/B,EAEAT,EAAA,UAAA,SAAA,UAAA,UACEO,GAAAC,EAAA,KAAK,eAAW,MAAAA,IAAA,OAAA,OAAAA,EAAE,YAAQ,MAAAD,IAAA,QAAAA,EAAA,KAAAC,CAAA,CAC5B,EAGUR,EAAA,UAAA,WAAV,SAAqBU,EAAyB,SAC5C,OAAOH,GAAAC,EAAA,KAAK,UAAM,MAAAA,IAAA,OAAA,OAAAA,EAAE,UAAUE,CAAU,KAAC,MAAAH,IAAA,OAAAA,EAAII,EAC/C,EACFX,CAAA,EA1ByCY,CAAO,EC5JzC,IAAMC,GAA+C,CAC1D,IAAG,UAAA,CAGD,OAAQA,GAAsB,UAAY,MAAM,IAAG,CACrD,EACA,SAAU,QCwBZ,IAAAC,GAAA,SAAAC,EAAA,CAAsCC,GAAAF,EAAAC,CAAA,EAUpC,SAAAD,EACUG,EACAC,EACAC,EAA6D,CAF7DF,IAAA,SAAAA,
EAAA,KACAC,IAAA,SAAAA,EAAA,KACAC,IAAA,SAAAA,EAAAC,IAHV,IAAAC,EAKEN,EAAA,KAAA,IAAA,GAAO,KAJC,OAAAM,EAAA,YAAAJ,EACAI,EAAA,YAAAH,EACAG,EAAA,mBAAAF,EAZFE,EAAA,QAA0B,CAAA,EAC1BA,EAAA,oBAAsB,GAc5BA,EAAK,oBAAsBH,IAAgB,IAC3CG,EAAK,YAAc,KAAK,IAAI,EAAGJ,CAAW,EAC1CI,EAAK,YAAc,KAAK,IAAI,EAAGH,CAAW,GAC5C,CAEA,OAAAJ,EAAA,UAAA,KAAA,SAAKQ,EAAQ,CACL,IAAAC,EAA+E,KAA7EC,EAASD,EAAA,UAAEE,EAAOF,EAAA,QAAEG,EAAmBH,EAAA,oBAAEJ,EAAkBI,EAAA,mBAAEL,EAAWK,EAAA,YAC3EC,IACHC,EAAQ,KAAKH,CAAK,EAClB,CAACI,GAAuBD,EAAQ,KAAKN,EAAmB,IAAG,EAAKD,CAAW,GAE7E,KAAK,YAAW,EAChBH,EAAA,UAAM,KAAI,KAAA,KAACO,CAAK,CAClB,EAGUR,EAAA,UAAA,WAAV,SAAqBa,EAAyB,CAC5C,KAAK,eAAc,EACnB,KAAK,YAAW,EAQhB,QANMC,EAAe,KAAK,gBAAgBD,CAAU,EAE9CJ,EAAmC,KAAjCG,EAAmBH,EAAA,oBAAEE,EAAOF,EAAA,QAG9BM,EAAOJ,EAAQ,MAAK,EACjBK,EAAI,EAAGA,EAAID,EAAK,QAAU,CAACF,EAAW,OAAQG,GAAKJ,EAAsB,EAAI,EACpFC,EAAW,KAAKE,EAAKC,EAAO,EAG9B,YAAK,wBAAwBH,CAAU,EAEhCC,CACT,EAEQd,EAAA,UAAA,YAAR,UAAA,CACQ,IAAAS,EAAoE,KAAlEN,EAAWM,EAAA,YAAEJ,EAAkBI,EAAA,mBAAEE,EAAOF,EAAA,QAAEG,EAAmBH,EAAA,oBAK/DQ,GAAsBL,EAAsB,EAAI,GAAKT,EAK3D,GAJAA,EAAc,KAAYc,EAAqBN,EAAQ,QAAUA,EAAQ,OAAO,EAAGA,EAAQ,OAASM,CAAkB,EAIlH,CAACL,EAAqB,CAKxB,QAJMM,EAAMb,EAAmB,IAAG,EAC9Bc,EAAO,EAGFH,EAAI,EAAGA,EAAIL,EAAQ,QAAWA,EAAQK,IAAiBE,EAAKF,GAAK,EACxEG,EAAOH,EAETG,GAAQR,EAAQ,OAAO,EAAGQ,EAAO,CAAC,EAEtC,EACFnB,CAAA,EAzEsCoB,CAAO,EClB7C,IAAAC,GAAA,SAAAC,EAAA,CAA+BC,GAAAF,EAAAC,CAAA,EAC7B,SAAAD,EAAYG,EAAsBC,EAAmD,QACnFH,EAAA,KAAA,IAAA,GAAO,IACT,CAWO,OAAAD,EAAA,UAAA,SAAP,SAAgBK,EAAWC,EAAiB,CAAjB,OAAAA,IAAA,SAAAA,EAAA,GAClB,IACT,EACFN,CAAA,EAjB+BO,EAAY,ECHpC,IAAMC,GAAqC,CAGhD,YAAA,SAAYC,EAAqBC,EAAgB,SAAEC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,EAAA,GAAA,UAAAA,GACzC,IAAAC,EAAaL,GAAgB,SACrC,OAAIK,GAAQ,MAARA,EAAU,YACLA,EAAS,YAAW,MAApBA,EAAQC,EAAA,CAAaL,EAASC,CAAO,EAAAK,EAAKJ,CAAI,CAAA,CAAA,EAEhD,YAAW,MAAA,OAAAG,EAAA,CAACL,EAASC,CAAO,EAAAK,EAAKJ,CAAI,CAAA,CAAA,CAC9C,EACA,cAAA,SAAcK,EAAM,CACV,IAAAH,EAAaL,GAAgB,SACrC,QAAQK,GAAQ,KAAA,OAARA,EAAU,gBAAiB,eAAeG,CAAa,CACjE,EACA,SAAU,QCrBZ,IAAAC,GAAA,SAAAC,EAAA,CAAoCC,GAAAF,EAAAC,CAAA,EAOlC,SAAAD,EAAsBG,EAAqCC,EAAmD,CAA9G,IAAAC,EACEJ,EAAA,KAAA,KAAME,EAAWC,CAAI,GAAC,KADF,OAAAC,EAAA,UAAAF,EAAqCE,EAAA,KAAAD,EAFjDC,EAAA,QAAmB,IAI7B,CAEO,OAAAL,EAAA,UAAA,SAAP,SAAgBM,EAAWC,EAAiB,OAC1C,GADyBA,IAAA,SAAAA,EAAA,GACrB,KAAK,OACP,OAAO,KAIT,KAAK,MAAQD,EAEb,IAAME,EAAK,KAAK,GACVL,EAAY,KAAK,UAuBvB,OAAIK,GAAM,OACR,KAAK,GAAK,KAAK,eAAeL,EAAWK,EAAID,CAAK,GAKpD,KAAK,QAAU,GAEf,KAAK,MAAQA,EAEb,KAAK,IAAKE,EAAA,KAAK,MAAE,MAAAA,IAAA,OAAAA,EAAI,KAAK,eAAeN,EAAW,KAAK,GAAII,CAAK,EAE3D,IACT,EAEUP,EAAA,UAAA,eAAV,SAAyBG,EAA2BO,EAAmBH,EAAiB,CAAjB,OAAAA,IAAA,SAAAA,EAAA,GAC9DI,GAAiB,YAAYR,EAAU,MAAM,KAAKA,EAAW,IAAI,EAAGI,CAAK,CAClF,EAEUP,EAAA,UAAA,eAAV,SAAyBY,EAA4BJ,EAAkBD,EAAwB,CAE7F,GAFqEA,IAAA,SAAAA,EAAA,GAEjEA,GAAS,MAAQ,KAAK,QAAUA,GAAS,KAAK,UAAY,GAC5D,OAAOC,EAILA,GAAM,MACRG,GAAiB,cAAcH,CAAE,CAIrC,EAMOR,EAAA,UAAA,QAAP,SAAeM,EAAUC,EAAa,CACpC,GAAI,KAAK,OACP,OAAO,IAAI,MAAM,8BAA8B,EAGjD,KAAK,QAAU,GACf,IAAMM,EAAQ,KAAK,SAASP,EAAOC,CAAK,EACxC,GAAIM,EACF,OAAOA,EACE,KAAK,UAAY,IAAS,KAAK,IAAM,OAc9C,KAAK,GAAK,KAAK,eAAe,KAAK,UAAW,KAAK,GAAI,IAAI,EAE/D,EAEUb,EAAA,UAAA,SAAV,SAAmBM,EAAUQ,EAAc,CACzC,IAAIC,EAAmB,GACnBC,EACJ,GAAI,CACF,KAAK,KAAKV,CAAK,QACRW,EAAP,CACAF,EAAU,GAIVC,EAAaC,GAAQ,IAAI,MAAM,oCAAoC,EAErE,GAAIF,EACF,YAAK,YAAW,EACTC,CAEX,EAEAhB,EAAA,UAAA,YAAA,UAAA,CACE,GAAI,CAAC,KAAK,OAAQ,CACV,IAAAS,EAAoB,KAAlBD,EAAEC,EAAA,GAAEN,EAASM,EAAA,UACbS,EAAYf,EAAS,QAE7B,KAAK,KAAO,KAAK,MAAQ,KAAK,UAAY,KAC1C,KAAK,QAAU,GAEfgB,GAAUD,EAAS,IAAI,EACnBV,GAAM,OACR,KAAK,GAAK,KAAK,eAAeL,EAAWK,EAAI,IAAI,GAGnD,KAAK,MAAQ,KACbP,EAAA,UAAM,YAAW,KAAA,IAAA,EAErB,
EACFD,CAAA,EA9IoCoB,EAAM,ECgB1C,IAAAC,GAAA,UAAA,CAGE,SAAAA,EAAoBC,EAAoCC,EAAiC,CAAjCA,IAAA,SAAAA,EAAoBF,EAAU,KAAlE,KAAA,oBAAAC,EAClB,KAAK,IAAMC,CACb,CA6BO,OAAAF,EAAA,UAAA,SAAP,SAAmBG,EAAqDC,EAAmBC,EAAS,CAA5B,OAAAD,IAAA,SAAAA,EAAA,GAC/D,IAAI,KAAK,oBAAuB,KAAMD,CAAI,EAAE,SAASE,EAAOD,CAAK,CAC1E,EAnCcJ,EAAA,IAAoBM,GAAsB,IAoC1DN,GArCA,ECnBA,IAAAO,GAAA,SAAAC,EAAA,CAAoCC,GAAAF,EAAAC,CAAA,EAkBlC,SAAAD,EAAYG,EAAgCC,EAAiC,CAAjCA,IAAA,SAAAA,EAAoBC,GAAU,KAA1E,IAAAC,EACEL,EAAA,KAAA,KAAME,EAAiBC,CAAG,GAAC,KAlBtB,OAAAE,EAAA,QAAmC,CAAA,EAOnCA,EAAA,QAAmB,IAY1B,CAEO,OAAAN,EAAA,UAAA,MAAP,SAAaO,EAAwB,CAC3B,IAAAC,EAAY,KAAI,QAExB,GAAI,KAAK,QAAS,CAChBA,EAAQ,KAAKD,CAAM,EACnB,OAGF,IAAIE,EACJ,KAAK,QAAU,GAEf,EACE,IAAKA,EAAQF,EAAO,QAAQA,EAAO,MAAOA,EAAO,KAAK,EACpD,YAEMA,EAASC,EAAQ,MAAK,GAIhC,GAFA,KAAK,QAAU,GAEXC,EAAO,CACT,KAAQF,EAASC,EAAQ,MAAK,GAC5BD,EAAO,YAAW,EAEpB,MAAME,EAEV,EACFT,CAAA,EAhDoCK,EAAS,EC6CtC,IAAMK,GAAiB,IAAIC,GAAeC,EAAW,EAK/CC,GAAQH,GCjDrB,IAAAI,GAAA,SAAAC,EAAA,CAA6CC,GAAAF,EAAAC,CAAA,EAC3C,SAAAD,EAAsBG,EAA8CC,EAAmD,CAAvH,IAAAC,EACEJ,EAAA,KAAA,KAAME,EAAWC,CAAI,GAAC,KADF,OAAAC,EAAA,UAAAF,EAA8CE,EAAA,KAAAD,GAEpE,CAEU,OAAAJ,EAAA,UAAA,eAAV,SAAyBG,EAAoCG,EAAkBC,EAAiB,CAE9F,OAF6EA,IAAA,SAAAA,EAAA,GAEzEA,IAAU,MAAQA,EAAQ,EACrBN,EAAA,UAAM,eAAc,KAAA,KAACE,EAAWG,EAAIC,CAAK,GAGlDJ,EAAU,QAAQ,KAAK,IAAI,EAIpBA,EAAU,aAAeA,EAAU,WAAaK,GAAuB,sBAAsB,UAAA,CAAM,OAAAL,EAAU,MAAM,MAAS,CAAzB,CAA0B,GACtI,EAEUH,EAAA,UAAA,eAAV,SAAyBG,EAAoCG,EAAkBC,EAAiB,OAI9F,GAJ6EA,IAAA,SAAAA,EAAA,GAIzEA,GAAS,KAAOA,EAAQ,EAAI,KAAK,MAAQ,EAC3C,OAAON,EAAA,UAAM,eAAc,KAAA,KAACE,EAAWG,EAAIC,CAAK,EAK1C,IAAAE,EAAYN,EAAS,QACzBG,GAAM,QAAQI,EAAAD,EAAQA,EAAQ,OAAS,MAAE,MAAAC,IAAA,OAAA,OAAAA,EAAE,MAAOJ,IACpDE,GAAuB,qBAAqBF,CAAY,EACxDH,EAAU,WAAa,OAI3B,EACFH,CAAA,EApC6CW,EAAW,ECHxD,IAAAC,GAAA,SAAAC,EAAA,CAA6CC,GAAAF,EAAAC,CAAA,EAA7C,SAAAD,GAAA,+CAkCA,CAjCS,OAAAA,EAAA,UAAA,MAAP,SAAaG,EAAyB,CACpC,KAAK,QAAU,GAUf,IAAMC,EAAU,KAAK,WACrB,KAAK,WAAa,OAEV,IAAAC,EAAY,KAAI,QACpBC,EACJH,EAASA,GAAUE,EAAQ,MAAK,EAEhC,EACE,IAAKC,EAAQH,EAAO,QAAQA,EAAO,MAAOA,EAAO,KAAK,EACpD,aAEMA,EAASE,EAAQ,KAAOF,EAAO,KAAOC,GAAWC,EAAQ,MAAK,GAIxE,GAFA,KAAK,QAAU,GAEXC,EAAO,CACT,MAAQH,EAASE,EAAQ,KAAOF,EAAO,KAAOC,GAAWC,EAAQ,MAAK,GACpEF,EAAO,YAAW,EAEpB,MAAMG,EAEV,EACFN,CAAA,EAlC6CO,EAAc,ECgCpD,IAAMC,GAA0B,IAAIC,GAAwBC,EAAoB,EC8BhF,IAAMC,EAAQ,IAAIC,EAAkB,SAACC,EAAU,CAAK,OAAAA,EAAW,SAAQ,CAAnB,CAAqB,EC9D1E,SAAUC,GAAYC,EAAU,CACpC,OAAOA,GAASC,EAAWD,EAAM,QAAQ,CAC3C,CCDA,SAASE,GAAQC,EAAQ,CACvB,OAAOA,EAAIA,EAAI,OAAS,EAC1B,CAEM,SAAUC,GAAkBC,EAAW,CAC3C,OAAOC,EAAWJ,GAAKG,CAAI,CAAC,EAAIA,EAAK,IAAG,EAAK,MAC/C,CAEM,SAAUE,GAAaF,EAAW,CACtC,OAAOG,GAAYN,GAAKG,CAAI,CAAC,EAAIA,EAAK,IAAG,EAAK,MAChD,CAEM,SAAUI,GAAUJ,EAAaK,EAAoB,CACzD,OAAO,OAAOR,GAAKG,CAAI,GAAM,SAAWA,EAAK,IAAG,EAAMK,CACxD,CClBO,IAAMC,GAAe,SAAIC,EAAM,CAAwB,OAAAA,GAAK,OAAOA,EAAE,QAAW,UAAY,OAAOA,GAAM,UAAlD,ECMxD,SAAUC,GAAUC,EAAU,CAClC,OAAOC,EAAWD,GAAK,KAAA,OAALA,EAAO,IAAI,CAC/B,CCHM,SAAUE,GAAoBC,EAAU,CAC5C,OAAOC,EAAWD,EAAME,GAAkB,CAC5C,CCLM,SAAUC,GAAmBC,EAAQ,CACzC,OAAO,OAAO,eAAiBC,EAAWD,GAAG,KAAA,OAAHA,EAAM,OAAO,cAAc,CACvE,CCAM,SAAUE,GAAiCC,EAAU,CAEzD,OAAO,IAAI,UACT,iBACEA,IAAU,MAAQ,OAAOA,GAAU,SAAW,oBAAsB,IAAIA,EAAK,KAAG,0HACwC,CAE9H,CCXM,SAAUC,IAAiB,CAC/B,OAAI,OAAO,QAAW,YAAc,CAAC,OAAO,SACnC,aAGF,OAAO,QAChB,CAEO,IAAMC,GAAWD,GAAiB,ECJnC,SAAUE,GAAWC,EAAU,CACnC,OAAOC,EAAWD,GAAK,KAAA,OAALA,EAAQE,GAAgB,CAC5C,CCHM,SAAiBC,GAAsCC,EAAqC,mGAC1FC,EAASD,EAAe,UAAS,2DAGX,MAAA,CAAA,EAAAE,GAAMD,EAAO,KAAI,CAAE,CAAA,gBAArCE,EAAkBC,EAAA,KAAA,EAAhBC,EAAKF,EAAA,MAAEG,EAAIH,EAAA,KACfG,iBAAA,CAAA,EAAA,CAAA,SACF,MAAA,CAAA,EAAAF,EAAA,KAAA,CAAA,qBAEIC,CAAM,CAAA,SAAZ,MAAA,CAAA,EAAAD,EAAA,KA
AA,CAAA,SAAA,OAAAA,EAAA,KAAA,mCAGF,OAAAH,EAAO,YAAW,6BAIhB,SAAUM,GAAwBC,EAAQ,CAG9C,OAAOC,EAAWD,GAAG,KAAA,OAAHA,EAAK,SAAS,CAClC,CCPM,SAAUE,EAAaC,EAAyB,CACpD,GAAIA,aAAiBC,EACnB,OAAOD,EAET,GAAIA,GAAS,KAAM,CACjB,GAAIE,GAAoBF,CAAK,EAC3B,OAAOG,GAAsBH,CAAK,EAEpC,GAAII,GAAYJ,CAAK,EACnB,OAAOK,GAAcL,CAAK,EAE5B,GAAIM,GAAUN,CAAK,EACjB,OAAOO,GAAYP,CAAK,EAE1B,GAAIQ,GAAgBR,CAAK,EACvB,OAAOS,GAAkBT,CAAK,EAEhC,GAAIU,GAAWV,CAAK,EAClB,OAAOW,GAAaX,CAAK,EAE3B,GAAIY,GAAqBZ,CAAK,EAC5B,OAAOa,GAAuBb,CAAK,EAIvC,MAAMc,GAAiCd,CAAK,CAC9C,CAMM,SAAUG,GAAyBY,EAAQ,CAC/C,OAAO,IAAId,EAAW,SAACe,EAAyB,CAC9C,IAAMC,EAAMF,EAAIG,IAAkB,EAClC,GAAIC,EAAWF,EAAI,SAAS,EAC1B,OAAOA,EAAI,UAAUD,CAAU,EAGjC,MAAM,IAAI,UAAU,gEAAgE,CACtF,CAAC,CACH,CASM,SAAUX,GAAiBe,EAAmB,CAClD,OAAO,IAAInB,EAAW,SAACe,EAAyB,CAU9C,QAASK,EAAI,EAAGA,EAAID,EAAM,QAAU,CAACJ,EAAW,OAAQK,IACtDL,EAAW,KAAKI,EAAMC,EAAE,EAE1BL,EAAW,SAAQ,CACrB,CAAC,CACH,CAEM,SAAUT,GAAee,EAAuB,CACpD,OAAO,IAAIrB,EAAW,SAACe,EAAyB,CAC9CM,EACG,KACC,SAACC,EAAK,CACCP,EAAW,SACdA,EAAW,KAAKO,CAAK,EACrBP,EAAW,SAAQ,EAEvB,EACA,SAACQ,EAAQ,CAAK,OAAAR,EAAW,MAAMQ,CAAG,CAApB,CAAqB,EAEpC,KAAK,KAAMC,EAAoB,CACpC,CAAC,CACH,CAEM,SAAUd,GAAgBe,EAAqB,CACnD,OAAO,IAAIzB,EAAW,SAACe,EAAyB,aAC9C,QAAoBW,EAAAC,GAAAF,CAAQ,EAAAG,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAAzB,IAAMJ,EAAKM,EAAA,MAEd,GADAb,EAAW,KAAKO,CAAK,EACjBP,EAAW,OACb,yGAGJA,EAAW,SAAQ,CACrB,CAAC,CACH,CAEM,SAAUP,GAAqBqB,EAA+B,CAClE,OAAO,IAAI7B,EAAW,SAACe,EAAyB,CAC9Ce,GAAQD,EAAed,CAAU,EAAE,MAAM,SAACQ,EAAG,CAAK,OAAAR,EAAW,MAAMQ,CAAG,CAApB,CAAqB,CACzE,CAAC,CACH,CAEM,SAAUX,GAA0BmB,EAAqC,CAC7E,OAAOvB,GAAkBwB,GAAmCD,CAAc,CAAC,CAC7E,CAEA,SAAeD,GAAWD,EAAiCd,EAAyB,uIACxDkB,EAAAC,GAAAL,CAAa,gFAIrC,GAJeP,EAAKa,EAAA,MACpBpB,EAAW,KAAKO,CAAK,EAGjBP,EAAW,OACb,MAAA,CAAA,CAAA,6RAGJ,OAAAA,EAAW,SAAQ,WChHf,SAAUqB,GACdC,EACAC,EACAC,EACAC,EACAC,EAAc,CADdD,IAAA,SAAAA,EAAA,GACAC,IAAA,SAAAA,EAAA,IAEA,IAAMC,EAAuBJ,EAAU,SAAS,UAAA,CAC9CC,EAAI,EACAE,EACFJ,EAAmB,IAAI,KAAK,SAAS,KAAMG,CAAK,CAAC,EAEjD,KAAK,YAAW,CAEpB,EAAGA,CAAK,EAIR,GAFAH,EAAmB,IAAIK,CAAoB,EAEvC,CAACD,EAKH,OAAOC,CAEX,CCeM,SAAUC,GAAaC,EAA0BC,EAAS,CAAT,OAAAA,IAAA,SAAAA,EAAA,GAC9CC,EAAQ,SAACC,EAAQC,EAAU,CAChCD,EAAO,UACLE,EACED,EACA,SAACE,EAAK,CAAK,OAAAC,GAAgBH,EAAYJ,EAAW,UAAA,CAAM,OAAAI,EAAW,KAAKE,CAAK,CAArB,EAAwBL,CAAK,CAA1E,EACX,UAAA,CAAM,OAAAM,GAAgBH,EAAYJ,EAAW,UAAA,CAAM,OAAAI,EAAW,SAAQ,CAAnB,EAAuBH,CAAK,CAAzE,EACN,SAACO,EAAG,CAAK,OAAAD,GAAgBH,EAAYJ,EAAW,UAAA,CAAM,OAAAI,EAAW,MAAMI,CAAG,CAApB,EAAuBP,CAAK,CAAzE,CAA0E,CACpF,CAEL,CAAC,CACH,CCPM,SAAUQ,GAAeC,EAA0BC,EAAiB,CAAjB,OAAAA,IAAA,SAAAA,EAAA,GAChDC,EAAQ,SAACC,EAAQC,EAAU,CAChCA,EAAW,IAAIJ,EAAU,SAAS,UAAA,CAAM,OAAAG,EAAO,UAAUC,CAAU,CAA3B,EAA8BH,CAAK,CAAC,CAC9E,CAAC,CACH,CC7DM,SAAUI,GAAsBC,EAA6BC,EAAwB,CACzF,OAAOC,EAAUF,CAAK,EAAE,KAAKG,GAAYF,CAAS,EAAGG,GAAUH,CAAS,CAAC,CAC3E,CCFM,SAAUI,GAAmBC,EAAuBC,EAAwB,CAChF,OAAOC,EAAUF,CAAK,EAAE,KAAKG,GAAYF,CAAS,EAAGG,GAAUH,CAAS,CAAC,CAC3E,CCJM,SAAUI,GAAiBC,EAAqBC,EAAwB,CAC5E,OAAO,IAAIC,EAAc,SAACC,EAAU,CAElC,IAAIC,EAAI,EAER,OAAOH,EAAU,SAAS,UAAA,CACpBG,IAAMJ,EAAM,OAGdG,EAAW,SAAQ,GAInBA,EAAW,KAAKH,EAAMI,IAAI,EAIrBD,EAAW,QACd,KAAK,SAAQ,EAGnB,CAAC,CACH,CAAC,CACH,CCfM,SAAUE,GAAoBC,EAAoBC,EAAwB,CAC9E,OAAO,IAAIC,EAAc,SAACC,EAAU,CAClC,IAAIC,EAKJ,OAAAC,GAAgBF,EAAYF,EAAW,UAAA,CAErCG,EAAYJ,EAAcI,IAAgB,EAE1CC,GACEF,EACAF,EACA,UAAA,OACMK,EACAC,EACJ,GAAI,CAEDC,EAAkBJ,EAAS,KAAI,EAA7BE,EAAKE,EAAA,MAAED,EAAIC,EAAA,WACPC,EAAP,CAEAN,EAAW,MAAMM,CAAG,EACpB,OAGEF,EAKFJ,EAAW,SAAQ,EAGnBA,EAAW,KAAKG,CAAK,CAEzB,EACA,EACA,EAAI,CAER,CAAC,EAMM,UAAA,CAAM,OAAAI,EAAWN,GAAQ,KAAA,OAARA,EAAU,MAAM,GAAKA,EAAS,OAAM,CAA/C,CACf,CAAC,CACH,CCvDM,SAAUO,GAAyBC,EAAyBC,EAAwB
,CACxF,GAAI,CAACD,EACH,MAAM,IAAI,MAAM,yBAAyB,EAE3C,OAAO,IAAIE,EAAc,SAACC,EAAU,CAClCC,GAAgBD,EAAYF,EAAW,UAAA,CACrC,IAAMI,EAAWL,EAAM,OAAO,eAAc,EAC5CI,GACED,EACAF,EACA,UAAA,CACEI,EAAS,KAAI,EAAG,KAAK,SAACC,EAAM,CACtBA,EAAO,KAGTH,EAAW,SAAQ,EAEnBA,EAAW,KAAKG,EAAO,KAAK,CAEhC,CAAC,CACH,EACA,EACA,EAAI,CAER,CAAC,CACH,CAAC,CACH,CCzBM,SAAUC,GAA8BC,EAA8BC,EAAwB,CAClG,OAAOC,GAAsBC,GAAmCH,CAAK,EAAGC,CAAS,CACnF,CCoBM,SAAUG,GAAaC,EAA2BC,EAAwB,CAC9E,GAAID,GAAS,KAAM,CACjB,GAAIE,GAAoBF,CAAK,EAC3B,OAAOG,GAAmBH,EAAOC,CAAS,EAE5C,GAAIG,GAAYJ,CAAK,EACnB,OAAOK,GAAcL,EAAOC,CAAS,EAEvC,GAAIK,GAAUN,CAAK,EACjB,OAAOO,GAAgBP,EAAOC,CAAS,EAEzC,GAAIO,GAAgBR,CAAK,EACvB,OAAOS,GAAsBT,EAAOC,CAAS,EAE/C,GAAIS,GAAWV,CAAK,EAClB,OAAOW,GAAiBX,EAAOC,CAAS,EAE1C,GAAIW,GAAqBZ,CAAK,EAC5B,OAAOa,GAA2Bb,EAAOC,CAAS,EAGtD,MAAMa,GAAiCd,CAAK,CAC9C,CCoDM,SAAUe,GAAQC,EAA2BC,EAAyB,CAC1E,OAAOA,EAAYC,GAAUF,EAAOC,CAAS,EAAIE,EAAUH,CAAK,CAClE,CCxBM,SAAUI,GAAE,SAAIC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACpB,IAAMC,EAAYC,GAAaH,CAAI,EACnC,OAAOI,GAAKJ,EAAaE,CAAS,CACpC,CCsCM,SAAUG,GAAWC,EAA0BC,EAAyB,CAC5E,IAAMC,EAAeC,EAAWH,CAAmB,EAAIA,EAAsB,UAAA,CAAM,OAAAA,CAAA,EAC7EI,EAAO,SAACC,EAA6B,CAAK,OAAAA,EAAW,MAAMH,EAAY,CAAE,CAA/B,EAChD,OAAO,IAAII,EAAWL,EAAY,SAACI,EAAU,CAAK,OAAAJ,EAAU,SAASG,EAAa,EAAGC,CAAU,CAA7C,EAAiDD,CAAI,CACzG,CCrHM,SAAUG,GAAYC,EAAU,CACpC,OAAOA,aAAiB,MAAQ,CAAC,MAAMA,CAAY,CACrD,CCsCM,SAAUC,EAAUC,EAAyCC,EAAa,CAC9E,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAEhC,IAAIC,EAAQ,EAGZF,EAAO,UACLG,EAAyBF,EAAY,SAACG,EAAQ,CAG5CH,EAAW,KAAKJ,EAAQ,KAAKC,EAASM,EAAOF,GAAO,CAAC,CACvD,CAAC,CAAC,CAEN,CAAC,CACH,CC1DQ,IAAAG,GAAY,MAAK,QAEzB,SAASC,GAAkBC,EAA6BC,EAAW,CAC/D,OAAOH,GAAQG,CAAI,EAAID,EAAE,MAAA,OAAAE,EAAA,CAAA,EAAAC,EAAIF,CAAI,CAAA,CAAA,EAAID,EAAGC,CAAI,CAChD,CAMM,SAAUG,GAAuBJ,EAA2B,CAC9D,OAAOK,EAAI,SAAAJ,EAAI,CAAI,OAAAF,GAAYC,EAAIC,CAAI,CAApB,CAAqB,CAC5C,CCfQ,IAAAK,GAAY,MAAK,QACjBC,GAA0D,OAAM,eAArCC,GAA+B,OAAM,UAAlBC,GAAY,OAAM,KAQlE,SAAUC,GAAqDC,EAAuB,CAC1F,GAAIA,EAAK,SAAW,EAAG,CACrB,IAAMC,EAAQD,EAAK,GACnB,GAAIL,GAAQM,CAAK,EACf,MAAO,CAAE,KAAMA,EAAO,KAAM,IAAI,EAElC,GAAIC,GAAOD,CAAK,EAAG,CACjB,IAAME,EAAOL,GAAQG,CAAK,EAC1B,MAAO,CACL,KAAME,EAAK,IAAI,SAACC,EAAG,CAAK,OAAAH,EAAMG,EAAN,CAAU,EAClC,KAAID,IAKV,MAAO,CAAE,KAAMH,EAAa,KAAM,IAAI,CACxC,CAEA,SAASE,GAAOG,EAAQ,CACtB,OAAOA,GAAO,OAAOA,GAAQ,UAAYT,GAAeS,CAAG,IAAMR,EACnE,CC7BM,SAAUS,GAAaC,EAAgBC,EAAa,CACxD,OAAOD,EAAK,OAAO,SAACE,EAAQC,EAAKC,EAAC,CAAK,OAAEF,EAAOC,GAAOF,EAAOG,GAAKF,CAA5B,EAAqC,CAAA,CAAS,CACvF,CCsMM,SAAUG,GAAa,SAAoCC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAC/D,IAAMC,EAAYC,GAAaH,CAAI,EAC7BI,EAAiBC,GAAkBL,CAAI,EAEvCM,EAA8BC,GAAqBP,CAAI,EAA/CQ,EAAWF,EAAA,KAAEG,EAAIH,EAAA,KAE/B,GAAIE,EAAY,SAAW,EAIzB,OAAOE,GAAK,CAAA,EAAIR,CAAgB,EAGlC,IAAMS,EAAS,IAAIC,EACjBC,GACEL,EACAN,EACAO,EAEI,SAACK,EAAM,CAAK,OAAAC,GAAaN,EAAMK,CAAM,CAAzB,EAEZE,EAAQ,CACb,EAGH,OAAOZ,EAAkBO,EAAO,KAAKM,GAAiBb,CAAc,CAAC,EAAsBO,CAC7F,CAEM,SAAUE,GACdL,EACAN,EACAgB,EAAiD,CAAjD,OAAAA,IAAA,SAAAA,EAAAF,IAEO,SAACG,EAA2B,CAGjCC,GACElB,EACA,UAAA,CAaE,QAZQmB,EAAWb,EAAW,OAExBM,EAAS,IAAI,MAAMO,CAAM,EAG3BC,EAASD,EAITE,EAAuBF,aAGlBG,EAAC,CACRJ,GACElB,EACA,UAAA,CACE,IAAMuB,EAASf,GAAKF,EAAYgB,GAAItB,CAAgB,EAChDwB,EAAgB,GACpBD,EAAO,UACLE,EACER,EACA,SAACS,EAAK,CAEJd,EAAOU,GAAKI,EACPF,IAEHA,EAAgB,GAChBH,KAEGA,GAGHJ,EAAW,KAAKD,EAAeJ,EAAO,MAAK,CAAE,CAAC,CAElD,EACA,UAAA,CACO,EAAEQ,GAGLH,EAAW,SAAQ,CAEvB,CAAC,CACF,CAEL,EACAA,CAAU,GAjCLK,EAAI,EAAGA,EAAIH,EAAQG,MAAnBA,CAAC,CAoCZ,EACAL,CAAU,CAEd,CACF,CAMA,SAASC,GAAclB,EAAsC2B,EAAqBC,EAA0B,CACtG5B,EACF6B,GAAgBD,EAAc5B,EAAW2B,CAAO,EAEhDA,EAAO,CAEX,CC3RM,SAAUG,GACdC,EACAC,EACAC,EACAC,E
ACAC,EACAC,EACAC,EACAC,EAAgC,CAGhC,IAAMC,EAAc,CAAA,EAEhBC,EAAS,EAETC,EAAQ,EAERC,EAAa,GAKXC,EAAgB,UAAA,CAIhBD,GAAc,CAACH,EAAO,QAAU,CAACC,GACnCR,EAAW,SAAQ,CAEvB,EAGMY,EAAY,SAACC,EAAQ,CAAK,OAACL,EAASN,EAAaY,EAAWD,CAAK,EAAIN,EAAO,KAAKM,CAAK,CAA5D,EAE1BC,EAAa,SAACD,EAAQ,CAI1BT,GAAUJ,EAAW,KAAKa,CAAY,EAItCL,IAKA,IAAIO,EAAgB,GAGpBC,EAAUf,EAAQY,EAAOJ,GAAO,CAAC,EAAE,UACjCQ,EACEjB,EACA,SAACkB,EAAU,CAGTf,GAAY,MAAZA,EAAee,CAAU,EAErBd,EAGFQ,EAAUM,CAAiB,EAG3BlB,EAAW,KAAKkB,CAAU,CAE9B,EACA,UAAA,CAGEH,EAAgB,EAClB,EAEA,OACA,UAAA,CAIE,GAAIA,EAKF,GAAI,CAIFP,IAKA,qBACE,IAAMW,EAAgBZ,EAAO,MAAK,EAI9BF,EACFe,GAAgBpB,EAAYK,EAAmB,UAAA,CAAM,OAAAS,EAAWK,CAAa,CAAxB,CAAyB,EAE9EL,EAAWK,CAAa,GARrBZ,EAAO,QAAUC,EAASN,OAYjCS,EAAa,QACNU,EAAP,CACArB,EAAW,MAAMqB,CAAG,EAG1B,CAAC,CACF,CAEL,EAGA,OAAAtB,EAAO,UACLkB,EAAyBjB,EAAYY,EAAW,UAAA,CAE9CF,EAAa,GACbC,EAAa,CACf,CAAC,CAAC,EAKG,UAAA,CACLL,GAAmB,MAAnBA,EAAmB,CACrB,CACF,CClEM,SAAUgB,GACdC,EACAC,EACAC,EAA6B,CAE7B,OAFAA,IAAA,SAAAA,EAAA,KAEIC,EAAWF,CAAc,EAEpBF,GAAS,SAACK,EAAGC,EAAC,CAAK,OAAAC,EAAI,SAACC,EAAQC,EAAU,CAAK,OAAAP,EAAeG,EAAGG,EAAGF,EAAGG,CAAE,CAA1B,CAA2B,EAAEC,EAAUT,EAAQI,EAAGC,CAAC,CAAC,CAAC,CAAjF,EAAoFH,CAAU,GAC/G,OAAOD,GAAmB,WACnCC,EAAaD,GAGRS,EAAQ,SAACC,EAAQC,EAAU,CAAK,OAAAC,GAAeF,EAAQC,EAAYZ,EAASE,CAAU,CAAtD,CAAuD,EAChG,CChCM,SAAUY,GAAyCC,EAA6B,CAA7B,OAAAA,IAAA,SAAAA,EAAA,KAChDC,GAASC,GAAUF,CAAU,CACtC,CCNM,SAAUG,IAAS,CACvB,OAAOC,GAAS,CAAC,CACnB,CCmDM,SAAUC,IAAM,SAACC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACrB,OAAOC,GAAS,EAAGC,GAAKH,EAAMI,GAAaJ,CAAI,CAAC,CAAC,CACnD,CC9DM,SAAUK,EAAsCC,EAA0B,CAC9E,OAAO,IAAIC,EAA+B,SAACC,EAAU,CACnDC,EAAUH,EAAiB,CAAE,EAAE,UAAUE,CAAU,CACrD,CAAC,CACH,CChDA,IAAME,GAA0B,CAAC,cAAe,gBAAgB,EAC1DC,GAAqB,CAAC,mBAAoB,qBAAqB,EAC/DC,GAAgB,CAAC,KAAM,KAAK,EA8N5B,SAAUC,EACdC,EACAC,EACAC,EACAC,EAAsC,CAMtC,GAJIC,EAAWF,CAAO,IACpBC,EAAiBD,EACjBA,EAAU,QAERC,EACF,OAAOJ,EAAaC,EAAQC,EAAWC,CAA+B,EAAE,KAAKG,GAAiBF,CAAc,CAAC,EAUzG,IAAAG,EAAAC,EAEJC,GAAcR,CAAM,EAChBH,GAAmB,IAAI,SAACY,EAAU,CAAK,OAAA,SAACC,EAAY,CAAK,OAAAV,EAAOS,GAAYR,EAAWS,EAASR,CAA+B,CAAtE,CAAlB,CAAyF,EAElIS,GAAwBX,CAAM,EAC5BJ,GAAwB,IAAIgB,GAAwBZ,EAAQC,CAAS,CAAC,EACtEY,GAA0Bb,CAAM,EAChCF,GAAc,IAAIc,GAAwBZ,EAAQC,CAAS,CAAC,EAC5D,CAAA,EAAE,CAAA,EATDa,EAAGR,EAAA,GAAES,EAAMT,EAAA,GAgBlB,GAAI,CAACQ,GACCE,GAAYhB,CAAM,EACpB,OAAOiB,GAAS,SAACC,EAAc,CAAK,OAAAnB,EAAUmB,EAAWjB,EAAWC,CAA+B,CAA/D,CAAgE,EAClGiB,EAAUnB,CAAM,CAAC,EAOvB,GAAI,CAACc,EACH,MAAM,IAAI,UAAU,sBAAsB,EAG5C,OAAO,IAAIM,EAAc,SAACC,EAAU,CAIlC,IAAMX,EAAU,UAAA,SAACY,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAAmB,OAAAF,EAAW,KAAK,EAAIC,EAAK,OAASA,EAAOA,EAAK,EAAE,CAAhD,EAEpC,OAAAR,EAAIJ,CAAO,EAEJ,UAAA,CAAM,OAAAK,EAAQL,CAAO,CAAf,CACf,CAAC,CACH,CASA,SAASE,GAAwBZ,EAAaC,EAAiB,CAC7D,OAAO,SAACQ,EAAkB,CAAK,OAAA,SAACC,EAAY,CAAK,OAAAV,EAAOS,GAAYR,EAAWS,CAAO,CAArC,CAAlB,CACjC,CAOA,SAASC,GAAwBX,EAAW,CAC1C,OAAOI,EAAWJ,EAAO,WAAW,GAAKI,EAAWJ,EAAO,cAAc,CAC3E,CAOA,SAASa,GAA0Bb,EAAW,CAC5C,OAAOI,EAAWJ,EAAO,EAAE,GAAKI,EAAWJ,EAAO,GAAG,CACvD,CAOA,SAASQ,GAAcR,EAAW,CAChC,OAAOI,EAAWJ,EAAO,gBAAgB,GAAKI,EAAWJ,EAAO,mBAAmB,CACrF,CC/LM,SAAUwB,GACdC,EACAC,EACAC,EAAsC,CAEtC,OAAIA,EACKH,GAAoBC,EAAYC,CAAa,EAAE,KAAKE,GAAiBD,CAAc,CAAC,EAGtF,IAAIE,EAAoB,SAACC,EAAU,CACxC,IAAMC,EAAU,UAAA,SAACC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAAc,OAAAH,EAAW,KAAKE,EAAE,SAAW,EAAIA,EAAE,GAAKA,CAAC,CAAzC,EACzBE,EAAWT,EAAWM,CAAO,EACnC,OAAOI,EAAWT,CAAa,EAAI,UAAA,CAAM,OAAAA,EAAcK,EAASG,CAAQ,CAA/B,EAAmC,MAC9E,CAAC,CACH,CCtBM,SAAUE,GACdC,EACAC,EACAC,EAAyC,CAFzCF,IAAA,SAAAA,EAAA,GAEAE,IAAA,SAAAA,EAAAC,IAIA,IAAIC,EAAmB,GAEvB,OAAIH,GA
AuB,OAIrBI,GAAYJ,CAAmB,EACjCC,EAAYD,EAIZG,EAAmBH,GAIhB,IAAIK,EAAW,SAACC,EAAU,CAI/B,IAAIC,EAAMC,GAAYT,CAAO,EAAI,CAACA,EAAUE,EAAW,IAAG,EAAKF,EAE3DQ,EAAM,IAERA,EAAM,GAIR,IAAIE,EAAI,EAGR,OAAOR,EAAU,SAAS,UAAA,CACnBK,EAAW,SAEdA,EAAW,KAAKG,GAAG,EAEf,GAAKN,EAGP,KAAK,SAAS,OAAWA,CAAgB,EAGzCG,EAAW,SAAQ,EAGzB,EAAGC,CAAG,CACR,CAAC,CACH,CChGM,SAAUG,GAAK,SAACC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACpB,IAAMC,EAAYC,GAAaH,CAAI,EAC7BI,EAAaC,GAAUL,EAAM,GAAQ,EACrCM,EAAUN,EAChB,OAAQM,EAAQ,OAGZA,EAAQ,SAAW,EAEnBC,EAAUD,EAAQ,EAAE,EAEpBE,GAASJ,CAAU,EAAEK,GAAKH,EAASJ,CAAS,CAAC,EAL7CQ,CAMN,CCjEO,IAAMC,GAAQ,IAAIC,EAAkBC,EAAI,ECpCvC,IAAAC,GAAY,MAAK,QAMnB,SAAUC,GAAkBC,EAAiB,CACjD,OAAOA,EAAK,SAAW,GAAKF,GAAQE,EAAK,EAAE,EAAIA,EAAK,GAAMA,CAC5D,CCoDM,SAAUC,EAAUC,EAAiDC,EAAa,CACtF,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAEhC,IAAIC,EAAQ,EAIZF,EAAO,UAILG,EAAyBF,EAAY,SAACG,EAAK,CAAK,OAAAP,EAAU,KAAKC,EAASM,EAAOF,GAAO,GAAKD,EAAW,KAAKG,CAAK,CAAhE,CAAiE,CAAC,CAEtH,CAAC,CACH,CCxBM,SAAUC,IAAG,SAACC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAClB,IAAMC,EAAiBC,GAAkBH,CAAI,EAEvCI,EAAUC,GAAeL,CAAI,EAEnC,OAAOI,EAAQ,OACX,IAAIE,EAAsB,SAACC,EAAU,CAGnC,IAAIC,EAAuBJ,EAAQ,IAAI,UAAA,CAAM,MAAA,CAAA,CAAA,CAAE,EAK3CK,EAAYL,EAAQ,IAAI,UAAA,CAAM,MAAA,EAAA,CAAK,EAGvCG,EAAW,IAAI,UAAA,CACbC,EAAUC,EAAY,IACxB,CAAC,EAKD,mBAASC,EAAW,CAClBC,EAAUP,EAAQM,EAAY,EAAE,UAC9BE,EACEL,EACA,SAACM,EAAK,CAKJ,GAJAL,EAAQE,GAAa,KAAKG,CAAK,EAI3BL,EAAQ,MAAM,SAACM,EAAM,CAAK,OAAAA,EAAO,MAAP,CAAa,EAAG,CAC5C,IAAMC,EAAcP,EAAQ,IAAI,SAACM,EAAM,CAAK,OAAAA,EAAO,MAAK,CAAZ,CAAe,EAE3DP,EAAW,KAAKL,EAAiBA,EAAc,MAAA,OAAAc,EAAA,CAAA,EAAAC,EAAIF,CAAM,CAAA,CAAA,EAAIA,CAAM,EAI/DP,EAAQ,KAAK,SAACM,EAAQI,EAAC,CAAK,MAAA,CAACJ,EAAO,QAAUL,EAAUS,EAA5B,CAA8B,GAC5DX,EAAW,SAAQ,EAGzB,EACA,UAAA,CAGEE,EAAUC,GAAe,GAIzB,CAACF,EAAQE,GAAa,QAAUH,EAAW,SAAQ,CACrD,CAAC,CACF,GA9BIG,EAAc,EAAG,CAACH,EAAW,QAAUG,EAAcN,EAAQ,OAAQM,MAArEA,CAAW,EAmCpB,OAAO,UAAA,CACLF,EAAUC,EAAY,IACxB,CACF,CAAC,EACDU,CACN,CC9DM,SAAUC,GAASC,EAAoD,CAC3E,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAW,GACXC,EAAsB,KACtBC,EAA6C,KAC7CC,EAAa,GAEXC,EAAc,UAAA,CAGlB,GAFAF,GAAkB,MAAlBA,EAAoB,YAAW,EAC/BA,EAAqB,KACjBF,EAAU,CACZA,EAAW,GACX,IAAMK,EAAQJ,EACdA,EAAY,KACZF,EAAW,KAAKM,CAAK,EAEvBF,GAAcJ,EAAW,SAAQ,CACnC,EAEMO,EAAkB,UAAA,CACtBJ,EAAqB,KACrBC,GAAcJ,EAAW,SAAQ,CACnC,EAEAD,EAAO,UACLS,EACER,EACA,SAACM,EAAK,CACJL,EAAW,GACXC,EAAYI,EACPH,GACHM,EAAUZ,EAAiBS,CAAK,CAAC,EAAE,UAChCH,EAAqBK,EAAyBR,EAAYK,EAAaE,CAAe,CAAE,CAG/F,EACA,UAAA,CACEH,EAAa,IACZ,CAACH,GAAY,CAACE,GAAsBA,EAAmB,SAAWH,EAAW,SAAQ,CACxF,CAAC,CACF,CAEL,CAAC,CACH,CC3CM,SAAUU,GAAaC,EAAkBC,EAAyC,CAAzC,OAAAA,IAAA,SAAAA,EAAAC,IACtCC,GAAM,UAAA,CAAM,OAAAC,GAAMJ,EAAUC,CAAS,CAAzB,CAA0B,CAC/C,CCEM,SAAUI,GAAeC,EAAoBC,EAAsC,CAAtC,OAAAA,IAAA,SAAAA,EAAA,MAGjDA,EAAmBA,GAAgB,KAAhBA,EAAoBD,EAEhCE,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAiB,CAAA,EACjBC,EAAQ,EAEZH,EAAO,UACLI,EACEH,EACA,SAACI,EAAK,aACAC,EAAuB,KAKvBH,IAAUL,IAAsB,GAClCI,EAAQ,KAAK,CAAA,CAAE,MAIjB,QAAqBK,EAAAC,GAAAN,CAAO,EAAAO,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAAzB,IAAMG,EAAMD,EAAA,MACfC,EAAO,KAAKL,CAAK,EAMbR,GAAca,EAAO,SACvBJ,EAASA,GAAM,KAANA,EAAU,CAAA,EACnBA,EAAO,KAAKI,CAAM,qGAItB,GAAIJ,MAIF,QAAqBK,EAAAH,GAAAF,CAAM,EAAAM,EAAAD,EAAA,KAAA,EAAA,CAAAC,EAAA,KAAAA,EAAAD,EAAA,KAAA,EAAE,CAAxB,IAAMD,EAAME,EAAA,MACfC,GAAUX,EAASQ,CAAM,EACzBT,EAAW,KAAKS,CAAM,oGAG5B,EACA,UAAA,aAGE,QAAqBI,EAAAN,GAAAN,CAAO,EAAAa,EAAAD,EAAA,KAAA,EAAA,CAAAC,EAAA,KAAAA,EAAAD,EAAA,KAAA,EAAE,CAAzB,IAAMJ,EAAMK,EAAA,MACfd,EAAW,KAAKS,CAAM,oGAExBT,EAAW,SAAQ,CACrB,EAEA,OACA,UAAA,CAEEC,EAAU,IACZ,CAAC,CACF,CAEL,CAAC,CACH,CCbM,SAAUc,GACdC,
EAAgD,CAEhD,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAgC,KAChCC,EAAY,GACZC,EAEJF,EAAWF,EAAO,UAChBK,EAAyBJ,EAAY,OAAW,OAAW,SAACK,EAAG,CAC7DF,EAAgBG,EAAUT,EAASQ,EAAKT,GAAWC,CAAQ,EAAEE,CAAM,CAAC,CAAC,EACjEE,GACFA,EAAS,YAAW,EACpBA,EAAW,KACXE,EAAc,UAAUH,CAAU,GAIlCE,EAAY,EAEhB,CAAC,CAAC,EAGAA,IAMFD,EAAS,YAAW,EACpBA,EAAW,KACXE,EAAe,UAAUH,CAAU,EAEvC,CAAC,CACH,CC/HM,SAAUO,GACdC,EACAC,EACAC,EACAC,EACAC,EAAqC,CAErC,OAAO,SAACC,EAAuBC,EAA2B,CAIxD,IAAIC,EAAWL,EAIXM,EAAaP,EAEbQ,EAAQ,EAGZJ,EAAO,UACLK,EACEJ,EACA,SAACK,EAAK,CAEJ,IAAMC,EAAIH,IAEVD,EAAQD,EAEJP,EAAYQ,EAAOG,EAAOC,CAAC,GAIzBL,EAAW,GAAOI,GAGxBR,GAAcG,EAAW,KAAKE,CAAK,CACrC,EAGAJ,GACG,UAAA,CACCG,GAAYD,EAAW,KAAKE,CAAK,EACjCF,EAAW,SAAQ,CACrB,CAAE,CACL,CAEL,CACF,CCnCM,SAAUO,IAAa,SAAOC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAClC,IAAMC,EAAiBC,GAAkBH,CAAI,EAC7C,OAAOE,EACHE,GAAKL,GAAa,MAAA,OAAAM,EAAA,CAAA,EAAAC,EAAKN,CAAoC,CAAA,CAAA,EAAGO,GAAiBL,CAAc,CAAC,EAC9FM,EAAQ,SAACC,EAAQC,EAAU,CACzBC,GAAiBN,EAAA,CAAEI,CAAM,EAAAH,EAAKM,GAAeZ,CAAI,CAAC,CAAA,CAAA,EAAGU,CAAU,CACjE,CAAC,CACP,CCUM,SAAUG,IAAiB,SAC/BC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAEA,OAAOC,GAAa,MAAA,OAAAC,EAAA,CAAA,EAAAC,EAAIJ,CAAY,CAAA,CAAA,CACtC,CC+BM,SAAUK,GACdC,EACAC,EAA6G,CAE7G,OAAOC,EAAWD,CAAc,EAAIE,GAASH,EAASC,EAAgB,CAAC,EAAIE,GAASH,EAAS,CAAC,CAChG,CCpBM,SAAUI,GAAgBC,EAAiBC,EAAyC,CAAzC,OAAAA,IAAA,SAAAA,EAAAC,IACxCC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAkC,KAClCC,EAAsB,KACtBC,EAA0B,KAExBC,EAAO,UAAA,CACX,GAAIH,EAAY,CAEdA,EAAW,YAAW,EACtBA,EAAa,KACb,IAAMI,EAAQH,EACdA,EAAY,KACZF,EAAW,KAAKK,CAAK,EAEzB,EACA,SAASC,GAAY,CAInB,IAAMC,EAAaJ,EAAYR,EACzBa,EAAMZ,EAAU,IAAG,EACzB,GAAIY,EAAMD,EAAY,CAEpBN,EAAa,KAAK,SAAS,OAAWM,EAAaC,CAAG,EACtDR,EAAW,IAAIC,CAAU,EACzB,OAGFG,EAAI,CACN,CAEAL,EAAO,UACLU,EACET,EACA,SAACK,EAAQ,CACPH,EAAYG,EACZF,EAAWP,EAAU,IAAG,EAGnBK,IACHA,EAAaL,EAAU,SAASU,EAAcX,CAAO,EACrDK,EAAW,IAAIC,CAAU,EAE7B,EACA,UAAA,CAGEG,EAAI,EACJJ,EAAW,SAAQ,CACrB,EAEA,OACA,UAAA,CAEEE,EAAYD,EAAa,IAC3B,CAAC,CACF,CAEL,CAAC,CACH,CCpFM,SAAUS,GAAqBC,EAAe,CAClD,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAW,GACfF,EAAO,UACLG,EACEF,EACA,SAACG,EAAK,CACJF,EAAW,GACXD,EAAW,KAAKG,CAAK,CACvB,EACA,UAAA,CACOF,GACHD,EAAW,KAAKH,CAAa,EAE/BG,EAAW,SAAQ,CACrB,CAAC,CACF,CAEL,CAAC,CACH,CCXM,SAAUI,GAAQC,EAAa,CACnC,OAAOA,GAAS,EAEZ,UAAA,CAAM,OAAAC,CAAA,EACNC,EAAQ,SAACC,EAAQC,EAAU,CACzB,IAAIC,EAAO,EACXF,EAAO,UACLG,EAAyBF,EAAY,SAACG,EAAK,CAIrC,EAAEF,GAAQL,IACZI,EAAW,KAAKG,CAAK,EAIjBP,GAASK,GACXD,EAAW,SAAQ,EAGzB,CAAC,CAAC,CAEN,CAAC,CACP,CC9BM,SAAUI,IAAc,CAC5B,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChCD,EAAO,UAAUE,EAAyBD,EAAYE,EAAI,CAAC,CAC7D,CAAC,CACH,CCCM,SAAUC,GAASC,EAAQ,CAC/B,OAAOC,EAAI,UAAA,CAAM,OAAAD,CAAA,CAAK,CACxB,CCyCM,SAAUE,GACdC,EACAC,EAAmC,CAEnC,OAAIA,EAEK,SAACC,EAAqB,CAC3B,OAAAC,GAAOF,EAAkB,KAAKG,GAAK,CAAC,EAAGC,GAAc,CAAE,EAAGH,EAAO,KAAKH,GAAUC,CAAqB,CAAC,CAAC,CAAvG,EAGGM,GAAS,SAACC,EAAOC,EAAK,CAAK,OAAAR,EAAsBO,EAAOC,CAAK,EAAE,KAAKJ,GAAK,CAAC,EAAGK,GAAMF,CAAK,CAAC,CAA9D,CAA+D,CACnG,CCtCM,SAAUG,GAASC,EAAoBC,EAAyC,CAAzCA,IAAA,SAAAA,EAAAC,IAC3C,IAAMC,EAAWC,GAAMJ,EAAKC,CAAS,EACrC,OAAOI,GAAU,UAAA,CAAM,OAAAF,CAAA,CAAQ,CACjC,CC0EM,SAAUG,EACdC,EACAC,EAA0D,CAA1D,OAAAA,IAAA,SAAAA,EAA+BC,IAK/BF,EAAaA,GAAU,KAAVA,EAAcG,GAEpBC,EAAQ,SAACC,EAAQC,EAAU,CAGhC,IAAIC,EAEAC,EAAQ,GAEZH,EAAO,UACLI,EAAyBH,EAAY,SAACI,EAAK,CAEzC,IAAMC,EAAaV,EAAYS,CAAK,GAKhCF,GAAS,CAACR,EAAYO,EAAaI,CAAU,KAM/CH,EAAQ,GACRD,EAAcI,EAGdL,EAAW,KAAKI,CAAK,EAEzB,CAAC,CAAC,CAEN,CAAC,CACH,CAEA,SAASP,GAAeS,EAAQC,EAAM,CACpC,OAAOD,IAAMC,CACf,CCjHM,SAAUC,EAA8CC,EAAQC,EAAuC,CAC3G,OAAOC,EAAqB,SAACC,EAAMC,EAAI,CAAK,OAAAH,EAAUA,EAAQE,EAAEH,GAAMI,
EAAEJ,EAAI,EAAIG,EAAEH,KAASI,EAAEJ,EAAjD,CAAqD,CACnG,CCLM,SAAUK,IAAO,SAAIC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACzB,OAAO,SAACC,EAAqB,CAAK,OAAAC,GAAOD,EAAQE,EAAE,MAAA,OAAAC,EAAA,CAAA,EAAAC,EAAIN,CAAM,CAAA,CAAA,CAAA,CAA3B,CACpC,CCHM,SAAUO,EAAYC,EAAoB,CAC9C,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAGhC,GAAI,CACFD,EAAO,UAAUC,CAAU,UAE3BA,EAAW,IAAIH,CAAQ,EAE3B,CAAC,CACH,CC9BM,SAAUI,GAAYC,EAAa,CACvC,OAAOA,GAAS,EACZ,UAAA,CAAM,OAAAC,CAAA,EACNC,EAAQ,SAACC,EAAQC,EAAU,CAKzB,IAAIC,EAAc,CAAA,EAClBF,EAAO,UACLG,EACEF,EACA,SAACG,EAAK,CAEJF,EAAO,KAAKE,CAAK,EAGjBP,EAAQK,EAAO,QAAUA,EAAO,MAAK,CACvC,EACA,UAAA,aAGE,QAAoBG,EAAAC,GAAAJ,CAAM,EAAAK,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAAvB,IAAMD,EAAKG,EAAA,MACdN,EAAW,KAAKG,CAAK,oGAEvBH,EAAW,SAAQ,CACrB,EAEA,OACA,UAAA,CAEEC,EAAS,IACX,CAAC,CACF,CAEL,CAAC,CACP,CC1DM,SAAUM,IAAK,SAAIC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACvB,IAAMC,EAAYC,GAAaH,CAAI,EAC7BI,EAAaC,GAAUL,EAAM,GAAQ,EAC3C,OAAAA,EAAOM,GAAeN,CAAI,EAEnBO,EAAQ,SAACC,EAAQC,EAAU,CAChCC,GAASN,CAAU,EAAEO,GAAIC,EAAA,CAAEJ,CAAM,EAAAK,EAAMb,CAA6B,CAAA,EAAGE,CAAS,CAAC,EAAE,UAAUO,CAAU,CACzG,CAAC,CACH,CCcM,SAAUK,IAAS,SACvBC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAEA,OAAOC,GAAK,MAAA,OAAAC,EAAA,CAAA,EAAAC,EAAIJ,CAAY,CAAA,CAAA,CAC9B,CCmEM,SAAUK,GAAUC,EAAqC,OACzDC,EAAQ,IACRC,EAEJ,OAAIF,GAAiB,OACf,OAAOA,GAAkB,UACxBG,EAA4BH,EAAa,MAAzCC,EAAKE,IAAA,OAAG,IAAQA,EAAED,EAAUF,EAAa,OAE5CC,EAAQD,GAILC,GAAS,EACZ,UAAA,CAAM,OAAAG,CAAA,EACNC,EAAQ,SAACC,EAAQC,EAAU,CACzB,IAAIC,EAAQ,EACRC,EAEEC,EAAc,UAAA,CAGlB,GAFAD,GAAS,MAATA,EAAW,YAAW,EACtBA,EAAY,KACRP,GAAS,KAAM,CACjB,IAAMS,EAAW,OAAOT,GAAU,SAAWU,GAAMV,CAAK,EAAIW,EAAUX,EAAMM,CAAK,CAAC,EAC5EM,EAAqBC,EAAyBR,EAAY,UAAA,CAC9DO,EAAmB,YAAW,EAC9BE,EAAiB,CACnB,CAAC,EACDL,EAAS,UAAUG,CAAkB,OAErCE,EAAiB,CAErB,EAEMA,EAAoB,UAAA,CACxB,IAAIC,EAAY,GAChBR,EAAYH,EAAO,UACjBS,EAAyBR,EAAY,OAAW,UAAA,CAC1C,EAAEC,EAAQP,EACRQ,EACFC,EAAW,EAEXO,EAAY,GAGdV,EAAW,SAAQ,CAEvB,CAAC,CAAC,EAGAU,GACFP,EAAW,CAEf,EAEAM,EAAiB,CACnB,CAAC,CACP,CC7HM,SAAUE,GAAUC,EAAyB,CACjD,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAW,GACXC,EAAsB,KAC1BH,EAAO,UACLI,EAAyBH,EAAY,SAACI,EAAK,CACzCH,EAAW,GACXC,EAAYE,CACd,CAAC,CAAC,EAEJP,EAAS,UACPM,EACEH,EACA,UAAA,CACE,GAAIC,EAAU,CACZA,EAAW,GACX,IAAMG,EAAQF,EACdA,EAAY,KACZF,EAAW,KAAKI,CAAK,EAEzB,EACAC,EAAI,CACL,CAEL,CAAC,CACH,CCgBM,SAAUC,GAAcC,EAA6DC,EAAQ,CAMjG,OAAOC,EAAQC,GAAcH,EAAaC,EAAW,UAAU,QAAU,EAAG,EAAI,CAAC,CACnF,CCgDM,SAAUG,GAASC,EAA4B,CAA5BA,IAAA,SAAAA,EAAA,CAAA,GACf,IAAAC,EAAgHD,EAAO,UAAvHE,EAASD,IAAA,OAAG,UAAA,CAAM,OAAA,IAAIE,CAAJ,EAAgBF,EAAEG,EAA4EJ,EAAO,aAAnFK,EAAYD,IAAA,OAAG,GAAIA,EAAEE,EAAuDN,EAAO,gBAA9DO,EAAeD,IAAA,OAAG,GAAIA,EAAEE,EAA+BR,EAAO,oBAAtCS,EAAmBD,IAAA,OAAG,GAAIA,EAUnH,OAAO,SAACE,EAAa,CACnB,IAAIC,EACAC,EACAC,EACAC,EAAW,EACXC,EAAe,GACfC,EAAa,GAEXC,EAAc,UAAA,CAClBL,GAAe,MAAfA,EAAiB,YAAW,EAC5BA,EAAkB,MACpB,EAGMM,EAAQ,UAAA,CACZD,EAAW,EACXN,EAAaE,EAAU,OACvBE,EAAeC,EAAa,EAC9B,EACMG,EAAsB,UAAA,CAG1B,IAAMC,EAAOT,EACbO,EAAK,EACLE,GAAI,MAAJA,EAAM,YAAW,CACnB,EAEA,OAAOC,EAAc,SAACC,EAAQC,GAAU,CACtCT,IACI,CAACE,GAAc,CAACD,GAClBE,EAAW,EAOb,IAAMO,GAAQX,EAAUA,GAAO,KAAPA,EAAWX,EAAS,EAO5CqB,GAAW,IAAI,UAAA,CACbT,IAKIA,IAAa,GAAK,CAACE,GAAc,CAACD,IACpCH,EAAkBa,GAAYN,EAAqBV,CAAmB,EAE1E,CAAC,EAIDe,GAAK,UAAUD,EAAU,EAGvB,CAACZ,GAIDG,EAAW,IAOXH,EAAa,IAAIe,GAAe,CAC9B,KAAM,SAACC,GAAK,CAAK,OAAAH,GAAK,KAAKG,EAAK,CAAf,EACjB,MAAO,SAACC,GAAG,CACTZ,EAAa,GACbC,EAAW,EACXL,EAAkBa,GAAYP,EAAOb,EAAcuB,EAAG,EACtDJ,GAAK,MAAMI,EAAG,CAChB,EACA,SAAU,UAAA,CACRb,EAAe,GACfE,EAAW,EACXL,EAAkBa,GAAYP,EAAOX,CAAe,EACpDiB,GAAK,SAAQ,CACf,EACD
,EACDK,EAAUP,CAAM,EAAE,UAAUX,CAAU,EAE1C,CAAC,EAAED,CAAa,CAClB,CACF,CAEA,SAASe,GACPP,EACAY,EAA+C,SAC/CC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,EAAA,GAAA,UAAAA,GAEA,GAAIF,IAAO,GAAM,CACfZ,EAAK,EACL,OAGF,GAAIY,IAAO,GAIX,KAAMG,EAAe,IAAIP,GAAe,CACtC,KAAM,UAAA,CACJO,EAAa,YAAW,EACxBf,EAAK,CACP,EACD,EAED,OAAOY,EAAE,MAAA,OAAAI,EAAA,CAAA,EAAAC,EAAIJ,CAAI,CAAA,CAAA,EAAE,UAAUE,CAAY,EAC3C,CCjHM,SAAUG,EACdC,EACAC,EACAC,EAAyB,WAErBC,EACAC,EAAW,GACf,OAAIJ,GAAsB,OAAOA,GAAuB,UACnDK,EAA8EL,EAAkB,WAAhGG,EAAUE,IAAA,OAAG,IAAQA,EAAEC,EAAuDN,EAAkB,WAAzEC,EAAUK,IAAA,OAAG,IAAQA,EAAEC,EAAgCP,EAAkB,SAAlDI,EAAQG,IAAA,OAAG,GAAKA,EAAEL,EAAcF,EAAkB,WAEnGG,EAAcH,GAAkB,KAAlBA,EAAsB,IAE/BQ,GAAS,CACd,UAAW,UAAA,CAAM,OAAA,IAAIC,GAAcN,EAAYF,EAAYC,CAAS,CAAnD,EACjB,aAAc,GACd,gBAAiB,GACjB,oBAAqBE,EACtB,CACH,CCxIM,SAAUM,GAAQC,EAAa,CACnC,OAAOC,EAAO,SAACC,EAAGC,EAAK,CAAK,OAAAH,GAASG,CAAT,CAAc,CAC5C,CCWM,SAAUC,GAAaC,EAAyB,CACpD,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAS,GAEPC,EAAiBC,EACrBH,EACA,UAAA,CACEE,GAAc,MAAdA,EAAgB,YAAW,EAC3BD,EAAS,EACX,EACAG,EAAI,EAGNC,EAAUR,CAAQ,EAAE,UAAUK,CAAc,EAE5CH,EAAO,UAAUI,EAAyBH,EAAY,SAACM,EAAK,CAAK,OAAAL,GAAUD,EAAW,KAAKM,CAAK,CAA/B,CAAgC,CAAC,CACpG,CAAC,CACH,CCRM,SAAUC,GAAS,SAAOC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAC9B,IAAMC,EAAYC,GAAaH,CAAM,EACrC,OAAOI,EAAQ,SAACC,EAAQC,EAAU,EAI/BJ,EAAYK,GAAOP,EAAQK,EAAQH,CAAS,EAAIK,GAAOP,EAAQK,CAAM,GAAG,UAAUC,CAAU,CAC/F,CAAC,CACH,CCmBM,SAAUE,EACdC,EACAC,EAA6G,CAE7G,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAyD,KACzDC,EAAQ,EAERC,EAAa,GAIXC,EAAgB,UAAA,CAAM,OAAAD,GAAc,CAACF,GAAmBD,EAAW,SAAQ,CAArD,EAE5BD,EAAO,UACLM,EACEL,EACA,SAACM,EAAK,CAEJL,GAAe,MAAfA,EAAiB,YAAW,EAC5B,IAAIM,EAAa,EACXC,EAAaN,IAEnBO,EAAUb,EAAQU,EAAOE,CAAU,CAAC,EAAE,UACnCP,EAAkBI,EACjBL,EAIA,SAACU,EAAU,CAAK,OAAAV,EAAW,KAAKH,EAAiBA,EAAeS,EAAOI,EAAYF,EAAYD,GAAY,EAAIG,CAAU,CAAzG,EAChB,UAAA,CAIET,EAAkB,KAClBG,EAAa,CACf,CAAC,CACD,CAEN,EACA,UAAA,CACED,EAAa,GACbC,EAAa,CACf,CAAC,CACF,CAEL,CAAC,CACH,CCvFM,SAAUO,GAAaC,EAA8B,CACzD,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChCC,EAAUJ,CAAQ,EAAE,UAAUK,EAAyBF,EAAY,UAAA,CAAM,OAAAA,EAAW,SAAQ,CAAnB,EAAuBG,EAAI,CAAC,EACrG,CAACH,EAAW,QAAUD,EAAO,UAAUC,CAAU,CACnD,CAAC,CACH,CCIM,SAAUI,GAAaC,EAAiDC,EAAiB,CAAjB,OAAAA,IAAA,SAAAA,EAAA,IACrEC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAQ,EACZF,EAAO,UACLG,EAAyBF,EAAY,SAACG,EAAK,CACzC,IAAMC,EAASR,EAAUO,EAAOF,GAAO,GACtCG,GAAUP,IAAcG,EAAW,KAAKG,CAAK,EAC9C,CAACC,GAAUJ,EAAW,SAAQ,CAChC,CAAC,CAAC,CAEN,CAAC,CACH,CCyCM,SAAUK,EACdC,EACAC,EACAC,EAA8B,CAK9B,IAAMC,EACJC,EAAWJ,CAAc,GAAKC,GAASC,EAElC,CAAE,KAAMF,EAA2E,MAAKC,EAAE,SAAQC,CAAA,EACnGF,EAEN,OAAOG,EACHE,EAAQ,SAACC,EAAQC,EAAU,QACzBC,EAAAL,EAAY,aAAS,MAAAK,IAAA,QAAAA,EAAA,KAArBL,CAAW,EACX,IAAIM,EAAU,GACdH,EAAO,UACLI,EACEH,EACA,SAACI,EAAK,QACJH,EAAAL,EAAY,QAAI,MAAAK,IAAA,QAAAA,EAAA,KAAhBL,EAAmBQ,CAAK,EACxBJ,EAAW,KAAKI,CAAK,CACvB,EACA,UAAA,OACEF,EAAU,IACVD,EAAAL,EAAY,YAAQ,MAAAK,IAAA,QAAAA,EAAA,KAApBL,CAAW,EACXI,EAAW,SAAQ,CACrB,EACA,SAACK,EAAG,OACFH,EAAU,IACVD,EAAAL,EAAY,SAAK,MAAAK,IAAA,QAAAA,EAAA,KAAjBL,EAAoBS,CAAG,EACvBL,EAAW,MAAMK,CAAG,CACtB,EACA,UAAA,SACMH,KACFD,EAAAL,EAAY,eAAW,MAAAK,IAAA,QAAAA,EAAA,KAAvBL,CAAW,IAEbU,EAAAV,EAAY,YAAQ,MAAAU,IAAA,QAAAA,EAAA,KAApBV,CAAW,CACb,CAAC,CACF,CAEL,CAAC,EAIDW,EACN,CC9IO,IAAMC,GAAwC,CACnD,QAAS,GACT,SAAU,IAiDN,SAAUC,GACdC,EACAC,EAA8C,CAA9C,OAAAA,IAAA,SAAAA,EAAAH,IAEOI,EAAQ,SAACC,EAAQC,EAAU,CACxB,IAAAC,EAAsBJ,EAAM,QAAnBK,EAAaL,EAAM,SAChCM,EAAW,GACXC,EAAsB,KACtBC,EAAiC,KACjCC,EAAa,GAEXC,EAAgB,UAAA,CACpBF,GAAS,MAATA,EAAW,YAAW,EACtBA,EAAY,KACRH,IACFM,EAAI,EACJF,GAAcN,EAAW,SAAQ,EAErC,EAEMS,EAAoB,UAAA,CACxBJ,EAAY,KACZC,GAAcN,EAAW,SAA
Q,CACnC,EAEMU,EAAgB,SAACC,EAAQ,CAC7B,OAACN,EAAYO,EAAUhB,EAAiBe,CAAK,CAAC,EAAE,UAAUE,EAAyBb,EAAYO,EAAeE,CAAiB,CAAC,CAAhI,EAEID,EAAO,UAAA,CACX,GAAIL,EAAU,CAIZA,EAAW,GACX,IAAMQ,EAAQP,EACdA,EAAY,KAEZJ,EAAW,KAAKW,CAAK,EACrB,CAACL,GAAcI,EAAcC,CAAK,EAEtC,EAEAZ,EAAO,UACLc,EACEb,EAMA,SAACW,EAAK,CACJR,EAAW,GACXC,EAAYO,EACZ,EAAEN,GAAa,CAACA,EAAU,UAAYJ,EAAUO,EAAI,EAAKE,EAAcC,CAAK,EAC9E,EACA,UAAA,CACEL,EAAa,GACb,EAAEJ,GAAYC,GAAYE,GAAa,CAACA,EAAU,SAAWL,EAAW,SAAQ,CAClF,CAAC,CACF,CAEL,CAAC,CACH,CCvEM,SAAUc,GACdC,EACAC,EACAC,EAA8B,CAD9BD,IAAA,SAAAA,EAAAE,IACAD,IAAA,SAAAA,EAAAE,IAEA,IAAMC,EAAYC,GAAMN,EAAUC,CAAS,EAC3C,OAAOM,GAAS,UAAA,CAAM,OAAAF,CAAA,EAAWH,CAAM,CACzC,CCJM,SAAUM,IAAc,SAAOC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACnC,IAAMC,EAAUC,GAAkBH,CAAM,EAExC,OAAOI,EAAQ,SAACC,EAAQC,EAAU,CAehC,QAdMC,EAAMP,EAAO,OACbQ,EAAc,IAAI,MAAMD,CAAG,EAI7BE,EAAWT,EAAO,IAAI,UAAA,CAAM,MAAA,EAAA,CAAK,EAGjCU,EAAQ,cAMHC,EAAC,CACRC,EAAUZ,EAAOW,EAAE,EAAE,UACnBE,EACEP,EACA,SAACQ,EAAK,CACJN,EAAYG,GAAKG,EACb,CAACJ,GAAS,CAACD,EAASE,KAEtBF,EAASE,GAAK,IAKbD,EAAQD,EAAS,MAAMM,EAAQ,KAAON,EAAW,MAEtD,EAGAO,EAAI,CACL,GAnBIL,EAAI,EAAGA,EAAIJ,EAAKI,MAAhBA,CAAC,EAwBVN,EAAO,UACLQ,EAAyBP,EAAY,SAACQ,EAAK,CACzC,GAAIJ,EAAO,CAET,IAAMO,EAAMC,EAAA,CAAIJ,CAAK,EAAAK,EAAKX,CAAW,CAAA,EACrCF,EAAW,KAAKJ,EAAUA,EAAO,MAAA,OAAAgB,EAAA,CAAA,EAAAC,EAAIF,CAAM,CAAA,CAAA,EAAIA,CAAM,EAEzD,CAAC,CAAC,CAEN,CAAC,CACH,CCxFM,SAAUG,IAAG,SAAOC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACxB,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChCL,GAAS,MAAA,OAAAM,EAAA,CAACF,CAA8B,EAAAG,EAAMN,CAAuC,CAAA,CAAA,EAAE,UAAUI,CAAU,CAC7G,CAAC,CACH,CCCM,SAAUG,IAAO,SAAkCC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACvD,OAAOC,GAAG,MAAA,OAAAC,EAAA,CAAA,EAAAC,EAAIJ,CAAW,CAAA,CAAA,CAC3B,CCYO,SAASK,IAAmC,CACjD,IAAMC,EAAY,IAAIC,GAAwB,CAAC,EAC/C,OAAAC,EAAU,SAAU,mBAAoB,CAAE,KAAM,EAAK,CAAC,EACnD,UAAU,IAAMF,EAAU,KAAK,QAAQ,CAAC,EAGpCA,CACT,CCHO,SAASG,EACdC,EAAkBC,EAAmB,SAChC,CACL,OAAO,MAAM,KAAKA,EAAK,iBAAoBD,CAAQ,CAAC,CACtD,CAuBO,SAASE,EACdF,EAAkBC,EAAmB,SAClC,CACH,IAAME,EAAKC,GAAsBJ,EAAUC,CAAI,EAC/C,GAAI,OAAOE,GAAO,YAChB,MAAM,IAAI,eACR,8BAA8BH,kBAChC,EAGF,OAAOG,CACT,CAsBO,SAASC,GACdJ,EAAkBC,EAAmB,SACtB,CACf,OAAOA,EAAK,cAAiBD,CAAQ,GAAK,MAC5C,CAOO,SAASK,IAA4C,CAC1D,OAAO,SAAS,yBAAyB,aACrC,SAAS,eAAiB,MAEhC,CClEO,SAASC,GACdC,EACqB,CACrB,OAAOC,EACLC,EAAU,SAAS,KAAM,SAAS,EAClCA,EAAU,SAAS,KAAM,UAAU,CACrC,EACG,KACCC,GAAa,CAAC,EACdC,EAAI,IAAM,CACR,IAAMC,EAASC,GAAiB,EAChC,OAAO,OAAOD,GAAW,YACrBL,EAAG,SAASK,CAAM,EAClB,EACN,CAAC,EACDE,EAAUP,IAAOM,GAAiB,CAAC,EACnCE,EAAqB,CACvB,CACJ,CChBO,SAASC,GACdC,EACe,CACf,MAAO,CACL,EAAGA,EAAG,WACN,EAAGA,EAAG,SACR,CACF,CAWO,SAASC,GACdD,EAC2B,CAC3B,OAAOE,EACLC,EAAU,OAAQ,MAAM,EACxBA,EAAU,OAAQ,QAAQ,CAC5B,EACG,KACCC,GAAU,EAAGC,EAAuB,EACpCC,EAAI,IAAMP,GAAiBC,CAAE,CAAC,EAC9BO,EAAUR,GAAiBC,CAAE,CAAC,CAChC,CACJ,CCxCO,SAASQ,GACdC,EACe,CACf,MAAO,CACL,EAAGA,EAAG,WACN,EAAGA,EAAG,SACR,CACF,CAWO,SAASC,GACdD,EAC2B,CAC3B,OAAOE,EACLC,EAAUH,EAAI,QAAQ,EACtBG,EAAU,OAAQ,QAAQ,CAC5B,EACG,KACCC,GAAU,EAAGC,EAAuB,EACpCC,EAAI,IAAMP,GAAwBC,CAAE,CAAC,EACrCO,EAAUR,GAAwBC,CAAE,CAAC,CACvC,CACJ,CCpEA,IAAIQ,GAAW,UAAY,CACvB,GAAI,OAAO,KAAQ,YACf,OAAO,IASX,SAASC,EAASC,EAAKC,EAAK,CACxB,IAAIC,EAAS,GACb,OAAAF,EAAI,KAAK,SAAUG,EAAOC,EAAO,CAC7B,OAAID,EAAM,KAAOF,GACbC,EAASE,EACF,IAEJ,EACX,CAAC,EACMF,CACX,CACA,OAAsB,UAAY,CAC9B,SAASG,GAAU,CACf,KAAK,YAAc,CAAC,CACxB,CACA,cAAO,eAAeA,EAAQ,UAAW,OAAQ,CAI7C,IAAK,UAAY,CACb,OAAO,KAAK,YAAY,MAC5B,EACA,WAAY,GACZ,aAAc,EAClB,CAAC,EAKDA,EAAQ,UAAU,IAAM,SAAUJ,EAAK,CACnC,IAAIG,EAAQL,EAAS,KAAK,YAAaE,CAAG,EACtCE,EAAQ,KAAK,YAAYC,GAC7B,OAAOD
,GAASA,EAAM,EAC1B,EAMAE,EAAQ,UAAU,IAAM,SAAUJ,EAAKK,EAAO,CAC1C,IAAIF,EAAQL,EAAS,KAAK,YAAaE,CAAG,EACtC,CAACG,EACD,KAAK,YAAYA,GAAO,GAAKE,EAG7B,KAAK,YAAY,KAAK,CAACL,EAAKK,CAAK,CAAC,CAE1C,EAKAD,EAAQ,UAAU,OAAS,SAAUJ,EAAK,CACtC,IAAIM,EAAU,KAAK,YACfH,EAAQL,EAASQ,EAASN,CAAG,EAC7B,CAACG,GACDG,EAAQ,OAAOH,EAAO,CAAC,CAE/B,EAKAC,EAAQ,UAAU,IAAM,SAAUJ,EAAK,CACnC,MAAO,CAAC,CAAC,CAACF,EAAS,KAAK,YAAaE,CAAG,CAC5C,EAIAI,EAAQ,UAAU,MAAQ,UAAY,CAClC,KAAK,YAAY,OAAO,CAAC,CAC7B,EAMAA,EAAQ,UAAU,QAAU,SAAUG,EAAUC,EAAK,CAC7CA,IAAQ,SAAUA,EAAM,MAC5B,QAASC,EAAK,EAAGC,EAAK,KAAK,YAAaD,EAAKC,EAAG,OAAQD,IAAM,CAC1D,IAAIP,EAAQQ,EAAGD,GACfF,EAAS,KAAKC,EAAKN,EAAM,GAAIA,EAAM,EAAE,CACzC,CACJ,EACOE,CACX,EAAE,CACN,EAAG,EAKCO,GAAY,OAAO,QAAW,aAAe,OAAO,UAAa,aAAe,OAAO,WAAa,SAGpGC,GAAY,UAAY,CACxB,OAAI,OAAO,QAAW,aAAe,OAAO,OAAS,KAC1C,OAEP,OAAO,MAAS,aAAe,KAAK,OAAS,KACtC,KAEP,OAAO,QAAW,aAAe,OAAO,OAAS,KAC1C,OAGJ,SAAS,aAAa,EAAE,CACnC,EAAG,EAQCC,GAA2B,UAAY,CACvC,OAAI,OAAO,uBAA0B,WAI1B,sBAAsB,KAAKD,EAAQ,EAEvC,SAAUL,EAAU,CAAE,OAAO,WAAW,UAAY,CAAE,OAAOA,EAAS,KAAK,IAAI,CAAC,CAAG,EAAG,IAAO,EAAE,CAAG,CAC7G,EAAG,EAGCO,GAAkB,EAStB,SAASC,GAAUR,EAAUS,EAAO,CAChC,IAAIC,EAAc,GAAOC,EAAe,GAAOC,EAAe,EAO9D,SAASC,GAAiB,CAClBH,IACAA,EAAc,GACdV,EAAS,GAETW,GACAG,EAAM,CAEd,CAQA,SAASC,GAAkB,CACvBT,GAAwBO,CAAc,CAC1C,CAMA,SAASC,GAAQ,CACb,IAAIE,EAAY,KAAK,IAAI,EACzB,GAAIN,EAAa,CAEb,GAAIM,EAAYJ,EAAeL,GAC3B,OAMJI,EAAe,EACnB,MAEID,EAAc,GACdC,EAAe,GACf,WAAWI,EAAiBN,CAAK,EAErCG,EAAeI,CACnB,CACA,OAAOF,CACX,CAGA,IAAIG,GAAgB,GAGhBC,GAAiB,CAAC,MAAO,QAAS,SAAU,OAAQ,QAAS,SAAU,OAAQ,QAAQ,EAEvFC,GAA4B,OAAO,kBAAqB,YAIxDC,GAA0C,UAAY,CAMtD,SAASA,GAA2B,CAMhC,KAAK,WAAa,GAMlB,KAAK,qBAAuB,GAM5B,KAAK,mBAAqB,KAM1B,KAAK,WAAa,CAAC,EACnB,KAAK,iBAAmB,KAAK,iBAAiB,KAAK,IAAI,EACvD,KAAK,QAAUZ,GAAS,KAAK,QAAQ,KAAK,IAAI,EAAGS,EAAa,CAClE,CAOA,OAAAG,EAAyB,UAAU,YAAc,SAAUC,EAAU,CAC5D,CAAC,KAAK,WAAW,QAAQA,CAAQ,GAClC,KAAK,WAAW,KAAKA,CAAQ,EAG5B,KAAK,YACN,KAAK,SAAS,CAEtB,EAOAD,EAAyB,UAAU,eAAiB,SAAUC,EAAU,CACpE,IAAIC,EAAY,KAAK,WACjB1B,EAAQ0B,EAAU,QAAQD,CAAQ,EAElC,CAACzB,GACD0B,EAAU,OAAO1B,EAAO,CAAC,EAGzB,CAAC0B,EAAU,QAAU,KAAK,YAC1B,KAAK,YAAY,CAEzB,EAOAF,EAAyB,UAAU,QAAU,UAAY,CACrD,IAAIG,EAAkB,KAAK,iBAAiB,EAGxCA,GACA,KAAK,QAAQ,CAErB,EASAH,EAAyB,UAAU,iBAAmB,UAAY,CAE9D,IAAII,EAAkB,KAAK,WAAW,OAAO,SAAUH,EAAU,CAC7D,OAAOA,EAAS,aAAa,EAAGA,EAAS,UAAU,CACvD,CAAC,EAMD,OAAAG,EAAgB,QAAQ,SAAUH,EAAU,CAAE,OAAOA,EAAS,gBAAgB,CAAG,CAAC,EAC3EG,EAAgB,OAAS,CACpC,EAOAJ,EAAyB,UAAU,SAAW,UAAY,CAGlD,CAAChB,IAAa,KAAK,aAMvB,SAAS,iBAAiB,gBAAiB,KAAK,gBAAgB,EAChE,OAAO,iBAAiB,SAAU,KAAK,OAAO,EAC1Ce,IACA,KAAK,mBAAqB,IAAI,iBAAiB,KAAK,OAAO,EAC3D,KAAK,mBAAmB,QAAQ,SAAU,CACtC,WAAY,GACZ,UAAW,GACX,cAAe,GACf,QAAS,EACb,CAAC,IAGD,SAAS,iBAAiB,qBAAsB,KAAK,OAAO,EAC5D,KAAK,qBAAuB,IAEhC,KAAK,WAAa,GACtB,EAOAC,EAAyB,UAAU,YAAc,UAAY,CAGrD,CAAChB,IAAa,CAAC,KAAK,aAGxB,SAAS,oBAAoB,gBAAiB,KAAK,gBAAgB,EACnE,OAAO,oBAAoB,SAAU,KAAK,OAAO,EAC7C,KAAK,oBACL,KAAK,mBAAmB,WAAW,EAEnC,KAAK,sBACL,SAAS,oBAAoB,qBAAsB,KAAK,OAAO,EAEnE,KAAK,mBAAqB,KAC1B,KAAK,qBAAuB,GAC5B,KAAK,WAAa,GACtB,EAQAgB,EAAyB,UAAU,iBAAmB,SAAUjB,EAAI,CAChE,IAAIsB,EAAKtB,EAAG,aAAcuB,EAAeD,IAAO,OAAS,GAAKA,EAE1DE,EAAmBT,GAAe,KAAK,SAAUzB,EAAK,CACtD,MAAO,CAAC,CAAC,CAACiC,EAAa,QAAQjC,CAAG,CACtC,CAAC,EACGkC,GACA,KAAK,QAAQ,CAErB,EAMAP,EAAyB,YAAc,UAAY,CAC/C,OAAK,KAAK,YACN,KAAK,UAAY,IAAIA,GAElB,KAAK,SAChB,EAMAA,EAAyB,UAAY,KAC9BA,CACX,EAAE,EASEQ,GAAsB,SAAUC,EAAQC,EAAO,CAC/C,QAAS5B,EAAK,EAAGC,EAAK,OAAO,KAAK2B,CAAK,EAAG5B,EAAKC,EAAG,OAAQD,IAAM,CAC5D,IAAIT,EAAMU,EAAGD,GACb,OAAO,eAAe2B,EAAQpC,EAAK,CAC/B,MAAOqC,EAAMrC,GACb,WAAY,GACZ,SAAU,GACV,aAAc,EAClB,CAAC,CACL,CACA,OAAOoC,CACX,EAQIE,GAAe,SAAUF,EAAQ,CAIjC,IAAIG,EAAcH,GAAUA,EAAO,eAAiBA,EA
AO,cAAc,YAGzE,OAAOG,GAAe3B,EAC1B,EAGI4B,GAAYC,GAAe,EAAG,EAAG,EAAG,CAAC,EAOzC,SAASC,GAAQrC,EAAO,CACpB,OAAO,WAAWA,CAAK,GAAK,CAChC,CAQA,SAASsC,GAAeC,EAAQ,CAE5B,QADIC,EAAY,CAAC,EACRpC,EAAK,EAAGA,EAAK,UAAU,OAAQA,IACpCoC,EAAUpC,EAAK,GAAK,UAAUA,GAElC,OAAOoC,EAAU,OAAO,SAAUC,EAAMC,EAAU,CAC9C,IAAI1C,EAAQuC,EAAO,UAAYG,EAAW,UAC1C,OAAOD,EAAOJ,GAAQrC,CAAK,CAC/B,EAAG,CAAC,CACR,CAOA,SAAS2C,GAAYJ,EAAQ,CAGzB,QAFIC,EAAY,CAAC,MAAO,QAAS,SAAU,MAAM,EAC7CI,EAAW,CAAC,EACPxC,EAAK,EAAGyC,EAAcL,EAAWpC,EAAKyC,EAAY,OAAQzC,IAAM,CACrE,IAAIsC,EAAWG,EAAYzC,GACvBJ,EAAQuC,EAAO,WAAaG,GAChCE,EAASF,GAAYL,GAAQrC,CAAK,CACtC,CACA,OAAO4C,CACX,CAQA,SAASE,GAAkBf,EAAQ,CAC/B,IAAIgB,EAAOhB,EAAO,QAAQ,EAC1B,OAAOK,GAAe,EAAG,EAAGW,EAAK,MAAOA,EAAK,MAAM,CACvD,CAOA,SAASC,GAA0BjB,EAAQ,CAGvC,IAAIkB,EAAclB,EAAO,YAAamB,EAAenB,EAAO,aAS5D,GAAI,CAACkB,GAAe,CAACC,EACjB,OAAOf,GAEX,IAAII,EAASN,GAAYF,CAAM,EAAE,iBAAiBA,CAAM,EACpDa,EAAWD,GAAYJ,CAAM,EAC7BY,EAAWP,EAAS,KAAOA,EAAS,MACpCQ,EAAUR,EAAS,IAAMA,EAAS,OAKlCS,EAAQhB,GAAQE,EAAO,KAAK,EAAGe,EAASjB,GAAQE,EAAO,MAAM,EAqBjE,GAlBIA,EAAO,YAAc,eAOjB,KAAK,MAAMc,EAAQF,CAAQ,IAAMF,IACjCI,GAASf,GAAeC,EAAQ,OAAQ,OAAO,EAAIY,GAEnD,KAAK,MAAMG,EAASF,CAAO,IAAMF,IACjCI,GAAUhB,GAAeC,EAAQ,MAAO,QAAQ,EAAIa,IAOxD,CAACG,GAAkBxB,CAAM,EAAG,CAK5B,IAAIyB,EAAgB,KAAK,MAAMH,EAAQF,CAAQ,EAAIF,EAC/CQ,EAAiB,KAAK,MAAMH,EAASF,CAAO,EAAIF,EAMhD,KAAK,IAAIM,CAAa,IAAM,IAC5BH,GAASG,GAET,KAAK,IAAIC,CAAc,IAAM,IAC7BH,GAAUG,EAElB,CACA,OAAOrB,GAAeQ,EAAS,KAAMA,EAAS,IAAKS,EAAOC,CAAM,CACpE,CAOA,IAAII,GAAwB,UAAY,CAGpC,OAAI,OAAO,oBAAuB,YACvB,SAAU3B,EAAQ,CAAE,OAAOA,aAAkBE,GAAYF,CAAM,EAAE,kBAAoB,EAKzF,SAAUA,EAAQ,CAAE,OAAQA,aAAkBE,GAAYF,CAAM,EAAE,YACrE,OAAOA,EAAO,SAAY,UAAa,CAC/C,EAAG,EAOH,SAASwB,GAAkBxB,EAAQ,CAC/B,OAAOA,IAAWE,GAAYF,CAAM,EAAE,SAAS,eACnD,CAOA,SAAS4B,GAAe5B,EAAQ,CAC5B,OAAKzB,GAGDoD,GAAqB3B,CAAM,EACpBe,GAAkBf,CAAM,EAE5BiB,GAA0BjB,CAAM,EAL5BI,EAMf,CAQA,SAASyB,GAAmBvD,EAAI,CAC5B,IAAIwD,EAAIxD,EAAG,EAAGyD,EAAIzD,EAAG,EAAGgD,EAAQhD,EAAG,MAAOiD,EAASjD,EAAG,OAElD0D,EAAS,OAAO,iBAAoB,YAAc,gBAAkB,OACpEC,EAAO,OAAO,OAAOD,EAAO,SAAS,EAEzC,OAAAjC,GAAmBkC,EAAM,CACrB,EAAGH,EAAG,EAAGC,EAAG,MAAOT,EAAO,OAAQC,EAClC,IAAKQ,EACL,MAAOD,EAAIR,EACX,OAAQC,EAASQ,EACjB,KAAMD,CACV,CAAC,EACMG,CACX,CAWA,SAAS5B,GAAeyB,EAAGC,EAAGT,EAAOC,EAAQ,CACzC,MAAO,CAAE,EAAGO,EAAG,EAAGC,EAAG,MAAOT,EAAO,OAAQC,CAAO,CACtD,CAMA,IAAIW,GAAmC,UAAY,CAM/C,SAASA,EAAkBlC,EAAQ,CAM/B,KAAK,eAAiB,EAMtB,KAAK,gBAAkB,EAMvB,KAAK,aAAeK,GAAe,EAAG,EAAG,EAAG,CAAC,EAC7C,KAAK,OAASL,CAClB,CAOA,OAAAkC,EAAkB,UAAU,SAAW,UAAY,CAC/C,IAAID,EAAOL,GAAe,KAAK,MAAM,EACrC,YAAK,aAAeK,EACZA,EAAK,QAAU,KAAK,gBACxBA,EAAK,SAAW,KAAK,eAC7B,EAOAC,EAAkB,UAAU,cAAgB,UAAY,CACpD,IAAID,EAAO,KAAK,aAChB,YAAK,eAAiBA,EAAK,MAC3B,KAAK,gBAAkBA,EAAK,OACrBA,CACX,EACOC,CACX,EAAE,EAEEC,GAAqC,UAAY,CAOjD,SAASA,EAAoBnC,EAAQoC,EAAU,CAC3C,IAAIC,EAAcR,GAAmBO,CAAQ,EAO7CrC,GAAmB,KAAM,CAAE,OAAQC,EAAQ,YAAaqC,CAAY,CAAC,CACzE,CACA,OAAOF,CACX,EAAE,EAEEG,GAAmC,UAAY,CAW/C,SAASA,EAAkBnE,EAAUoE,EAAYC,EAAa,CAc1D,GAPA,KAAK,oBAAsB,CAAC,EAM5B,KAAK,cAAgB,IAAI/E,GACrB,OAAOU,GAAa,WACpB,MAAM,IAAI,UAAU,yDAAyD,EAEjF,KAAK,UAAYA,EACjB,KAAK,YAAcoE,EACnB,KAAK,aAAeC,CACxB,CAOA,OAAAF,EAAkB,UAAU,QAAU,SAAUtC,EAAQ,CACpD,GAAI,CAAC,UAAU,OACX,MAAM,IAAI,UAAU,0CAA0C,EAGlE,GAAI,SAAO,SAAY,aAAe,EAAE,mBAAmB,SAG3D,IAAI,EAAEA,aAAkBE,GAAYF,CAAM,EAAE,SACxC,MAAM,IAAI,UAAU,uCAAuC,EAE/D,IAAIyC,EAAe,KAAK,cAEpBA,EAAa,IAAIzC,CAAM,IAG3ByC,EAAa,IAAIzC,EAAQ,IAAIkC,GAAkBlC,CAAM,CAAC,EACtD,KAAK,YAAY,YAAY,IAAI,EAEjC,KAAK,YAAY,QAAQ,GAC7B,EAOAsC,EAAkB,UAAU,UAAY,SAAUtC,EAAQ,CACtD,GAAI,CAAC,UAAU,OACX,MAAM,IAAI,UAAU,0CAA0C,EAGlE,GAAI,SAAO,SAAY,aAAe,EAAE,mBAAmB,SAG3D,IAAI,EAAEA,aAAkBE,GAAYF,CAAM,EAAE,SACxC,MAAM,IAAI,UAAU,uCAAu
C,EAE/D,IAAIyC,EAAe,KAAK,cAEpB,CAACA,EAAa,IAAIzC,CAAM,IAG5ByC,EAAa,OAAOzC,CAAM,EACrByC,EAAa,MACd,KAAK,YAAY,eAAe,IAAI,GAE5C,EAMAH,EAAkB,UAAU,WAAa,UAAY,CACjD,KAAK,YAAY,EACjB,KAAK,cAAc,MAAM,EACzB,KAAK,YAAY,eAAe,IAAI,CACxC,EAOAA,EAAkB,UAAU,aAAe,UAAY,CACnD,IAAII,EAAQ,KACZ,KAAK,YAAY,EACjB,KAAK,cAAc,QAAQ,SAAUC,EAAa,CAC1CA,EAAY,SAAS,GACrBD,EAAM,oBAAoB,KAAKC,CAAW,CAElD,CAAC,CACL,EAOAL,EAAkB,UAAU,gBAAkB,UAAY,CAEtD,GAAI,EAAC,KAAK,UAAU,EAGpB,KAAIlE,EAAM,KAAK,aAEXF,EAAU,KAAK,oBAAoB,IAAI,SAAUyE,EAAa,CAC9D,OAAO,IAAIR,GAAoBQ,EAAY,OAAQA,EAAY,cAAc,CAAC,CAClF,CAAC,EACD,KAAK,UAAU,KAAKvE,EAAKF,EAASE,CAAG,EACrC,KAAK,YAAY,EACrB,EAMAkE,EAAkB,UAAU,YAAc,UAAY,CAClD,KAAK,oBAAoB,OAAO,CAAC,CACrC,EAMAA,EAAkB,UAAU,UAAY,UAAY,CAChD,OAAO,KAAK,oBAAoB,OAAS,CAC7C,EACOA,CACX,EAAE,EAKE7C,GAAY,OAAO,SAAY,YAAc,IAAI,QAAY,IAAIhC,GAKjEmF,GAAgC,UAAY,CAO5C,SAASA,EAAezE,EAAU,CAC9B,GAAI,EAAE,gBAAgByE,GAClB,MAAM,IAAI,UAAU,oCAAoC,EAE5D,GAAI,CAAC,UAAU,OACX,MAAM,IAAI,UAAU,0CAA0C,EAElE,IAAIL,EAAahD,GAAyB,YAAY,EAClDC,EAAW,IAAI8C,GAAkBnE,EAAUoE,EAAY,IAAI,EAC/D9C,GAAU,IAAI,KAAMD,CAAQ,CAChC,CACA,OAAOoD,CACX,EAAE,EAEF,CACI,UACA,YACA,YACJ,EAAE,QAAQ,SAAUC,EAAQ,CACxBD,GAAe,UAAUC,GAAU,UAAY,CAC3C,IAAIvE,EACJ,OAAQA,EAAKmB,GAAU,IAAI,IAAI,GAAGoD,GAAQ,MAAMvE,EAAI,SAAS,CACjE,CACJ,CAAC,EAED,IAAIP,GAAS,UAAY,CAErB,OAAI,OAAOS,GAAS,gBAAmB,YAC5BA,GAAS,eAEboE,EACX,EAAG,EAEIE,GAAQ/E,GCr2Bf,IAAMgF,GAAS,IAAIC,EAYbC,GAAYC,EAAM,IAAMC,EAC5B,IAAIC,GAAeC,GAAW,CAC5B,QAAWC,KAASD,EAClBN,GAAO,KAAKO,CAAK,CACrB,CAAC,CACH,CAAC,EACE,KACCC,EAAUC,GAAYC,EAAMC,GAAOP,EAAGK,CAAQ,CAAC,EAC5C,KACCG,EAAS,IAAMH,EAAS,WAAW,CAAC,CACtC,CACF,EACAI,EAAY,CAAC,CACf,EAaK,SAASC,GACdC,EACa,CACb,MAAO,CACL,MAAQA,EAAG,YACX,OAAQA,EAAG,YACb,CACF,CAuBO,SAASC,GACdD,EACyB,CACzB,OAAOb,GACJ,KACCe,EAAIR,GAAYA,EAAS,QAAQM,CAAE,CAAC,EACpCP,EAAUC,GAAYT,GACnB,KACCkB,EAAO,CAAC,CAAE,OAAAC,CAAO,IAAMA,IAAWJ,CAAE,EACpCH,EAAS,IAAMH,EAAS,UAAUM,CAAE,CAAC,EACrCK,EAAI,IAAMN,GAAeC,CAAE,CAAC,CAC9B,CACF,EACAM,EAAUP,GAAeC,CAAE,CAAC,CAC9B,CACJ,CC1GO,SAASO,GACdC,EACa,CACb,MAAO,CACL,MAAQA,EAAG,YACX,OAAQA,EAAG,YACb,CACF,CASO,SAASC,GACdD,EACyB,CACzB,IAAIE,EAASF,EAAG,cAChB,KAAOE,IAEHF,EAAG,aAAeE,EAAO,aACzBF,EAAG,cAAgBE,EAAO,eAE1BA,GAAUF,EAAKE,GAAQ,cAK3B,OAAOA,EAASF,EAAK,MACvB,CCfA,IAAMG,GAAS,IAAIC,EAUbC,GAAYC,EAAM,IAAMC,EAC5B,IAAI,qBAAqBC,GAAW,CAClC,QAAWC,KAASD,EAClBL,GAAO,KAAKM,CAAK,CACrB,EAAG,CACD,UAAW,CACb,CAAC,CACH,CAAC,EACE,KACCC,EAAUC,GAAYC,EAAMC,GAAON,EAAGI,CAAQ,CAAC,EAC5C,KACCG,EAAS,IAAMH,EAAS,WAAW,CAAC,CACtC,CACF,EACAI,EAAY,CAAC,CACf,EAaK,SAASC,GACdC,EACqB,CACrB,OAAOZ,GACJ,KACCa,EAAIP,GAAYA,EAAS,QAAQM,CAAE,CAAC,EACpCP,EAAUC,GAAYR,GACnB,KACCgB,EAAO,CAAC,CAAE,OAAAC,CAAO,IAAMA,IAAWH,CAAE,EACpCH,EAAS,IAAMH,EAAS,UAAUM,CAAE,CAAC,EACrCI,EAAI,CAAC,CAAE,eAAAC,CAAe,IAAMA,CAAc,CAC5C,CACF,CACF,CACJ,CAaO,SAASC,GACdN,EAAiBO,EAAY,GACR,CACrB,OAAOC,GAA0BR,CAAE,EAChC,KACCI,EAAI,CAAC,CAAE,EAAAK,CAAE,IAAM,CACb,IAAMC,EAAUC,GAAeX,CAAE,EAC3BY,EAAUC,GAAsBb,CAAE,EACxC,OAAOS,GACLG,EAAQ,OAASF,EAAQ,OAASH,CAEtC,CAAC,EACDO,EAAqB,CACvB,CACJ,CCjFA,IAAMC,GAA4C,CAChD,OAAQC,EAAW,yBAAyB,EAC5C,OAAQA,EAAW,yBAAyB,CAC9C,EAaO,SAASC,GAAUC,EAAuB,CAC/C,OAAOH,GAAQG,GAAM,OACvB,CAaO,SAASC,GAAUD,EAAcE,EAAsB,CACxDL,GAAQG,GAAM,UAAYE,GAC5BL,GAAQG,GAAM,MAAM,CACxB,CAWO,SAASG,GAAYH,EAAmC,CAC7D,IAAMI,EAAKP,GAAQG,GACnB,OAAOK,EAAUD,EAAI,QAAQ,EAC1B,KACCE,EAAI,IAAMF,EAAG,OAAO,EACpBG,EAAUH,EAAG,OAAO,CACtB,CACJ,CClCA,SAASI,GACPC,EAAiBC,EACR,CACT,OAAQD,EAAG,YAAa,CAGtB,KAAK,iBAEH,OAAIA,EAAG,OAAS,QACP,SAAS,KAAKC,CAAI,EAElB,GAGX,KAAK,kBACL,KAAK,oBACH,MAAO,GAGT,QACE,OAAOD,EAAG,iBACd,CACF,CAWO,SAASE,IAAsC,CACpD,OAAOC,EAAyB,OAAQ,SAAS,EAC9C,KACCC,EAAOC,GAAM,EAAEA,EAAG,SAAWA,EAAG,QAAQ,EACxCC,EAAID,
IAAO,CACT,KAAME,GAAU,QAAQ,EAAI,SAAW,SACvC,KAAMF,EAAG,IACT,OAAQ,CACNA,EAAG,eAAe,EAClBA,EAAG,gBAAgB,CACrB,CACF,EAAc,EACdD,EAAO,CAAC,CAAE,KAAAI,EAAM,KAAAP,CAAK,IAAM,CACzB,GAAIO,IAAS,SAAU,CACrB,IAAMC,EAASC,GAAiB,EAChC,GAAI,OAAOD,GAAW,YACpB,MAAO,CAACV,GAAwBU,EAAQR,CAAI,CAChD,CACA,MAAO,EACT,CAAC,EACDU,GAAM,CACR,CACJ,CCpFO,SAASC,IAAmB,CACjC,OAAO,IAAI,IAAI,SAAS,IAAI,CAC9B,CAOO,SAASC,GAAYC,EAAgB,CAC1C,SAAS,KAAOA,EAAI,IACtB,CASO,SAASC,IAA8B,CAC5C,OAAO,IAAIC,CACb,CCLA,SAASC,GAAYC,EAAiBC,EAA8B,CAGlE,GAAI,OAAOA,GAAU,UAAY,OAAOA,GAAU,SAChDD,EAAG,WAAaC,EAAM,SAAS,UAGtBA,aAAiB,KAC1BD,EAAG,YAAYC,CAAK,UAGX,MAAM,QAAQA,CAAK,EAC5B,QAAWC,KAAQD,EACjBF,GAAYC,EAAIE,CAAI,CAE1B,CAyBO,SAASC,EACdC,EAAaC,KAAmCC,EAC7C,CACH,IAAMN,EAAK,SAAS,cAAcI,CAAG,EAGrC,GAAIC,EACF,QAAWE,KAAQ,OAAO,KAAKF,CAAU,EACnC,OAAOA,EAAWE,IAAU,cAI5B,OAAOF,EAAWE,IAAU,UAC9BP,EAAG,aAAaO,EAAMF,EAAWE,EAAK,EAEtCP,EAAG,aAAaO,EAAM,EAAE,GAI9B,QAAWN,KAASK,EAClBP,GAAYC,EAAIC,CAAK,EAGvB,OAAOD,CACT,CChFO,SAASQ,GAASC,EAAeC,EAAmB,CACzD,IAAIC,EAAID,EACR,GAAID,EAAM,OAASE,EAAG,CACpB,KAAOF,EAAME,KAAO,KAAO,EAAEA,EAAI,GAAG,CACpC,MAAO,GAAGF,EAAM,UAAU,EAAGE,CAAC,MAChC,CACA,OAAOF,CACT,CAkBO,SAASG,GAAMH,EAAuB,CAC3C,GAAIA,EAAQ,IAAK,CACf,IAAMI,EAAS,GAAGJ,EAAQ,KAAO,IAAO,IACxC,MAAO,KAAKA,EAAQ,MAAY,KAAM,QAAQI,CAAM,IACtD,KACE,QAAOJ,EAAM,SAAS,CAE1B,CC5BO,SAASK,IAA0B,CACxC,OAAO,SAAS,KAAK,UAAU,CAAC,CAClC,CAYO,SAASC,GAAgBC,EAAoB,CAClD,IAAMC,EAAKC,EAAE,IAAK,CAAE,KAAMF,CAAK,CAAC,EAChCC,EAAG,iBAAiB,QAASE,GAAMA,EAAG,gBAAgB,CAAC,EACvDF,EAAG,MAAM,CACX,CASO,SAASG,IAAwC,CACtD,OAAOC,EAA2B,OAAQ,YAAY,EACnD,KACCC,EAAIR,EAAe,EACnBS,EAAUT,GAAgB,CAAC,EAC3BU,EAAOR,GAAQA,EAAK,OAAS,CAAC,EAC9BS,EAAY,CAAC,CACf,CACJ,CAOO,SAASC,IAA+C,CAC7D,OAAON,GAAkB,EACtB,KACCE,EAAIK,GAAMC,GAAmB,QAAQD,KAAM,CAAE,EAC7CH,EAAOP,GAAM,OAAOA,GAAO,WAAW,CACxC,CACJ,CC1CO,SAASY,GAAWC,EAAoC,CAC7D,IAAMC,EAAQ,WAAWD,CAAK,EAC9B,OAAOE,GAA0BC,GAC/BF,EAAM,YAAY,IAAME,EAAKF,EAAM,OAAO,CAAC,CAC5C,EACE,KACCG,EAAUH,EAAM,OAAO,CACzB,CACJ,CAOO,SAASI,IAAkC,CAChD,IAAMJ,EAAQ,WAAW,OAAO,EAChC,OAAOK,EACLC,EAAU,OAAQ,aAAa,EAAE,KAAKC,EAAI,IAAM,EAAI,CAAC,EACrDD,EAAU,OAAQ,YAAY,EAAE,KAAKC,EAAI,IAAM,EAAK,CAAC,CACvD,EACG,KACCJ,EAAUH,EAAM,OAAO,CACzB,CACJ,CAcO,SAASQ,GACdC,EAA6BC,EACd,CACf,OAAOD,EACJ,KACCE,EAAUC,GAAUA,EAASF,EAAQ,EAAIG,CAAK,CAChD,CACJ,CC7CO,SAASC,GACdC,EAAmBC,EAAuB,CAAE,YAAa,aAAc,EACjD,CACtB,OAAOC,GAAK,MAAM,GAAGF,IAAOC,CAAO,CAAC,EACjC,KACCE,GAAW,IAAMC,CAAK,EACtBC,EAAUC,GAAOA,EAAI,SAAW,IAC5BC,GAAW,IAAM,IAAI,MAAMD,EAAI,UAAU,CAAC,EAC1CE,EAAGF,CAAG,CACV,CACF,CACJ,CAYO,SAASG,GACdT,EAAmBC,EACJ,CACf,OAAOF,GAAQC,EAAKC,CAAO,EACxB,KACCI,EAAUC,GAAOA,EAAI,KAAK,CAAC,EAC3BI,EAAY,CAAC,CACf,CACJ,CAUO,SAASC,GACdX,EAAmBC,EACG,CACtB,IAAMW,EAAM,IAAI,UAChB,OAAOb,GAAQC,EAAKC,CAAO,EACxB,KACCI,EAAUC,GAAOA,EAAI,KAAK,CAAC,EAC3BO,EAAIP,GAAOM,EAAI,gBAAgBN,EAAK,UAAU,CAAC,EAC/CI,EAAY,CAAC,CACf,CACJ,CClDO,SAASI,GAAYC,EAA+B,CACzD,IAAMC,EAASC,EAAE,SAAU,CAAE,IAAAF,CAAI,CAAC,EAClC,OAAOG,EAAM,KACX,SAAS,KAAK,YAAYF,CAAM,EACzBG,EACLC,EAAUJ,EAAQ,MAAM,EACxBI,EAAUJ,EAAQ,OAAO,EACtB,KACCK,EAAU,IACRC,GAAW,IAAM,IAAI,eAAe,mBAAmBP,GAAK,CAAC,CAC9D,CACH,CACJ,EACG,KACCQ,EAAI,IAAG,EAAY,EACnBC,EAAS,IAAM,SAAS,KAAK,YAAYR,CAAM,CAAC,EAChDS,GAAK,CAAC,CACR,EACH,CACH,CCfO,SAASC,IAAoC,CAClD,MAAO,CACL,EAAG,KAAK,IAAI,EAAG,OAAO,EACtB,EAAG,KAAK,IAAI,EAAG,OAAO,CACxB,CACF,CASO,SAASC,IAAkD,CAChE,OAAOC,EACLC,EAAU,OAAQ,SAAU,CAAE,QAAS,EAAK,CAAC,EAC7CA,EAAU,OAAQ,SAAU,CAAE,QAAS,EAAK,CAAC,CAC/C,EACG,KACCC,EAAIJ,EAAiB,EACrBK,EAAUL,GAAkB,CAAC,CAC/B,CACJ,CC3BO,SAASM,IAAgC,CAC9C,MAAO,CACL,MAAQ,WACR,OAAQ,WACV,CACF,CASO,SAASC,IAA8C,CAC5D,OAAOC,EAAU,OAAQ,SAAU,CAAE,QAAS,EAAK,CAAC,EACjD,KACCC,EAAIH,EAAe,EACnBI,EAAUJ,GAAgB,CAAC,CAC7
B,CACJ,CCXO,SAASK,IAAsC,CACpD,OAAOC,EAAc,CACnBC,GAAoB,EACpBC,GAAkB,CACpB,CAAC,EACE,KACCC,EAAI,CAAC,CAACC,EAAQC,CAAI,KAAO,CAAE,OAAAD,EAAQ,KAAAC,CAAK,EAAE,EAC1CC,EAAY,CAAC,CACf,CACJ,CCVO,SAASC,GACdC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EAChB,CACtB,IAAMC,EAAQF,EACX,KACCG,EAAwB,MAAM,CAChC,EAGIC,EAAUC,EAAc,CAACH,EAAOD,CAAO,CAAC,EAC3C,KACCK,EAAI,IAAMC,GAAiBR,CAAE,CAAC,CAChC,EAGF,OAAOM,EAAc,CAACJ,EAASD,EAAWI,CAAO,CAAC,EAC/C,KACCE,EAAI,CAAC,CAAC,CAAE,OAAAE,CAAO,EAAG,CAAE,OAAAC,EAAQ,KAAAC,CAAK,EAAG,CAAE,EAAAC,EAAG,EAAAC,CAAE,CAAC,KAAO,CACjD,OAAQ,CACN,EAAGH,EAAO,EAAIE,EACd,EAAGF,EAAO,EAAIG,EAAIJ,CACpB,EACA,KAAAE,CACF,EAAE,CACJ,CACJ,CCIO,SAASG,GACdC,EAAgB,CAAE,IAAAC,CAAI,EACP,CAGf,IAAMC,EAAMC,EAAwBH,EAAQ,SAAS,EAClD,KACCI,EAAI,CAAC,CAAE,KAAAC,CAAK,IAAMA,CAAS,CAC7B,EAGF,OAAOJ,EACJ,KACCK,GAAS,IAAMJ,EAAK,CAAE,QAAS,GAAM,SAAU,EAAK,CAAC,EACrDK,EAAIC,GAAWR,EAAO,YAAYQ,CAAO,CAAC,EAC1CC,EAAU,IAAMP,CAAG,EACnBQ,GAAM,CACR,CACJ,CCCA,IAAMC,GAASC,EAAW,WAAW,EAC/BC,GAAiB,KAAK,MAAMF,GAAO,WAAY,EACrDE,GAAO,KAAO,GAAG,IAAI,IAAIA,GAAO,KAAMC,GAAY,CAAC,IAW5C,SAASC,IAAwB,CACtC,OAAOF,EACT,CASO,SAASG,EAAQC,EAAqB,CAC3C,OAAOJ,GAAO,SAAS,SAASI,CAAI,CACtC,CAUO,SAASC,GACdC,EAAkBC,EACV,CACR,OAAO,OAAOA,GAAU,YACpBP,GAAO,aAAaM,GAAK,QAAQ,IAAKC,EAAM,SAAS,CAAC,EACtDP,GAAO,aAAaM,EAC1B,CCjCO,SAASE,GACdC,EAASC,EAAmB,SACP,CACrB,OAAOC,EAAW,sBAAsBF,KAASC,CAAI,CACvD,CAYO,SAASE,GACdH,EAASC,EAAmB,SACL,CACvB,OAAOG,EAAY,sBAAsBJ,KAASC,CAAI,CACxD,CC1EO,SAASI,GACdC,EACsB,CACtB,IAAMC,EAASC,EAAW,6BAA8BF,CAAE,EAC1D,OAAOG,EAAUF,EAAQ,QAAS,CAAE,KAAM,EAAK,CAAC,EAC7C,KACCG,EAAI,IAAMF,EAAW,cAAeF,CAAE,CAAC,EACvCI,EAAIC,IAAY,CAAE,KAAM,UAAUA,EAAQ,SAAS,CAAE,EAAE,CACzD,CACJ,CASO,SAASC,GACdN,EACiC,CACjC,MAAI,CAACO,EAAQ,kBAAkB,GAAK,CAACP,EAAG,kBAC/BQ,EAGFC,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAClB,OAAAD,EACG,KACCE,EAAU,CAAE,KAAM,SAAiB,YAAY,CAAE,CAAC,CACpD,EACG,UAAU,CAAC,CAAE,KAAAC,CAAK,IAAM,CA5FjC,IAAAC,EA6FcD,GAAQA,MAAUC,EAAA,SAAiB,YAAY,IAA7B,KAAAA,EAAkCD,KACtDb,EAAG,OAAS,GAGZ,SAAiB,aAAca,CAAI,EAEvC,CAAC,EAGEd,GAAcC,CAAE,EACpB,KACCe,EAAIC,GAASN,EAAM,KAAKM,CAAK,CAAC,EAC9BC,EAAS,IAAMP,EAAM,SAAS,CAAC,EAC/BN,EAAIY,GAAUE,EAAA,CAAE,IAAKlB,GAAOgB,EAAQ,CACtC,CACJ,CAAC,CACH,CC5BO,SAASG,GACdC,EAAiB,CAAE,QAAAC,CAAQ,EACN,CACrB,OAAOA,EACJ,KACCC,EAAIC,IAAW,CAAE,OAAQA,IAAWH,CAAG,EAAE,CAC3C,CACJ,CAYO,SAASI,GACdJ,EAAiBK,EACe,CAChC,IAAMC,EAAY,IAAIC,EACtB,OAAAD,EAAU,UAAU,CAAC,CAAE,OAAAE,CAAO,IAAM,CAClCR,EAAG,OAASQ,CACd,CAAC,EAGMT,GAAaC,EAAIK,CAAO,EAC5B,KACCI,EAAIC,GAASJ,EAAU,KAAKI,CAAK,CAAC,EAClCC,EAAS,IAAML,EAAU,SAAS,CAAC,EACnCJ,EAAIQ,GAAUE,EAAA,CAAE,IAAKZ,GAAOU,EAAQ,CACtC,CACJ,CC7FA,IAAAG,GAAwB,SCajB,SAASC,GAAcC,EAA0B,CACtD,OACEC,EAAC,OAAI,MAAM,aAAa,GAAID,GAC1BC,EAAC,OAAI,MAAM,+BAA+B,CAC5C,CAEJ,CCHO,SAASC,GACdC,EAAqBC,EACR,CAIb,GAHAA,EAASA,EAAS,GAAGA,gBAAqBD,IAAO,OAG7CC,EAAQ,CACV,IAAMC,EAASD,EAAS,IAAIA,IAAW,OACvC,OACEE,EAAC,SAAM,MAAM,gBAAgB,SAAU,GACpCC,GAAcH,CAAM,EACrBE,EAAC,KAAE,KAAMD,EAAQ,MAAM,uBAAuB,SAAU,IACtDC,EAAC,QAAK,wBAAuBH,EAAI,CACnC,CACF,CAEJ,KACE,QACEG,EAAC,SAAM,MAAM,gBAAgB,SAAU,GACpCC,GAAcH,CAAM,EACrBE,EAAC,QAAK,MAAM,uBAAuB,SAAU,IAC3CA,EAAC,QAAK,wBAAuBH,EAAI,CACnC,CACF,CAGN,CC5BO,SAASK,GAAsBC,EAAyB,CAC7D,OACEC,EAAC,UACC,MAAM,uBACN,MAAOC,GAAY,gBAAgB,EACnC,wBAAuB,IAAIF,WAC5B,CAEL,CCYA,SAASG,GACPC,EAA2CC,EAC9B,CACb,IAAMC,EAASD,EAAO,EAChBE,EAASF,EAAO,EAGhBG,EAAU,OAAO,KAAKJ,EAAS,KAAK,EACvC,OAAOK,GAAO,CAACL,EAAS,MAAMK,EAAI,EAClC,OAAyB,CAACC,EAAMD,IAAQ,CACvC,GAAGC,EAAMC,EAAC,WAAKF,CAAI,EAAQ,GAC7B,EAAG,CAAC,CAAC,EACJ,MAAM,EAAG,EAAE,EAGRG,EAAM,IAAI,IAAIR,EAAS,QAAQ,EACjCS,EAAQ,kBAAkB,GAC5BD,EAAI,aAAa,IAAI,IAAK,OAAO,QAAQR,EAAS,KAAK,EACpD,OAAO,CAAC,CAAC,CAAEU,CAAK,IAAMA,CAAK,EAC3B,OAAO,CAACC,EAA
W,CAACC,CAAK,IAAM,GAAGD,KAAaC,IAAQ,KAAK,EAAG,EAAE,CACpE,EAGF,GAAM,CAAE,KAAAC,CAAK,EAAIC,GAAc,EAC/B,OACEP,EAAC,KAAE,KAAM,GAAGC,IAAO,MAAM,yBAAyB,SAAU,IAC1DD,EAAC,WACC,MAAO,CAAC,4BAA6B,GAAGL,EACpC,CAAC,qCAAqC,EACtC,CAAC,CACL,EAAE,KAAK,GAAG,EACV,gBAAeF,EAAS,MAAM,QAAQ,CAAC,GAEtCE,EAAS,GAAKK,EAAC,OAAI,MAAM,iCAAiC,EAC3DA,EAAC,MAAG,MAAM,2BAA2BP,EAAS,KAAM,EACnDG,EAAS,GAAKH,EAAS,KAAK,OAAS,GACpCO,EAAC,KAAE,MAAM,4BACNQ,GAASf,EAAS,KAAM,GAAG,CAC9B,EAEDA,EAAS,MACRO,EAAC,OAAI,MAAM,cACRP,EAAS,KAAK,IAAIgB,GAAO,CACxB,IAAMC,EAAKD,EAAI,QAAQ,WAAY,EAAE,EAC/BE,EAAOL,EACTI,KAAMJ,EACJ,4BAA4BA,EAAKI,KACjC,cACF,GACJ,OACEV,EAAC,QAAK,MAAO,UAAUW,KAASF,CAAI,CAExC,CAAC,CACH,EAEDb,EAAS,GAAKC,EAAQ,OAAS,GAC9BG,EAAC,KAAE,MAAM,2BACNY,GAAY,4BAA4B,EAAE,KAAG,GAAGf,CACnD,CAEJ,CACF,CAEJ,CAaO,SAASgB,GACdC,EACa,CACb,IAAMC,EAAYD,EAAO,GAAG,MACtBE,EAAO,CAAC,GAAGF,CAAM,EAGjBnB,EAASqB,EAAK,UAAUC,GAAO,CAACA,EAAI,SAAS,SAAS,GAAG,CAAC,EAC1D,CAACC,CAAO,EAAIF,EAAK,OAAOrB,EAAQ,CAAC,EAGnCwB,EAAQH,EAAK,UAAUC,GAAOA,EAAI,MAAQF,CAAS,EACnDI,IAAU,KACZA,EAAQH,EAAK,QAGf,IAAMI,EAAOJ,EAAK,MAAM,EAAGG,CAAK,EAC1BE,EAAOL,EAAK,MAAMG,CAAK,EAGvBG,EAAW,CACf9B,GAAqB0B,EAAS,EAAc,EAAE,CAACvB,GAAUwB,IAAU,EAAE,EACrE,GAAGC,EAAK,IAAIG,GAAW/B,GAAqB+B,EAAS,CAAW,CAAC,EACjE,GAAGF,EAAK,OAAS,CACfrB,EAAC,WAAQ,MAAM,0BACbA,EAAC,WAAQ,SAAU,IAChBqB,EAAK,OAAS,GAAKA,EAAK,SAAW,EAChCT,GAAY,wBAAwB,EACpCA,GAAY,2BAA4BS,EAAK,MAAM,CAEzD,EACC,GAAGA,EAAK,IAAIE,GAAW/B,GAAqB+B,EAAS,CAAW,CAAC,CACpE,CACF,EAAI,CAAC,CACP,EAGA,OACEvB,EAAC,MAAG,MAAM,0BACPsB,CACH,CAEJ,CC1IO,SAASE,GAAkBC,EAAiC,CACjE,OACEC,EAAC,MAAG,MAAM,oBACP,OAAO,QAAQD,CAAK,EAAE,IAAI,CAAC,CAACE,EAAKC,CAAK,IACrCF,EAAC,MAAG,MAAO,oCAAoCC,KAC5C,OAAOC,GAAU,SAAWC,GAAMD,CAAK,EAAIA,CAC9C,CACD,CACH,CAEJ,CCAO,SAASE,GACdC,EACa,CACb,IAAMC,EAAU,kCAAkCD,IAClD,OACEE,EAAC,OAAI,MAAOD,EAAS,OAAM,IACzBC,EAAC,UAAO,MAAM,gBAAgB,SAAU,GAAI,CAC9C,CAEJ,CCpBO,SAASC,GAAYC,EAAiC,CAC3D,OACEC,EAAC,OAAI,MAAM,0BACTA,EAAC,OAAI,MAAM,qBACRD,CACH,CACF,CAEJ,CCMA,SAASE,GAAcC,EAA+B,CACpD,IAAMC,EAASC,GAAc,EAGvBC,EAAM,IAAI,IAAI,MAAMH,EAAQ,WAAYC,EAAO,IAAI,EACzD,OACEG,EAAC,MAAG,MAAM,oBACRA,EAAC,KAAE,KAAM,GAAGD,IAAO,MAAM,oBACtBH,EAAQ,KACX,CACF,CAEJ,CAcO,SAASK,GACdC,EAAqBC,EACR,CACb,OACEH,EAAC,OAAI,MAAM,cACTA,EAAC,UACC,MAAM,sBACN,aAAYI,GAAY,sBAAsB,GAE7CD,EAAO,KACV,EACAH,EAAC,MAAG,MAAM,oBACPE,EAAS,IAAIP,EAAa,CAC7B,CACF,CAEJ,CCCO,SAASU,GACdC,EAAiBC,EACO,CACxB,IAAMC,EAAUC,EAAM,IAAMC,EAAc,CACxCC,GAAmBL,CAAE,EACrBM,GAA0BL,CAAS,CACrC,CAAC,CAAC,EACC,KACCM,EAAI,CAAC,CAAC,CAAE,EAAAC,EAAG,EAAAC,CAAE,EAAGC,CAAM,IAAqB,CACzC,GAAM,CAAE,MAAAC,EAAO,OAAAC,CAAO,EAAIC,GAAeb,CAAE,EAC3C,MAAQ,CACN,EAAGQ,EAAIE,EAAO,EAAIC,EAAQ,EAC1B,EAAGF,EAAIC,EAAO,EAAIE,EAAS,CAC7B,CACF,CAAC,CACH,EAGF,OAAOE,GAAkBd,CAAE,EACxB,KACCe,EAAUC,GAAUd,EACjB,KACCK,EAAIU,IAAW,CAAE,OAAAD,EAAQ,OAAAC,CAAO,EAAE,EAClCC,GAAK,CAAC,CAACF,GAAU,GAAQ,CAC3B,CACF,CACF,CACJ,CAWO,SAASG,GACdnB,EAAiBC,EAAwB,CAAE,QAAAmB,CAAQ,EAChB,CACnC,GAAM,CAACC,EAASC,CAAK,EAAI,MAAM,KAAKtB,EAAG,QAAQ,EAG/C,OAAOG,EAAM,IAAM,CACjB,IAAMoB,EAAQ,IAAIC,EACZC,EAAQF,EAAM,KAAKG,GAAS,CAAC,CAAC,EACpC,OAAAH,EAAM,UAAU,CAGd,KAAK,CAAE,OAAAN,CAAO,EAAG,CACfjB,EAAG,MAAM,YAAY,iBAAkB,GAAGiB,EAAO,KAAK,EACtDjB,EAAG,MAAM,YAAY,iBAAkB,GAAGiB,EAAO,KAAK,CACxD,EAGA,UAAW,CACTjB,EAAG,MAAM,eAAe,gBAAgB,EACxCA,EAAG,MAAM,eAAe,gBAAgB,CAC1C,CACF,CAAC,EAGD2B,GAAuB3B,CAAE,EACtB,KACC4B,GAAUH,CAAK,CACjB,EACG,UAAUI,GAAW,CACpB7B,EAAG,gBAAgB,kBAAmB6B,CAAO,CAC/C,CAAC,EAGLC,EACEP,EAAM,KAAKQ,EAAO,CAAC,CAAE,OAAAf,CAAO,IAAMA,CAAM,CAAC,EACzCO,EAAM,KAAKS,GAAa,GAAG,EAAGD,EAAO,CAAC,CAAE,OAAAf,CAAO,IAAM,CAACA,CAAM,CAAC,CAC/D,EACG,UAAU,CAGT,KAAK,CAAE,OAAAA,CAAO,EAAG,CACXA,EACFhB,EAAG,QAAQqB,CAAO,EAElBA,EAAQ,OAAO,CACnB,E
AGA,UAAW,CACTrB,EAAG,QAAQqB,CAAO,CACpB,CACF,CAAC,EAGHE,EACG,KACCU,GAAU,GAAIC,EAAuB,CACvC,EACG,UAAU,CAAC,CAAE,OAAAlB,CAAO,IAAM,CACzBK,EAAQ,UAAU,OAAO,qBAAsBL,CAAM,CACvD,CAAC,EAGLO,EACG,KACCY,GAAa,IAAKD,EAAuB,EACzCH,EAAO,IAAM,CAAC,CAAC/B,EAAG,YAAY,EAC9BO,EAAI,IAAMP,EAAG,aAAc,sBAAsB,CAAC,EAClDO,EAAI,CAAC,CAAE,EAAAC,CAAE,IAAMA,CAAC,CAClB,EACG,UAAU,CAGT,KAAK4B,EAAQ,CACPA,EACFpC,EAAG,MAAM,YAAY,iBAAkB,GAAG,CAACoC,KAAU,EAErDpC,EAAG,MAAM,eAAe,gBAAgB,CAC5C,EAGA,UAAW,CACTA,EAAG,MAAM,eAAe,gBAAgB,CAC1C,CACF,CAAC,EAGLqC,EAAsBf,EAAO,OAAO,EACjC,KACCM,GAAUH,CAAK,EACfM,EAAOO,GAAM,EAAEA,EAAG,SAAWA,EAAG,QAAQ,CAC1C,EACG,UAAUA,GAAMA,EAAG,eAAe,CAAC,EAGxCD,EAAsBf,EAAO,WAAW,EACrC,KACCM,GAAUH,CAAK,EACfc,GAAehB,CAAK,CACtB,EACG,UAAU,CAAC,CAACe,EAAI,CAAE,OAAAtB,CAAO,CAAC,IAAM,CAvOzC,IAAAwB,EA0OU,GAAIF,EAAG,SAAW,GAAKA,EAAG,SAAWA,EAAG,QACtCA,EAAG,eAAe,UAGTtB,EAAQ,CACjBsB,EAAG,eAAe,EAGlB,IAAMG,EAASzC,EAAG,cAAe,QAAQ,gBAAgB,EACrDyC,aAAkB,YACpBA,EAAO,MAAM,GAEbD,EAAAE,GAAiB,IAAjB,MAAAF,EAAoB,MACxB,CACF,CAAC,EAGLpB,EACG,KACCQ,GAAUH,CAAK,EACfM,EAAOY,GAAUA,IAAWtB,CAAO,EACnCuB,GAAM,GAAG,CACX,EACG,UAAU,IAAM5C,EAAG,MAAM,CAAC,EAGxBD,GAAgBC,EAAIC,CAAS,EACjC,KACC4C,EAAIC,GAASvB,EAAM,KAAKuB,CAAK,CAAC,EAC9BC,EAAS,IAAMxB,EAAM,SAAS,CAAC,EAC/BhB,EAAIuC,GAAUE,EAAA,CAAE,IAAKhD,GAAO8C,EAAQ,CACtC,CACJ,CAAC,CACH,CCrMA,SAASG,GAAsBC,EAAgC,CAC7D,IAAMC,EAAkB,CAAC,EACzB,QAAWC,KAAMC,EAAY,eAAgBH,CAAS,EAAG,CACvD,IAAMI,EAAgB,CAAC,EAGjBC,EAAK,SAAS,mBAAmBH,EAAI,WAAW,SAAS,EAC/D,QAASI,EAAOD,EAAG,SAAS,EAAGC,EAAMA,EAAOD,EAAG,SAAS,EACtDD,EAAM,KAAKE,CAAY,EAGzB,QAASC,KAAQH,EAAO,CACtB,IAAII,EAGJ,KAAQA,EAAQ,gBAAgB,KAAKD,EAAK,WAAY,GAAI,CACxD,GAAM,CAAC,CAAEE,EAAIC,CAAK,EAAIF,EACtB,GAAI,OAAOE,GAAU,YAAa,CAChC,IAAMC,EAASJ,EAAK,UAAUC,EAAM,KAAK,EACzCD,EAAOI,EAAO,UAAUF,EAAG,MAAM,EACjCR,EAAQ,KAAKU,CAAM,CAGrB,KAAO,CACLJ,EAAK,YAAcE,EACnBR,EAAQ,KAAKM,CAAI,EACjB,KACF,CACF,CACF,CACF,CACA,OAAON,CACT,CAQA,SAASW,GAAKC,EAAqBC,EAA2B,CAC5DA,EAAO,OAAO,GAAG,MAAM,KAAKD,EAAO,UAAU,CAAC,CAChD,CAoBO,SAASE,GACdb,EAAiBF,EAAwB,CAAE,QAAAgB,EAAS,OAAAC,CAAO,EACxB,CAGnC,IAAMC,EAASlB,EAAU,QAAQ,MAAM,EACjCmB,EAASD,GAAA,YAAAA,EAAQ,GAGjBE,EAAc,IAAI,IACxB,QAAWT,KAAUZ,GAAsBC,CAAS,EAAG,CACrD,GAAM,CAAC,CAAES,CAAE,EAAIE,EAAO,YAAa,MAAM,WAAW,EAChDU,GAAmB,gBAAgBZ,KAAOP,CAAE,IAC9CkB,EAAY,IAAIX,EAAIa,GAAiBb,EAAIU,CAAM,CAAC,EAChDR,EAAO,YAAYS,EAAY,IAAIX,CAAE,CAAE,EAE3C,CAGA,OAAIW,EAAY,OAAS,EAChBG,EAGFC,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAGZC,EAAsC,CAAC,EAC7C,OAAW,CAAClB,EAAImB,CAAU,IAAKR,EAC7BO,EAAM,KAAK,CACTE,EAAW,cAAeD,CAAU,EACpCC,EAAW,gBAAgBpB,KAAOP,CAAE,CACtC,CAAC,EAGH,OAAAe,EACG,KACCa,GAAUL,EAAM,KAAKM,GAAS,CAAC,CAAC,CAAC,CACnC,EACG,UAAUC,GAAU,CACnB9B,EAAG,OAAS,CAAC8B,EAGb,OAAW,CAACC,EAAOC,CAAK,IAAKP,EACtBK,EAGHpB,GAAKqB,EAAOC,CAAK,EAFjBtB,GAAKsB,EAAOD,CAAK,CAGvB,CAAC,EAGEE,EAAM,GAAG,CAAC,GAAGf,CAAW,EAC5B,IAAI,CAAC,CAAC,CAAEQ,CAAU,IACjBQ,GAAgBR,EAAY5B,EAAW,CAAE,QAAAgB,CAAQ,CAAC,CACnD,CACH,EACG,KACCqB,EAAS,IAAMZ,EAAM,SAAS,CAAC,EAC/Ba,GAAM,CACR,CACJ,CAAC,CACH,CV9GA,IAAIC,GAAW,EAaf,SAASC,GAAkBC,EAA0C,CACnE,GAAIA,EAAG,mBAAoB,CACzB,IAAMC,EAAUD,EAAG,mBACnB,GAAIC,EAAQ,UAAY,KACtB,OAAOA,EAGJ,GAAIA,EAAQ,UAAY,KAAO,CAACA,EAAQ,SAAS,OACpD,OAAOF,GAAkBE,CAAO,CACpC,CAIF,CAgBO,SAASC,GACdF,EACuB,CACvB,OAAOG,GAAiBH,CAAE,EACvB,KACCI,EAAI,CAAC,CAAE,MAAAC,CAAM,KAEJ,CACL,WAFcC,GAAsBN,CAAE,EAElB,MAAQK,CAC9B,EACD,EACDE,EAAwB,YAAY,CACtC,CACJ,CAoBO,SAASC,GACdR,EAAiBS,EAC8B,CAC/C,GAAM,CAAE,QAASC,CAAM,EAAI,WAAW,SAAS,EAGzCC,EAAWC,EAAM,IAAM,CAC3B,IAAMC,EAAQ,IAAIC,EASlB,GARAD,EAAM,UAAU,CAAC,CAAE,WAAAE,CAAW,IAAM,CAC9BA,GAAcL,EAChBV,EAAG,aAAa,WAAY,GAAG,EAE/BA,EAAG,gBAAgB,UAAU,CACjC,CAAC,EAGG,GAAAgB,QAAY,YAAY,EAAG,CAC7B,IAAMC,EAASjB,EAAG,QAAQ,KAAK
,EAC/BiB,EAAO,GAAK,UAAU,EAAEnB,KACxBmB,EAAO,aACLC,GAAsBD,EAAO,EAAE,EAC/BjB,CACF,CACF,CAGA,IAAMmB,EAAYnB,EAAG,QAAQ,YAAY,EACzC,GAAImB,aAAqB,YAAa,CACpC,IAAMC,EAAOrB,GAAkBoB,CAAS,EAGxC,GAAI,OAAOC,GAAS,cAClBD,EAAU,UAAU,SAAS,UAAU,GACvCE,EAAQ,uBAAuB,GAC9B,CACD,IAAMC,EAAeC,GAAoBH,EAAMpB,EAAIS,CAAO,EAG1D,OAAOP,GAAeF,CAAE,EACrB,KACCwB,EAAIC,GAASZ,EAAM,KAAKY,CAAK,CAAC,EAC9BC,EAAS,IAAMb,EAAM,SAAS,CAAC,EAC/BT,EAAIqB,GAAUE,EAAA,CAAE,IAAK3B,GAAOyB,EAAQ,EACpCG,GACEzB,GAAiBgB,CAAS,EACvB,KACCf,EAAI,CAAC,CAAE,MAAAC,EAAO,OAAAwB,CAAO,IAAMxB,GAASwB,CAAM,EAC1CC,EAAqB,EACrBC,EAAUC,GAAUA,EAASV,EAAeW,CAAK,CACnD,CACJ,CACF,CACJ,CACF,CAGA,OAAO/B,GAAeF,CAAE,EACrB,KACCwB,EAAIC,GAASZ,EAAM,KAAKY,CAAK,CAAC,EAC9BC,EAAS,IAAMb,EAAM,SAAS,CAAC,EAC/BT,EAAIqB,GAAUE,EAAA,CAAE,IAAK3B,GAAOyB,EAAQ,CACtC,CACJ,CAAC,EAGD,OAAIJ,EAAQ,cAAc,EACjBa,GAAuBlC,CAAE,EAC7B,KACCmC,EAAOC,GAAWA,CAAO,EACzBC,GAAK,CAAC,EACNN,EAAU,IAAMpB,CAAQ,CAC1B,EAGGA,CACT,iyJWpLA,IAAI2B,GAKAC,GAAW,EAWf,SAASC,IAAiC,CACxC,OAAO,OAAO,SAAY,aAAe,mBAAmB,QACxDC,GAAY,qDAAqD,EACjEC,EAAG,MAAS,CAClB,CAaO,SAASC,GACdC,EACgC,CAChC,OAAAA,EAAG,UAAU,OAAO,SAAS,EAC7BN,QAAaE,GAAa,EACvB,KACCK,EAAI,IAAM,QAAQ,WAAW,CAC3B,YAAa,GACb,SAAAC,GACA,SAAU,CACR,cAAe,OACf,gBAAiB,OACjB,aAAc,MAChB,CACF,CAAC,CAAC,EACFC,EAAI,IAAG,EAAY,EACnBC,EAAY,CAAC,CACf,GAGFV,GAAS,UAAU,IAAM,CACvBM,EAAG,UAAU,IAAI,SAAS,EAC1B,IAAMK,EAAK,aAAaV,OAClBW,EAAOC,EAAE,MAAO,CAAE,MAAO,SAAU,CAAC,EAC1C,QAAQ,WAAW,OAAOF,EAAIL,EAAG,YAAcQ,GAAgB,CAG7D,IAAMC,EAASH,EAAK,aAAa,CAAE,KAAM,QAAS,CAAC,EACnDG,EAAO,UAAYD,EAGnBR,EAAG,YAAYM,CAAI,CACrB,CAAC,CACH,CAAC,EAGMZ,GACJ,KACCS,EAAI,KAAO,CAAE,IAAKH,CAAG,EAAE,CACzB,CACJ,CC/CO,SAASU,GACdC,EAAwB,CAAE,QAAAC,EAAS,OAAAC,CAAO,EACrB,CACrB,IAAIC,EAAO,GACX,OAAOC,EAGLH,EACG,KACCI,EAAIC,GAAUA,EAAO,QAAQ,qBAAqB,CAAE,EACpDC,EAAOC,GAAWR,IAAOQ,CAAO,EAChCH,EAAI,KAAO,CACT,OAAQ,OAAQ,OAAQ,EAC1B,EAAa,CACf,EAGFH,EACG,KACCK,EAAOE,GAAUA,GAAU,CAACN,CAAI,EAChCO,EAAI,IAAMP,EAAOH,EAAG,IAAI,EACxBK,EAAII,IAAW,CACb,OAAQA,EAAS,OAAS,OAC5B,EAAa,CACf,CACJ,CACF,CAaO,SAASE,GACdX,EAAwBY,EACQ,CAChC,OAAOC,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAClB,OAAAD,EAAM,UAAU,CAAC,CAAE,OAAAE,EAAQ,OAAAC,CAAO,IAAM,CACtCjB,EAAG,gBAAgB,OAAQgB,IAAW,MAAM,EACxCC,GACFjB,EAAG,eAAe,CACtB,CAAC,EAGMD,GAAaC,EAAIY,CAAO,EAC5B,KACCF,EAAIQ,GAASJ,EAAM,KAAKI,CAAK,CAAC,EAC9BC,EAAS,IAAML,EAAM,SAAS,CAAC,EAC/BT,EAAIa,GAAUE,EAAA,CAAE,IAAKpB,GAAOkB,EAAQ,CACtC,CACJ,CAAC,CACH,CC5FA,IAAMG,GAAWC,EAAE,OAAO,EAgBnB,SAASC,GACdC,EACkC,CAClC,OAAAA,EAAG,YAAYH,EAAQ,EACvBA,GAAS,YAAYI,GAAYD,CAAE,CAAC,EAG7BE,EAAG,CAAE,IAAKF,CAAG,CAAC,CACvB,CCuBO,SAASG,GACdC,EACyB,CACzB,IAAMC,EAASC,EAA8B,iBAAkBF,CAAE,EAC3DG,EAAUF,EAAO,KAAKG,GAASA,EAAM,OAAO,GAAKH,EAAO,GAC9D,OAAOI,EAAM,GAAGJ,EAAO,IAAIG,GAASE,EAAUF,EAAO,QAAQ,EAC1D,KACCG,EAAI,IAAMC,EAA6B,cAAcJ,EAAM,MAAM,CAAC,CACpE,CACF,CAAC,EACE,KACCK,EAAUD,EAA6B,cAAcL,EAAQ,MAAM,CAAC,EACpEI,EAAIG,IAAW,CAAE,OAAAA,CAAO,EAAE,CAC5B,CACJ,CAeO,SAASC,GACdX,EAAiB,CAAE,UAAAY,CAAU,EACO,CAGpC,IAAMC,EAAOC,GAAoB,MAAM,EACvCd,EAAG,OAAOa,CAAI,EAGd,IAAME,EAAOD,GAAoB,MAAM,EACvCd,EAAG,OAAOe,CAAI,EAGd,IAAMC,EAAYR,EAAW,iBAAkBR,CAAE,EACjD,OAAOiB,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EACZC,EAAQF,EAAM,KAAKG,GAAS,CAAC,CAAC,EACpC,OAAAC,EAAc,CAACJ,EAAOK,GAAiBvB,CAAE,CAAC,CAAC,EACxC,KACCwB,GAAU,EAAGC,EAAuB,EACpCC,GAAUN,CAAK,CACjB,EACG,UAAU,CAGT,KAAK,CAAC,CAAE,OAAAV,CAAO,EAAGiB,CAAI,EAAG,CACvB,IAAMC,EAASC,GAAiBnB,CAAM,EAChC,CAAE,MAAAoB,CAAM,EAAIC,GAAerB,CAAM,EAGvCV,EAAG,MAAM,YAAY,mBAAoB,GAAG4B,EAAO,KAAK,EACxD5B,EAAG,MAAM,YAAY,uBAAwB,GAAG8B,KAAS,EAGzD,IAAME,EAAUC,GAAwBjB,CAAS,GAE/CY,EAAO,EAAYI,EAAQ,GAC3BJ,EAAO,EAAIE,EAAQE,EAAQ,EAAIL,EAAK,QAEpCX,EAAU,SAAS,CACjB,KAAM,KAAK,IAAI,EAAGY,EAAO,EAAI,EAAE,EAC/
B,SAAU,QACZ,CAAC,CACL,EAGA,UAAW,CACT5B,EAAG,MAAM,eAAe,kBAAkB,EAC1CA,EAAG,MAAM,eAAe,sBAAsB,CAChD,CACF,CAAC,EAGLsB,EAAc,CACZY,GAA0BlB,CAAS,EACnCO,GAAiBP,CAAS,CAC5B,CAAC,EACE,KACCU,GAAUN,CAAK,CACjB,EACG,UAAU,CAAC,CAACQ,EAAQD,CAAI,IAAM,CAC7B,IAAMK,EAAUG,GAAsBnB,CAAS,EAC/CH,EAAK,OAASe,EAAO,EAAI,GACzBb,EAAK,OAASa,EAAO,EAAII,EAAQ,MAAQL,EAAK,MAAQ,EACxD,CAAC,EAGLtB,EACEC,EAAUO,EAAM,OAAO,EAAE,KAAKN,EAAI,IAAM,EAAE,CAAC,EAC3CD,EAAUS,EAAM,OAAO,EAAE,KAAKR,EAAI,IAAM,CAAE,CAAC,CAC7C,EACG,KACCmB,GAAUN,CAAK,CACjB,EACG,UAAUgB,GAAa,CACtB,GAAM,CAAE,MAAAN,CAAM,EAAIC,GAAef,CAAS,EAC1CA,EAAU,SAAS,CACjB,KAAMc,EAAQM,EACd,SAAU,QACZ,CAAC,CACH,CAAC,EAGDC,EAAQ,mBAAmB,GAC7BnB,EAAM,KACJoB,GAAK,CAAC,EACNC,GAAe3B,CAAS,CAC1B,EACG,UAAU,CAAC,CAAC,CAAE,OAAAF,CAAO,EAAG,CAAE,OAAAkB,CAAO,CAAC,IAAM,CACvC,IAAMY,EAAM9B,EAAO,UAAU,KAAK,EAClC,GAAIA,EAAO,aAAa,mBAAmB,EACzCA,EAAO,gBAAgB,mBAAmB,MAGrC,CACL,IAAM+B,EAAIzC,EAAG,UAAY4B,EAAO,EAGhC,QAAWc,KAAOxC,EAAY,aAAa,EACzC,QAAWE,KAASF,EAClB,iBAAkBwC,CACpB,EAAG,CACD,IAAMC,EAAQnC,EAAW,cAAcJ,EAAM,MAAM,EACnD,GACEuC,IAAUjC,GACViC,EAAM,UAAU,KAAK,IAAMH,EAC3B,CACAG,EAAM,aAAa,oBAAqB,EAAE,EAC1CvC,EAAM,MAAM,EACZ,KACF,CACF,CAGF,OAAO,SAAS,CACd,IAAKJ,EAAG,UAAYyC,CACtB,CAAC,EAGD,IAAMG,EAAO,SAAmB,QAAQ,GAAK,CAAC,EAC9C,SAAS,SAAU,CAAC,GAAG,IAAI,IAAI,CAACJ,EAAK,GAAGI,CAAI,CAAC,CAAC,CAAC,CACjD,CACF,CAAC,EAGE7C,GAAiBC,CAAE,EACvB,KACC6C,EAAIC,GAAS5B,EAAM,KAAK4B,CAAK,CAAC,EAC9BC,EAAS,IAAM7B,EAAM,SAAS,CAAC,EAC/BX,EAAIuC,GAAUE,EAAA,CAAE,IAAKhD,GAAO8C,EAAQ,CACtC,CACJ,CAAC,EACE,KACCG,GAAYC,EAAc,CAC5B,CACJ,CCtKO,SAASC,GACdC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,EAAS,OAAAC,CAAO,EACd,CAChC,OAAOC,EAGL,GAAGC,EAAY,2BAA4BL,CAAE,EAC1C,IAAIM,GAASC,GAAeD,EAAO,CAAE,QAAAJ,EAAS,OAAAC,CAAO,CAAC,CAAC,EAG1D,GAAGE,EAAY,cAAeL,CAAE,EAC7B,IAAIM,GAASE,GAAaF,CAAK,CAAC,EAGnC,GAAGD,EAAY,qBAAsBL,CAAE,EACpC,IAAIM,GAASG,GAAeH,CAAK,CAAC,EAGrC,GAAGD,EAAY,UAAWL,CAAE,EACzB,IAAIM,GAASI,GAAaJ,EAAO,CAAE,QAAAJ,EAAS,OAAAC,CAAO,CAAC,CAAC,EAGxD,GAAGE,EAAY,cAAeL,CAAE,EAC7B,IAAIM,GAASK,GAAiBL,EAAO,CAAE,UAAAL,CAAU,CAAC,CAAC,CACxD,CACF,CClCO,SAASW,GACdC,EAAkB,CAAE,OAAAC,CAAO,EACP,CACpB,OAAOA,EACJ,KACCC,EAAUC,GAAWC,EACnBC,EAAG,EAAI,EACPA,EAAG,EAAK,EAAE,KAAKC,GAAM,GAAI,CAAC,CAC5B,EACG,KACCC,EAAIC,IAAW,CAAE,QAAAL,EAAS,OAAAK,CAAO,EAAE,CACrC,CACF,CACF,CACJ,CAaO,SAASC,GACdC,EAAiBC,EACc,CAC/B,IAAMC,EAAQC,EAAW,cAAeH,CAAE,EAC1C,OAAOI,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAClB,OAAAD,EAAM,UAAU,CAAC,CAAE,QAAAZ,EAAS,OAAAK,CAAO,IAAM,CACvCE,EAAG,UAAU,OAAO,oBAAqBF,CAAM,EAC/CI,EAAM,YAAcT,CACtB,CAAC,EAGMJ,GAAYW,EAAIC,CAAO,EAC3B,KACCM,EAAIC,GAASH,EAAM,KAAKG,CAAK,CAAC,EAC9BC,EAAS,IAAMJ,EAAM,SAAS,CAAC,EAC/BR,EAAIW,GAAUE,EAAA,CAAE,IAAKV,GAAOQ,EAAQ,CACtC,CACJ,CAAC,CACH,CC9BA,SAASG,GAAS,CAAE,UAAAC,CAAU,EAAsC,CAClE,GAAI,CAACC,EAAQ,iBAAiB,EAC5B,OAAOC,EAAG,EAAK,EAGjB,IAAMC,EAAaH,EAChB,KACCI,EAAI,CAAC,CAAE,OAAQ,CAAE,EAAAC,CAAE,CAAE,IAAMA,CAAC,EAC5BC,GAAY,EAAG,CAAC,EAChBF,EAAI,CAAC,CAACG,EAAGC,CAAC,IAAM,CAACD,EAAIC,EAAGA,CAAC,CAAU,EACnCC,EAAwB,CAAC,CAC3B,EAGIC,EAAUC,EAAc,CAACX,EAAWG,CAAU,CAAC,EAClD,KACCS,EAAO,CAAC,CAAC,CAAE,OAAAC,CAAO,EAAG,CAAC,CAAER,CAAC,CAAC,IAAM,KAAK,IAAIA,EAAIQ,EAAO,CAAC,EAAI,GAAG,EAC5DT,EAAI,CAAC,CAAC,CAAE,CAACU,CAAS,CAAC,IAAMA,CAAS,EAClCC,EAAqB,CACvB,EAGIC,EAAUC,GAAY,QAAQ,EACpC,OAAON,EAAc,CAACX,EAAWgB,CAAO,CAAC,EACtC,KACCZ,EAAI,CAAC,CAAC,CAAE,OAAAS,CAAO,EAAGK,CAAM,IAAML,EAAO,EAAI,KAAO,CAACK,CAAM,EACvDH,EAAqB,EACrBI,EAAUC,GAAUA,EAASV,EAAUR,EAAG,EAAK,CAAC,EAChDmB,EAAU,EAAK,CACjB,CACJ,CAcO,SAASC,GACdC,EAAiBC,EACG,CACpB,OAAOC,EAAM,IAAMd,EAAc,CAC/Be,GAAiBH,CAAE,EACnBxB,GAASyB,CAAO,CAClB,CAAC,CAAC,EACC,KACCpB,EAAI,CAAC,CAAC,CAAE,OAAAuB,CAAO,EAAGC,CAAM,KAAO,CAC7B,OAAAD,EACA,OAAAC,CACF,EAAE
,EACFb,EAAqB,CAACR,EAAGC,IACvBD,EAAE,SAAWC,EAAE,QACfD,EAAE,SAAWC,EAAE,MAChB,EACDqB,EAAY,CAAC,CACf,CACJ,CAaO,SAASC,GACdP,EAAiB,CAAE,QAAAQ,EAAS,MAAAC,CAAM,EACH,CAC/B,OAAOP,EAAM,IAAM,CACjB,IAAMQ,EAAQ,IAAIC,EACZC,EAAQF,EAAM,KAAKG,GAAS,CAAC,CAAC,EACpC,OAAAH,EACG,KACCxB,EAAwB,QAAQ,EAChC4B,GAAkBN,CAAO,CAC3B,EACG,UAAU,CAAC,CAAC,CAAE,OAAAX,CAAO,EAAG,CAAE,OAAAQ,CAAO,CAAC,IAAM,CACvCL,EAAG,UAAU,OAAO,oBAAqBH,GAAU,CAACQ,CAAM,EAC1DL,EAAG,OAASK,CACd,CAAC,EAGLI,EAAM,UAAUC,CAAK,EAGdF,EACJ,KACCO,GAAUH,CAAK,EACf/B,EAAImC,GAAUC,EAAA,CAAE,IAAKjB,GAAOgB,EAAQ,CACtC,CACJ,CAAC,CACH,CChHO,SAASE,GACdC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EACb,CACzB,OAAOC,GAAgBH,EAAI,CAAE,UAAAC,EAAW,QAAAC,CAAQ,CAAC,EAC9C,KACCE,EAAI,CAAC,CAAE,OAAQ,CAAE,EAAAC,CAAE,CAAE,IAAM,CACzB,GAAM,CAAE,OAAAC,CAAO,EAAIC,GAAeP,CAAE,EACpC,MAAO,CACL,OAAQK,GAAKC,CACf,CACF,CAAC,EACDE,EAAwB,QAAQ,CAClC,CACJ,CAaO,SAASC,GACdT,EAAiBU,EACmB,CACpC,OAAOC,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAClBD,EAAM,UAAU,CAAC,CAAE,OAAAE,CAAO,IAAM,CAC9Bd,EAAG,UAAU,OAAO,2BAA4Bc,CAAM,CACxD,CAAC,EAGD,IAAMC,EAAUC,GAAmB,YAAY,EAC/C,OAAI,OAAOD,GAAY,YACdE,EAGFlB,GAAiBgB,EAASL,CAAO,EACrC,KACCQ,EAAIC,GAASP,EAAM,KAAKO,CAAK,CAAC,EAC9BC,EAAS,IAAMR,EAAM,SAAS,CAAC,EAC/BR,EAAIe,GAAUE,EAAA,CAAE,IAAKrB,GAAOmB,EAAQ,CACtC,CACJ,CAAC,CACH,CCvDO,SAASG,GACdC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EACpB,CAGlB,IAAMC,EAAUD,EACb,KACCE,EAAI,CAAC,CAAE,OAAAC,CAAO,IAAMA,CAAM,EAC1BC,EAAqB,CACvB,EAGIC,EAAUJ,EACb,KACCK,EAAU,IAAMC,GAAiBT,CAAE,EAChC,KACCI,EAAI,CAAC,CAAE,OAAAC,CAAO,KAAO,CACnB,IAAQL,EAAG,UACX,OAAQA,EAAG,UAAYK,CACzB,EAAE,EACFK,EAAwB,QAAQ,CAClC,CACF,CACF,EAGF,OAAOC,EAAc,CAACR,EAASI,EAASN,CAAS,CAAC,EAC/C,KACCG,EAAI,CAAC,CAACQ,EAAQ,CAAE,IAAAC,EAAK,OAAAC,CAAO,EAAG,CAAE,OAAQ,CAAE,EAAAC,CAAE,EAAG,KAAM,CAAE,OAAAV,CAAO,CAAE,CAAC,KAChEA,EAAS,KAAK,IAAI,EAAGA,EACjB,KAAK,IAAI,EAAGQ,EAASE,EAAIH,CAAM,EAC/B,KAAK,IAAI,EAAGP,EAASU,EAAID,CAAM,CACnC,EACO,CACL,OAAQD,EAAMD,EACd,OAAAP,EACA,OAAQQ,EAAMD,GAAUG,CAC1B,EACD,EACDT,EAAqB,CAACU,EAAGC,IACvBD,EAAE,SAAWC,EAAE,QACfD,EAAE,SAAWC,EAAE,QACfD,EAAE,SAAWC,EAAE,MAChB,CACH,CACJ,CClDO,SAASC,GACdC,EACqB,CACrB,IAAMC,EAAU,SAAkB,WAAW,GAAK,CAChD,MAAOD,EAAO,UAAUE,GAAS,WAC/BA,EAAM,aAAa,qBAAqB,CAC1C,EAAE,OAAO,CACX,EAGA,OAAOC,EAAG,GAAGH,CAAM,EAChB,KACCI,GAASF,GAASG,EAAUH,EAAO,QAAQ,EACxC,KACCI,EAAI,IAAMJ,CAAK,CACjB,CACF,EACAK,EAAUP,EAAO,KAAK,IAAI,EAAGC,EAAQ,KAAK,EAAE,EAC5CK,EAAIJ,IAAU,CACZ,MAAOF,EAAO,QAAQE,CAAK,EAC3B,MAAO,CACL,OAASA,EAAM,aAAa,sBAAsB,EAClD,QAASA,EAAM,aAAa,uBAAuB,EACnD,OAASA,EAAM,aAAa,sBAAsB,CACpD,CACF,EAAa,EACbM,EAAY,CAAC,CACf,CACJ,CASO,SAASC,GACdC,EACgC,CAChC,OAAOC,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAClBD,EAAM,UAAUE,GAAW,CACzB,SAAS,KAAK,aAAa,0BAA2B,EAAE,EAGxD,OAAW,CAACC,EAAKC,CAAK,IAAK,OAAO,QAAQF,EAAQ,KAAK,EACrD,SAAS,KAAK,aAAa,iBAAiBC,IAAOC,CAAK,EAG1D,QAASC,EAAQ,EAAGA,EAAQjB,EAAO,OAAQiB,IAAS,CAClD,IAAMC,EAAQlB,EAAOiB,GAAO,mBACxBC,aAAiB,cACnBA,EAAM,OAASJ,EAAQ,QAAUG,EACrC,CAGA,SAAS,YAAaH,CAAO,CAC/B,CAAC,EAGDF,EAAM,KAAKO,GAAUC,EAAc,CAAC,EACjC,UAAU,IAAM,CACf,SAAS,KAAK,gBAAgB,yBAAyB,CACzD,CAAC,EAGH,IAAMpB,EAASqB,EAA8B,QAASX,CAAE,EACxD,OAAOX,GAAaC,CAAM,EACvB,KACCsB,EAAIC,GAASX,EAAM,KAAKW,CAAK,CAAC,EAC9BC,EAAS,IAAMZ,EAAM,SAAS,CAAC,EAC/BN,EAAIiB,GAAUE,EAAA,CAAE,IAAKf,GAAOa,EAAQ,CACtC,CACJ,CAAC,CACH,CC/HA,IAAAG,GAAwB,SAiCxB,SAASC,GAAQC,EAAyB,CACxCA,EAAG,aAAa,kBAAmB,EAAE,EACrC,IAAMC,EAAOD,EAAG,UAChB,OAAAA,EAAG,gBAAgB,iBAAiB,EAC7BC,CACT,CAWO,SAASC,GACd,CAAE,OAAAC,CAAO,EACH,CACF,GAAAC,QAAY,YAAY,GAC1B,IAAIC,EAA8BC,GAAc,CAC9C,IAAI,GAAAF,QAAY,iDAAkD,CAChE,KAAMJ,GACJA,EAAG,aAAa,qBAAqB,GACrCD,GAAQQ,EACNP,EAAG,aAAa,uBAAuB,CACzC,CAAC,CAEL,CAAC,EACE,GAAG,UAAWQ,GAAMF,EAAW,KAAKE,CAAE,CAAC,CAC5C,CAAC,EACE,KAC
CC,EAAID,GAAM,CACQA,EAAG,QACX,MAAM,CAChB,CAAC,EACDE,EAAI,IAAMC,GAAY,kBAAkB,CAAC,CAC3C,EACG,UAAUR,CAAM,CAEzB,CCrCA,SAASS,GAAWC,EAAwB,CAC1C,GAAIA,EAAK,OAAS,EAChB,MAAO,CAAC,EAAE,EAGZ,GAAM,CAACC,EAAMC,CAAI,EAAI,CAAC,GAAGF,CAAI,EAC1B,KAAK,CAACG,EAAGC,IAAMD,EAAE,OAASC,EAAE,MAAM,EAClC,IAAIC,GAAOA,EAAI,QAAQ,SAAU,EAAE,CAAC,EAGnCC,EAAQ,EACZ,GAAIL,IAASC,EACXI,EAAQL,EAAK,WAEb,MAAOA,EAAK,WAAWK,CAAK,IAAMJ,EAAK,WAAWI,CAAK,GACrDA,IAGJ,OAAON,EAAK,IAAIK,GAAOA,EAAI,QAAQJ,EAAK,MAAM,EAAGK,CAAK,EAAG,EAAE,CAAC,CAC9D,CAaO,SAASC,GAAaC,EAAiC,CAC5D,IAAMC,EAAS,SAAkB,YAAa,eAAgBD,CAAI,EAClE,GAAIC,EACF,OAAOC,EAAGD,CAAM,EACX,CACL,IAAME,EAASC,GAAc,EAC7B,OAAOC,GAAW,IAAI,IAAI,cAAeL,GAAQG,EAAO,IAAI,CAAC,EAC1D,KACCG,EAAIC,GAAWhB,GAAWiB,EAAY,MAAOD,CAAO,EACjD,IAAIE,GAAQA,EAAK,WAAY,CAChC,CAAC,EACDC,GAAW,IAAMC,CAAK,EACtBC,GAAe,CAAC,CAAC,EACjBC,EAAIN,GAAW,SAAS,YAAaA,EAAS,eAAgBP,CAAI,CAAC,CACrE,CACJ,CACF,CCIO,SAASc,GACd,CAAE,UAAAC,EAAW,UAAAC,EAAW,UAAAC,CAAU,EAC5B,CACN,IAAMC,EAASC,GAAc,EAC7B,GAAI,SAAS,WAAa,QACxB,OAGE,sBAAuB,UACzB,QAAQ,kBAAoB,SAG5BC,EAAU,OAAQ,cAAc,EAC7B,UAAU,IAAM,CACf,QAAQ,kBAAoB,MAC9B,CAAC,GAIL,IAAMC,EAAUC,GAAoC,gBAAgB,EAChE,OAAOD,GAAY,cACrBA,EAAQ,KAAOA,EAAQ,MAGzB,IAAME,EAAQC,GAAa,EACxB,KACCC,EAAIC,GAASA,EAAM,IAAIC,GAAQ,GAAG,IAAI,IAAIA,EAAMT,EAAO,IAAI,GAAG,CAAC,EAC/DU,EAAUC,GAAQT,EAAsB,SAAS,KAAM,OAAO,EAC3D,KACCU,EAAOC,GAAM,CAACA,EAAG,SAAW,CAACA,EAAG,OAAO,EACvCH,EAAUG,GAAM,CACd,GAAIA,EAAG,kBAAkB,QAAS,CAChC,IAAMC,EAAKD,EAAG,OAAO,QAAQ,GAAG,EAChC,GAAIC,GAAM,CAACA,EAAG,OAAQ,CACpB,IAAMC,EAAM,IAAI,IAAID,EAAG,IAAI,EAO3B,GAJAC,EAAI,OAAS,GACbA,EAAI,KAAO,GAITA,EAAI,WAAa,SAAS,UAC1BJ,EAAK,SAASI,EAAI,SAAS,CAAC,EAE5B,OAAAF,EAAG,eAAe,EACXG,EAAG,CACR,IAAK,IAAI,IAAIF,EAAG,IAAI,CACtB,CAAC,CAEL,CACF,CACA,OAAOG,EACT,CAAC,CACH,CACF,EACAC,GAAoB,CACtB,EAGIC,EAAOjB,EAAyB,OAAQ,UAAU,EACrD,KACCU,EAAOC,GAAMA,EAAG,QAAU,IAAI,EAC9BN,EAAIM,IAAO,CACT,IAAK,IAAI,IAAI,SAAS,IAAI,EAC1B,OAAQA,EAAG,KACb,EAAE,EACFK,GAAoB,CACtB,EAGFE,EAAMf,EAAOc,CAAI,EACd,KACCE,EAAqB,CAACC,EAAGC,IAAMD,EAAE,IAAI,OAASC,EAAE,IAAI,IAAI,EACxDhB,EAAI,CAAC,CAAE,IAAAQ,CAAI,IAAMA,CAAG,CACtB,EACG,UAAUjB,CAAS,EAGxB,IAAM0B,EAAY1B,EACf,KACC2B,EAAwB,UAAU,EAClCf,EAAUK,GAAOW,GAAQX,EAAI,IAAI,EAC9B,KACCY,GAAW,KACTC,GAAYb,CAAG,EACRE,GACR,CACH,CACF,EACAC,GAAM,CACR,EAGFb,EACG,KACCwB,GAAOL,CAAS,CAClB,EACG,UAAU,CAAC,CAAE,IAAAT,CAAI,IAAM,CACtB,QAAQ,UAAU,CAAC,EAAG,GAAI,GAAGA,GAAK,CACpC,CAAC,EAGL,IAAMe,EAAM,IAAI,UAChBN,EACG,KACCd,EAAUqB,GAAOA,EAAI,KAAK,CAAC,EAC3BxB,EAAIwB,GAAOD,EAAI,gBAAgBC,EAAK,WAAW,CAAC,CAClD,EACG,UAAUlC,CAAS,EAGxBA,EACG,KACCmC,GAAK,CAAC,CACR,EACG,UAAUC,GAAe,CACxB,QAAWC,IAAY,CAGrB,QACA,sBACA,oBACA,yBAGA,+BACA,gCACA,mCACA,+BACA,2BACA,2BACA,GAAGC,EAAQ,wBAAwB,EAC/B,CAAC,0BAA0B,EAC3B,CAAC,CACP,EAAG,CACD,IAAMC,EAAShC,GAAmB8B,CAAQ,EACpCG,EAASjC,GAAmB8B,EAAUD,CAAW,EAErD,OAAOG,GAAW,aAClB,OAAOC,GAAW,aAElBD,EAAO,YAAYC,CAAM,CAE7B,CACF,CAAC,EAGLxC,EACG,KACCmC,GAAK,CAAC,EACNzB,EAAI,IAAM+B,GAAoB,WAAW,CAAC,EAC1C5B,EAAUI,GAAMyB,EAAY,SAAUzB,CAAE,CAAC,EACzC0B,GAAU1B,GAAM,CACd,IAAM2B,EAASC,EAAE,QAAQ,EACzB,GAAI5B,EAAG,IAAK,CACV,QAAW6B,KAAQ7B,EAAG,kBAAkB,EACtC2B,EAAO,aAAaE,EAAM7B,EAAG,aAAa6B,CAAI,CAAE,EAClD,OAAA7B,EAAG,YAAY2B,CAAM,EAGd,IAAIG,EAAWC,GAAY,CAChCJ,EAAO,OAAS,IAAMI,EAAS,SAAS,CAC1C,CAAC,CAGH,KACE,QAAAJ,EAAO,YAAc3B,EAAG,YACxBA,EAAG,YAAY2B,CAAM,EACdK,CAEX,CAAC,CACH,EACG,UAAU,EAGf1B,EAAMf,EAAOc,CAAI,EACd,KACCU,GAAOhC,CAAS,CAClB,EACG,UAAU,CAAC,CAAE,IAAAkB,EAAK,OAAAgC,CAAO,IAAM,CAC1BhC,EAAI,MAAQ,CAACgC,EACfC,GAAgBjC,EAAI,IAAI,EAExB,OAAO,SAAS,GAAGgC,GAAA,YAAAA,EAAQ,IAAK,CAAC,CAErC,CAAC,EAGLhD,EACG,KACCkD,GAAU5C,CAAK,EACf6C,GAAa,GAAG,EAChBzB,EAAwB,QAAQ,CAClC,EACG,UAAU,CAAC,CAAE,OAAAsB,CAAO,IAAM,CACzB,
QAAQ,aAAaA,EAAQ,EAAE,CACjC,CAAC,EAGL3B,EAAMf,EAAOc,CAAI,EACd,KACCgC,GAAY,EAAG,CAAC,EAChBvC,EAAO,CAAC,CAACU,EAAGC,CAAC,IAAMD,EAAE,IAAI,WAAaC,EAAE,IAAI,QAAQ,EACpDhB,EAAI,CAAC,CAAC,CAAE6C,CAAK,IAAMA,CAAK,CAC1B,EACG,UAAU,CAAC,CAAE,OAAAL,CAAO,IAAM,CACzB,OAAO,SAAS,GAAGA,GAAA,YAAAA,EAAQ,IAAK,CAAC,CACnC,CAAC,CACP,CCzSA,IAAAM,GAAuB,SCAvB,IAAAC,GAAuB,SAsChB,SAASC,GACdC,EAA2BC,EACD,CAC1B,IAAMC,EAAY,IAAI,OAAOF,EAAO,UAAW,KAAK,EAC9CG,EAAY,CAACC,EAAYC,EAAcC,IACpC,GAAGD,4BAA+BC,WAI3C,OAAQC,GAAkB,CACxBA,EAAQA,EACL,QAAQ,gBAAiB,GAAG,EAC5B,KAAK,EAGR,IAAMC,EAAQ,IAAI,OAAO,MAAMR,EAAO,cACpCO,EACG,QAAQ,uBAAwB,MAAM,EACtC,QAAQL,EAAW,GAAG,KACtB,KAAK,EAGV,OAAOO,IACLR,KACI,GAAAS,SAAWD,CAAK,EAChBA,GAED,QAAQD,EAAOL,CAAS,EACxB,QAAQ,8BAA+B,IAAI,CAClD,CACF,CC9BO,SAASQ,GAAiBC,EAAuB,CACtD,OAAOA,EACJ,MAAM,YAAY,EAChB,IAAI,CAACC,EAAOC,IAAUA,EAAQ,EAC3BD,EAAM,QAAQ,+BAAgC,IAAI,EAClDA,CACJ,EACC,KAAK,EAAE,EACT,QAAQ,kCAAmC,EAAE,EAC7C,KAAK,CACV,CCoCO,SAASE,GACdC,EAC+B,CAC/B,OAAOA,EAAQ,OAAS,CAC1B,CASO,SAASC,GACdD,EAC+B,CAC/B,OAAOA,EAAQ,OAAS,CAC1B,CASO,SAASE,GACdF,EACgC,CAChC,OAAOA,EAAQ,OAAS,CAC1B,CCvEA,SAASG,GAAiB,CAAE,OAAAC,EAAQ,KAAAC,CAAK,EAA6B,CAGhED,EAAO,KAAK,SAAW,GAAKA,EAAO,KAAK,KAAO,OACjDA,EAAO,KAAO,CACZE,GAAY,oBAAoB,CAClC,GAGEF,EAAO,YAAc,cACvBA,EAAO,UAAYE,GAAY,yBAAyB,GAQ1D,IAAMC,EAAyB,CAC7B,SANeD,GAAY,wBAAwB,EAClD,MAAM,SAAS,EACf,OAAO,OAAO,EAKf,YAAaE,EAAQ,gBAAgB,CACvC,EAGA,MAAO,CAAE,OAAAJ,EAAQ,KAAAC,EAAM,QAAAE,CAAQ,CACjC,CAkBO,SAASE,GACdC,EAAaC,EACC,CACd,IAAMP,EAASQ,GAAc,EACvBC,EAAS,IAAI,OAAOH,CAAG,EAGvBI,EAAM,IAAIC,EACVC,EAAMC,GAAYJ,EAAQ,CAAE,IAAAC,CAAI,CAAC,EACpC,KACCI,EAAIC,GAAW,CACb,GAAIC,GAAsBD,CAAO,EAC/B,QAAWE,KAAUF,EAAQ,KAAK,MAChC,QAAWG,KAAYD,EACrBC,EAAS,SAAW,GAAG,IAAI,IAAIA,EAAS,SAAUlB,EAAO,IAAI,IAEnE,OAAOe,CACT,CAAC,EACDI,GAAM,CACR,EAGF,OAAAC,GAAKb,CAAK,EACP,KACCO,EAAIO,IAAS,CACX,OACA,KAAMtB,GAAiBsB,CAAI,CAC7B,EAAwB,CAC1B,EACG,UAAUX,EAAI,KAAK,KAAKA,CAAG,CAAC,EAG1B,CAAE,IAAAA,EAAK,IAAAE,CAAI,CACpB,CCvEO,SAASU,GACd,CAAE,UAAAC,CAAU,EACN,CACN,IAAMC,EAASC,GAAc,EACvBC,EAAYC,GAChB,IAAI,IAAI,mBAAoBH,EAAO,IAAI,CACzC,EACG,KACCI,GAAW,IAAMC,CAAK,CACxB,EAGIC,EAAWJ,EACd,KACCK,EAAIC,GAAY,CACd,GAAM,CAAC,CAAEC,CAAO,EAAIT,EAAO,KAAK,MAAM,aAAa,EACnD,OAAOQ,EAAS,KAAK,CAAC,CAAE,QAAAE,EAAS,QAAAC,CAAQ,IACvCD,IAAYD,GAAWE,EAAQ,SAASF,CAAO,CAChD,GAAKD,EAAS,EACjB,CAAC,CACH,EAGFN,EACG,KACCK,EAAIC,GAAY,IAAI,IAAIA,EAAS,IAAIE,GAAW,CAC9C,GAAG,IAAI,IAAI,MAAMA,EAAQ,WAAYV,EAAO,IAAI,IAChDU,CACF,CAAC,CAAC,CAAC,EACHE,EAAUC,GAAQC,EAAsB,SAAS,KAAM,OAAO,EAC3D,KACCC,EAAOC,GAAM,CAACA,EAAG,SAAW,CAACA,EAAG,OAAO,EACvCC,GAAeX,CAAQ,EACvBM,EAAU,CAAC,CAACI,EAAIP,CAAO,IAAM,CAC3B,GAAIO,EAAG,kBAAkB,QAAS,CAChC,IAAME,EAAKF,EAAG,OAAO,QAAQ,GAAG,EAChC,GAAIE,GAAM,CAACA,EAAG,QAAUL,EAAK,IAAIK,EAAG,IAAI,EAAG,CACzC,IAAMC,EAAMD,EAAG,KAWf,MAAI,CAACF,EAAG,OAAO,QAAQ,aAAa,GAClBH,EAAK,IAAIM,CAAG,IACZV,EACPJ,GAEXW,EAAG,eAAe,EACXI,EAAGD,CAAG,EACf,CACF,CACA,OAAOd,CACT,CAAC,EACDO,EAAUO,GAAO,CACf,GAAM,CAAE,QAAAT,CAAQ,EAAIG,EAAK,IAAIM,CAAG,EAChC,OAAOE,GAAa,IAAI,IAAIF,CAAG,CAAC,EAC7B,KACCZ,EAAIe,GAAW,CAEb,IAAMC,EADWC,GAAY,EACP,KAAK,QAAQxB,EAAO,KAAM,EAAE,EAClD,OAAOsB,EAAQ,SAASC,EAAK,MAAM,GAAG,EAAE,EAAE,EACtC,IAAI,IAAI,MAAMb,KAAWa,IAAQvB,EAAO,IAAI,EAC5C,IAAI,IAAImB,CAAG,CACjB,CAAC,CACH,CACJ,CAAC,CACH,CACF,CACF,EACG,UAAUA,GAAOM,GAAYN,CAAG,CAAC,EAGtCO,EAAc,CAACxB,EAAWI,CAAQ,CAAC,EAChC,UAAU,CAAC,CAACE,EAAUC,CAAO,IAAM,CACpBkB,EAAW,mBAAmB,EACtC,YAAYC,GAAsBpB,EAAUC,CAAO,CAAC,CAC5D,CAAC,EAGHV,EAAU,KAAKa,EAAU,IAAMN,CAAQ,CAAC,EACrC,UAAUG,GAAW,CA5J1B,IAAAoB,EA+JM,IAAIC,EAAW,SAAS,aAAc,cAAc,EACpD,GAAIA,IAAa,KAAM,CACrB,IAAMC,IAASF,EAAA7B,EAAO,UAAP,YAAA6B,EAAgB,UAAW,SAC1CC,EAAW,CAACrB,EAAQ,QAAQ,SAASsB,CAAM,EAG3C,SAAS
,aAAcD,EAAU,cAAc,CACjD,CAGA,GAAIA,EACF,QAAWE,KAAWC,GAAqB,UAAU,EACnDD,EAAQ,OAAS,EACvB,CAAC,CACL,CCtFO,SAASE,GACdC,EAAsB,CAAE,IAAAC,CAAI,EACH,CACzB,IAAMC,GAAK,+BAAU,YAAaC,GAG5B,CAAE,aAAAC,CAAa,EAAIC,GAAY,EACjCD,EAAa,IAAI,GAAG,GACtBE,GAAU,SAAU,EAAI,EAG1B,IAAMC,EAASN,EACZ,KACCO,EAAOC,EAAoB,EAC3BC,GAAK,CAAC,EACNC,EAAI,IAAMP,EAAa,IAAI,GAAG,GAAK,EAAE,CACvC,EAGFQ,GAAY,QAAQ,EACjB,KACCJ,EAAOK,GAAU,CAACA,CAAM,EACxBH,GAAK,CAAC,CACR,EACG,UAAU,IAAM,CACf,IAAMI,EAAM,IAAI,IAAI,SAAS,IAAI,EACjCA,EAAI,aAAa,OAAO,GAAG,EAC3B,QAAQ,aAAa,CAAC,EAAG,GAAI,GAAGA,GAAK,CACvC,CAAC,EAGLP,EAAO,UAAUQ,GAAS,CACpBA,IACFf,EAAG,MAAQe,EACXf,EAAG,MAAM,EAEb,CAAC,EAGD,IAAMgB,EAASC,GAAkBjB,CAAE,EAC7BkB,EAASC,EACbC,EAAUpB,EAAI,OAAO,EACrBoB,EAAUpB,EAAI,OAAO,EAAE,KAAKqB,GAAM,CAAC,CAAC,EACpCd,CACF,EACG,KACCI,EAAI,IAAMT,EAAGF,EAAG,KAAK,CAAC,EACtBsB,EAAU,EAAE,EACZC,EAAqB,CACvB,EAGF,OAAOC,EAAc,CAACN,EAAQF,CAAM,CAAC,EAClC,KACCL,EAAI,CAAC,CAACI,EAAOU,CAAK,KAAO,CAAE,MAAAV,EAAO,MAAAU,CAAM,EAAE,EAC1CC,EAAY,CAAC,CACf,CACJ,CAUO,SAASC,GACd3B,EAAsB,CAAE,IAAA4B,EAAK,IAAA3B,CAAI,EACqB,CACtD,IAAM4B,EAAQ,IAAIC,EACZC,EAAQF,EAAM,KAAKG,GAAS,CAAC,CAAC,EAGpC,OAAAH,EACG,KACCI,EAAwB,OAAO,EAC/BtB,EAAI,CAAC,CAAE,MAAAI,CAAM,KAA2B,CACtC,OACA,KAAMA,CACR,EAAE,CACJ,EACG,UAAUa,EAAI,KAAK,KAAKA,CAAG,CAAC,EAGjCC,EACG,KACCI,EAAwB,OAAO,CACjC,EACG,UAAU,CAAC,CAAE,MAAAR,CAAM,IAAM,CACpBA,GACFnB,GAAU,SAAUmB,CAAK,EACzBzB,EAAG,YAAc,IAEjBA,EAAG,YAAckC,GAAY,oBAAoB,CAErD,CAAC,EAGLd,EAAUpB,EAAG,KAAO,OAAO,EACxB,KACCmC,GAAUJ,CAAK,CACjB,EACG,UAAU,IAAM/B,EAAG,MAAM,CAAC,EAGxBD,GAAiBC,EAAI,CAAE,IAAA4B,EAAK,IAAA3B,CAAI,CAAC,EACrC,KACCmC,EAAIC,GAASR,EAAM,KAAKQ,CAAK,CAAC,EAC9BC,EAAS,IAAMT,EAAM,SAAS,CAAC,EAC/BlB,EAAI0B,GAAUE,EAAA,CAAE,IAAKvC,GAAOqC,EAAQ,EACpCG,GAAM,CACR,CACJ,CCrHO,SAASC,GACdC,EAAiB,CAAE,IAAAC,CAAI,EAAiB,CAAE,OAAAC,CAAO,EACZ,CACrC,IAAMC,EAAQ,IAAIC,EACZC,EAAYC,GAAqBN,EAAG,aAAc,EACrD,KACCO,EAAO,OAAO,CAChB,EAGIC,EAAOC,EAAW,wBAAyBT,CAAE,EAC7CU,EAAOD,EAAW,uBAAwBT,CAAE,EAG5CW,EAASV,EACZ,KACCM,EAAOK,EAAoB,EAC3BC,GAAK,CAAC,CACR,EAGF,OAAAV,EACG,KACCW,GAAeZ,CAAM,EACrBa,GAAUJ,CAAM,CAClB,EACG,UAAU,CAAC,CAAC,CAAE,MAAAK,CAAM,EAAG,CAAE,MAAAC,CAAM,CAAC,IAAM,CACrC,GAAIA,EACF,OAAQD,EAAM,OAAQ,CAGpB,IAAK,GACHR,EAAK,YAAcU,GAAY,oBAAoB,EACnD,MAGF,IAAK,GACHV,EAAK,YAAcU,GAAY,mBAAmB,EAClD,MAGF,QACEV,EAAK,YAAcU,GACjB,sBACAC,GAAMH,EAAM,MAAM,CACpB,CACJ,MAEAR,EAAK,YAAcU,GAAY,2BAA2B,CAE9D,CAAC,EAGLf,EACG,KACCiB,EAAI,IAAMV,EAAK,UAAY,EAAE,EAC7BW,EAAU,CAAC,CAAE,MAAAL,CAAM,IAAMM,EACvBC,EAAG,GAAGP,EAAM,MAAM,EAAG,EAAE,CAAC,EACxBO,EAAG,GAAGP,EAAM,MAAM,EAAE,CAAC,EAClB,KACCQ,GAAY,CAAC,EACbC,GAAQpB,CAAS,EACjBgB,EAAU,CAAC,CAACK,CAAK,IAAMA,CAAK,CAC9B,CACJ,CAAC,CACH,EACG,UAAUC,GAAUjB,EAAK,YACxBkB,GAAuBD,CAAM,CAC/B,CAAC,EAGW1B,EACb,KACCM,EAAOsB,EAAqB,EAC5BC,EAAI,CAAC,CAAE,KAAAC,CAAK,IAAMA,CAAI,CACxB,EAIC,KACCX,EAAIY,GAAS7B,EAAM,KAAK6B,CAAK,CAAC,EAC9BC,EAAS,IAAM9B,EAAM,SAAS,CAAC,EAC/B2B,EAAIE,GAAUE,EAAA,CAAE,IAAKlC,GAAOgC,EAAQ,CACtC,CACJ,CC1FO,SAASG,GACdC,EAAkB,CAAE,OAAAC,CAAO,EACF,CACzB,OAAOA,EACJ,KACCC,EAAI,CAAC,CAAE,MAAAC,CAAM,IAAM,CACjB,IAAMC,EAAMC,GAAY,EACxB,OAAAD,EAAI,KAAO,GACXA,EAAI,aAAa,OAAO,GAAG,EAC3BA,EAAI,aAAa,IAAI,IAAKD,CAAK,EACxB,CAAE,IAAAC,CAAI,CACf,CAAC,CACH,CACJ,CAUO,SAASE,GACdC,EAAuBC,EACa,CACpC,IAAMC,EAAQ,IAAIC,EAClB,OAAAD,EAAM,UAAU,CAAC,CAAE,IAAAL,CAAI,IAAM,CAC3BG,EAAG,aAAa,sBAAuBA,EAAG,IAAI,EAC9CA,EAAG,KAAO,GAAGH,GACf,CAAC,EAGDO,EAAUJ,EAAI,OAAO,EAClB,UAAUK,GAAMA,EAAG,eAAe,CAAC,EAG/Bb,GAAiBQ,EAAIC,CAAO,EAChC,KACCK,EAAIC,GAASL,EAAM,KAAKK,CAAK,CAAC,EAC9BC,EAAS,IAAMN,EAAM,SAAS,CAAC,EAC/BP,EAAIY,GAAUE,EAAA,CAAE,IAAKT,GAAOO,EAAQ,CACtC,CACJ,CCtCO,SAASG,GACdC,EAAiB,CAAE,IAAAC,CAAI,EAAiB,CAAE,UAAAC,CAAU,EACd,CACtC
,IAAMC,EAAQ,IAAIC,EAGZC,EAASC,GAAoB,cAAc,EAC3CC,EAASC,EACbC,EAAUJ,EAAO,SAAS,EAC1BI,EAAUJ,EAAO,OAAO,CAC1B,EACG,KACCK,GAAUC,EAAc,EACxBC,EAAI,IAAMP,EAAM,KAAK,EACrBQ,EAAqB,CACvB,EAGF,OAAAV,EACG,KACCW,GAAkBP,CAAM,EACxBK,EAAI,CAAC,CAAC,CAAE,YAAAG,CAAY,EAAGC,CAAK,IAAM,CAChC,IAAMC,EAAQD,EAAM,MAAM,UAAU,EACpC,IAAID,GAAA,YAAAA,EAAa,SAAUE,EAAMA,EAAM,OAAS,GAAI,CAClD,IAAMC,EAAOH,EAAYA,EAAY,OAAS,GAC1CG,EAAK,WAAWD,EAAMA,EAAM,OAAS,EAAE,IACzCA,EAAMA,EAAM,OAAS,GAAKC,EAC9B,MACED,EAAM,OAAS,EAEjB,OAAOA,CACT,CAAC,CACH,EACG,UAAUA,GAASjB,EAAG,UAAYiB,EAChC,KAAK,EAAE,EACP,QAAQ,MAAO,QAAQ,CAC1B,EAGJf,EACG,KACCiB,EAAO,CAAC,CAAE,KAAAC,CAAK,IAAMA,IAAS,QAAQ,CACxC,EACG,UAAUC,GAAO,CAChB,OAAQA,EAAI,KAAM,CAGhB,IAAK,aAEDrB,EAAG,UAAU,QACbK,EAAM,iBAAmBA,EAAM,MAAM,SAErCA,EAAM,MAAQL,EAAG,WACnB,KACJ,CACF,CAAC,EAGWC,EACb,KACCkB,EAAOG,EAAqB,EAC5BV,EAAI,CAAC,CAAE,KAAAW,CAAK,IAAMA,CAAI,CACxB,EAIC,KACCC,EAAIC,GAAStB,EAAM,KAAKsB,CAAK,CAAC,EAC9BC,EAAS,IAAMvB,EAAM,SAAS,CAAC,EAC/BS,EAAI,KAAO,CAAE,IAAKZ,CAAG,EAAE,CACzB,CACJ,CC9CO,SAAS2B,GACdC,EAAiB,CAAE,OAAAC,EAAQ,UAAAC,CAAU,EACN,CAC/B,IAAMC,EAASC,GAAc,EAC7B,GAAI,CACF,IAAMC,GAAM,+BAAU,SAAUF,EAAO,OACjCG,EAASC,GAAkBF,EAAKJ,CAAM,EAGtCO,EAASC,GAAoB,eAAgBT,CAAE,EAC/CU,EAASD,GAAoB,gBAAiBT,CAAE,EAGhD,CAAE,IAAAW,EAAK,IAAAC,CAAI,EAAIN,EACrBK,EACG,KACCE,EAAOC,EAAoB,EAC3BC,GAAOH,EAAI,KAAKC,EAAOG,EAAoB,CAAC,CAAC,EAC7CC,GAAK,CAAC,CACR,EACG,UAAUN,EAAI,KAAK,KAAKA,CAAG,CAAC,EAGjCT,EACG,KACCW,EAAO,CAAC,CAAE,KAAAK,CAAK,IAAMA,IAAS,QAAQ,CACxC,EACG,UAAUC,GAAO,CAChB,IAAMC,EAASC,GAAiB,EAChC,OAAQF,EAAI,KAAM,CAGhB,IAAK,QACH,GAAIC,IAAWZ,EAAO,CACpB,IAAMc,EAAU,IAAI,IACpB,QAAWC,KAAUC,EACnB,sBAAuBd,CACzB,EAAG,CACD,IAAMe,EAAUF,EAAO,kBACvBD,EAAQ,IAAIC,EAAQ,WAClBE,EAAQ,aAAa,eAAe,CACtC,CAAC,CACH,CAGA,GAAIH,EAAQ,KAAM,CAChB,GAAM,CAAC,CAACI,CAAI,CAAC,EAAI,CAAC,GAAGJ,CAAO,EAAE,KAAK,CAAC,CAAC,CAAEK,CAAC,EAAG,CAAC,CAAEC,CAAC,IAAMA,EAAID,CAAC,EAC1DD,EAAK,MAAM,CACb,CAGAP,EAAI,MAAM,CACZ,CACA,MAGF,IAAK,SACL,IAAK,MACHU,GAAU,SAAU,EAAK,EACzBrB,EAAM,KAAK,EACX,MAGF,IAAK,UACL,IAAK,YACH,GAAI,OAAOY,GAAW,YACpBZ,EAAM,MAAM,MACP,CACL,IAAMsB,EAAM,CAACtB,EAAO,GAAGgB,EACrB,wDACAd,CACF,CAAC,EACKqB,EAAI,KAAK,IAAI,GACjB,KAAK,IAAI,EAAGD,EAAI,QAAQV,CAAM,CAAC,EAAIU,EAAI,QACrCX,EAAI,OAAS,UAAY,GAAK,IAE9BW,EAAI,MAAM,EACdA,EAAIC,GAAG,MAAM,CACf,CAGAZ,EAAI,MAAM,EACV,MAGF,QACMX,IAAUa,GAAiB,GAC7Bb,EAAM,MAAM,CAClB,CACF,CAAC,EAGLN,EACG,KACCW,EAAO,CAAC,CAAE,KAAAK,CAAK,IAAMA,IAAS,QAAQ,CACxC,EACG,UAAUC,GAAO,CAChB,OAAQA,EAAI,KAAM,CAGhB,IAAK,IACL,IAAK,IACL,IAAK,IACHX,EAAM,MAAM,EACZA,EAAM,OAAO,EAGbW,EAAI,MAAM,EACV,KACJ,CACF,CAAC,EAGL,IAAMa,EAAUC,GAAiBzB,EAAOF,CAAM,EACxC4B,EAAUC,GAAkBzB,EAAQJ,EAAQ,CAAE,OAAA0B,CAAO,CAAC,EAC5D,OAAOI,EAAMJ,EAAQE,CAAO,EACzB,KACCG,GAGE,GAAGC,GAAqB,eAAgBtC,CAAE,EACvC,IAAIuC,GAASC,GAAiBD,EAAO,CAAE,OAAAP,CAAO,CAAC,CAAC,EAGnD,GAAGM,GAAqB,iBAAkBtC,CAAE,EACzC,IAAIuC,GAASE,GAAmBF,EAAOjC,EAAQ,CAAE,UAAAJ,CAAU,CAAC,CAAC,CAClE,CACF,CAGJ,OAASwC,EAAP,CACA,OAAA1C,EAAG,OAAS,GACL2C,EACT,CACF,CCtKO,SAASC,GACdC,EAAiB,CAAE,OAAAC,EAAQ,UAAAC,CAAU,EACG,CACxC,OAAOC,EAAc,CACnBF,EACAC,EACG,KACCE,EAAUC,GAAY,CAAC,EACvBC,EAAOC,GAAO,CAAC,CAACA,EAAI,aAAa,IAAI,GAAG,CAAC,CAC3C,CACJ,CAAC,EACE,KACCC,EAAI,CAAC,CAACC,EAAOF,CAAG,IAAMG,GAAuBD,EAAM,OAAQ,EAAI,EAC7DF,EAAI,aAAa,IAAI,GAAG,CAC1B,CAAC,EACDC,EAAIG,GAAM,CA1FhB,IAAAC,EA2FQ,IAAMC,EAAQ,IAAI,IAGZC,EAAK,SAAS,mBAAmBd,EAAI,WAAW,SAAS,EAC/D,QAASe,EAAOD,EAAG,SAAS,EAAGC,EAAMA,EAAOD,EAAG,SAAS,EACtD,IAAIF,EAAAG,EAAK,gBAAL,MAAAH,EAAoB,aAAc,CACpC,IAAMI,EAAWD,EAAK,YAChBE,EAAWN,EAAGK,CAAQ,EACxBC,EAAS,OAASD,EAAS,QAC7BH,EAAM,IAAIE,EAAmBE,CAAQ,CACzC,CAIF,OAAW,CAACF,EAAMG,CAAI,IAAKL,EAAO,CAChC,GAAM,CAAE,WAAAM,CAAW,EAAIC,EAAE,OAAQ,KAAMF,CAAI,EAC
3CH,EAAK,YAAY,GAAG,MAAM,KAAKI,CAAU,CAAC,CAC5C,CAGA,MAAO,CAAE,IAAKnB,EAAI,MAAAa,CAAM,CAC1B,CAAC,CACH,CACJ,CCbO,SAASQ,GACdC,EAAiB,CAAE,UAAAC,EAAW,MAAAC,CAAM,EACf,CACrB,IAAMC,EAASH,EAAG,cACZI,EACJD,EAAO,UACPA,EAAO,cAAe,UAGxB,OAAOE,EAAc,CAACH,EAAOD,CAAS,CAAC,EACpC,KACCK,EAAI,CAAC,CAAC,CAAE,OAAAC,EAAQ,OAAAC,CAAO,EAAG,CAAE,OAAQ,CAAE,EAAAC,CAAE,CAAE,CAAC,KACzCD,EAASA,EACL,KAAK,IAAIJ,EAAQ,KAAK,IAAI,EAAGK,EAAIF,CAAM,CAAC,EACxCH,EACG,CACL,OAAAI,EACA,OAAQC,GAAKF,EAASH,CACxB,EACD,EACDM,EAAqB,CAACC,EAAGC,IACvBD,EAAE,SAAWC,EAAE,QACfD,EAAE,SAAWC,EAAE,MAChB,CACH,CACJ,CAuBO,SAASC,GACdb,EAAiBc,EACe,CADf,IAAAC,EAAAD,EAAE,SAAAE,CAtJrB,EAsJmBD,EAAcE,EAAAC,GAAdH,EAAc,CAAZ,YAEnB,IAAMI,EAAQC,EAAW,0BAA2BpB,CAAE,EAChD,CAAE,EAAAS,CAAE,EAAIY,GAAiBF,CAAK,EACpC,OAAOG,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAClB,OAAAD,EACG,KACCE,GAAU,EAAGC,EAAuB,EACpCC,GAAeX,CAAO,CACxB,EACG,UAAU,CAGT,KAAK,CAAC,CAAE,OAAAR,CAAO,EAAG,CAAE,OAAQD,CAAO,CAAC,EAAG,CACrCY,EAAM,MAAM,OAAS,GAAGX,EAAS,EAAIC,MACrCT,EAAG,MAAM,IAAY,GAAGO,KAC1B,EAGA,UAAW,CACTY,EAAM,MAAM,OAAS,GACrBnB,EAAG,MAAM,IAAY,EACvB,CACF,CAAC,EAGLuB,EACG,KACCK,GAAUF,EAAuB,EACjCG,GAAK,CAAC,CACR,EACG,UAAU,IAAM,CACf,QAAWC,KAAQC,EAAY,8BAA+B/B,CAAE,EAAG,CACjE,IAAMgC,EAAYC,GAAoBH,CAAI,EAC1C,GAAI,OAAOE,GAAc,YAAa,CACpC,IAAMzB,EAASuB,EAAK,UAAYE,EAAU,UACpC,CAAE,OAAAxB,CAAO,EAAI0B,GAAeF,CAAS,EAC3CA,EAAU,SAAS,CACjB,IAAKzB,EAASC,EAAS,CACzB,CAAC,CACH,CACF,CACF,CAAC,EAGET,GAAaC,EAAIiB,CAAO,EAC5B,KACCkB,EAAIC,GAASb,EAAM,KAAKa,CAAK,CAAC,EAC9BC,EAAS,IAAMd,EAAM,SAAS,CAAC,EAC/BjB,EAAI8B,GAAUE,EAAA,CAAE,IAAKtC,GAAOoC,EAAQ,CACtC,CACJ,CAAC,CACH,CChJO,SAASG,GACdC,EAAcC,EACW,CACzB,GAAI,OAAOA,GAAS,YAAa,CAC/B,IAAMC,EAAM,gCAAgCF,KAAQC,IACpD,OAAOE,GAGLC,GAAqB,GAAGF,mBAAqB,EAC1C,KACCG,GAAW,IAAMC,CAAK,EACtBC,EAAIC,IAAY,CACd,QAASA,EAAQ,QACnB,EAAE,EACFC,GAAe,CAAC,CAAC,CACnB,EAGFL,GAAkBF,CAAG,EAClB,KACCG,GAAW,IAAMC,CAAK,EACtBC,EAAIG,IAAS,CACX,MAAOA,EAAK,iBACZ,MAAOA,EAAK,WACd,EAAE,EACFD,GAAe,CAAC,CAAC,CACnB,CACJ,EACG,KACCF,EAAI,CAAC,CAACC,EAASE,CAAI,IAAOC,IAAA,GAAKH,GAAYE,EAAO,CACpD,CAGJ,KAAO,CACL,IAAMR,EAAM,gCAAgCF,IAC5C,OAAOI,GAAkBF,CAAG,EACzB,KACCK,EAAIG,IAAS,CACX,aAAcA,EAAK,YACrB,EAAE,EACFD,GAAe,CAAC,CAAC,CACnB,CACJ,CACF,CCvDO,SAASG,GACdC,EAAcC,EACW,CACzB,IAAMC,EAAM,WAAWF,qBAAwB,mBAAmBC,CAAO,IACzE,OAAOE,GAA2BD,CAAG,EAClC,KACCE,GAAW,IAAMC,CAAK,EACtBC,EAAI,CAAC,CAAE,WAAAC,EAAY,YAAAC,CAAY,KAAO,CACpC,MAAOD,EACP,MAAOC,CACT,EAAE,EACFC,GAAe,CAAC,CAAC,CACnB,CACJ,CCOO,SAASC,GACdC,EACyB,CAGzB,IAAIC,EAAQD,EAAI,MAAM,qCAAqC,EAC3D,GAAIC,EAAO,CACT,GAAM,CAAC,CAAEC,EAAMC,CAAI,EAAIF,EACvB,OAAOG,GAA2BF,EAAMC,CAAI,CAC9C,CAIA,GADAF,EAAQD,EAAI,MAAM,oCAAoC,EAClDC,EAAO,CACT,GAAM,CAAC,CAAEI,EAAMC,CAAI,EAAIL,EACvB,OAAOM,GAA2BF,EAAMC,CAAI,CAC9C,CAGA,OAAOE,CACT,CCpBA,IAAIC,GAgBG,SAASC,GACdC,EACoB,CACpB,OAAOF,QAAWG,EAAM,IAAM,CAC5B,IAAMC,EAAS,SAAsB,WAAY,cAAc,EAC/D,GAAIA,EACF,OAAOC,EAAGD,CAAM,EAKhB,GADYE,GAAqB,SAAS,EAClC,OAAQ,CACd,IAAMC,EAAU,SAA0B,WAAW,EACrD,GAAI,EAAEA,GAAWA,EAAQ,QACvB,OAAOC,CACX,CAGA,OAAOC,GAAiBP,EAAG,IAAI,EAC5B,KACCQ,EAAIC,GAAS,SAAS,WAAYA,EAAO,cAAc,CAAC,CAC1D,CAEN,CAAC,EACE,KACCC,GAAW,IAAMJ,CAAK,EACtBK,EAAOF,GAAS,OAAO,KAAKA,CAAK,EAAE,OAAS,CAAC,EAC7CG,EAAIH,IAAU,CAAE,MAAAA,CAAM,EAAE,EACxBI,EAAY,CAAC,CACf,EACJ,CASO,SAASC,GACdd,EAC+B,CAC/B,IAAMe,EAAQC,EAAW,uBAAwBhB,CAAE,EACnD,OAAOC,EAAM,IAAM,CACjB,IAAMgB,EAAQ,IAAIC,EAClB,OAAAD,EAAM,UAAU,CAAC,CAAE,MAAAR,CAAM,IAAM,CAC7BM,EAAM,YAAYI,GAAkBV,CAAK,CAAC,EAC1CM,EAAM,UAAU,IAAI,+BAA+B,CACrD,CAAC,EAGMhB,GAAYC,CAAE,EAClB,KACCQ,EAAIY,GAASH,EAAM,KAAKG,CAAK,CAAC,EAC9BC,EAAS,IAAMJ,EAAM,SAAS,CAAC,EAC/BL,EAAIQ,GAAUE,EAAA,CAAE,IAAKtB,GAAOoB,EAAQ,CACtC,CACJ,CAAC,CACH,CCtDO,SAASG,GACdC,EAAiB,CAAE,UAAAC,EA
AW,QAAAC,CAAQ,EACpB,CAClB,OAAOC,GAAiB,SAAS,IAAI,EAClC,KACCC,EAAU,IAAMC,GAAgBL,EAAI,CAAE,QAAAE,EAAS,UAAAD,CAAU,CAAC,CAAC,EAC3DK,EAAI,CAAC,CAAE,OAAQ,CAAE,EAAAC,CAAE,CAAE,KACZ,CACL,OAAQA,GAAK,EACf,EACD,EACDC,EAAwB,QAAQ,CAClC,CACJ,CAaO,SAASC,GACdT,EAAiBU,EACY,CAC7B,OAAOC,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAClB,OAAAD,EAAM,UAAU,CAGd,KAAK,CAAE,OAAAE,CAAO,EAAG,CACfd,EAAG,OAASc,CACd,EAGA,UAAW,CACTd,EAAG,OAAS,EACd,CACF,CAAC,GAICe,EAAQ,wBAAwB,EAC5BC,EAAG,CAAE,OAAQ,EAAM,CAAC,EACpBjB,GAAUC,EAAIU,CAAO,GAExB,KACCO,EAAIC,GAASN,EAAM,KAAKM,CAAK,CAAC,EAC9BC,EAAS,IAAMP,EAAM,SAAS,CAAC,EAC/BN,EAAIY,GAAUE,EAAA,CAAE,IAAKpB,GAAOkB,EAAQ,CACtC,CACJ,CAAC,CACH,CCpBO,SAASG,GACdC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EACT,CAC7B,IAAMC,EAAQ,IAAI,IAGZC,EAAUC,EAA+B,cAAeL,CAAE,EAChE,QAAWM,KAAUF,EAAS,CAC5B,IAAMG,EAAK,mBAAmBD,EAAO,KAAK,UAAU,CAAC,CAAC,EAChDE,EAASC,GAAmB,QAAQF,KAAM,EAC5C,OAAOC,GAAW,aACpBL,EAAM,IAAIG,EAAQE,CAAM,CAC5B,CAGA,IAAME,EAAUR,EACb,KACCS,EAAwB,QAAQ,EAChCC,EAAI,CAAC,CAAE,OAAAC,CAAO,IAAM,CAClB,IAAMC,EAAOC,GAAoB,MAAM,EACjCC,EAAOC,EAAW,wBAAyBH,CAAI,EACrD,OAAOD,EAAS,IACdG,EAAK,UACLF,EAAK,UAET,CAAC,EACDI,GAAM,CACR,EAgFF,OA7EmBC,GAAiB,SAAS,IAAI,EAC9C,KACCR,EAAwB,QAAQ,EAGhCS,EAAUC,GAAQC,EAAM,IAAM,CAC5B,IAAIC,EAA4B,CAAC,EACjC,OAAOC,EAAG,CAAC,GAAGrB,CAAK,EAAE,OAAO,CAACsB,EAAO,CAACnB,EAAQE,CAAM,IAAM,CACvD,KAAOe,EAAK,QACGpB,EAAM,IAAIoB,EAAKA,EAAK,OAAS,EAAE,EACnC,SAAWf,EAAO,SACzBe,EAAK,IAAI,EAOb,IAAIG,EAASlB,EAAO,UACpB,KAAO,CAACkB,GAAUlB,EAAO,eACvBA,EAASA,EAAO,cAChBkB,EAASlB,EAAO,UAIlB,OAAOiB,EAAM,IACX,CAAC,GAAGF,EAAO,CAAC,GAAGA,EAAMjB,CAAM,CAAC,EAAE,QAAQ,EACtCoB,CACF,CACF,EAAG,IAAI,GAAkC,CAAC,CAC5C,CAAC,EACE,KAGCd,EAAIa,GAAS,IAAI,IAAI,CAAC,GAAGA,CAAK,EAAE,KAAK,CAAC,CAAC,CAAEE,CAAC,EAAG,CAAC,CAAEC,CAAC,IAAMD,EAAIC,CAAC,CAAC,CAAC,EAC9DC,GAAkBnB,CAAO,EAGzBU,EAAU,CAAC,CAACK,EAAOK,CAAM,IAAM7B,EAC5B,KACC8B,GAAK,CAAC,CAACC,EAAMC,CAAI,EAAG,CAAE,OAAQ,CAAE,EAAAC,CAAE,EAAG,KAAAC,CAAK,IAAM,CAC9C,IAAMC,EAAOF,EAAIC,EAAK,QAAU,KAAK,MAAMd,EAAK,MAAM,EAGtD,KAAOY,EAAK,QAAQ,CAClB,GAAM,CAAC,CAAEP,CAAM,EAAIO,EAAK,GACxB,GAAIP,EAASI,EAASI,GAAKE,EACzBJ,EAAO,CAAC,GAAGA,EAAMC,EAAK,MAAM,CAAE,MAE9B,MAEJ,CAGA,KAAOD,EAAK,QAAQ,CAClB,GAAM,CAAC,CAAEN,CAAM,EAAIM,EAAKA,EAAK,OAAS,GACtC,GAAIN,EAASI,GAAUI,GAAK,CAACE,EAC3BH,EAAO,CAACD,EAAK,IAAI,EAAI,GAAGC,CAAI,MAE5B,MAEJ,CAGA,MAAO,CAACD,EAAMC,CAAI,CACpB,EAAG,CAAC,CAAC,EAAG,CAAC,GAAGR,CAAK,CAAC,CAAC,EACnBY,EAAqB,CAACV,EAAGC,IACvBD,EAAE,KAAOC,EAAE,IACXD,EAAE,KAAOC,EAAE,EACZ,CACH,CACF,CACF,CACF,CACF,EAIC,KACChB,EAAI,CAAC,CAACoB,EAAMC,CAAI,KAAO,CACrB,KAAMD,EAAK,IAAI,CAAC,CAACT,CAAI,IAAMA,CAAI,EAC/B,KAAMU,EAAK,IAAI,CAAC,CAACV,CAAI,IAAMA,CAAI,CACjC,EAAE,EAGFe,EAAU,CAAE,KAAM,CAAC,EAAG,KAAM,CAAC,CAAE,CAAC,EAChCC,GAAY,EAAG,CAAC,EAChB3B,EAAI,CAAC,CAAC,EAAGgB,CAAC,IAGJ,EAAE,KAAK,OAASA,EAAE,KAAK,OAClB,CACL,KAAMA,EAAE,KAAK,MAAM,KAAK,IAAI,EAAG,EAAE,KAAK,OAAS,CAAC,EAAGA,EAAE,KAAK,MAAM,EAChE,KAAM,CAAC,CACT,EAIO,CACL,KAAMA,EAAE,KAAK,MAAM,EAAE,EACrB,KAAMA,EAAE,KAAK,MAAM,EAAGA,EAAE,KAAK,OAAS,EAAE,KAAK,MAAM,CACrD,CAEH,CACH,CACJ,CAYO,SAASY,GACdxC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,EAAS,QAAAuC,CAAQ,EACP,CACxC,OAAOnB,EAAM,IAAM,CACjB,IAAMoB,EAAQ,IAAIC,EACZC,EAAQF,EAAM,KAAKG,GAAS,CAAC,CAAC,EAoBpC,GAnBAH,EAAM,UAAU,CAAC,CAAE,KAAAV,EAAM,KAAAC,CAAK,IAAM,CAGlC,OAAW,CAAC3B,CAAM,IAAK2B,EACrB3B,EAAO,UAAU,OAAO,sBAAsB,EAC9CA,EAAO,UAAU,OAAO,sBAAsB,EAIhD,OAAW,CAACmB,EAAO,CAACnB,CAAM,CAAC,IAAK0B,EAAK,QAAQ,EAC3C1B,EAAO,UAAU,IAAI,sBAAsB,EAC3CA,EAAO,UAAU,OACf,uBACAmB,IAAUO,EAAK,OAAS,CAC1B,CAEJ,CAAC,EAGGc,EAAQ,YAAY,EAAG,CAGzB,IAAMC,EAAUC,EACd/C,EAAU,KAAKgD,GAAa,CAAC,EAAGrC,EAAI,IAAG,EAAY,CAAC,EACpDX,EAAU,KAAKgD,GAAa,GAAG,EAAGrC,EAAI,IAAM,QAAiB,CAAC,CAChE,EAGA8B,E
ACG,KACCQ,EAAO,CAAC,CAAE,KAAAlB,CAAK,IAAMA,EAAK,OAAS,CAAC,EACpCmB,GAAeJ,CAAO,CACxB,EACG,UAAU,CAAC,CAAC,CAAE,KAAAf,CAAK,EAAGoB,CAAQ,IAAM,CACnC,GAAM,CAAC9C,CAAM,EAAI0B,EAAKA,EAAK,OAAS,GACpC,GAAI1B,EAAO,aAAc,CAGvB,IAAM+C,EAAYC,GAAoBhD,CAAM,EAC5C,GAAI,OAAO+C,GAAc,YAAa,CACpC,IAAM3B,EAASpB,EAAO,UAAY+C,EAAU,UACtC,CAAE,OAAAxC,CAAO,EAAI0C,GAAeF,CAAS,EAC3CA,EAAU,SAAS,CACjB,IAAK3B,EAASb,EAAS,EACvB,SAAAuC,CACF,CAAC,CACH,CACF,CACF,CAAC,CACP,CAGA,OAAIN,EAAQ,qBAAqB,GAC/B7C,EACG,KACCuD,GAAUZ,CAAK,EACfjC,EAAwB,QAAQ,EAChCsC,GAAa,GAAG,EAChBQ,GAAK,CAAC,EACND,GAAUf,EAAQ,KAAKgB,GAAK,CAAC,CAAC,CAAC,EAC/BC,GAAO,CAAE,MAAO,GAAI,CAAC,EACrBP,GAAeT,CAAK,CACtB,EACG,UAAU,CAAC,CAAC,CAAE,CAAE,KAAAV,CAAK,CAAC,IAAM,CAC3B,IAAM2B,EAAMC,GAAY,EAGlBtD,EAAS0B,EAAKA,EAAK,OAAS,GAClC,GAAI1B,GAAUA,EAAO,OAAQ,CAC3B,GAAM,CAACuD,CAAM,EAAIvD,EACX,CAAE,KAAAwD,CAAK,EAAI,IAAI,IAAID,EAAO,IAAI,EAChCF,EAAI,OAASG,IACfH,EAAI,KAAOG,EACX,QAAQ,aAAa,CAAC,EAAG,GAAI,GAAGH,GAAK,EAIzC,MACEA,EAAI,KAAO,GACX,QAAQ,aAAa,CAAC,EAAG,GAAI,GAAGA,GAAK,CAEzC,CAAC,EAGA5D,GAAqBC,EAAI,CAAE,UAAAC,EAAW,QAAAC,CAAQ,CAAC,EACnD,KACC6D,EAAIC,GAAStB,EAAM,KAAKsB,CAAK,CAAC,EAC9BC,EAAS,IAAMvB,EAAM,SAAS,CAAC,EAC/B9B,EAAIoD,GAAUE,EAAA,CAAE,IAAKlE,GAAOgE,EAAQ,CACtC,CACJ,CAAC,CACH,CCpRO,SAASG,GACdC,EAAkB,CAAE,UAAAC,EAAW,MAAAC,EAAO,QAAAC,CAAQ,EACvB,CAGvB,IAAMC,EAAaH,EAChB,KACCI,EAAI,CAAC,CAAE,OAAQ,CAAE,EAAAC,CAAE,CAAE,IAAMA,CAAC,EAC5BC,GAAY,EAAG,CAAC,EAChBF,EAAI,CAAC,CAACG,EAAGC,CAAC,IAAMD,EAAIC,GAAKA,EAAI,CAAC,EAC9BC,EAAqB,CACvB,EAGIC,EAAUT,EACb,KACCG,EAAI,CAAC,CAAE,OAAAO,CAAO,IAAMA,CAAM,CAC5B,EAGF,OAAOC,EAAc,CAACF,EAASP,CAAU,CAAC,EACvC,KACCC,EAAI,CAAC,CAACO,EAAQE,CAAS,IAAM,EAAEF,GAAUE,EAAU,EACnDJ,EAAqB,EACrBK,GAAUZ,EAAQ,KAAKa,GAAK,CAAC,CAAC,CAAC,EAC/BC,GAAQ,EAAI,EACZC,GAAO,CAAE,MAAO,GAAI,CAAC,EACrBb,EAAIc,IAAW,CAAE,OAAAA,CAAO,EAAE,CAC5B,CACJ,CAYO,SAASC,GACdC,EAAiB,CAAE,UAAApB,EAAW,QAAAqB,EAAS,MAAApB,EAAO,QAAAC,CAAQ,EACpB,CAClC,IAAMoB,EAAQ,IAAIC,EACZC,EAAQF,EAAM,KAAKG,GAAS,CAAC,CAAC,EACpC,OAAAH,EAAM,UAAU,CAGd,KAAK,CAAE,OAAAJ,CAAO,EAAG,CACfE,EAAG,OAASF,EACRA,GACFE,EAAG,aAAa,WAAY,IAAI,EAChCA,EAAG,KAAK,GAERA,EAAG,gBAAgB,UAAU,CAEjC,EAGA,UAAW,CACTA,EAAG,MAAM,IAAM,GACfA,EAAG,OAAS,GACZA,EAAG,gBAAgB,UAAU,CAC/B,CACF,CAAC,EAGDC,EACG,KACCP,GAAUU,CAAK,EACfE,EAAwB,QAAQ,CAClC,EACG,UAAU,CAAC,CAAE,OAAAC,CAAO,IAAM,CACzBP,EAAG,MAAM,IAAM,GAAGO,EAAS,MAC7B,CAAC,EAGE7B,GAAesB,EAAI,CAAE,UAAApB,EAAW,MAAAC,EAAO,QAAAC,CAAQ,CAAC,EACpD,KACC0B,EAAIC,GAASP,EAAM,KAAKO,CAAK,CAAC,EAC9BC,EAAS,IAAMR,EAAM,SAAS,CAAC,EAC/BlB,EAAIyB,GAAUE,EAAA,CAAE,IAAKX,GAAOS,EAAQ,CACtC,CACJ,CCpHO,SAASG,GACd,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EACf,CACND,EACG,KACCE,EAAU,IAAMC,EAEd,0DACF,CAAC,EACDC,EAAIC,GAAM,CACRA,EAAG,cAAgB,GACnBA,EAAG,QAAU,EACf,CAAC,EACDC,GAASD,GAAME,EAAUF,EAAI,QAAQ,EAClC,KACCG,GAAU,IAAMH,EAAG,UAAU,SAAS,0BAA0B,CAAC,EACjEI,EAAI,IAAMJ,CAAE,CACd,CACF,EACAK,GAAeT,CAAO,CACxB,EACG,UAAU,CAAC,CAACI,EAAIM,CAAM,IAAM,CAC3BN,EAAG,UAAU,OAAO,0BAA0B,EAC1CM,IACFN,EAAG,QAAU,GACjB,CAAC,CACP,CC/BA,SAASO,IAAyB,CAChC,MAAO,qBAAqB,KAAK,UAAU,SAAS,CACtD,CAiBO,SAASC,GACd,CAAE,UAAAC,CAAU,EACN,CACNA,EACG,KACCC,EAAU,IAAMC,EAAY,qBAAqB,CAAC,EAClDC,EAAIC,GAAMA,EAAG,gBAAgB,mBAAmB,CAAC,EACjDC,EAAOP,EAAa,EACpBQ,GAASF,GAAMG,EAAUH,EAAI,YAAY,EACtC,KACCI,EAAI,IAAMJ,CAAE,CACd,CACF,CACF,EACG,UAAUA,GAAM,CACf,IAAMK,EAAML,EAAG,UAGXK,IAAQ,EACVL,EAAG,UAAY,EAGNK,EAAML,EAAG,eAAiBA,EAAG,eACtCA,EAAG,UAAYK,EAAM,EAEzB,CAAC,CACP,CCpCO,SAASC,GACd,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EACf,CACNC,EAAc,CAACC,GAAY,QAAQ,EAAGF,CAAO,CAAC,EAC3C,KACCG,EAAI,CAAC,CAACC,EAAQC,CAAM,IAAMD,GAAU,CAACC,CAAM,EAC3CC,EAAUF,GAAUG,EAAGH,CAAM,EAC1B,KACCI,GAAMJ,EAAS,IAAM,GAAG,CAC1B,CACF,EACAK,GAAeV,CAAS,CAC1B,EACG,U
AAU,CAAC,CAACK,EAAQ,CAAE,OAAQ,CAAE,EAAAM,CAAE,CAAC,CAAC,IAAM,CACzC,GAAIN,EACF,SAAS,KAAK,aAAa,qBAAsB,EAAE,EACnD,SAAS,KAAK,MAAM,IAAM,IAAIM,UACzB,CACL,IAAMC,EAAQ,GAAK,SAAS,SAAS,KAAK,MAAM,IAAK,EAAE,EACvD,SAAS,KAAK,gBAAgB,oBAAoB,EAClD,SAAS,KAAK,MAAM,IAAM,GACtBA,GACF,OAAO,SAAS,EAAGA,CAAK,CAC5B,CACF,CAAC,CACP,CC7DK,OAAO,UACV,OAAO,QAAU,SAAUC,EAAa,CACtC,IAAMC,EAA2B,CAAC,EAClC,QAAWC,KAAO,OAAO,KAAKF,CAAG,EAE/BC,EAAK,KAAK,CAACC,EAAKF,EAAIE,EAAI,CAAC,EAG3B,OAAOD,CACT,GAGG,OAAO,SACV,OAAO,OAAS,SAAUD,EAAa,CACrC,IAAMC,EAAiB,CAAC,EACxB,QAAWC,KAAO,OAAO,KAAKF,CAAG,EAE/BC,EAAK,KAAKD,EAAIE,EAAI,EAGpB,OAAOD,CACT,GAKE,OAAO,SAAY,cAGhB,QAAQ,UAAU,WACrB,QAAQ,UAAU,SAAW,SAC3BE,EAA8BC,EACxB,CACF,OAAOD,GAAM,UACf,KAAK,WAAaA,EAAE,KACpB,KAAK,UAAYA,EAAE,MAEnB,KAAK,WAAaA,EAClB,KAAK,UAAYC,EAErB,GAGG,QAAQ,UAAU,cACrB,QAAQ,UAAU,YAAc,YAC3BC,EACG,CACN,IAAMC,EAAS,KAAK,WACpB,GAAIA,EAAQ,CACND,EAAM,SAAW,GACnBC,EAAO,YAAY,IAAI,EAGzB,QAASC,EAAIF,EAAM,OAAS,EAAGE,GAAK,EAAGA,IAAK,CAC1C,IAAIC,EAAOH,EAAME,GACb,OAAOC,GAAS,SAClBA,EAAO,SAAS,eAAeA,CAAI,EAC5BA,EAAK,YACZA,EAAK,WAAW,YAAYA,CAAI,EAG7BD,EAGHD,EAAO,aAAa,KAAK,gBAAkBE,CAAI,EAF/CF,EAAO,aAAaE,EAAM,IAAI,CAGlC,CACF,CACF,IjMDJ,SAAS,gBAAgB,UAAU,OAAO,OAAO,EACjD,SAAS,gBAAgB,UAAU,IAAI,IAAI,EAG3C,IAAMC,GAAYC,GAAc,EAC1BC,GAAYC,GAAc,EAC1BC,GAAYC,GAAoB,EAChCC,GAAYC,GAAc,EAG1BC,GAAYC,GAAc,EAC1BC,GAAYC,GAAW,oBAAoB,EAC3CC,GAAYD,GAAW,qBAAqB,EAC5CE,GAAYC,GAAW,EAGvBC,GAASC,GAAc,EACvBC,GAAS,SAAS,MAAM,UAAU,QAAQ,GAC5C,+BAAU,QAASC,GACnB,IAAI,IAAI,2BAA4BH,GAAO,IAAI,CACjD,EACEI,GAGEC,GAAS,IAAIC,EACnBC,GAAiB,CAAE,OAAAF,EAAO,CAAC,EAGvBG,EAAQ,oBAAoB,GAC9BC,GAAoB,CAAE,UAAAxB,GAAW,UAAAE,GAAW,UAAAM,EAAU,CAAC,EA1HzD,IAAAiB,KA6HIA,GAAAV,GAAO,UAAP,YAAAU,GAAgB,YAAa,QAC/BC,GAAqB,CAAE,UAAA1B,EAAU,CAAC,EAGpC2B,EAAMzB,GAAWE,EAAO,EACrB,KACCwB,GAAM,GAAG,CACX,EACG,UAAU,IAAM,CACfC,GAAU,SAAU,EAAK,EACzBA,GAAU,SAAU,EAAK,CAC3B,CAAC,EAGLvB,GACG,KACCwB,EAAO,CAAC,CAAE,KAAAC,CAAK,IAAMA,IAAS,QAAQ,CACxC,EACG,UAAUC,GAAO,CAChB,OAAQA,EAAI,KAAM,CAGhB,IAAK,IACL,IAAK,IACH,IAAMC,EAAOC,GAAmB,kBAAkB,EAC9C,OAAOD,GAAS,aAClBA,EAAK,MAAM,EACb,MAGF,IAAK,IACL,IAAK,IACH,IAAME,EAAOD,GAAmB,kBAAkB,EAC9C,OAAOC,GAAS,aAClBA,EAAK,MAAM,EACb,KACJ,CACF,CAAC,EAGLC,GAAmB,CAAE,UAAApC,GAAW,QAAAU,EAAQ,CAAC,EACzC2B,GAAe,CAAE,UAAArC,EAAU,CAAC,EAC5BsC,GAAgB,CAAE,UAAA9B,GAAW,QAAAE,EAAQ,CAAC,EAGtC,IAAM6B,GAAUC,GAAYC,GAAoB,QAAQ,EAAG,CAAE,UAAAjC,EAAU,CAAC,EAClEkC,GAAQ1C,GACX,KACC2C,EAAI,IAAMF,GAAoB,MAAM,CAAC,EACrCG,EAAUC,GAAMC,GAAUD,EAAI,CAAE,UAAArC,GAAW,QAAA+B,EAAQ,CAAC,CAAC,EACrDQ,EAAY,CAAC,CACf,EAGIC,GAAWrB,EAGf,GAAGsB,GAAqB,SAAS,EAC9B,IAAIJ,GAAMK,GAAaL,EAAI,CAAE,QAAAzC,EAAQ,CAAC,CAAC,EAG1C,GAAG6C,GAAqB,QAAQ,EAC7B,IAAIJ,GAAMM,GAAYN,EAAI,CAAE,OAAAzB,EAAO,CAAC,CAAC,EAGxC,GAAG6B,GAAqB,QAAQ,EAC7B,IAAIJ,GAAMO,GAAYP,EAAI,CAAE,UAAArC,GAAW,QAAA+B,GAAS,MAAAG,EAAM,CAAC,CAAC,EAG3D,GAAGO,GAAqB,SAAS,EAC9B,IAAIJ,GAAMQ,GAAaR,CAAE,CAAC,EAG7B,GAAGI,GAAqB,QAAQ,EAC7B,IAAIJ,GAAMS,GAAYT,EAAI,CAAE,OAAA5B,GAAQ,UAAAX,EAAU,CAAC,CAAC,EAGnD,GAAG2C,GAAqB,QAAQ,EAC7B,IAAIJ,GAAMU,GAAYV,CAAE,CAAC,CAC9B,EAGMW,GAAWC,EAAM,IAAM9B,EAG3B,GAAGsB,GAAqB,UAAU,EAC/B,IAAIJ,GAAMa,GAAcb,CAAE,CAAC,EAG9B,GAAGI,GAAqB,SAAS,EAC9B,IAAIJ,GAAMc,GAAad,EAAI,CAAE,UAAArC,GAAW,QAAAJ,GAAS,OAAAS,EAAO,CAAC,CAAC,EAG7D,GAAGoC,GAAqB,SAAS,EAC9B,IAAIJ,GAAMtB,EAAQ,kBAAkB,EACjCqC,GAAoBf,EAAI,CAAE,OAAA5B,GAAQ,UAAAf,EAAU,CAAC,EAC7C2D,CACJ,EAGF,GAAGZ,GAAqB,cAAc,EACnC,IAAIJ,GAAMiB,GAAiBjB,EAAI,CAAE,UAAArC,GAAW,QAAA+B,EAAQ,CAAC,CAAC,EAGzD,GAAGU,GAAqB,SAAS,EAC9B,IAAIJ,GAAMA,EAAG,aAAa,cAAc,IAAM,aAC3CkB,GAAGnD,GAAS,IAAMoD,GAAanB,EAAI,CAAE,UAAArC,GAAW,QAAA+B,GAAS,MAAAG,EAAM,CAAC,CAAC,EACjEqB,GAAGrD,GAAS,IAAMsD,GAAanB,EAAI,CAAE,UAAArC,GAAW,QAAA+B,GAAS,MAA
AG,EAAM,CAAC,CAAC,CACrE,EAGF,GAAGO,GAAqB,MAAM,EAC3B,IAAIJ,GAAMoB,GAAUpB,EAAI,CAAE,UAAArC,GAAW,QAAA+B,EAAQ,CAAC,CAAC,EAGlD,GAAGU,GAAqB,KAAK,EAC1B,IAAIJ,GAAMqB,GAAqBrB,EAAI,CAAE,UAAArC,GAAW,QAAA+B,GAAS,QAAAnC,EAAQ,CAAC,CAAC,EAGtE,GAAG6C,GAAqB,KAAK,EAC1B,IAAIJ,GAAMsB,GAAetB,EAAI,CAAE,UAAArC,GAAW,QAAA+B,GAAS,MAAAG,GAAO,QAAAtC,EAAQ,CAAC,CAAC,CACzE,CAAC,EAGKgE,GAAapE,GAChB,KACC4C,EAAU,IAAMY,EAAQ,EACxBa,GAAUrB,EAAQ,EAClBD,EAAY,CAAC,CACf,EAGFqB,GAAW,UAAU,EAMrB,OAAO,UAAapE,GACpB,OAAO,UAAaE,GACpB,OAAO,QAAaE,GACpB,OAAO,UAAaE,GACpB,OAAO,UAAaE,GACpB,OAAO,QAAaE,GACpB,OAAO,QAAaE,GACpB,OAAO,OAAaC,GACpB,OAAO,OAAaO,GACpB,OAAO,WAAagD", + "names": ["require_focus_visible", "__commonJSMin", "exports", "module", "global", "factory", "applyFocusVisiblePolyfill", "scope", "hadKeyboardEvent", "hadFocusVisibleRecently", "hadFocusVisibleRecentlyTimeout", "inputTypesAllowlist", "isValidFocusTarget", "el", "focusTriggersKeyboardModality", "type", "tagName", "addFocusVisibleClass", "removeFocusVisibleClass", "onKeyDown", "e", "onPointerDown", "onFocus", "onBlur", "onVisibilityChange", "addInitialPointerMoveListeners", "onInitialPointerMove", "removeInitialPointerMoveListeners", "event", "error", "require_url_polyfill", "__commonJSMin", "exports", "global", "checkIfIteratorIsSupported", "error", "iteratorSupported", "createIterator", "items", "iterator", "value", "serializeParam", "deserializeParam", "polyfillURLSearchParams", "URLSearchParams", "searchString", "typeofSearchString", "_this", "name", "i", "entry", "key", "proto", "callback", "thisArg", "entries", "searchArray", "checkIfURLSearchParamsSupported", "e", "a", "b", "keys", "attributes", "attribute", "checkIfURLIsSupported", "u", "polyfillURL", "_URL", "URL", "url", "base", "doc", "baseElement", "err", "anchorElement", "inputElement", "searchParams", "enableSearchUpdate", "enableSearchParamsUpdate", "methodName", "method", "search", "linkURLWithAnchorAttribute", "attributeName", "expectedPort", "addPortToOrigin", "blob", "getOrigin", "require_tslib", "__commonJSMin", "exports", "module", "__extends", "__assign", "__rest", "__decorate", "__param", "__metadata", "__awaiter", "__generator", "__exportStar", "__values", "__read", "__spread", "__spreadArrays", "__spreadArray", "__await", "__asyncGenerator", "__asyncDelegator", "__asyncValues", "__makeTemplateObject", "__importStar", "__importDefault", "__classPrivateFieldGet", "__classPrivateFieldSet", "__createBinding", "factory", "root", "createExporter", "previous", "id", "v", "exporter", "extendStatics", "d", "b", "p", "__", "t", "s", "n", "e", "i", "decorators", "target", "key", "desc", "c", "r", "paramIndex", "decorator", "metadataKey", "metadataValue", "thisArg", "_arguments", "P", "generator", "adopt", "value", "resolve", "reject", "fulfilled", "step", "rejected", "result", "body", "_", "f", "y", "g", "verb", "op", "m", "o", "k", "k2", "ar", "error", "il", "a", "j", "jl", "to", "from", "pack", "l", "q", "resume", "settle", "fulfill", "cooked", "raw", "__setModuleDefault", "mod", "receiver", "state", "kind", "require_clipboard", "__commonJSMin", "exports", "module", "root", "factory", "__webpack_modules__", "__unused_webpack_module", "__webpack_exports__", "__webpack_require__", "clipboard", "tiny_emitter", "tiny_emitter_default", "listen", "listen_default", "src_select", "select_default", "command", "type", "err", "ClipboardActionCut", "target", "selectedText", "actions_cut", "createFakeElement", "value", "isRTL", "fakeElement", "yPosition", "fakeCopyAction", "options", "ClipboardActionCopy", "actions_copy", 
"_typeof", "obj", "ClipboardActionDefault", "_options$action", "action", "container", "text", "actions_default", "clipboard_typeof", "_classCallCheck", "instance", "Constructor", "_defineProperties", "props", "i", "descriptor", "_createClass", "protoProps", "staticProps", "_inherits", "subClass", "superClass", "_setPrototypeOf", "o", "p", "_createSuper", "Derived", "hasNativeReflectConstruct", "_isNativeReflectConstruct", "Super", "_getPrototypeOf", "result", "NewTarget", "_possibleConstructorReturn", "self", "call", "_assertThisInitialized", "e", "getAttributeValue", "suffix", "element", "attribute", "Clipboard", "_Emitter", "_super", "trigger", "_this", "_this2", "selector", "actions", "support", "DOCUMENT_NODE_TYPE", "proto", "closest", "__unused_webpack_exports", "_delegate", "callback", "useCapture", "listenerFn", "listener", "delegate", "elements", "is", "listenNode", "listenNodeList", "listenSelector", "node", "nodeList", "select", "isReadOnly", "selection", "range", "E", "name", "ctx", "data", "evtArr", "len", "evts", "liveEvents", "__webpack_module_cache__", "moduleId", "getter", "definition", "key", "prop", "require_escape_html", "__commonJSMin", "exports", "module", "matchHtmlRegExp", "escapeHtml", "string", "str", "match", "escape", "html", "index", "lastIndex", "r", "a", "e", "import_focus_visible", "n", "t", "s", "r", "o", "u", "i", "a", "e", "c", "import_url_polyfill", "import_tslib", "__extends", "__assign", "__rest", "__decorate", "__param", "__metadata", "__awaiter", "__generator", "__exportStar", "__createBinding", "__values", "__read", "__spread", "__spreadArrays", "__spreadArray", "__await", "__asyncGenerator", "__asyncDelegator", "__asyncValues", "__makeTemplateObject", "__importStar", "__importDefault", "__classPrivateFieldGet", "__classPrivateFieldSet", "tslib", "isFunction", "value", "createErrorClass", "createImpl", "_super", "instance", "ctorFunc", "UnsubscriptionError", "createErrorClass", "_super", "errors", "err", "i", "arrRemove", "arr", "item", "index", "Subscription", "initialTeardown", "errors", "_parentage", "_parentage_1", "__values", "_parentage_1_1", "parent_1", "initialFinalizer", "isFunction", "e", "UnsubscriptionError", "_finalizers", "_finalizers_1", "_finalizers_1_1", "finalizer", "execFinalizer", "err", "__spreadArray", "__read", "teardown", "_a", "parent", "arrRemove", "empty", "EMPTY_SUBSCRIPTION", "Subscription", "isSubscription", "value", "isFunction", "execFinalizer", "finalizer", "config", "timeoutProvider", "handler", "timeout", "args", "_i", "delegate", "__spreadArray", "__read", "handle", "reportUnhandledError", "err", "timeoutProvider", "onUnhandledError", "config", "noop", "COMPLETE_NOTIFICATION", "createNotification", "errorNotification", "error", "nextNotification", "value", "kind", "context", "errorContext", "cb", "config", "isRoot", "_a", "errorThrown", "error", "captureError", "err", "Subscriber", "_super", "__extends", "destination", "_this", "isSubscription", "EMPTY_OBSERVER", "next", "error", "complete", "SafeSubscriber", "value", "handleStoppedNotification", "nextNotification", "err", "errorNotification", "COMPLETE_NOTIFICATION", "Subscription", "_bind", "bind", "fn", "thisArg", "ConsumerObserver", "partialObserver", "value", "error", "handleUnhandledError", "err", "SafeSubscriber", "_super", "__extends", "observerOrNext", "complete", "_this", "isFunction", "context_1", "config", "Subscriber", "handleUnhandledError", "error", "config", "captureError", "reportUnhandledError", "defaultErrorHandler", "err", 
"handleStoppedNotification", "notification", "subscriber", "onStoppedNotification", "timeoutProvider", "EMPTY_OBSERVER", "noop", "observable", "identity", "x", "pipe", "fns", "_i", "pipeFromArray", "identity", "input", "prev", "fn", "Observable", "subscribe", "operator", "observable", "observerOrNext", "error", "complete", "_this", "subscriber", "isSubscriber", "SafeSubscriber", "errorContext", "_a", "source", "sink", "err", "next", "promiseCtor", "getPromiseCtor", "resolve", "reject", "value", "operations", "_i", "pipeFromArray", "x", "getPromiseCtor", "promiseCtor", "_a", "config", "isObserver", "value", "isFunction", "isSubscriber", "Subscriber", "isSubscription", "hasLift", "source", "isFunction", "operate", "init", "liftedSource", "err", "createOperatorSubscriber", "destination", "onNext", "onComplete", "onError", "onFinalize", "OperatorSubscriber", "_super", "__extends", "shouldUnsubscribe", "_this", "value", "err", "closed_1", "_a", "Subscriber", "animationFrameProvider", "callback", "request", "cancel", "delegate", "handle", "timestamp", "Subscription", "args", "_i", "__spreadArray", "__read", "ObjectUnsubscribedError", "createErrorClass", "_super", "Subject", "_super", "__extends", "_this", "operator", "subject", "AnonymousSubject", "ObjectUnsubscribedError", "value", "errorContext", "_b", "__values", "_c", "observer", "err", "observers", "_a", "subscriber", "hasError", "isStopped", "EMPTY_SUBSCRIPTION", "Subscription", "arrRemove", "thrownError", "observable", "Observable", "destination", "source", "AnonymousSubject", "_super", "__extends", "destination", "source", "_this", "value", "_b", "_a", "err", "subscriber", "EMPTY_SUBSCRIPTION", "Subject", "dateTimestampProvider", "ReplaySubject", "_super", "__extends", "_bufferSize", "_windowTime", "_timestampProvider", "dateTimestampProvider", "_this", "value", "_a", "isStopped", "_buffer", "_infiniteTimeWindow", "subscriber", "subscription", "copy", "i", "adjustedBufferSize", "now", "last", "Subject", "Action", "_super", "__extends", "scheduler", "work", "state", "delay", "Subscription", "intervalProvider", "handler", "timeout", "args", "_i", "delegate", "__spreadArray", "__read", "handle", "AsyncAction", "_super", "__extends", "scheduler", "work", "_this", "state", "delay", "id", "_a", "_id", "intervalProvider", "_scheduler", "error", "_delay", "errored", "errorValue", "e", "actions", "arrRemove", "Action", "Scheduler", "schedulerActionCtor", "now", "work", "delay", "state", "dateTimestampProvider", "AsyncScheduler", "_super", "__extends", "SchedulerAction", "now", "Scheduler", "_this", "action", "actions", "error", "asyncScheduler", "AsyncScheduler", "AsyncAction", "async", "AnimationFrameAction", "_super", "__extends", "scheduler", "work", "_this", "id", "delay", "animationFrameProvider", "actions", "_a", "AsyncAction", "AnimationFrameScheduler", "_super", "__extends", "action", "flushId", "actions", "error", "AsyncScheduler", "animationFrameScheduler", "AnimationFrameScheduler", "AnimationFrameAction", "EMPTY", "Observable", "subscriber", "isScheduler", "value", "isFunction", "last", "arr", "popResultSelector", "args", "isFunction", "popScheduler", "isScheduler", "popNumber", "defaultValue", "isArrayLike", "x", "isPromise", "value", "isFunction", "isInteropObservable", "input", "isFunction", "observable", "isAsyncIterable", "obj", "isFunction", "createInvalidObservableTypeError", "input", "getSymbolIterator", "iterator", "isIterable", "input", "isFunction", "iterator", "readableStreamLikeToAsyncGenerator", "readableStream", 
"reader", "__await", "_a", "_b", "value", "done", "isReadableStreamLike", "obj", "isFunction", "innerFrom", "input", "Observable", "isInteropObservable", "fromInteropObservable", "isArrayLike", "fromArrayLike", "isPromise", "fromPromise", "isAsyncIterable", "fromAsyncIterable", "isIterable", "fromIterable", "isReadableStreamLike", "fromReadableStreamLike", "createInvalidObservableTypeError", "obj", "subscriber", "obs", "observable", "isFunction", "array", "i", "promise", "value", "err", "reportUnhandledError", "iterable", "iterable_1", "__values", "iterable_1_1", "asyncIterable", "process", "readableStream", "readableStreamLikeToAsyncGenerator", "asyncIterable_1", "__asyncValues", "asyncIterable_1_1", "executeSchedule", "parentSubscription", "scheduler", "work", "delay", "repeat", "scheduleSubscription", "observeOn", "scheduler", "delay", "operate", "source", "subscriber", "createOperatorSubscriber", "value", "executeSchedule", "err", "subscribeOn", "scheduler", "delay", "operate", "source", "subscriber", "scheduleObservable", "input", "scheduler", "innerFrom", "subscribeOn", "observeOn", "schedulePromise", "input", "scheduler", "innerFrom", "subscribeOn", "observeOn", "scheduleArray", "input", "scheduler", "Observable", "subscriber", "i", "scheduleIterable", "input", "scheduler", "Observable", "subscriber", "iterator", "executeSchedule", "value", "done", "_a", "err", "isFunction", "scheduleAsyncIterable", "input", "scheduler", "Observable", "subscriber", "executeSchedule", "iterator", "result", "scheduleReadableStreamLike", "input", "scheduler", "scheduleAsyncIterable", "readableStreamLikeToAsyncGenerator", "scheduled", "input", "scheduler", "isInteropObservable", "scheduleObservable", "isArrayLike", "scheduleArray", "isPromise", "schedulePromise", "isAsyncIterable", "scheduleAsyncIterable", "isIterable", "scheduleIterable", "isReadableStreamLike", "scheduleReadableStreamLike", "createInvalidObservableTypeError", "from", "input", "scheduler", "scheduled", "innerFrom", "of", "args", "_i", "scheduler", "popScheduler", "from", "throwError", "errorOrErrorFactory", "scheduler", "errorFactory", "isFunction", "init", "subscriber", "Observable", "isValidDate", "value", "map", "project", "thisArg", "operate", "source", "subscriber", "index", "createOperatorSubscriber", "value", "isArray", "callOrApply", "fn", "args", "__spreadArray", "__read", "mapOneOrManyArgs", "map", "isArray", "getPrototypeOf", "objectProto", "getKeys", "argsArgArrayOrObject", "args", "first_1", "isPOJO", "keys", "key", "obj", "createObject", "keys", "values", "result", "key", "i", "combineLatest", "args", "_i", "scheduler", "popScheduler", "resultSelector", "popResultSelector", "_a", "argsArgArrayOrObject", "observables", "keys", "from", "result", "Observable", "combineLatestInit", "values", "createObject", "identity", "mapOneOrManyArgs", "valueTransform", "subscriber", "maybeSchedule", "length", "active", "remainingFirstValues", "i", "source", "hasFirstValue", "createOperatorSubscriber", "value", "execute", "subscription", "executeSchedule", "mergeInternals", "source", "subscriber", "project", "concurrent", "onBeforeNext", "expand", "innerSubScheduler", "additionalFinalizer", "buffer", "active", "index", "isComplete", "checkComplete", "outerNext", "value", "doInnerSub", "innerComplete", "innerFrom", "createOperatorSubscriber", "innerValue", "bufferedValue", "executeSchedule", "err", "mergeMap", "project", "resultSelector", "concurrent", "isFunction", "a", "i", "map", "b", "ii", "innerFrom", "operate", "source", "subscriber", 
"mergeInternals", "mergeAll", "concurrent", "mergeMap", "identity", "concatAll", "mergeAll", "concat", "args", "_i", "concatAll", "from", "popScheduler", "defer", "observableFactory", "Observable", "subscriber", "innerFrom", "nodeEventEmitterMethods", "eventTargetMethods", "jqueryMethods", "fromEvent", "target", "eventName", "options", "resultSelector", "isFunction", "mapOneOrManyArgs", "_a", "__read", "isEventTarget", "methodName", "handler", "isNodeStyleEventEmitter", "toCommonHandlerRegistry", "isJQueryStyleEventEmitter", "add", "remove", "isArrayLike", "mergeMap", "subTarget", "innerFrom", "Observable", "subscriber", "args", "_i", "fromEventPattern", "addHandler", "removeHandler", "resultSelector", "mapOneOrManyArgs", "Observable", "subscriber", "handler", "e", "_i", "retValue", "isFunction", "timer", "dueTime", "intervalOrScheduler", "scheduler", "async", "intervalDuration", "isScheduler", "Observable", "subscriber", "due", "isValidDate", "n", "merge", "args", "_i", "scheduler", "popScheduler", "concurrent", "popNumber", "sources", "innerFrom", "mergeAll", "from", "EMPTY", "NEVER", "Observable", "noop", "isArray", "argsOrArgArray", "args", "filter", "predicate", "thisArg", "operate", "source", "subscriber", "index", "createOperatorSubscriber", "value", "zip", "args", "_i", "resultSelector", "popResultSelector", "sources", "argsOrArgArray", "Observable", "subscriber", "buffers", "completed", "sourceIndex", "innerFrom", "createOperatorSubscriber", "value", "buffer", "result", "__spreadArray", "__read", "i", "EMPTY", "audit", "durationSelector", "operate", "source", "subscriber", "hasValue", "lastValue", "durationSubscriber", "isComplete", "endDuration", "value", "cleanupDuration", "createOperatorSubscriber", "innerFrom", "auditTime", "duration", "scheduler", "asyncScheduler", "audit", "timer", "bufferCount", "bufferSize", "startBufferEvery", "operate", "source", "subscriber", "buffers", "count", "createOperatorSubscriber", "value", "toEmit", "buffers_1", "__values", "buffers_1_1", "buffer", "toEmit_1", "toEmit_1_1", "arrRemove", "buffers_2", "buffers_2_1", "catchError", "selector", "operate", "source", "subscriber", "innerSub", "syncUnsub", "handledResult", "createOperatorSubscriber", "err", "innerFrom", "scanInternals", "accumulator", "seed", "hasSeed", "emitOnNext", "emitBeforeComplete", "source", "subscriber", "hasState", "state", "index", "createOperatorSubscriber", "value", "i", "combineLatest", "args", "_i", "resultSelector", "popResultSelector", "pipe", "__spreadArray", "__read", "mapOneOrManyArgs", "operate", "source", "subscriber", "combineLatestInit", "argsOrArgArray", "combineLatestWith", "otherSources", "_i", "combineLatest", "__spreadArray", "__read", "concatMap", "project", "resultSelector", "isFunction", "mergeMap", "debounceTime", "dueTime", "scheduler", "asyncScheduler", "operate", "source", "subscriber", "activeTask", "lastValue", "lastTime", "emit", "value", "emitWhenIdle", "targetTime", "now", "createOperatorSubscriber", "defaultIfEmpty", "defaultValue", "operate", "source", "subscriber", "hasValue", "createOperatorSubscriber", "value", "take", "count", "EMPTY", "operate", "source", "subscriber", "seen", "createOperatorSubscriber", "value", "ignoreElements", "operate", "source", "subscriber", "createOperatorSubscriber", "noop", "mapTo", "value", "map", "delayWhen", "delayDurationSelector", "subscriptionDelay", "source", "concat", "take", "ignoreElements", "mergeMap", "value", "index", "mapTo", "delay", "due", "scheduler", "asyncScheduler", "duration", "timer", 
"delayWhen", "distinctUntilChanged", "comparator", "keySelector", "identity", "defaultCompare", "operate", "source", "subscriber", "previousKey", "first", "createOperatorSubscriber", "value", "currentKey", "a", "b", "distinctUntilKeyChanged", "key", "compare", "distinctUntilChanged", "x", "y", "endWith", "values", "_i", "source", "concat", "of", "__spreadArray", "__read", "finalize", "callback", "operate", "source", "subscriber", "takeLast", "count", "EMPTY", "operate", "source", "subscriber", "buffer", "createOperatorSubscriber", "value", "buffer_1", "__values", "buffer_1_1", "merge", "args", "_i", "scheduler", "popScheduler", "concurrent", "popNumber", "argsOrArgArray", "operate", "source", "subscriber", "mergeAll", "from", "__spreadArray", "__read", "mergeWith", "otherSources", "_i", "merge", "__spreadArray", "__read", "repeat", "countOrConfig", "count", "delay", "_a", "EMPTY", "operate", "source", "subscriber", "soFar", "sourceSub", "resubscribe", "notifier", "timer", "innerFrom", "notifierSubscriber_1", "createOperatorSubscriber", "subscribeToSource", "syncUnsub", "sample", "notifier", "operate", "source", "subscriber", "hasValue", "lastValue", "createOperatorSubscriber", "value", "noop", "scan", "accumulator", "seed", "operate", "scanInternals", "share", "options", "_a", "connector", "Subject", "_b", "resetOnError", "_c", "resetOnComplete", "_d", "resetOnRefCountZero", "wrapperSource", "connection", "resetConnection", "subject", "refCount", "hasCompleted", "hasErrored", "cancelReset", "reset", "resetAndUnsubscribe", "conn", "operate", "source", "subscriber", "dest", "handleReset", "SafeSubscriber", "value", "err", "innerFrom", "on", "args", "_i", "onSubscriber", "__spreadArray", "__read", "shareReplay", "configOrBufferSize", "windowTime", "scheduler", "bufferSize", "refCount", "_a", "_b", "_c", "share", "ReplaySubject", "skip", "count", "filter", "_", "index", "skipUntil", "notifier", "operate", "source", "subscriber", "taking", "skipSubscriber", "createOperatorSubscriber", "noop", "innerFrom", "value", "startWith", "values", "_i", "scheduler", "popScheduler", "operate", "source", "subscriber", "concat", "switchMap", "project", "resultSelector", "operate", "source", "subscriber", "innerSubscriber", "index", "isComplete", "checkComplete", "createOperatorSubscriber", "value", "innerIndex", "outerIndex", "innerFrom", "innerValue", "takeUntil", "notifier", "operate", "source", "subscriber", "innerFrom", "createOperatorSubscriber", "noop", "takeWhile", "predicate", "inclusive", "operate", "source", "subscriber", "index", "createOperatorSubscriber", "value", "result", "tap", "observerOrNext", "error", "complete", "tapObserver", "isFunction", "operate", "source", "subscriber", "_a", "isUnsub", "createOperatorSubscriber", "value", "err", "_b", "identity", "defaultThrottleConfig", "throttle", "durationSelector", "config", "operate", "source", "subscriber", "leading", "trailing", "hasValue", "sendValue", "throttled", "isComplete", "endThrottling", "send", "cleanupThrottling", "startThrottle", "value", "innerFrom", "createOperatorSubscriber", "throttleTime", "duration", "scheduler", "config", "asyncScheduler", "defaultThrottleConfig", "duration$", "timer", "throttle", "withLatestFrom", "inputs", "_i", "project", "popResultSelector", "operate", "source", "subscriber", "len", "otherValues", "hasValue", "ready", "i", "innerFrom", "createOperatorSubscriber", "value", "identity", "noop", "values", "__spreadArray", "__read", "zip", "sources", "_i", "operate", "source", "subscriber", "__spreadArray", 
"__read", "zipWith", "otherInputs", "_i", "zip", "__spreadArray", "__read", "watchDocument", "document$", "ReplaySubject", "fromEvent", "getElements", "selector", "node", "getElement", "el", "getOptionalElement", "getActiveElement", "watchElementFocus", "el", "merge", "fromEvent", "debounceTime", "map", "active", "getActiveElement", "startWith", "distinctUntilChanged", "getElementOffset", "el", "watchElementOffset", "merge", "fromEvent", "auditTime", "animationFrameScheduler", "map", "startWith", "getElementContentOffset", "el", "watchElementContentOffset", "merge", "fromEvent", "auditTime", "animationFrameScheduler", "map", "startWith", "MapShim", "getIndex", "arr", "key", "result", "entry", "index", "class_1", "value", "entries", "callback", "ctx", "_i", "_a", "isBrowser", "global$1", "requestAnimationFrame$1", "trailingTimeout", "throttle", "delay", "leadingCall", "trailingCall", "lastCallTime", "resolvePending", "proxy", "timeoutCallback", "timeStamp", "REFRESH_DELAY", "transitionKeys", "mutationObserverSupported", "ResizeObserverController", "observer", "observers", "changesDetected", "activeObservers", "_b", "propertyName", "isReflowProperty", "defineConfigurable", "target", "props", "getWindowOf", "ownerGlobal", "emptyRect", "createRectInit", "toFloat", "getBordersSize", "styles", "positions", "size", "position", "getPaddings", "paddings", "positions_1", "getSVGContentRect", "bbox", "getHTMLElementContentRect", "clientWidth", "clientHeight", "horizPad", "vertPad", "width", "height", "isDocumentElement", "vertScrollbar", "horizScrollbar", "isSVGGraphicsElement", "getContentRect", "createReadOnlyRect", "x", "y", "Constr", "rect", "ResizeObservation", "ResizeObserverEntry", "rectInit", "contentRect", "ResizeObserverSPI", "controller", "callbackCtx", "observations", "_this", "observation", "ResizeObserver", "method", "ResizeObserver_es_default", "entry$", "Subject", "observer$", "defer", "of", "ResizeObserver_es_default", "entries", "entry", "switchMap", "observer", "merge", "NEVER", "finalize", "shareReplay", "getElementSize", "el", "watchElementSize", "tap", "filter", "target", "map", "startWith", "getElementContentSize", "el", "getElementContainer", "parent", "entry$", "Subject", "observer$", "defer", "of", "entries", "entry", "switchMap", "observer", "merge", "NEVER", "finalize", "shareReplay", "watchElementVisibility", "el", "tap", "filter", "target", "map", "isIntersecting", "watchElementBoundary", "threshold", "watchElementContentOffset", "y", "visible", "getElementSize", "content", "getElementContentSize", "distinctUntilChanged", "toggles", "getElement", "getToggle", "name", "setToggle", "value", "watchToggle", "el", "fromEvent", "map", "startWith", "isSusceptibleToKeyboard", "el", "type", "watchKeyboard", "fromEvent", "filter", "ev", "map", "getToggle", "mode", "active", "getActiveElement", "share", "getLocation", "setLocation", "url", "watchLocation", "Subject", "appendChild", "el", "child", "node", "h", "tag", "attributes", "children", "attr", "truncate", "value", "n", "i", "round", "digits", "getLocationHash", "setLocationHash", "hash", "el", "h", "ev", "watchLocationHash", "fromEvent", "map", "startWith", "filter", "shareReplay", "watchLocationTarget", "id", "getOptionalElement", "watchMedia", "query", "media", "fromEventPattern", "next", "startWith", "watchPrint", "merge", "fromEvent", "map", "at", "query$", "factory", "switchMap", "active", "EMPTY", "request", "url", "options", "from", "catchError", "EMPTY", "switchMap", "res", "throwError", "of", "requestJSON", 
"shareReplay", "requestXML", "dom", "map", "watchScript", "src", "script", "h", "defer", "merge", "fromEvent", "switchMap", "throwError", "map", "finalize", "take", "getViewportOffset", "watchViewportOffset", "merge", "fromEvent", "map", "startWith", "getViewportSize", "watchViewportSize", "fromEvent", "map", "startWith", "watchViewport", "combineLatest", "watchViewportOffset", "watchViewportSize", "map", "offset", "size", "shareReplay", "watchViewportAt", "el", "viewport$", "header$", "size$", "distinctUntilKeyChanged", "offset$", "combineLatest", "map", "getElementOffset", "height", "offset", "size", "x", "y", "watchWorker", "worker", "tx$", "rx$", "fromEvent", "map", "data", "throttle", "tap", "message", "switchMap", "share", "script", "getElement", "config", "getLocation", "configuration", "feature", "flag", "translation", "key", "value", "getComponentElement", "type", "node", "getElement", "getComponentElements", "getElements", "watchAnnounce", "el", "button", "getElement", "fromEvent", "map", "content", "mountAnnounce", "feature", "EMPTY", "defer", "push$", "Subject", "startWith", "hash", "_a", "tap", "state", "finalize", "__spreadValues", "watchConsent", "el", "target$", "map", "target", "mountConsent", "options", "internal$", "Subject", "hidden", "tap", "state", "finalize", "__spreadValues", "import_clipboard", "renderTooltip", "id", "h", "renderAnnotation", "id", "prefix", "anchor", "h", "renderTooltip", "renderClipboardButton", "id", "h", "translation", "renderSearchDocument", "document", "flag", "parent", "teaser", "missing", "key", "list", "h", "url", "feature", "match", "highlight", "value", "tags", "configuration", "truncate", "tag", "id", "type", "translation", "renderSearchResultItem", "result", "threshold", "docs", "doc", "article", "index", "best", "more", "children", "section", "renderSourceFacts", "facts", "h", "key", "value", "round", "renderTabbedControl", "type", "classes", "h", "renderTable", "table", "h", "renderVersion", "version", "config", "configuration", "url", "h", "renderVersionSelector", "versions", "active", "translation", "watchAnnotation", "el", "container", "offset$", "defer", "combineLatest", "watchElementOffset", "watchElementContentOffset", "map", "x", "y", "scroll", "width", "height", "getElementSize", "watchElementFocus", "switchMap", "active", "offset", "take", "mountAnnotation", "target$", "tooltip", "index", "push$", "Subject", "done$", "takeLast", "watchElementVisibility", "takeUntil", "visible", "merge", "filter", "debounceTime", "auditTime", "animationFrameScheduler", "throttleTime", "origin", "fromEvent", "ev", "withLatestFrom", "_a", "parent", "getActiveElement", "target", "delay", "tap", "state", "finalize", "__spreadValues", "findAnnotationMarkers", "container", "markers", "el", "getElements", "nodes", "it", "node", "text", "match", "id", "force", "marker", "swap", "source", "target", "mountAnnotationList", "target$", "print$", "parent", "prefix", "annotations", "getOptionalElement", "renderAnnotation", "EMPTY", "defer", "done$", "Subject", "pairs", "annotation", "getElement", "takeUntil", "takeLast", "active", "inner", "child", "merge", "mountAnnotation", "finalize", "share", "sequence", "findCandidateList", "el", "sibling", "watchCodeBlock", "watchElementSize", "map", "width", "getElementContentSize", "distinctUntilKeyChanged", "mountCodeBlock", "options", "hover", "factory$", "defer", "push$", "Subject", "scrollable", "ClipboardJS", "parent", "renderClipboardButton", "container", "list", "feature", "annotations$", 
"mountAnnotationList", "tap", "state", "finalize", "__spreadValues", "mergeWith", "height", "distinctUntilChanged", "switchMap", "active", "EMPTY", "watchElementVisibility", "filter", "visible", "take", "mermaid$", "sequence", "fetchScripts", "watchScript", "of", "mountMermaid", "el", "tap", "mermaid_default", "map", "shareReplay", "id", "host", "h", "svg", "shadow", "watchDetails", "el", "target$", "print$", "open", "merge", "map", "target", "filter", "details", "active", "tap", "mountDetails", "options", "defer", "push$", "Subject", "action", "reveal", "state", "finalize", "__spreadValues", "sentinel", "h", "mountDataTable", "el", "renderTable", "of", "watchContentTabs", "el", "inputs", "getElements", "initial", "input", "merge", "fromEvent", "map", "getElement", "startWith", "active", "mountContentTabs", "viewport$", "prev", "renderTabbedControl", "next", "container", "defer", "push$", "Subject", "done$", "takeLast", "combineLatest", "watchElementSize", "auditTime", "animationFrameScheduler", "takeUntil", "size", "offset", "getElementOffset", "width", "getElementSize", "content", "getElementContentOffset", "watchElementContentOffset", "getElementContentSize", "direction", "feature", "skip", "withLatestFrom", "tab", "y", "set", "label", "tabs", "tap", "state", "finalize", "__spreadValues", "subscribeOn", "asyncScheduler", "mountContent", "el", "viewport$", "target$", "print$", "merge", "getElements", "child", "mountCodeBlock", "mountMermaid", "mountDataTable", "mountDetails", "mountContentTabs", "watchDialog", "_el", "alert$", "switchMap", "message", "merge", "of", "delay", "map", "active", "mountDialog", "el", "options", "inner", "getElement", "defer", "push$", "Subject", "tap", "state", "finalize", "__spreadValues", "isHidden", "viewport$", "feature", "of", "direction$", "map", "y", "bufferCount", "a", "b", "distinctUntilKeyChanged", "hidden$", "combineLatest", "filter", "offset", "direction", "distinctUntilChanged", "search$", "watchToggle", "search", "switchMap", "active", "startWith", "watchHeader", "el", "options", "defer", "watchElementSize", "height", "hidden", "shareReplay", "mountHeader", "header$", "main$", "push$", "Subject", "done$", "takeLast", "combineLatestWith", "takeUntil", "state", "__spreadValues", "watchHeaderTitle", "el", "viewport$", "header$", "watchViewportAt", "map", "y", "height", "getElementSize", "distinctUntilKeyChanged", "mountHeaderTitle", "options", "defer", "push$", "Subject", "active", "heading", "getOptionalElement", "EMPTY", "tap", "state", "finalize", "__spreadValues", "watchMain", "el", "viewport$", "header$", "adjust$", "map", "height", "distinctUntilChanged", "border$", "switchMap", "watchElementSize", "distinctUntilKeyChanged", "combineLatest", "header", "top", "bottom", "y", "a", "b", "watchPalette", "inputs", "current", "input", "of", "mergeMap", "fromEvent", "map", "startWith", "shareReplay", "mountPalette", "el", "defer", "push$", "Subject", "palette", "key", "value", "index", "label", "observeOn", "asyncScheduler", "getElements", "tap", "state", "finalize", "__spreadValues", "import_clipboard", "extract", "el", "text", "setupClipboardJS", "alert$", "ClipboardJS", "Observable", "subscriber", "getElement", "ev", "tap", "map", "translation", "preprocess", "urls", "root", "next", "a", "b", "url", "index", "fetchSitemap", "base", "cached", "of", "config", "configuration", "requestXML", "map", "sitemap", "getElements", "node", "catchError", "EMPTY", "defaultIfEmpty", "tap", "setupInstantLoading", "document$", "location$", "viewport$", "config", 
"configuration", "fromEvent", "favicon", "getOptionalElement", "push$", "fetchSitemap", "map", "paths", "path", "switchMap", "urls", "filter", "ev", "el", "url", "of", "NEVER", "share", "pop$", "merge", "distinctUntilChanged", "a", "b", "response$", "distinctUntilKeyChanged", "request", "catchError", "setLocation", "sample", "dom", "res", "skip", "replacement", "selector", "feature", "source", "target", "getComponentElement", "getElements", "concatMap", "script", "h", "name", "Observable", "observer", "EMPTY", "offset", "setLocationHash", "skipUntil", "debounceTime", "bufferCount", "state", "import_escape_html", "import_escape_html", "setupSearchHighlighter", "config", "escape", "separator", "highlight", "_", "data", "term", "query", "match", "value", "escapeHTML", "defaultTransform", "query", "terms", "index", "isSearchReadyMessage", "message", "isSearchQueryMessage", "isSearchResultMessage", "setupSearchIndex", "config", "docs", "translation", "options", "feature", "setupSearchWorker", "url", "index", "configuration", "worker", "tx$", "Subject", "rx$", "watchWorker", "map", "message", "isSearchResultMessage", "result", "document", "share", "from", "data", "setupVersionSelector", "document$", "config", "configuration", "versions$", "requestJSON", "catchError", "EMPTY", "current$", "map", "versions", "current", "version", "aliases", "switchMap", "urls", "fromEvent", "filter", "ev", "withLatestFrom", "el", "url", "of", "fetchSitemap", "sitemap", "path", "getLocation", "setLocation", "combineLatest", "getElement", "renderVersionSelector", "_a", "outdated", "latest", "warning", "getComponentElements", "watchSearchQuery", "el", "rx$", "fn", "defaultTransform", "searchParams", "getLocation", "setToggle", "param$", "filter", "isSearchReadyMessage", "take", "map", "watchToggle", "active", "url", "value", "focus$", "watchElementFocus", "value$", "merge", "fromEvent", "delay", "startWith", "distinctUntilChanged", "combineLatest", "focus", "shareReplay", "mountSearchQuery", "tx$", "push$", "Subject", "done$", "takeLast", "distinctUntilKeyChanged", "translation", "takeUntil", "tap", "state", "finalize", "__spreadValues", "share", "mountSearchResult", "el", "rx$", "query$", "push$", "Subject", "boundary$", "watchElementBoundary", "filter", "meta", "getElement", "list", "ready$", "isSearchReadyMessage", "take", "withLatestFrom", "skipUntil", "items", "value", "translation", "round", "tap", "switchMap", "merge", "of", "bufferCount", "zipWith", "chunk", "result", "renderSearchResultItem", "isSearchResultMessage", "map", "data", "state", "finalize", "__spreadValues", "watchSearchShare", "_el", "query$", "map", "value", "url", "getLocation", "mountSearchShare", "el", "options", "push$", "Subject", "fromEvent", "ev", "tap", "state", "finalize", "__spreadValues", "mountSearchSuggest", "el", "rx$", "keyboard$", "push$", "Subject", "query", "getComponentElement", "query$", "merge", "fromEvent", "observeOn", "asyncScheduler", "map", "distinctUntilChanged", "combineLatestWith", "suggestions", "value", "words", "last", "filter", "mode", "key", "isSearchResultMessage", "data", "tap", "state", "finalize", "mountSearch", "el", "index$", "keyboard$", "config", "configuration", "url", "worker", "setupSearchWorker", "query", "getComponentElement", "result", "tx$", "rx$", "filter", "isSearchQueryMessage", "sample", "isSearchReadyMessage", "take", "mode", "key", "active", "getActiveElement", "anchors", "anchor", "getElements", "article", "best", "a", "b", "setToggle", "els", "i", "query$", "mountSearchQuery", "result$", 
"mountSearchResult", "merge", "mergeWith", "getComponentElements", "child", "mountSearchShare", "mountSearchSuggest", "err", "NEVER", "mountSearchHiglight", "el", "index$", "location$", "combineLatest", "startWith", "getLocation", "filter", "url", "map", "index", "setupSearchHighlighter", "fn", "_a", "nodes", "it", "node", "original", "replaced", "text", "childNodes", "h", "watchSidebar", "el", "viewport$", "main$", "parent", "adjust", "combineLatest", "map", "offset", "height", "y", "distinctUntilChanged", "a", "b", "mountSidebar", "_a", "_b", "header$", "options", "__objRest", "inner", "getElement", "getElementOffset", "defer", "push$", "Subject", "auditTime", "animationFrameScheduler", "withLatestFrom", "observeOn", "take", "item", "getElements", "container", "getElementContainer", "getElementSize", "tap", "state", "finalize", "__spreadValues", "fetchSourceFactsFromGitHub", "user", "repo", "url", "zip", "requestJSON", "catchError", "EMPTY", "map", "release", "defaultIfEmpty", "info", "__spreadValues", "fetchSourceFactsFromGitLab", "base", "project", "url", "requestJSON", "catchError", "EMPTY", "map", "star_count", "forks_count", "defaultIfEmpty", "fetchSourceFacts", "url", "match", "user", "repo", "fetchSourceFactsFromGitHub", "base", "slug", "fetchSourceFactsFromGitLab", "EMPTY", "fetch$", "watchSource", "el", "defer", "cached", "of", "getComponentElements", "consent", "EMPTY", "fetchSourceFacts", "tap", "facts", "catchError", "filter", "map", "shareReplay", "mountSource", "inner", "getElement", "push$", "Subject", "renderSourceFacts", "state", "finalize", "__spreadValues", "watchTabs", "el", "viewport$", "header$", "watchElementSize", "switchMap", "watchViewportAt", "map", "y", "distinctUntilKeyChanged", "mountTabs", "options", "defer", "push$", "Subject", "hidden", "feature", "of", "tap", "state", "finalize", "__spreadValues", "watchTableOfContents", "el", "viewport$", "header$", "table", "anchors", "getElements", "anchor", "id", "target", "getOptionalElement", "adjust$", "distinctUntilKeyChanged", "map", "height", "main", "getComponentElement", "grid", "getElement", "share", "watchElementSize", "switchMap", "body", "defer", "path", "of", "index", "offset", "a", "b", "combineLatestWith", "adjust", "scan", "prev", "next", "y", "size", "last", "distinctUntilChanged", "startWith", "bufferCount", "mountTableOfContents", "target$", "push$", "Subject", "done$", "takeLast", "feature", "smooth$", "merge", "debounceTime", "filter", "withLatestFrom", "behavior", "container", "getElementContainer", "getElementSize", "takeUntil", "skip", "repeat", "url", "getLocation", "active", "hash", "tap", "state", "finalize", "__spreadValues", "watchBackToTop", "_el", "viewport$", "main$", "target$", "direction$", "map", "y", "bufferCount", "a", "b", "distinctUntilChanged", "active$", "active", "combineLatest", "direction", "takeUntil", "skip", "endWith", "repeat", "hidden", "mountBackToTop", "el", "header$", "push$", "Subject", "done$", "takeLast", "distinctUntilKeyChanged", "height", "tap", "state", "finalize", "__spreadValues", "patchIndeterminate", "document$", "tablet$", "switchMap", "getElements", "tap", "el", "mergeMap", "fromEvent", "takeWhile", "map", "withLatestFrom", "tablet", "isAppleDevice", "patchScrollfix", "document$", "switchMap", "getElements", "tap", "el", "filter", "mergeMap", "fromEvent", "map", "top", "patchScrolllock", "viewport$", "tablet$", "combineLatest", "watchToggle", "map", "active", "tablet", "switchMap", "of", "delay", "withLatestFrom", "y", "value", "obj", "data", "key", 
"x", "y", "nodes", "parent", "i", "node", "document$", "watchDocument", "location$", "watchLocation", "target$", "watchLocationTarget", "keyboard$", "watchKeyboard", "viewport$", "watchViewport", "tablet$", "watchMedia", "screen$", "print$", "watchPrint", "config", "configuration", "index$", "requestJSON", "NEVER", "alert$", "Subject", "setupClipboardJS", "feature", "setupInstantLoading", "_a", "setupVersionSelector", "merge", "delay", "setToggle", "filter", "mode", "key", "prev", "getOptionalElement", "next", "patchIndeterminate", "patchScrollfix", "patchScrolllock", "header$", "watchHeader", "getComponentElement", "main$", "map", "switchMap", "el", "watchMain", "shareReplay", "control$", "getComponentElements", "mountConsent", "mountDialog", "mountHeader", "mountPalette", "mountSearch", "mountSource", "content$", "defer", "mountAnnounce", "mountContent", "mountSearchHiglight", "EMPTY", "mountHeaderTitle", "at", "mountSidebar", "mountTabs", "mountTableOfContents", "mountBackToTop", "component$", "mergeWith"] +} diff --git a/assets/javascripts/extra/bundle.5f09fbc3.min.js b/assets/javascripts/extra/bundle.5f09fbc3.min.js new file mode 100644 index 00000000..48b752cd --- /dev/null +++ b/assets/javascripts/extra/bundle.5f09fbc3.min.js @@ -0,0 +1,18 @@ +"use strict";(()=>{var Je=Object.create;var qr=Object.defineProperty;var $e=Object.getOwnPropertyDescriptor;var Qe=Object.getOwnPropertyNames;var Xe=Object.getPrototypeOf,Ze=Object.prototype.hasOwnProperty;var rt=(r,o)=>()=>(o||r((o={exports:{}}).exports,o),o.exports);var et=(r,o,t,e)=>{if(o&&typeof o=="object"||typeof o=="function")for(let n of Qe(o))!Ze.call(r,n)&&n!==t&&qr(r,n,{get:()=>o[n],enumerable:!(e=$e(o,n))||e.enumerable});return r};var tt=(r,o,t)=>(t=r!=null?Je(Xe(r)):{},et(o||!r||!r.__esModule?qr(t,"default",{value:r,enumerable:!0}):t,r));var me=rt((Tt,er)=>{/*! ***************************************************************************** +Copyright (c) Microsoft Corporation. + +Permission to use, copy, modify, and/or distribute this software for any +purpose with or without fee is hereby granted. + +THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH +REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, +INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR +OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +PERFORMANCE OF THIS SOFTWARE. 
+***************************************************************************** */var Hr,Kr,Jr,$r,Qr,Xr,Zr,re,ee,Z,Ar,te,oe,ne,k,ie,fe,ae,ue,ce,se,pe,le,rr;(function(r){var o=typeof global=="object"?global:typeof self=="object"?self:typeof this=="object"?this:{};typeof define=="function"&&define.amd?define("tslib",["exports"],function(e){r(t(o,t(e)))}):typeof er=="object"&&typeof er.exports=="object"?r(t(o,t(er.exports))):r(t(o));function t(e,n){return e!==o&&(typeof Object.create=="function"?Object.defineProperty(e,"__esModule",{value:!0}):e.__esModule=!0),function(i,f){return e[i]=n?n(i,f):f}}})(function(r){var o=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(e,n){e.__proto__=n}||function(e,n){for(var i in n)Object.prototype.hasOwnProperty.call(n,i)&&(e[i]=n[i])};Hr=function(e,n){if(typeof n!="function"&&n!==null)throw new TypeError("Class extends value "+String(n)+" is not a constructor or null");o(e,n);function i(){this.constructor=e}e.prototype=n===null?Object.create(n):(i.prototype=n.prototype,new i)},Kr=Object.assign||function(e){for(var n,i=1,f=arguments.length;i=0;s--)(c=e[s])&&(a=(u<3?c(a):u>3?c(n,i,a):c(n,i))||a);return u>3&&a&&Object.defineProperty(n,i,a),a},Qr=function(e,n){return function(i,f){n(i,f,e)}},Xr=function(e,n){if(typeof Reflect=="object"&&typeof Reflect.metadata=="function")return Reflect.metadata(e,n)},Zr=function(e,n,i,f){function u(a){return a instanceof i?a:new i(function(c){c(a)})}return new(i||(i=Promise))(function(a,c){function s(y){try{p(f.next(y))}catch(g){c(g)}}function d(y){try{p(f.throw(y))}catch(g){c(g)}}function p(y){y.done?a(y.value):u(y.value).then(s,d)}p((f=f.apply(e,n||[])).next())})},re=function(e,n){var i={label:0,sent:function(){if(a[0]&1)throw a[1];return a[1]},trys:[],ops:[]},f,u,a,c;return c={next:s(0),throw:s(1),return:s(2)},typeof Symbol=="function"&&(c[Symbol.iterator]=function(){return this}),c;function s(p){return function(y){return d([p,y])}}function d(p){if(f)throw new TypeError("Generator is already executing.");for(;i;)try{if(f=1,u&&(a=p[0]&2?u.return:p[0]?u.throw||((a=u.return)&&a.call(u),0):u.next)&&!(a=a.call(u,p[1])).done)return a;switch(u=0,a&&(p=[p[0]&2,a.value]),p[0]){case 0:case 1:a=p;break;case 4:return i.label++,{value:p[1],done:!1};case 5:i.label++,u=p[1],p=[0];continue;case 7:p=i.ops.pop(),i.trys.pop();continue;default:if(a=i.trys,!(a=a.length>0&&a[a.length-1])&&(p[0]===6||p[0]===2)){i=0;continue}if(p[0]===3&&(!a||p[1]>a[0]&&p[1]=e.length&&(e=void 0),{value:e&&e[f++],done:!e}}};throw new TypeError(n?"Object is not iterable.":"Symbol.iterator is not defined.")},Ar=function(e,n){var i=typeof Symbol=="function"&&e[Symbol.iterator];if(!i)return e;var f=i.call(e),u,a=[],c;try{for(;(n===void 0||n-- >0)&&!(u=f.next()).done;)a.push(u.value)}catch(s){c={error:s}}finally{try{u&&!u.done&&(i=f.return)&&i.call(f)}finally{if(c)throw c.error}}return a},te=function(){for(var e=[],n=0;n1||s(m,P)})})}function s(m,P){try{d(f[m](P))}catch(j){g(a[0][3],j)}}function d(m){m.value instanceof k?Promise.resolve(m.value.v).then(p,y):g(a[0][2],m)}function p(m){s("next",m)}function y(m){s("throw",m)}function g(m,P){m(P),a.shift(),a.length&&s(a[0][0],a[0][1])}},fe=function(e){var n,i;return n={},f("next"),f("throw",function(u){throw u}),f("return"),n[Symbol.iterator]=function(){return this},n;function f(u,a){n[u]=e[u]?function(c){return(i=!i)?{value:k(e[u](c)),done:u==="return"}:a?a(c):c}:a}},ae=function(e){if(!Symbol.asyncIterator)throw new TypeError("Symbol.asyncIterator is not defined.");var n=e[Symbol.asyncIterator],i;return 
n?n.call(e):(e=typeof Z=="function"?Z(e):e[Symbol.iterator](),i={},f("next"),f("throw"),f("return"),i[Symbol.asyncIterator]=function(){return this},i);function f(a){i[a]=e[a]&&function(c){return new Promise(function(s,d){c=e[a](c),u(s,d,c.done,c.value)})}}function u(a,c,s,d){Promise.resolve(d).then(function(p){a({value:p,done:s})},c)}},ue=function(e,n){return Object.defineProperty?Object.defineProperty(e,"raw",{value:n}):e.raw=n,e};var t=Object.create?function(e,n){Object.defineProperty(e,"default",{enumerable:!0,value:n})}:function(e,n){e.default=n};ce=function(e){if(e&&e.__esModule)return e;var n={};if(e!=null)for(var i in e)i!=="default"&&Object.prototype.hasOwnProperty.call(e,i)&&rr(n,e,i);return t(n,e),n},se=function(e){return e&&e.__esModule?e:{default:e}},pe=function(e,n,i,f){if(i==="a"&&!f)throw new TypeError("Private accessor was defined without a getter");if(typeof n=="function"?e!==n||!f:!n.has(e))throw new TypeError("Cannot read private member from an object whose class did not declare it");return i==="m"?f:i==="a"?f.call(e):f?f.value:n.get(e)},le=function(e,n,i,f,u){if(f==="m")throw new TypeError("Private method is not writable");if(f==="a"&&!u)throw new TypeError("Private accessor was defined without a setter");if(typeof n=="function"?e!==n||!u:!n.has(e))throw new TypeError("Cannot write private member to an object whose class did not declare it");return f==="a"?u.call(e,i):u?u.value=i:n.set(e,i),i},r("__extends",Hr),r("__assign",Kr),r("__rest",Jr),r("__decorate",$r),r("__param",Qr),r("__metadata",Xr),r("__awaiter",Zr),r("__generator",re),r("__exportStar",ee),r("__createBinding",rr),r("__values",Z),r("__read",Ar),r("__spread",te),r("__spreadArrays",oe),r("__spreadArray",ne),r("__await",k),r("__asyncGenerator",ie),r("__asyncDelegator",fe),r("__asyncValues",ae),r("__makeTemplateObject",ue),r("__importStar",ce),r("__importDefault",se),r("__classPrivateFieldGet",pe),r("__classPrivateFieldSet",le)})});var de=tt(me(),1),{__extends:_,__assign:Pt,__rest:jt,__decorate:Ft,__param:Mt,__metadata:Ct,__awaiter:he,__generator:tr,__exportStar:Lt,__createBinding:Rt,__values:M,__read:w,__spread:kt,__spreadArrays:Ut,__spreadArray:S,__await:or,__asyncGenerator:ve,__asyncDelegator:Wt,__asyncValues:be,__makeTemplateObject:Dt,__importStar:Vt,__importDefault:Bt,__classPrivateFieldGet:Gt,__classPrivateFieldSet:Nt}=de.default;function l(r){return typeof r=="function"}function nr(r){var o=function(e){Error.call(e),e.stack=new Error().stack},t=r(o);return t.prototype=Object.create(Error.prototype),t.prototype.constructor=t,t}var ir=nr(function(r){return function(t){r(this),this.message=t?t.length+` errors occurred during unsubscription: +`+t.map(function(e,n){return n+1+") "+e.toString()}).join(` + `):"",this.name="UnsubscriptionError",this.errors=t}});function C(r,o){if(r){var t=r.indexOf(o);0<=t&&r.splice(t,1)}}var F=function(){function r(o){this.initialTeardown=o,this.closed=!1,this._parentage=null,this._finalizers=null}return r.prototype.unsubscribe=function(){var o,t,e,n,i;if(!this.closed){this.closed=!0;var f=this._parentage;if(f)if(this._parentage=null,Array.isArray(f))try{for(var u=M(f),a=u.next();!a.done;a=u.next()){var c=a.value;c.remove(this)}}catch(m){o={error:m}}finally{try{a&&!a.done&&(t=u.return)&&t.call(u)}finally{if(o)throw o.error}}else f.remove(this);var s=this.initialTeardown;if(l(s))try{s()}catch(m){i=m instanceof ir?m.errors:[m]}var d=this._finalizers;if(d){this._finalizers=null;try{for(var p=M(d),y=p.next();!y.done;y=p.next()){var g=y.value;try{ye(g)}catch(m){i=i!=null?i:[],m 
instanceof ir?i=S(S([],w(i)),w(m.errors)):i.push(m)}}}catch(m){e={error:m}}finally{try{y&&!y.done&&(n=p.return)&&n.call(p)}finally{if(e)throw e.error}}}if(i)throw new ir(i)}},r.prototype.add=function(o){var t;if(o&&o!==this)if(this.closed)ye(o);else{if(o instanceof r){if(o.closed||o._hasParent(this))return;o._addParent(this)}(this._finalizers=(t=this._finalizers)!==null&&t!==void 0?t:[]).push(o)}},r.prototype._hasParent=function(o){var t=this._parentage;return t===o||Array.isArray(t)&&t.includes(o)},r.prototype._addParent=function(o){var t=this._parentage;this._parentage=Array.isArray(t)?(t.push(o),t):t?[t,o]:o},r.prototype._removeParent=function(o){var t=this._parentage;t===o?this._parentage=null:Array.isArray(t)&&C(t,o)},r.prototype.remove=function(o){var t=this._finalizers;t&&C(t,o),o instanceof r&&o._removeParent(this)},r.EMPTY=function(){var o=new r;return o.closed=!0,o}(),r}();var Ir=F.EMPTY;function fr(r){return r instanceof F||r&&"closed"in r&&l(r.remove)&&l(r.add)&&l(r.unsubscribe)}function ye(r){l(r)?r():r.unsubscribe()}var O={onUnhandledError:null,onStoppedNotification:null,Promise:void 0,useDeprecatedSynchronousErrorHandling:!1,useDeprecatedNextContext:!1};var U={setTimeout:function(r,o){for(var t=[],e=2;e0},enumerable:!1,configurable:!0}),o.prototype._trySubscribe=function(t){return this._throwIfClosed(),r.prototype._trySubscribe.call(this,t)},o.prototype._subscribe=function(t){return this._throwIfClosed(),this._checkFinalizedStatuses(t),this._innerSubscribe(t)},o.prototype._innerSubscribe=function(t){var e=this,n=this,i=n.hasError,f=n.isStopped,u=n.observers;return i||f?Ir:(this.currentObservers=null,u.push(t),new F(function(){e.currentObservers=null,C(u,t)}))},o.prototype._checkFinalizedStatuses=function(t){var e=this,n=e.hasError,i=e.thrownError,f=e.isStopped;n?t.error(i):f&&t.complete()},o.prototype.asObservable=function(){var t=new b;return t.source=this,t},o.create=function(t,e){return new Ae(t,e)},o}(b);var Ae=function(r){_(o,r);function o(t,e){var n=r.call(this)||this;return n.destination=t,n.source=e,n}return o.prototype.next=function(t){var e,n;(n=(e=this.destination)===null||e===void 0?void 0:e.next)===null||n===void 0||n.call(e,t)},o.prototype.error=function(t){var e,n;(n=(e=this.destination)===null||e===void 0?void 0:e.error)===null||n===void 0||n.call(e,t)},o.prototype.complete=function(){var t,e;(e=(t=this.destination)===null||t===void 0?void 0:t.complete)===null||e===void 0||e.call(t)},o.prototype._subscribe=function(t){var e,n;return(n=(e=this.source)===null||e===void 0?void 0:e.subscribe(t))!==null&&n!==void 0?n:Ir},o}(Fr);var J={now:function(){return(J.delegate||Date).now()},delegate:void 0};var Mr=function(r){_(o,r);function o(t,e,n){t===void 0&&(t=1/0),e===void 0&&(e=1/0),n===void 0&&(n=J);var i=r.call(this)||this;return i._bufferSize=t,i._windowTime=e,i._timestampProvider=n,i._buffer=[],i._infiniteTimeWindow=!0,i._infiniteTimeWindow=e===1/0,i._bufferSize=Math.max(1,t),i._windowTime=Math.max(1,e),i}return o.prototype.next=function(t){var e=this,n=e.isStopped,i=e._buffer,f=e._infiniteTimeWindow,u=e._timestampProvider,a=e._windowTime;n||(i.push(t),!f&&i.push(u.now()+a)),this._trimBuffer(),r.prototype.next.call(this,t)},o.prototype._subscribe=function(t){this._throwIfClosed(),this._trimBuffer();for(var 
e=this._innerSubscribe(t),n=this,i=n._infiniteTimeWindow,f=n._buffer,u=f.slice(),a=0;a{sessionStorage.setItem("\u1D34\u2092\u1D34\u2092\u1D34\u2092",`${t}`),r.hidden=!t}),o.next(JSON.parse(sessionStorage.getItem("\u1D34\u2092\u1D34\u2092\u1D34\u2092")||"true")),z(r,"click").pipe(zr(o)).subscribe(([,t])=>o.next(!t)),kr(250).pipe(gr(o.pipe(X(t=>!t))),H(75),Nr({delay:()=>o.pipe(X(t=>t))}),T(()=>{let t=document.createElement("div");return t.className="\u1D34\u2092\u1D34\u2092\u1D34\u2092",t.ariaHidden="true",Ke.appendChild(t),Ur(Wr,Rr(t)).pipe(Gr(()=>t.remove()),gr(o.pipe(X(e=>!e))),Yr(e=>z(e,"click").pipe(Er(()=>e.classList.add("\u1D34\u2092\u1D34\u2092\u1D34\u2092--\u1D4D\u2092\u1D57\uA700\u1D34\u2090")),Vr(1e3),Er(()=>e.classList.remove("\u1D34\u2092\u1D34\u2092\u1D34\u2092--\u1D4D\u2092\u1D57\uA700\u1D34\u2090")))))})).subscribe()}})(); +//# sourceMappingURL=bundle.5f09fbc3.min.js.map + diff --git a/assets/javascripts/extra/bundle.5f09fbc3.min.js.map b/assets/javascripts/extra/bundle.5f09fbc3.min.js.map new file mode 100644 index 00000000..24f36746 --- /dev/null +++ b/assets/javascripts/extra/bundle.5f09fbc3.min.js.map @@ -0,0 +1,8 @@ +{ + "version": 3, + "sources": ["node_modules/rxjs/node_modules/tslib/tslib.js", "node_modules/rxjs/node_modules/tslib/modules/index.js", "node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", "node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", "node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", "node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", "node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", "node_modules/rxjs/src/internal/Scheduler.ts", "node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", "node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", "node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", 
"node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", "node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", "node_modules/rxjs/src/internal/observable/of.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", "node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", "node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/fromEvent.ts", "node_modules/rxjs/src/internal/observable/timer.ts", "node_modules/rxjs/src/internal/observable/interval.ts", "node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/operators/take.ts", "node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", "node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "src/assets/javascripts/extra/bundle.ts"], + "sourceRoot": "../../../..", + "sourcesContent": ["/*! *****************************************************************************\r\nCopyright (c) Microsoft Corporation.\r\n\r\nPermission to use, copy, modify, and/or distribute this software for any\r\npurpose with or without fee is hereby granted.\r\n\r\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\r\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\r\nAND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\r\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\r\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\r\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\r\nPERFORMANCE OF THIS SOFTWARE.\r\n***************************************************************************** */\r\n/* global global, define, System, Reflect, Promise */\r\nvar __extends;\r\nvar __assign;\r\nvar __rest;\r\nvar __decorate;\r\nvar __param;\r\nvar __metadata;\r\nvar __awaiter;\r\nvar __generator;\r\nvar __exportStar;\r\nvar __values;\r\nvar __read;\r\nvar __spread;\r\nvar __spreadArrays;\r\nvar __spreadArray;\r\nvar __await;\r\nvar __asyncGenerator;\r\nvar __asyncDelegator;\r\nvar __asyncValues;\r\nvar __makeTemplateObject;\r\nvar __importStar;\r\nvar __importDefault;\r\nvar __classPrivateFieldGet;\r\nvar __classPrivateFieldSet;\r\nvar __createBinding;\r\n(function (factory) {\r\n var root = typeof global === \"object\" ? global : typeof self === \"object\" ? self : typeof this === \"object\" ? this : {};\r\n if (typeof define === \"function\" && define.amd) {\r\n define(\"tslib\", [\"exports\"], function (exports) { factory(createExporter(root, createExporter(exports))); });\r\n }\r\n else if (typeof module === \"object\" && typeof module.exports === \"object\") {\r\n factory(createExporter(root, createExporter(module.exports)));\r\n }\r\n else {\r\n factory(createExporter(root));\r\n }\r\n function createExporter(exports, previous) {\r\n if (exports !== root) {\r\n if (typeof Object.create === \"function\") {\r\n Object.defineProperty(exports, \"__esModule\", { value: true });\r\n }\r\n else {\r\n exports.__esModule = true;\r\n }\r\n }\r\n return function (id, v) { return exports[id] = previous ? previous(id, v) : v; };\r\n }\r\n})\r\n(function (exporter) {\r\n var extendStatics = Object.setPrototypeOf ||\r\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\r\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\r\n\r\n __extends = function (d, b) {\r\n if (typeof b !== \"function\" && b !== null)\r\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\r\n extendStatics(d, b);\r\n function __() { this.constructor = d; }\r\n d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __());\r\n };\r\n\r\n __assign = Object.assign || function (t) {\r\n for (var s, i = 1, n = arguments.length; i < n; i++) {\r\n s = arguments[i];\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\r\n }\r\n return t;\r\n };\r\n\r\n __rest = function (s, e) {\r\n var t = {};\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\r\n t[p] = s[p];\r\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\r\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\r\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\r\n t[p[i]] = s[p[i]];\r\n }\r\n return t;\r\n };\r\n\r\n __decorate = function (decorators, target, key, desc) {\r\n var c = arguments.length, r = c < 3 ? target : desc === null ? 
desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\r\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\r\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\r\n return c > 3 && r && Object.defineProperty(target, key, r), r;\r\n };\r\n\r\n __param = function (paramIndex, decorator) {\r\n return function (target, key) { decorator(target, key, paramIndex); }\r\n };\r\n\r\n __metadata = function (metadataKey, metadataValue) {\r\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\r\n };\r\n\r\n __awaiter = function (thisArg, _arguments, P, generator) {\r\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\r\n return new (P || (P = Promise))(function (resolve, reject) {\r\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\r\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\r\n function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\r\n step((generator = generator.apply(thisArg, _arguments || [])).next());\r\n });\r\n };\r\n\r\n __generator = function (thisArg, body) {\r\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;\r\n return g = { next: verb(0), \"throw\": verb(1), \"return\": verb(2) }, typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\r\n function verb(n) { return function (v) { return step([n, v]); }; }\r\n function step(op) {\r\n if (f) throw new TypeError(\"Generator is already executing.\");\r\n while (_) try {\r\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\r\n if (y = 0, t) op = [op[0] & 2, t.value];\r\n switch (op[0]) {\r\n case 0: case 1: t = op; break;\r\n case 4: _.label++; return { value: op[1], done: false };\r\n case 5: _.label++; y = op[1]; op = [0]; continue;\r\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\r\n default:\r\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\r\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\r\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\r\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\r\n if (t[2]) _.ops.pop();\r\n _.trys.pop(); continue;\r\n }\r\n op = body.call(thisArg, _);\r\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\r\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\r\n }\r\n };\r\n\r\n __exportStar = function(m, o) {\r\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\r\n };\r\n\r\n __createBinding = Object.create ? 
(function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });\r\n }) : (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n o[k2] = m[k];\r\n });\r\n\r\n __values = function (o) {\r\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\r\n if (m) return m.call(o);\r\n if (o && typeof o.length === \"number\") return {\r\n next: function () {\r\n if (o && i >= o.length) o = void 0;\r\n return { value: o && o[i++], done: !o };\r\n }\r\n };\r\n throw new TypeError(s ? \"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\r\n };\r\n\r\n __read = function (o, n) {\r\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\r\n if (!m) return o;\r\n var i = m.call(o), r, ar = [], e;\r\n try {\r\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\r\n }\r\n catch (error) { e = { error: error }; }\r\n finally {\r\n try {\r\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\r\n }\r\n finally { if (e) throw e.error; }\r\n }\r\n return ar;\r\n };\r\n\r\n /** @deprecated */\r\n __spread = function () {\r\n for (var ar = [], i = 0; i < arguments.length; i++)\r\n ar = ar.concat(__read(arguments[i]));\r\n return ar;\r\n };\r\n\r\n /** @deprecated */\r\n __spreadArrays = function () {\r\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\r\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\r\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\r\n r[k] = a[j];\r\n return r;\r\n };\r\n\r\n __spreadArray = function (to, from, pack) {\r\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\r\n if (ar || !(i in from)) {\r\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\r\n ar[i] = from[i];\r\n }\r\n }\r\n return to.concat(ar || Array.prototype.slice.call(from));\r\n };\r\n\r\n __await = function (v) {\r\n return this instanceof __await ? (this.v = v, this) : new __await(v);\r\n };\r\n\r\n __asyncGenerator = function (thisArg, _arguments, generator) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\r\n return i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i;\r\n function verb(n) { if (g[n]) i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; }\r\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\r\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\r\n function fulfill(value) { resume(\"next\", value); }\r\n function reject(value) { resume(\"throw\", value); }\r\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\r\n };\r\n\r\n __asyncDelegator = function (o) {\r\n var i, p;\r\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\r\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? { value: __await(o[n](v)), done: n === \"return\" } : f ? f(v) : v; } : f; }\r\n };\r\n\r\n __asyncValues = function (o) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var m = o[Symbol.asyncIterator], i;\r\n return m ? 
m.call(o) : (o = typeof __values === \"function\" ? __values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\r\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\r\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\r\n };\r\n\r\n __makeTemplateObject = function (cooked, raw) {\r\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\r\n return cooked;\r\n };\r\n\r\n var __setModuleDefault = Object.create ? (function(o, v) {\r\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\r\n }) : function(o, v) {\r\n o[\"default\"] = v;\r\n };\r\n\r\n __importStar = function (mod) {\r\n if (mod && mod.__esModule) return mod;\r\n var result = {};\r\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\r\n __setModuleDefault(result, mod);\r\n return result;\r\n };\r\n\r\n __importDefault = function (mod) {\r\n return (mod && mod.__esModule) ? mod : { \"default\": mod };\r\n };\r\n\r\n __classPrivateFieldGet = function (receiver, state, kind, f) {\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\r\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? f.value : state.get(receiver);\r\n };\r\n\r\n __classPrivateFieldSet = function (receiver, state, value, kind, f) {\r\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\r\n return (kind === \"a\" ? f.call(receiver, value) : f ? 
f.value = value : state.set(receiver, value)), value;\r\n };\r\n\r\n exporter(\"__extends\", __extends);\r\n exporter(\"__assign\", __assign);\r\n exporter(\"__rest\", __rest);\r\n exporter(\"__decorate\", __decorate);\r\n exporter(\"__param\", __param);\r\n exporter(\"__metadata\", __metadata);\r\n exporter(\"__awaiter\", __awaiter);\r\n exporter(\"__generator\", __generator);\r\n exporter(\"__exportStar\", __exportStar);\r\n exporter(\"__createBinding\", __createBinding);\r\n exporter(\"__values\", __values);\r\n exporter(\"__read\", __read);\r\n exporter(\"__spread\", __spread);\r\n exporter(\"__spreadArrays\", __spreadArrays);\r\n exporter(\"__spreadArray\", __spreadArray);\r\n exporter(\"__await\", __await);\r\n exporter(\"__asyncGenerator\", __asyncGenerator);\r\n exporter(\"__asyncDelegator\", __asyncDelegator);\r\n exporter(\"__asyncValues\", __asyncValues);\r\n exporter(\"__makeTemplateObject\", __makeTemplateObject);\r\n exporter(\"__importStar\", __importStar);\r\n exporter(\"__importDefault\", __importDefault);\r\n exporter(\"__classPrivateFieldGet\", __classPrivateFieldGet);\r\n exporter(\"__classPrivateFieldSet\", __classPrivateFieldSet);\r\n});\r\n", "import tslib from '../tslib.js';\r\nconst {\r\n __extends,\r\n __assign,\r\n __rest,\r\n __decorate,\r\n __param,\r\n __metadata,\r\n __awaiter,\r\n __generator,\r\n __exportStar,\r\n __createBinding,\r\n __values,\r\n __read,\r\n __spread,\r\n __spreadArrays,\r\n __spreadArray,\r\n __await,\r\n __asyncGenerator,\r\n __asyncDelegator,\r\n __asyncValues,\r\n __makeTemplateObject,\r\n __importStar,\r\n __importDefault,\r\n __classPrivateFieldGet,\r\n __classPrivateFieldSet,\r\n} = tslib;\r\nexport {\r\n __extends,\r\n __assign,\r\n __rest,\r\n __decorate,\r\n __param,\r\n __metadata,\r\n __awaiter,\r\n __generator,\r\n __exportStar,\r\n __createBinding,\r\n __values,\r\n __read,\r\n __spread,\r\n __spreadArrays,\r\n __spreadArray,\r\n __await,\r\n __asyncGenerator,\r\n __asyncDelegator,\r\n __asyncValues,\r\n __makeTemplateObject,\r\n __importStar,\r\n __importDefault,\r\n __classPrivateFieldGet,\r\n __classPrivateFieldSet,\r\n};\r\n", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n NEVER,\n ReplaySubject,\n delay,\n distinctUntilChanged,\n filter,\n finalize,\n fromEvent,\n interval,\n merge,\n mergeMap,\n of,\n repeat,\n switchMap,\n take,\n takeUntil,\n tap,\n withLatestFrom\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Script\n * ------------------------------------------------------------------------- */\n\n/* Append container for instances */\nconst container = document.createElement(\"div\")\ndocument.body.appendChild(container)\n\n/* Append button next to palette toggle */\nconst header = document.querySelector(\".md-header__option\")\nif (header) {\n const button = document.createElement(\"button\")\n button.className = \"md-header__button md-icon \u1D34\u2092\u1D34\u2092\u1D34\u2092__button\"\n if (header.parentElement)\n header.parentElement.insertBefore(button, header)\n\n /* Toggle animation */\n const on$ = new ReplaySubject(1)\n on$\n .pipe(\n distinctUntilChanged()\n )\n .subscribe(on => {\n sessionStorage.setItem(\"\u1D34\u2092\u1D34\u2092\u1D34\u2092\", `${on}`)\n button.hidden = !on\n })\n\n /* Load state from session storage */\n on$.next(JSON.parse(sessionStorage.getItem(\"\u1D34\u2092\u1D34\u2092\u1D34\u2092\") || \"true\"))\n fromEvent(button, \"click\")\n .pipe(\n withLatestFrom(on$)\n )\n .subscribe(([, on]) => on$.next(!on))\n\n /* Generate instances */\n interval(250)\n .pipe(\n takeUntil(on$.pipe(filter(on => !on))),\n take(75),\n repeat({ delay: () => on$.pipe(filter(on => on)) }),\n mergeMap(() => {\n const instance = document.createElement(\"div\")\n instance.className = \"\u1D34\u2092\u1D34\u2092\u1D34\u2092\"\n instance.ariaHidden = \"true\"\n container.appendChild(instance)\n return merge(NEVER, of(instance))\n .pipe(\n finalize(() => instance.remove()),\n takeUntil(on$.pipe(filter(on => !on))),\n switchMap(el => fromEvent(el, \"click\")\n .pipe(\n tap(() => el.classList.add(\"\u1D34\u2092\u1D34\u2092\u1D34\u2092--\u1D4D\u2092\u1D57\uA700\u1D34\u2090\")),\n delay(1000),\n tap(() => el.classList.remove(\"\u1D34\u2092\u1D34\u2092\u1D34\u2092--\u1D4D\u2092\u1D57\uA700\u1D34\u2090\"))\n )\n )\n )\n })\n )\n .subscribe()\n}\n"], + "mappings": 
"6iBAAA,IAAAA,GAAAC,GAAA,CAAAC,GAAAC,KAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,gFAeA,IAAIC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,EACAC,GACAC,GACAC,GACAC,GACAC,EACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,IACH,SAAUC,EAAS,CAChB,IAAIC,EAAO,OAAO,QAAW,SAAW,OAAS,OAAO,MAAS,SAAW,KAAO,OAAO,MAAS,SAAW,KAAO,CAAC,EAClH,OAAO,QAAW,YAAc,OAAO,IACvC,OAAO,QAAS,CAAC,SAAS,EAAG,SAAU3B,EAAS,CAAE0B,EAAQE,EAAeD,EAAMC,EAAe5B,CAAO,CAAC,CAAC,CAAG,CAAC,EAEtG,OAAOC,IAAW,UAAY,OAAOA,GAAO,SAAY,SAC7DyB,EAAQE,EAAeD,EAAMC,EAAe3B,GAAO,OAAO,CAAC,CAAC,EAG5DyB,EAAQE,EAAeD,CAAI,CAAC,EAEhC,SAASC,EAAe5B,EAAS6B,EAAU,CACvC,OAAI7B,IAAY2B,IACR,OAAO,OAAO,QAAW,WACzB,OAAO,eAAe3B,EAAS,aAAc,CAAE,MAAO,EAAK,CAAC,EAG5DA,EAAQ,WAAa,IAGtB,SAAU8B,EAAIC,EAAG,CAAE,OAAO/B,EAAQ8B,GAAMD,EAAWA,EAASC,EAAIC,CAAC,EAAIA,CAAG,CACnF,CACJ,GACC,SAAUC,EAAU,CACjB,IAAIC,EAAgB,OAAO,gBACtB,CAAE,UAAW,CAAC,CAAE,YAAa,OAAS,SAAUC,EAAGC,EAAG,CAAED,EAAE,UAAYC,CAAG,GAC1E,SAAUD,EAAGC,EAAG,CAAE,QAASC,KAAKD,EAAO,OAAO,UAAU,eAAe,KAAKA,EAAGC,CAAC,IAAGF,EAAEE,GAAKD,EAAEC,GAAI,EAEpGlC,GAAY,SAAUgC,EAAGC,EAAG,CACxB,GAAI,OAAOA,GAAM,YAAcA,IAAM,KACjC,MAAM,IAAI,UAAU,uBAAyB,OAAOA,CAAC,EAAI,+BAA+B,EAC5FF,EAAcC,EAAGC,CAAC,EAClB,SAASE,GAAK,CAAE,KAAK,YAAcH,CAAG,CACtCA,EAAE,UAAYC,IAAM,KAAO,OAAO,OAAOA,CAAC,GAAKE,EAAG,UAAYF,EAAE,UAAW,IAAIE,EACnF,EAEAlC,GAAW,OAAO,QAAU,SAAUmC,EAAG,CACrC,QAASC,EAAG,EAAI,EAAGC,EAAI,UAAU,OAAQ,EAAIA,EAAG,IAAK,CACjDD,EAAI,UAAU,GACd,QAASH,KAAKG,EAAO,OAAO,UAAU,eAAe,KAAKA,EAAGH,CAAC,IAAGE,EAAEF,GAAKG,EAAEH,GAC9E,CACA,OAAOE,CACX,EAEAlC,GAAS,SAAUmC,EAAGE,EAAG,CACrB,IAAIH,EAAI,CAAC,EACT,QAASF,KAAKG,EAAO,OAAO,UAAU,eAAe,KAAKA,EAAGH,CAAC,GAAKK,EAAE,QAAQL,CAAC,EAAI,IAC9EE,EAAEF,GAAKG,EAAEH,IACb,GAAIG,GAAK,MAAQ,OAAO,OAAO,uBAA0B,WACrD,QAASG,EAAI,EAAGN,EAAI,OAAO,sBAAsBG,CAAC,EAAGG,EAAIN,EAAE,OAAQM,IAC3DD,EAAE,QAAQL,EAAEM,EAAE,EAAI,GAAK,OAAO,UAAU,qBAAqB,KAAKH,EAAGH,EAAEM,EAAE,IACzEJ,EAAEF,EAAEM,IAAMH,EAAEH,EAAEM,KAE1B,OAAOJ,CACX,EAEAjC,GAAa,SAAUsC,EAAYC,EAAQC,EAAKC,EAAM,CAClD,IAAIC,EAAI,UAAU,OAAQC,EAAID,EAAI,EAAIH,EAASE,IAAS,KAAOA,EAAO,OAAO,yBAAyBF,EAAQC,CAAG,EAAIC,EAAMZ,EAC3H,GAAI,OAAO,SAAY,UAAY,OAAO,QAAQ,UAAa,WAAYc,EAAI,QAAQ,SAASL,EAAYC,EAAQC,EAAKC,CAAI,MACxH,SAASJ,EAAIC,EAAW,OAAS,EAAGD,GAAK,EAAGA,KAASR,EAAIS,EAAWD,MAAIM,GAAKD,EAAI,EAAIb,EAAEc,CAAC,EAAID,EAAI,EAAIb,EAAEU,EAAQC,EAAKG,CAAC,EAAId,EAAEU,EAAQC,CAAG,IAAMG,GAChJ,OAAOD,EAAI,GAAKC,GAAK,OAAO,eAAeJ,EAAQC,EAAKG,CAAC,EAAGA,CAChE,EAEA1C,GAAU,SAAU2C,EAAYC,EAAW,CACvC,OAAO,SAAUN,EAAQC,EAAK,CAAEK,EAAUN,EAAQC,EAAKI,CAAU,CAAG,CACxE,EAEA1C,GAAa,SAAU4C,EAAaC,EAAe,CAC/C,GAAI,OAAO,SAAY,UAAY,OAAO,QAAQ,UAAa,WAAY,OAAO,QAAQ,SAASD,EAAaC,CAAa,CACjI,EAEA5C,GAAY,SAAU6C,EAASC,EAAYC,EAAGC,EAAW,CACrD,SAASC,EAAMC,EAAO,CAAE,OAAOA,aAAiBH,EAAIG,EAAQ,IAAIH,EAAE,SAAUI,EAAS,CAAEA,EAAQD,CAAK,CAAG,CAAC,CAAG,CAC3G,OAAO,IAAKH,IAAMA,EAAI,UAAU,SAAUI,EAASC,EAAQ,CACvD,SAASC,EAAUH,EAAO,CAAE,GAAI,CAAEI,EAAKN,EAAU,KAAKE,CAAK,CAAC,CAAG,OAASjB,EAAP,CAAYmB,EAAOnB,CAAC,CAAG,CAAE,CAC1F,SAASsB,EAASL,EAAO,CAAE,GAAI,CAAEI,EAAKN,EAAU,MAASE,CAAK,CAAC,CAAG,OAASjB,EAAP,CAAYmB,EAAOnB,CAAC,CAAG,CAAE,CAC7F,SAASqB,EAAKE,EAAQ,CAAEA,EAAO,KAAOL,EAAQK,EAAO,KAAK,EAAIP,EAAMO,EAAO,KAAK,EAAE,KAAKH,EAAWE,CAAQ,CAAG,CAC7GD,GAAMN,EAAYA,EAAU,MAAMH,EAASC,GAAc,CAAC,CAAC,GAAG,KAAK,CAAC,CACxE,CAAC,CACL,EAEA7C,GAAc,SAAU4C,EAASY,EAAM,CACnC,IAAIC,EAAI,CAAE,MAAO,EAAG,KAAM,UAAW,CAAE,GAAI5B,EAAE,GAAK,EAAG,MAAMA,EAAE,GAAI,OAAOA,EAAE,EAAI,EAAG,KAAM,CAAC,EAAG,IAAK,CAAC,CAAE,EAAG,EAAG6B,EAAG7B,EAAG8B,EAC/G,OAAOA,EAAI,CAAE,KAAMC,EAAK,CAAC,EAAG,MAASA,EAAK,CAAC,EAAG,OAAUA,EAAK,CAAC,CAAE,EAAG,OAAO,QAAW,aAAeD,EAAE,OAAO,UAAY,UAAW,CAAE,OAAO,IAAM,GAAIA,EACvJ,SAASC,
EAAK7B,EAAG,CAAE,OAAO,SAAUT,EAAG,CAAE,OAAO+B,EAAK,CAACtB,EAAGT,CAAC,CAAC,CAAG,CAAG,CACjE,SAAS+B,EAAKQ,EAAI,CACd,GAAI,EAAG,MAAM,IAAI,UAAU,iCAAiC,EAC5D,KAAOJ,GAAG,GAAI,CACV,GAAI,EAAI,EAAGC,IAAM7B,EAAIgC,EAAG,GAAK,EAAIH,EAAE,OAAYG,EAAG,GAAKH,EAAE,SAAc7B,EAAI6B,EAAE,SAAc7B,EAAE,KAAK6B,CAAC,EAAG,GAAKA,EAAE,OAAS,EAAE7B,EAAIA,EAAE,KAAK6B,EAAGG,EAAG,EAAE,GAAG,KAAM,OAAOhC,EAE3J,OADI6B,EAAI,EAAG7B,IAAGgC,EAAK,CAACA,EAAG,GAAK,EAAGhC,EAAE,KAAK,GAC9BgC,EAAG,GAAI,CACX,IAAK,GAAG,IAAK,GAAGhC,EAAIgC,EAAI,MACxB,IAAK,GAAG,OAAAJ,EAAE,QAAgB,CAAE,MAAOI,EAAG,GAAI,KAAM,EAAM,EACtD,IAAK,GAAGJ,EAAE,QAASC,EAAIG,EAAG,GAAIA,EAAK,CAAC,CAAC,EAAG,SACxC,IAAK,GAAGA,EAAKJ,EAAE,IAAI,IAAI,EAAGA,EAAE,KAAK,IAAI,EAAG,SACxC,QACI,GAAM5B,EAAI4B,EAAE,KAAM,EAAA5B,EAAIA,EAAE,OAAS,GAAKA,EAAEA,EAAE,OAAS,MAAQgC,EAAG,KAAO,GAAKA,EAAG,KAAO,GAAI,CAAEJ,EAAI,EAAG,QAAU,CAC3G,GAAII,EAAG,KAAO,IAAM,CAAChC,GAAMgC,EAAG,GAAKhC,EAAE,IAAMgC,EAAG,GAAKhC,EAAE,IAAM,CAAE4B,EAAE,MAAQI,EAAG,GAAI,KAAO,CACrF,GAAIA,EAAG,KAAO,GAAKJ,EAAE,MAAQ5B,EAAE,GAAI,CAAE4B,EAAE,MAAQ5B,EAAE,GAAIA,EAAIgC,EAAI,KAAO,CACpE,GAAIhC,GAAK4B,EAAE,MAAQ5B,EAAE,GAAI,CAAE4B,EAAE,MAAQ5B,EAAE,GAAI4B,EAAE,IAAI,KAAKI,CAAE,EAAG,KAAO,CAC9DhC,EAAE,IAAI4B,EAAE,IAAI,IAAI,EACpBA,EAAE,KAAK,IAAI,EAAG,QACtB,CACAI,EAAKL,EAAK,KAAKZ,EAASa,CAAC,CAC7B,OAASzB,EAAP,CAAY6B,EAAK,CAAC,EAAG7B,CAAC,EAAG0B,EAAI,CAAG,QAAE,CAAU,EAAI7B,EAAI,CAAG,CACzD,GAAIgC,EAAG,GAAK,EAAG,MAAMA,EAAG,GAAI,MAAO,CAAE,MAAOA,EAAG,GAAKA,EAAG,GAAK,OAAQ,KAAM,EAAK,CACnF,CACJ,EAEA5D,GAAe,SAAS6D,EAAGC,EAAG,CAC1B,QAASpC,KAAKmC,EAAOnC,IAAM,WAAa,CAAC,OAAO,UAAU,eAAe,KAAKoC,EAAGpC,CAAC,GAAGX,GAAgB+C,EAAGD,EAAGnC,CAAC,CAChH,EAEAX,GAAkB,OAAO,OAAU,SAAS+C,EAAGD,EAAGE,EAAGC,EAAI,CACjDA,IAAO,SAAWA,EAAKD,GAC3B,OAAO,eAAeD,EAAGE,EAAI,CAAE,WAAY,GAAM,IAAK,UAAW,CAAE,OAAOH,EAAEE,EAAI,CAAE,CAAC,CACvF,EAAM,SAASD,EAAGD,EAAGE,EAAGC,EAAI,CACpBA,IAAO,SAAWA,EAAKD,GAC3BD,EAAEE,GAAMH,EAAEE,EACd,EAEA9D,EAAW,SAAU6D,EAAG,CACpB,IAAIjC,EAAI,OAAO,QAAW,YAAc,OAAO,SAAUgC,EAAIhC,GAAKiC,EAAEjC,GAAIG,EAAI,EAC5E,GAAI6B,EAAG,OAAOA,EAAE,KAAKC,CAAC,EACtB,GAAIA,GAAK,OAAOA,EAAE,QAAW,SAAU,MAAO,CAC1C,KAAM,UAAY,CACd,OAAIA,GAAK9B,GAAK8B,EAAE,SAAQA,EAAI,QACrB,CAAE,MAAOA,GAAKA,EAAE9B,KAAM,KAAM,CAAC8B,CAAE,CAC1C,CACJ,EACA,MAAM,IAAI,UAAUjC,EAAI,0BAA4B,iCAAiC,CACzF,EAEA3B,GAAS,SAAU4D,EAAG,EAAG,CACrB,IAAID,EAAI,OAAO,QAAW,YAAcC,EAAE,OAAO,UACjD,GAAI,CAACD,EAAG,OAAOC,EACf,IAAI9B,EAAI6B,EAAE,KAAKC,CAAC,EAAGxB,EAAG2B,EAAK,CAAC,EAAGlC,EAC/B,GAAI,CACA,MAAQ,IAAM,QAAU,KAAM,IAAM,EAAEO,EAAIN,EAAE,KAAK,GAAG,MAAMiC,EAAG,KAAK3B,EAAE,KAAK,CAC7E,OACO4B,EAAP,CAAgBnC,EAAI,CAAE,MAAOmC,CAAM,CAAG,QACtC,CACI,GAAI,CACI5B,GAAK,CAACA,EAAE,OAASuB,EAAI7B,EAAE,SAAY6B,EAAE,KAAK7B,CAAC,CACnD,QACA,CAAU,GAAID,EAAG,MAAMA,EAAE,KAAO,CACpC,CACA,OAAOkC,CACX,EAGA9D,GAAW,UAAY,CACnB,QAAS8D,EAAK,CAAC,EAAGjC,EAAI,EAAGA,EAAI,UAAU,OAAQA,IAC3CiC,EAAKA,EAAG,OAAO/D,GAAO,UAAU8B,EAAE,CAAC,EACvC,OAAOiC,CACX,EAGA7D,GAAiB,UAAY,CACzB,QAASyB,EAAI,EAAGG,EAAI,EAAGmC,EAAK,UAAU,OAAQnC,EAAImC,EAAInC,IAAKH,GAAK,UAAUG,GAAG,OAC7E,QAASM,EAAI,MAAMT,CAAC,EAAGkC,EAAI,EAAG/B,EAAI,EAAGA,EAAImC,EAAInC,IACzC,QAAS,EAAI,UAAUA,GAAIoC,EAAI,EAAGC,EAAK,EAAE,OAAQD,EAAIC,EAAID,IAAKL,IAC1DzB,EAAEyB,GAAK,EAAEK,GACjB,OAAO9B,CACX,EAEAjC,GAAgB,SAAUiE,EAAIC,EAAMC,EAAM,CACtC,GAAIA,GAAQ,UAAU,SAAW,EAAG,QAASxC,EAAI,EAAGyC,EAAIF,EAAK,OAAQN,EAAIjC,EAAIyC,EAAGzC,KACxEiC,GAAM,EAAEjC,KAAKuC,MACRN,IAAIA,EAAK,MAAM,UAAU,MAAM,KAAKM,EAAM,EAAGvC,CAAC,GACnDiC,EAAGjC,GAAKuC,EAAKvC,IAGrB,OAAOsC,EAAG,OAAOL,GAAM,MAAM,UAAU,MAAM,KAAKM,CAAI,CAAC,CAC3D,EAEAjE,EAAU,SAAUe,EAAG,CACnB,OAAO,gBAAgBf,GAAW,KAAK,EAAIe,EAAG,MAAQ,IAAIf,EAAQe,CAAC,CACvE,EAEAd,GAAmB,SAAUoC,EAASC,EAAYE,EAAW,CACzD,GAAI,CAAC,OAAO,cAAe,MA
AM,IAAI,UAAU,sCAAsC,EACrF,IAAIY,EAAIZ,EAAU,MAAMH,EAASC,GAAc,CAAC,CAAC,EAAGZ,EAAG0C,EAAI,CAAC,EAC5D,OAAO1C,EAAI,CAAC,EAAG2B,EAAK,MAAM,EAAGA,EAAK,OAAO,EAAGA,EAAK,QAAQ,EAAG3B,EAAE,OAAO,eAAiB,UAAY,CAAE,OAAO,IAAM,EAAGA,EACpH,SAAS2B,EAAK7B,EAAG,CAAM4B,EAAE5B,KAAIE,EAAEF,GAAK,SAAUT,EAAG,CAAE,OAAO,IAAI,QAAQ,SAAUsD,EAAGlD,EAAG,CAAEiD,EAAE,KAAK,CAAC5C,EAAGT,EAAGsD,EAAGlD,CAAC,CAAC,EAAI,GAAKmD,EAAO9C,EAAGT,CAAC,CAAG,CAAC,CAAG,EAAG,CACzI,SAASuD,EAAO9C,EAAGT,EAAG,CAAE,GAAI,CAAE+B,EAAKM,EAAE5B,GAAGT,CAAC,CAAC,CAAG,OAASU,EAAP,CAAY8C,EAAOH,EAAE,GAAG,GAAI3C,CAAC,CAAG,CAAE,CACjF,SAASqB,EAAKd,EAAG,CAAEA,EAAE,iBAAiBhC,EAAU,QAAQ,QAAQgC,EAAE,MAAM,CAAC,EAAE,KAAKwC,EAAS5B,CAAM,EAAI2B,EAAOH,EAAE,GAAG,GAAIpC,CAAC,CAAI,CACxH,SAASwC,EAAQ9B,EAAO,CAAE4B,EAAO,OAAQ5B,CAAK,CAAG,CACjD,SAASE,EAAOF,EAAO,CAAE4B,EAAO,QAAS5B,CAAK,CAAG,CACjD,SAAS6B,EAAOE,EAAG1D,EAAG,CAAM0D,EAAE1D,CAAC,EAAGqD,EAAE,MAAM,EAAGA,EAAE,QAAQE,EAAOF,EAAE,GAAG,GAAIA,EAAE,GAAG,EAAE,CAAG,CACrF,EAEAlE,GAAmB,SAAUsD,EAAG,CAC5B,IAAI9B,EAAGN,EACP,OAAOM,EAAI,CAAC,EAAG2B,EAAK,MAAM,EAAGA,EAAK,QAAS,SAAU5B,EAAG,CAAE,MAAMA,CAAG,CAAC,EAAG4B,EAAK,QAAQ,EAAG3B,EAAE,OAAO,UAAY,UAAY,CAAE,OAAO,IAAM,EAAGA,EAC1I,SAAS2B,EAAK7B,EAAGiD,EAAG,CAAE/C,EAAEF,GAAKgC,EAAEhC,GAAK,SAAUT,EAAG,CAAE,OAAQK,EAAI,CAACA,GAAK,CAAE,MAAOpB,EAAQwD,EAAEhC,GAAGT,CAAC,CAAC,EAAG,KAAMS,IAAM,QAAS,EAAIiD,EAAIA,EAAE1D,CAAC,EAAIA,CAAG,EAAI0D,CAAG,CAClJ,EAEAtE,GAAgB,SAAUqD,EAAG,CACzB,GAAI,CAAC,OAAO,cAAe,MAAM,IAAI,UAAU,sCAAsC,EACrF,IAAID,EAAIC,EAAE,OAAO,eAAgB,EACjC,OAAOD,EAAIA,EAAE,KAAKC,CAAC,GAAKA,EAAI,OAAO7D,GAAa,WAAaA,EAAS6D,CAAC,EAAIA,EAAE,OAAO,UAAU,EAAG,EAAI,CAAC,EAAGH,EAAK,MAAM,EAAGA,EAAK,OAAO,EAAGA,EAAK,QAAQ,EAAG,EAAE,OAAO,eAAiB,UAAY,CAAE,OAAO,IAAM,EAAG,GAC9M,SAASA,EAAK7B,EAAG,CAAE,EAAEA,GAAKgC,EAAEhC,IAAM,SAAUT,EAAG,CAAE,OAAO,IAAI,QAAQ,SAAU4B,EAASC,EAAQ,CAAE7B,EAAIyC,EAAEhC,GAAGT,CAAC,EAAGwD,EAAO5B,EAASC,EAAQ7B,EAAE,KAAMA,EAAE,KAAK,CAAG,CAAC,CAAG,CAAG,CAC/J,SAASwD,EAAO5B,EAASC,EAAQ1B,EAAGH,EAAG,CAAE,QAAQ,QAAQA,CAAC,EAAE,KAAK,SAASA,EAAG,CAAE4B,EAAQ,CAAE,MAAO5B,EAAG,KAAMG,CAAE,CAAC,CAAG,EAAG0B,CAAM,CAAG,CAC/H,EAEAxC,GAAuB,SAAUsE,EAAQC,EAAK,CAC1C,OAAI,OAAO,eAAkB,OAAO,eAAeD,EAAQ,MAAO,CAAE,MAAOC,CAAI,CAAC,EAAYD,EAAO,IAAMC,EAClGD,CACX,EAEA,IAAIE,EAAqB,OAAO,OAAU,SAASpB,EAAGzC,EAAG,CACrD,OAAO,eAAeyC,EAAG,UAAW,CAAE,WAAY,GAAM,MAAOzC,CAAE,CAAC,CACtE,EAAK,SAASyC,EAAGzC,EAAG,CAChByC,EAAE,QAAazC,CACnB,EAEAV,GAAe,SAAUwE,EAAK,CAC1B,GAAIA,GAAOA,EAAI,WAAY,OAAOA,EAClC,IAAI7B,EAAS,CAAC,EACd,GAAI6B,GAAO,KAAM,QAASpB,KAAKoB,EAASpB,IAAM,WAAa,OAAO,UAAU,eAAe,KAAKoB,EAAKpB,CAAC,GAAGhD,GAAgBuC,EAAQ6B,EAAKpB,CAAC,EACvI,OAAAmB,EAAmB5B,EAAQ6B,CAAG,EACvB7B,CACX,EAEA1C,GAAkB,SAAUuE,EAAK,CAC7B,OAAQA,GAAOA,EAAI,WAAcA,EAAM,CAAE,QAAWA,CAAI,CAC5D,EAEAtE,GAAyB,SAAUuE,EAAUC,EAAOC,EAAM,EAAG,CACzD,GAAIA,IAAS,KAAO,CAAC,EAAG,MAAM,IAAI,UAAU,+CAA+C,EAC3F,GAAI,OAAOD,GAAU,WAAaD,IAAaC,GAAS,CAAC,EAAI,CAACA,EAAM,IAAID,CAAQ,EAAG,MAAM,IAAI,UAAU,0EAA0E,EACjL,OAAOE,IAAS,IAAM,EAAIA,IAAS,IAAM,EAAE,KAAKF,CAAQ,EAAI,EAAI,EAAE,MAAQC,EAAM,IAAID,CAAQ,CAChG,EAEAtE,GAAyB,SAAUsE,EAAUC,EAAOrC,EAAOsC,EAAMP,EAAG,CAChE,GAAIO,IAAS,IAAK,MAAM,IAAI,UAAU,gCAAgC,EACtE,GAAIA,IAAS,KAAO,CAACP,EAAG,MAAM,IAAI,UAAU,+CAA+C,EAC3F,GAAI,OAAOM,GAAU,WAAaD,IAAaC,GAAS,CAACN,EAAI,CAACM,EAAM,IAAID,CAAQ,EAAG,MAAM,IAAI,UAAU,yEAAyE,EAChL,OAAQE,IAAS,IAAMP,EAAE,KAAKK,EAAUpC,CAAK,EAAI+B,EAAIA,EAAE,MAAQ/B,EAAQqC,EAAM,IAAID,EAAUpC,CAAK,EAAIA,CACxG,EAEA1B,EAAS,YAAa9B,EAAS,EAC/B8B,EAAS,WAAY7B,EAAQ,EAC7B6B,EAAS,SAAU5B,EAAM,EACzB4B,EAAS,aAAc3B,EAAU,EACjC2B,EAAS,UAAW1B,EAAO,EAC3B0B,EAAS,aAAczB,EAAU,EACjCyB,EAAS,YAAaxB,EAAS,EAC/BwB,EAAS,cAAevB,EAAW,EACnCuB,EAAS,eAAgBtB,EAAY,EACrCsB,EAAS,kBAAmBP,EAAe,EAC3CO,E
AAS,WAAYrB,CAAQ,EAC7BqB,EAAS,SAAUpB,EAAM,EACzBoB,EAAS,WAAYnB,EAAQ,EAC7BmB,EAAS,iBAAkBlB,EAAc,EACzCkB,EAAS,gBAAiBjB,EAAa,EACvCiB,EAAS,UAAWhB,CAAO,EAC3BgB,EAAS,mBAAoBf,EAAgB,EAC7Ce,EAAS,mBAAoBd,EAAgB,EAC7Cc,EAAS,gBAAiBb,EAAa,EACvCa,EAAS,uBAAwBZ,EAAoB,EACrDY,EAAS,eAAgBX,EAAY,EACrCW,EAAS,kBAAmBV,EAAe,EAC3CU,EAAS,yBAA0BT,EAAsB,EACzDS,EAAS,yBAA0BR,EAAsB,CAC7D,CAAC,ICjTD,IAAAyE,GAAkB,WACZ,CACF,UAAAC,EACA,SAAAC,GACA,OAAAC,GACA,WAAAC,GACA,QAAAC,GACA,WAAAC,GACA,UAAAC,GACA,YAAAC,GACA,aAAAC,GACA,gBAAAC,GACA,SAAAC,EACA,OAAAC,EACA,SAAAC,GACA,eAAAC,GACA,cAAAC,EACA,QAAAC,GACA,iBAAAC,GACA,iBAAAC,GACA,cAAAC,GACA,qBAAAC,GACA,aAAAC,GACA,gBAAAC,GACA,uBAAAC,GACA,uBAAAC,EACJ,EAAI,GAAAC,QCtBE,SAAUC,EAAWC,EAAU,CACnC,OAAO,OAAOA,GAAU,UAC1B,CCGM,SAAUC,GAAoBC,EAAgC,CAClE,IAAMC,EAAS,SAACC,EAAa,CAC3B,MAAM,KAAKA,CAAQ,EACnBA,EAAS,MAAQ,IAAI,MAAK,EAAG,KAC/B,EAEMC,EAAWH,EAAWC,CAAM,EAClC,OAAAE,EAAS,UAAY,OAAO,OAAO,MAAM,SAAS,EAClDA,EAAS,UAAU,YAAcA,EAC1BA,CACT,CCDO,IAAMC,GAA+CC,GAC1D,SAACC,EAAM,CACL,OAAA,SAA4CC,EAA0B,CACpED,EAAO,IAAI,EACX,KAAK,QAAUC,EACRA,EAAO,OAAM;EACxBA,EAAO,IAAI,SAACC,EAAKC,EAAC,CAAK,OAAGA,EAAI,EAAC,KAAKD,EAAI,SAAQ,CAAzB,CAA6B,EAAE,KAAK;GAAM,EACzD,GACJ,KAAK,KAAO,sBACZ,KAAK,OAASD,CAChB,CARA,CAQC,ECvBC,SAAUG,EAAaC,EAA6BC,EAAO,CAC/D,GAAID,EAAK,CACP,IAAME,EAAQF,EAAI,QAAQC,CAAI,EAC9B,GAAKC,GAASF,EAAI,OAAOE,EAAO,CAAC,EAErC,CCOA,IAAAC,EAAA,UAAA,CAyBE,SAAAA,EAAoBC,EAA4B,CAA5B,KAAA,gBAAAA,EAdb,KAAA,OAAS,GAER,KAAA,WAAmD,KAMnD,KAAA,YAAqD,IAMV,CAQnD,OAAAD,EAAA,UAAA,YAAA,UAAA,aACME,EAEJ,GAAI,CAAC,KAAK,OAAQ,CAChB,KAAK,OAAS,GAGN,IAAAC,EAAe,KAAI,WAC3B,GAAIA,EAEF,GADA,KAAK,WAAa,KACd,MAAM,QAAQA,CAAU,MAC1B,QAAqBC,EAAAC,EAAAF,CAAU,EAAAG,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAA5B,IAAMG,EAAMD,EAAA,MACfC,EAAO,OAAO,IAAI,yGAGpBJ,EAAW,OAAO,IAAI,EAIlB,IAAiBK,EAAqB,KAAI,gBAClD,GAAIC,EAAWD,CAAgB,EAC7B,GAAI,CACFA,EAAgB,QACTE,EAAP,CACAR,EAASQ,aAAaC,GAAsBD,EAAE,OAAS,CAACA,CAAC,EAIrD,IAAAE,EAAgB,KAAI,YAC5B,GAAIA,EAAa,CACf,KAAK,YAAc,SACnB,QAAwBC,EAAAR,EAAAO,CAAW,EAAAE,EAAAD,EAAA,KAAA,EAAA,CAAAC,EAAA,KAAAA,EAAAD,EAAA,KAAA,EAAE,CAAhC,IAAME,EAASD,EAAA,MAClB,GAAI,CACFE,GAAcD,CAAS,QAChBE,EAAP,CACAf,EAASA,GAAM,KAANA,EAAU,CAAA,EACfe,aAAeN,GACjBT,EAAMgB,EAAAA,EAAA,CAAA,EAAAC,EAAOjB,CAAM,CAAA,EAAAiB,EAAKF,EAAI,MAAM,CAAA,EAElCf,EAAO,KAAKe,CAAG,sGAMvB,GAAIf,EACF,MAAM,IAAIS,GAAoBT,CAAM,EAG1C,EAoBAF,EAAA,UAAA,IAAA,SAAIoB,EAAuB,OAGzB,GAAIA,GAAYA,IAAa,KAC3B,GAAI,KAAK,OAGPJ,GAAcI,CAAQ,MACjB,CACL,GAAIA,aAAoBpB,EAAc,CAGpC,GAAIoB,EAAS,QAAUA,EAAS,WAAW,IAAI,EAC7C,OAEFA,EAAS,WAAW,IAAI,GAEzB,KAAK,aAAcC,EAAA,KAAK,eAAW,MAAAA,IAAA,OAAAA,EAAI,CAAA,GAAI,KAAKD,CAAQ,EAG/D,EAOQpB,EAAA,UAAA,WAAR,SAAmBsB,EAAoB,CAC7B,IAAAnB,EAAe,KAAI,WAC3B,OAAOA,IAAemB,GAAW,MAAM,QAAQnB,CAAU,GAAKA,EAAW,SAASmB,CAAM,CAC1F,EASQtB,EAAA,UAAA,WAAR,SAAmBsB,EAAoB,CAC7B,IAAAnB,EAAe,KAAI,WAC3B,KAAK,WAAa,MAAM,QAAQA,CAAU,GAAKA,EAAW,KAAKmB,CAAM,EAAGnB,GAAcA,EAAa,CAACA,EAAYmB,CAAM,EAAIA,CAC5H,EAMQtB,EAAA,UAAA,cAAR,SAAsBsB,EAAoB,CAChC,IAAAnB,EAAe,KAAI,WACvBA,IAAemB,EACjB,KAAK,WAAa,KACT,MAAM,QAAQnB,CAAU,GACjCoB,EAAUpB,EAAYmB,CAAM,CAEhC,EAgBAtB,EAAA,UAAA,OAAA,SAAOoB,EAAsC,CACnC,IAAAR,EAAgB,KAAI,YAC5BA,GAAeW,EAAUX,EAAaQ,CAAQ,EAE1CA,aAAoBpB,GACtBoB,EAAS,cAAc,IAAI,CAE/B,EAlLcpB,EAAA,MAAS,UAAA,CACrB,IAAMwB,EAAQ,IAAIxB,EAClB,OAAAwB,EAAM,OAAS,GACRA,CACT,EAAE,EA+KJxB,GArLA,EAuLO,IAAMyB,GAAqBC,EAAa,MAEzC,SAAUC,GAAeC,EAAU,CACvC,OACEA,aAAiBF,GAChBE,GAAS,WAAYA,GAASC,EAAWD,EAAM,MAAM,GAAKC,EAAWD,EAAM,GAAG,GAAKC,EAAWD,EAAM,WAAW,CAEpH,CAEA,SAASE,GAAcC,EAAwC,CACzDF,EAAWE,CAAS,EACtBA,EAAS,EAETA,EAAU,YAAW,CAEzB,CChNO,IAAMC,EAAuB,CAClC,iBAAkB,KAClB,sBAAuB,KACvB,QAAS,OACT,sCAAuC,GACvC,yBAA0B,ICGrB,IAAMC,EAAmC
,CAG9C,WAAA,SAAWC,EAAqBC,EAAgB,SAAEC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,EAAA,GAAA,UAAAA,GACxC,IAAAC,EAAaL,EAAe,SACpC,OAAIK,GAAQ,MAARA,EAAU,WACLA,EAAS,WAAU,MAAnBA,EAAQC,EAAA,CAAYL,EAASC,CAAO,EAAAK,EAAKJ,CAAI,CAAA,CAAA,EAE/C,WAAU,MAAA,OAAAG,EAAA,CAACL,EAASC,CAAO,EAAAK,EAAKJ,CAAI,CAAA,CAAA,CAC7C,EACA,aAAA,SAAaK,EAAM,CACT,IAAAH,EAAaL,EAAe,SACpC,QAAQK,GAAQ,KAAA,OAARA,EAAU,eAAgB,cAAcG,CAAa,CAC/D,EACA,SAAU,QCjBN,SAAUC,GAAqBC,EAAQ,CAC3CC,EAAgB,WAAW,UAAA,CACjB,IAAAC,EAAqBC,EAAM,iBACnC,GAAID,EAEFA,EAAiBF,CAAG,MAGpB,OAAMA,CAEV,CAAC,CACH,CCtBM,SAAUI,GAAI,CAAK,CCMlB,IAAMC,GAAyB,UAAA,CAAM,OAAAC,GAAmB,IAAK,OAAW,MAAS,CAA5C,EAAsE,EAO5G,SAAUC,GAAkBC,EAAU,CAC1C,OAAOF,GAAmB,IAAK,OAAWE,CAAK,CACjD,CAOM,SAAUC,GAAoBC,EAAQ,CAC1C,OAAOJ,GAAmB,IAAKI,EAAO,MAAS,CACjD,CAQM,SAAUJ,GAAmBK,EAAuBD,EAAYF,EAAU,CAC9E,MAAO,CACL,KAAIG,EACJ,MAAKD,EACL,MAAKF,EAET,CCrCA,IAAII,EAAuD,KASrD,SAAUC,EAAaC,EAAc,CACzC,GAAIC,EAAO,sCAAuC,CAChD,IAAMC,EAAS,CAACJ,EAKhB,GAJII,IACFJ,EAAU,CAAE,YAAa,GAAO,MAAO,IAAI,GAE7CE,EAAE,EACEE,EAAQ,CACJ,IAAAC,EAAyBL,EAAvBM,EAAWD,EAAA,YAAEE,EAAKF,EAAA,MAE1B,GADAL,EAAU,KACNM,EACF,MAAMC,QAMVL,EAAE,CAEN,CAMM,SAAUM,GAAaC,EAAQ,CAC/BN,EAAO,uCAAyCH,IAClDA,EAAQ,YAAc,GACtBA,EAAQ,MAAQS,EAEpB,CCrBA,IAAAC,EAAA,SAAAC,EAAA,CAAmCC,EAAAF,EAAAC,CAAA,EA6BjC,SAAAD,EAAYG,EAA6C,CAAzD,IAAAC,EACEH,EAAA,KAAA,IAAA,GAAO,KATC,OAAAG,EAAA,UAAqB,GAUzBD,GACFC,EAAK,YAAcD,EAGfE,GAAeF,CAAW,GAC5BA,EAAY,IAAIC,CAAI,GAGtBA,EAAK,YAAcE,IAEvB,CAzBO,OAAAN,EAAA,OAAP,SAAiBO,EAAwBC,EAA2BC,EAAqB,CACvF,OAAO,IAAIC,GAAeH,EAAMC,EAAOC,CAAQ,CACjD,EAgCAT,EAAA,UAAA,KAAA,SAAKW,EAAS,CACR,KAAK,UACPC,GAA0BC,GAAiBF,CAAK,EAAG,IAAI,EAEvD,KAAK,MAAMA,CAAM,CAErB,EASAX,EAAA,UAAA,MAAA,SAAMc,EAAS,CACT,KAAK,UACPF,GAA0BG,GAAkBD,CAAG,EAAG,IAAI,GAEtD,KAAK,UAAY,GACjB,KAAK,OAAOA,CAAG,EAEnB,EAQAd,EAAA,UAAA,SAAA,UAAA,CACM,KAAK,UACPY,GAA0BI,GAAuB,IAAI,GAErD,KAAK,UAAY,GACjB,KAAK,UAAS,EAElB,EAEAhB,EAAA,UAAA,YAAA,UAAA,CACO,KAAK,SACR,KAAK,UAAY,GACjBC,EAAA,UAAM,YAAW,KAAA,IAAA,EACjB,KAAK,YAAc,KAEvB,EAEUD,EAAA,UAAA,MAAV,SAAgBW,EAAQ,CACtB,KAAK,YAAY,KAAKA,CAAK,CAC7B,EAEUX,EAAA,UAAA,OAAV,SAAiBc,EAAQ,CACvB,GAAI,CACF,KAAK,YAAY,MAAMA,CAAG,UAE1B,KAAK,YAAW,EAEpB,EAEUd,EAAA,UAAA,UAAV,UAAA,CACE,GAAI,CACF,KAAK,YAAY,SAAQ,UAEzB,KAAK,YAAW,EAEpB,EACFA,CAAA,EApHmCiB,CAAY,EA2H/C,IAAMC,GAAQ,SAAS,UAAU,KAEjC,SAASC,GAAyCC,EAAQC,EAAY,CACpE,OAAOH,GAAM,KAAKE,EAAIC,CAAO,CAC/B,CAMA,IAAAC,GAAA,UAAA,CACE,SAAAA,EAAoBC,EAAqC,CAArC,KAAA,gBAAAA,CAAwC,CAE5D,OAAAD,EAAA,UAAA,KAAA,SAAKE,EAAQ,CACH,IAAAD,EAAoB,KAAI,gBAChC,GAAIA,EAAgB,KAClB,GAAI,CACFA,EAAgB,KAAKC,CAAK,QACnBC,EAAP,CACAC,GAAqBD,CAAK,EAGhC,EAEAH,EAAA,UAAA,MAAA,SAAMK,EAAQ,CACJ,IAAAJ,EAAoB,KAAI,gBAChC,GAAIA,EAAgB,MAClB,GAAI,CACFA,EAAgB,MAAMI,CAAG,QAClBF,EAAP,CACAC,GAAqBD,CAAK,OAG5BC,GAAqBC,CAAG,CAE5B,EAEAL,EAAA,UAAA,SAAA,UAAA,CACU,IAAAC,EAAoB,KAAI,gBAChC,GAAIA,EAAgB,SAClB,GAAI,CACFA,EAAgB,SAAQ,QACjBE,EAAP,CACAC,GAAqBD,CAAK,EAGhC,EACFH,CAAA,EArCA,EAuCAM,GAAA,SAAAC,EAAA,CAAuCC,EAAAF,EAAAC,CAAA,EACrC,SAAAD,EACEG,EACAN,EACAO,EAA8B,CAHhC,IAAAC,EAKEJ,EAAA,KAAA,IAAA,GAAO,KAEHN,EACJ,GAAIW,EAAWH,CAAc,GAAK,CAACA,EAGjCR,EAAkB,CAChB,KAAOQ,GAAc,KAAdA,EAAkB,OACzB,MAAON,GAAK,KAALA,EAAS,OAChB,SAAUO,GAAQ,KAARA,EAAY,YAEnB,CAEL,IAAIG,EACAF,GAAQG,EAAO,0BAIjBD,EAAU,OAAO,OAAOJ,CAAc,EACtCI,EAAQ,YAAc,UAAA,CAAM,OAAAF,EAAK,YAAW,CAAhB,EAC5BV,EAAkB,CAChB,KAAMQ,EAAe,MAAQZ,GAAKY,EAAe,KAAMI,CAAO,EAC9D,MAAOJ,EAAe,OAASZ,GAAKY,EAAe,MAAOI,CAAO,EACjE,SAAUJ,EAAe,UAAYZ,GAAKY,EAAe,SAAUI,CAAO,IAI5EZ,EAAkBQ,EAMtB,OAAAE,EAAK,YAAc,IAAIX,GAAiBC,CAAe,GACzD,CACF,OAAAK,CAAA,EAzCuCS,CAAU,EA2CjD,SAASC,GAAqBC,EAAU,CAClCC,EAAO,sCACTC,GAAaF,CAAK,EAIlBG,GAAqBH,CAAK,CAE9B,CAQA,SAASI,GAAoBC,EAAQ,CACnC,MAA
MA,CACR,CAOA,SAASC,GAA0BC,EAA2CC,EAA2B,CAC/F,IAAAC,EAA0BR,EAAM,sBACxCQ,GAAyBC,EAAgB,WAAW,UAAA,CAAM,OAAAD,EAAsBF,EAAcC,CAAU,CAA9C,CAA+C,CAC3G,CAOO,IAAMG,GAA6D,CACxE,OAAQ,GACR,KAAMC,EACN,MAAOR,GACP,SAAUQ,GCjRL,IAAMC,EAA+B,UAAA,CAAM,OAAC,OAAO,QAAW,YAAc,OAAO,YAAe,cAAvD,EAAsE,ECyClH,SAAUC,EAAYC,EAAI,CAC9B,OAAOA,CACT,CCsCM,SAAUC,GAAoBC,EAA+B,CACjE,OAAIA,EAAI,SAAW,EACVC,EAGLD,EAAI,SAAW,EACVA,EAAI,GAGN,SAAeE,EAAQ,CAC5B,OAAOF,EAAI,OAAO,SAACG,EAAWC,EAAuB,CAAK,OAAAA,EAAGD,CAAI,CAAP,EAAUD,CAAY,CAClF,CACF,CC9EA,IAAAG,EAAA,UAAA,CAkBE,SAAAA,EAAYC,EAA6E,CACnFA,IACF,KAAK,WAAaA,EAEtB,CA4BA,OAAAD,EAAA,UAAA,KAAA,SAAQE,EAAyB,CAC/B,IAAMC,EAAa,IAAIH,EACvB,OAAAG,EAAW,OAAS,KACpBA,EAAW,SAAWD,EACfC,CACT,EA8IAH,EAAA,UAAA,UAAA,SACEI,EACAC,EACAC,EAA8B,CAHhC,IAAAC,EAAA,KAKQC,EAAaC,GAAaL,CAAc,EAAIA,EAAiB,IAAIM,GAAeN,EAAgBC,EAAOC,CAAQ,EAErH,OAAAK,EAAa,UAAA,CACL,IAAAC,EAAuBL,EAArBL,EAAQU,EAAA,SAAEC,EAAMD,EAAA,OACxBJ,EAAW,IACTN,EAGIA,EAAS,KAAKM,EAAYK,CAAM,EAChCA,EAIAN,EAAK,WAAWC,CAAU,EAG1BD,EAAK,cAAcC,CAAU,CAAC,CAEtC,CAAC,EAEMA,CACT,EAGUR,EAAA,UAAA,cAAV,SAAwBc,EAAmB,CACzC,GAAI,CACF,OAAO,KAAK,WAAWA,CAAI,QACpBC,EAAP,CAIAD,EAAK,MAAMC,CAAG,EAElB,EA6DAf,EAAA,UAAA,QAAA,SAAQgB,EAA0BC,EAAoC,CAAtE,IAAAV,EAAA,KACE,OAAAU,EAAcC,GAAeD,CAAW,EAEjC,IAAIA,EAAkB,SAACE,EAASC,EAAM,CAC3C,IAAMZ,EAAa,IAAIE,GAAkB,CACvC,KAAM,SAACW,EAAK,CACV,GAAI,CACFL,EAAKK,CAAK,QACHN,EAAP,CACAK,EAAOL,CAAG,EACVP,EAAW,YAAW,EAE1B,EACA,MAAOY,EACP,SAAUD,EACX,EACDZ,EAAK,UAAUC,CAAU,CAC3B,CAAC,CACH,EAGUR,EAAA,UAAA,WAAV,SAAqBQ,EAA2B,OAC9C,OAAOI,EAAA,KAAK,UAAM,MAAAA,IAAA,OAAA,OAAAA,EAAE,UAAUJ,CAAU,CAC1C,EAOAR,EAAA,UAACG,GAAD,UAAA,CACE,OAAO,IACT,EA4FAH,EAAA,UAAA,KAAA,UAAA,SAAKsB,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACH,OAAOC,GAAcF,CAAU,EAAE,IAAI,CACvC,EA6BAtB,EAAA,UAAA,UAAA,SAAUiB,EAAoC,CAA9C,IAAAV,EAAA,KACE,OAAAU,EAAcC,GAAeD,CAAW,EAEjC,IAAIA,EAAY,SAACE,EAASC,EAAM,CACrC,IAAIC,EACJd,EAAK,UACH,SAACkB,EAAI,CAAK,OAACJ,EAAQI,CAAT,EACV,SAACV,EAAQ,CAAK,OAAAK,EAAOL,CAAG,CAAV,EACd,UAAA,CAAM,OAAAI,EAAQE,CAAK,CAAb,CAAc,CAExB,CAAC,CACH,EA3aOrB,EAAA,OAAkC,SAAIC,EAAwD,CACnG,OAAO,IAAID,EAAcC,CAAS,CACpC,EA0aFD,GA/cA,EAwdA,SAAS0B,GAAeC,EAA+C,OACrE,OAAOC,EAAAD,GAAW,KAAXA,EAAeE,EAAO,WAAO,MAAAD,IAAA,OAAAA,EAAI,OAC1C,CAEA,SAASE,GAAcC,EAAU,CAC/B,OAAOA,GAASC,EAAWD,EAAM,IAAI,GAAKC,EAAWD,EAAM,KAAK,GAAKC,EAAWD,EAAM,QAAQ,CAChG,CAEA,SAASE,GAAgBF,EAAU,CACjC,OAAQA,GAASA,aAAiBG,GAAgBJ,GAAWC,CAAK,GAAKI,GAAeJ,CAAK,CAC7F,CC1eM,SAAUK,GAAQC,EAAW,CACjC,OAAOC,EAAWD,GAAM,KAAA,OAANA,EAAQ,IAAI,CAChC,CAMM,SAAUE,EACdC,EAAqF,CAErF,OAAO,SAACH,EAAqB,CAC3B,GAAID,GAAQC,CAAM,EAChB,OAAOA,EAAO,KAAK,SAA+BI,EAA2B,CAC3E,GAAI,CACF,OAAOD,EAAKC,EAAc,IAAI,QACvBC,EAAP,CACA,KAAK,MAAMA,CAAG,EAElB,CAAC,EAEH,MAAM,IAAI,UAAU,wCAAwC,CAC9D,CACF,CCjBM,SAAUC,EACdC,EACAC,EACAC,EACAC,EACAC,EAAuB,CAEvB,OAAO,IAAIC,GAAmBL,EAAaC,EAAQC,EAAYC,EAASC,CAAU,CACpF,CAMA,IAAAC,GAAA,SAAAC,EAAA,CAA2CC,EAAAF,EAAAC,CAAA,EAiBzC,SAAAD,EACEL,EACAC,EACAC,EACAC,EACQC,EACAI,EAAiC,CAN3C,IAAAC,EAoBEH,EAAA,KAAA,KAAMN,CAAW,GAAC,KAfV,OAAAS,EAAA,WAAAL,EACAK,EAAA,kBAAAD,EAeRC,EAAK,MAAQR,EACT,SAAuCS,EAAQ,CAC7C,GAAI,CACFT,EAAOS,CAAK,QACLC,EAAP,CACAX,EAAY,MAAMW,CAAG,EAEzB,EACAL,EAAA,UAAM,MACVG,EAAK,OAASN,EACV,SAAuCQ,EAAQ,CAC7C,GAAI,CACFR,EAAQQ,CAAG,QACJA,EAAP,CAEAX,EAAY,MAAMW,CAAG,UAGrB,KAAK,YAAW,EAEpB,EACAL,EAAA,UAAM,OACVG,EAAK,UAAYP,EACb,UAAA,CACE,GAAI,CACFA,EAAU,QACHS,EAAP,CAEAX,EAAY,MAAMW,CAAG,UAGrB,KAAK,YAAW,EAEpB,EACAL,EAAA,UAAM,WACZ,CAEA,OAAAD,EAAA,UAAA,YAAA,UAAA,OACE,GAAI,CAAC,KAAK,mBAAqB,KAAK,kBAAiB,EAAI,CAC/C,IAAAO,EAAW,KAAI,OACvBN,EAAA,UAAM,YAAW,KAAA,IAAA,EAEjB,CAACM,KAAUC,EAAA,KAAK,cAAU,MAAAA,IAAA,QAAAA,EAAA,KAAf,IAAI,GAEnB,EACFR,CAAA,E
AnF2CS,CAAU,ECP9C,IAAMC,GAAuDC,GAClE,SAACC,EAAM,CACL,OAAA,UAAoC,CAClCA,EAAO,IAAI,EACX,KAAK,KAAO,0BACZ,KAAK,QAAU,qBACjB,CAJA,CAIC,ECXL,IAAAC,GAAA,SAAAC,EAAA,CAAgCC,EAAAF,EAAAC,CAAA,EAwB9B,SAAAD,GAAA,CAAA,IAAAG,EAEEF,EAAA,KAAA,IAAA,GAAO,KAzBT,OAAAE,EAAA,OAAS,GAEDA,EAAA,iBAAyC,KAGjDA,EAAA,UAA2B,CAAA,EAE3BA,EAAA,UAAY,GAEZA,EAAA,SAAW,GAEXA,EAAA,YAAmB,MAenB,CAGA,OAAAH,EAAA,UAAA,KAAA,SAAQI,EAAwB,CAC9B,IAAMC,EAAU,IAAIC,GAAiB,KAAM,IAAI,EAC/C,OAAAD,EAAQ,SAAWD,EACZC,CACT,EAGUL,EAAA,UAAA,eAAV,UAAA,CACE,GAAI,KAAK,OACP,MAAM,IAAIO,EAEd,EAEAP,EAAA,UAAA,KAAA,SAAKQ,EAAQ,CAAb,IAAAL,EAAA,KACEM,EAAa,UAAA,SAEX,GADAN,EAAK,eAAc,EACf,CAACA,EAAK,UAAW,CACdA,EAAK,mBACRA,EAAK,iBAAmB,MAAM,KAAKA,EAAK,SAAS,OAEnD,QAAuBO,EAAAC,EAAAR,EAAK,gBAAgB,EAAAS,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAAzC,IAAMG,EAAQD,EAAA,MACjBC,EAAS,KAAKL,CAAK,qGAGzB,CAAC,CACH,EAEAR,EAAA,UAAA,MAAA,SAAMc,EAAQ,CAAd,IAAAX,EAAA,KACEM,EAAa,UAAA,CAEX,GADAN,EAAK,eAAc,EACf,CAACA,EAAK,UAAW,CACnBA,EAAK,SAAWA,EAAK,UAAY,GACjCA,EAAK,YAAcW,EAEnB,QADQC,EAAcZ,EAAI,UACnBY,EAAU,QACfA,EAAU,MAAK,EAAI,MAAMD,CAAG,EAGlC,CAAC,CACH,EAEAd,EAAA,UAAA,SAAA,UAAA,CAAA,IAAAG,EAAA,KACEM,EAAa,UAAA,CAEX,GADAN,EAAK,eAAc,EACf,CAACA,EAAK,UAAW,CACnBA,EAAK,UAAY,GAEjB,QADQY,EAAcZ,EAAI,UACnBY,EAAU,QACfA,EAAU,MAAK,EAAI,SAAQ,EAGjC,CAAC,CACH,EAEAf,EAAA,UAAA,YAAA,UAAA,CACE,KAAK,UAAY,KAAK,OAAS,GAC/B,KAAK,UAAY,KAAK,iBAAmB,IAC3C,EAEA,OAAA,eAAIA,EAAA,UAAA,WAAQ,KAAZ,UAAA,OACE,QAAOgB,EAAA,KAAK,aAAS,MAAAA,IAAA,OAAA,OAAAA,EAAE,QAAS,CAClC,kCAGUhB,EAAA,UAAA,cAAV,SAAwBiB,EAAyB,CAC/C,YAAK,eAAc,EACZhB,EAAA,UAAM,cAAa,KAAA,KAACgB,CAAU,CACvC,EAGUjB,EAAA,UAAA,WAAV,SAAqBiB,EAAyB,CAC5C,YAAK,eAAc,EACnB,KAAK,wBAAwBA,CAAU,EAChC,KAAK,gBAAgBA,CAAU,CACxC,EAGUjB,EAAA,UAAA,gBAAV,SAA0BiB,EAA2B,CAArD,IAAAd,EAAA,KACQa,EAAqC,KAAnCE,EAAQF,EAAA,SAAEG,EAASH,EAAA,UAAED,EAASC,EAAA,UACtC,OAAIE,GAAYC,EACPC,IAET,KAAK,iBAAmB,KACxBL,EAAU,KAAKE,CAAU,EAClB,IAAII,EAAa,UAAA,CACtBlB,EAAK,iBAAmB,KACxBmB,EAAUP,EAAWE,CAAU,CACjC,CAAC,EACH,EAGUjB,EAAA,UAAA,wBAAV,SAAkCiB,EAA2B,CACrD,IAAAD,EAAuC,KAArCE,EAAQF,EAAA,SAAEO,EAAWP,EAAA,YAAEG,EAASH,EAAA,UACpCE,EACFD,EAAW,MAAMM,CAAW,EACnBJ,GACTF,EAAW,SAAQ,CAEvB,EAQAjB,EAAA,UAAA,aAAA,UAAA,CACE,IAAMwB,EAAkB,IAAIC,EAC5B,OAAAD,EAAW,OAAS,KACbA,CACT,EAxHOxB,EAAA,OAAkC,SAAI0B,EAA0BC,EAAqB,CAC1F,OAAO,IAAIrB,GAAoBoB,EAAaC,CAAM,CACpD,EAuHF3B,GA7IgCyB,CAAU,EAkJ1C,IAAAG,GAAA,SAAAC,EAAA,CAAyCC,EAAAF,EAAAC,CAAA,EACvC,SAAAD,EAESG,EACPC,EAAsB,CAHxB,IAAAC,EAKEJ,EAAA,KAAA,IAAA,GAAO,KAHA,OAAAI,EAAA,YAAAF,EAIPE,EAAK,OAASD,GAChB,CAEA,OAAAJ,EAAA,UAAA,KAAA,SAAKM,EAAQ,UACXC,GAAAC,EAAA,KAAK,eAAW,MAAAA,IAAA,OAAA,OAAAA,EAAE,QAAI,MAAAD,IAAA,QAAAA,EAAA,KAAAC,EAAGF,CAAK,CAChC,EAEAN,EAAA,UAAA,MAAA,SAAMS,EAAQ,UACZF,GAAAC,EAAA,KAAK,eAAW,MAAAA,IAAA,OAAA,OAAAA,EAAE,SAAK,MAAAD,IAAA,QAAAA,EAAA,KAAAC,EAAGC,CAAG,CAC/B,EAEAT,EAAA,UAAA,SAAA,UAAA,UACEO,GAAAC,EAAA,KAAK,eAAW,MAAAA,IAAA,OAAA,OAAAA,EAAE,YAAQ,MAAAD,IAAA,QAAAA,EAAA,KAAAC,CAAA,CAC5B,EAGUR,EAAA,UAAA,WAAV,SAAqBU,EAAyB,SAC5C,OAAOH,GAAAC,EAAA,KAAK,UAAM,MAAAA,IAAA,OAAA,OAAAA,EAAE,UAAUE,CAAU,KAAC,MAAAH,IAAA,OAAAA,EAAII,EAC/C,EACFX,CAAA,EA1ByCY,EAAO,EC5JzC,IAAMC,EAA+C,CAC1D,IAAG,UAAA,CAGD,OAAQA,EAAsB,UAAY,MAAM,IAAG,CACrD,EACA,SAAU,QCwBZ,IAAAC,GAAA,SAAAC,EAAA,CAAsCC,EAAAF,EAAAC,CAAA,EAUpC,SAAAD,EACUG,EACAC,EACAC,EAA6D,CAF7DF,IAAA,SAAAA,EAAA,KACAC,IAAA,SAAAA,EAAA,KACAC,IAAA,SAAAA,EAAAC,GAHV,IAAAC,EAKEN,EAAA,KAAA,IAAA,GAAO,KAJC,OAAAM,EAAA,YAAAJ,EACAI,EAAA,YAAAH,EACAG,EAAA,mBAAAF,EAZFE,EAAA,QAA0B,CAAA,EAC1BA,EAAA,oBAAsB,GAc5BA,EAAK,oBAAsBH,IAAgB,IAC3CG,EAAK,YAAc,KAAK,IAAI,EAAGJ,CAAW,EAC1CI,EAAK,YAAc,KAAK,IAAI,EAAGH,CAAW,GAC5C,CAEA,OAAAJ,EAAA,UAAA,KAAA,SAAKQ,EAAQ,CACL,IAAAC,EAA+
E,KAA7EC,EAASD,EAAA,UAAEE,EAAOF,EAAA,QAAEG,EAAmBH,EAAA,oBAAEJ,EAAkBI,EAAA,mBAAEL,EAAWK,EAAA,YAC3EC,IACHC,EAAQ,KAAKH,CAAK,EAClB,CAACI,GAAuBD,EAAQ,KAAKN,EAAmB,IAAG,EAAKD,CAAW,GAE7E,KAAK,YAAW,EAChBH,EAAA,UAAM,KAAI,KAAA,KAACO,CAAK,CAClB,EAGUR,EAAA,UAAA,WAAV,SAAqBa,EAAyB,CAC5C,KAAK,eAAc,EACnB,KAAK,YAAW,EAQhB,QANMC,EAAe,KAAK,gBAAgBD,CAAU,EAE9CJ,EAAmC,KAAjCG,EAAmBH,EAAA,oBAAEE,EAAOF,EAAA,QAG9BM,EAAOJ,EAAQ,MAAK,EACjBK,EAAI,EAAGA,EAAID,EAAK,QAAU,CAACF,EAAW,OAAQG,GAAKJ,EAAsB,EAAI,EACpFC,EAAW,KAAKE,EAAKC,EAAO,EAG9B,YAAK,wBAAwBH,CAAU,EAEhCC,CACT,EAEQd,EAAA,UAAA,YAAR,UAAA,CACQ,IAAAS,EAAoE,KAAlEN,EAAWM,EAAA,YAAEJ,EAAkBI,EAAA,mBAAEE,EAAOF,EAAA,QAAEG,EAAmBH,EAAA,oBAK/DQ,GAAsBL,EAAsB,EAAI,GAAKT,EAK3D,GAJAA,EAAc,KAAYc,EAAqBN,EAAQ,QAAUA,EAAQ,OAAO,EAAGA,EAAQ,OAASM,CAAkB,EAIlH,CAACL,EAAqB,CAKxB,QAJMM,EAAMb,EAAmB,IAAG,EAC9Bc,EAAO,EAGFH,EAAI,EAAGA,EAAIL,EAAQ,QAAWA,EAAQK,IAAiBE,EAAKF,GAAK,EACxEG,EAAOH,EAETG,GAAQR,EAAQ,OAAO,EAAGQ,EAAO,CAAC,EAEtC,EACFnB,CAAA,EAzEsCoB,EAAO,EClB7C,IAAAC,GAAA,SAAAC,EAAA,CAA+BC,EAAAF,EAAAC,CAAA,EAC7B,SAAAD,EAAYG,EAAsBC,EAAmD,QACnFH,EAAA,KAAA,IAAA,GAAO,IACT,CAWO,OAAAD,EAAA,UAAA,SAAP,SAAgBK,EAAWC,EAAiB,CAAjB,OAAAA,IAAA,SAAAA,EAAA,GAClB,IACT,EACFN,CAAA,EAjB+BO,CAAY,ECHpC,IAAMC,EAAqC,CAGhD,YAAA,SAAYC,EAAqBC,EAAgB,SAAEC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,EAAA,GAAA,UAAAA,GACzC,IAAAC,EAAaL,EAAgB,SACrC,OAAIK,GAAQ,MAARA,EAAU,YACLA,EAAS,YAAW,MAApBA,EAAQC,EAAA,CAAaL,EAASC,CAAO,EAAAK,EAAKJ,CAAI,CAAA,CAAA,EAEhD,YAAW,MAAA,OAAAG,EAAA,CAACL,EAASC,CAAO,EAAAK,EAAKJ,CAAI,CAAA,CAAA,CAC9C,EACA,cAAA,SAAcK,EAAM,CACV,IAAAH,EAAaL,EAAgB,SACrC,QAAQK,GAAQ,KAAA,OAARA,EAAU,gBAAiB,eAAeG,CAAa,CACjE,EACA,SAAU,QCrBZ,IAAAC,GAAA,SAAAC,EAAA,CAAoCC,EAAAF,EAAAC,CAAA,EAOlC,SAAAD,EAAsBG,EAAqCC,EAAmD,CAA9G,IAAAC,EACEJ,EAAA,KAAA,KAAME,EAAWC,CAAI,GAAC,KADF,OAAAC,EAAA,UAAAF,EAAqCE,EAAA,KAAAD,EAFjDC,EAAA,QAAmB,IAI7B,CAEO,OAAAL,EAAA,UAAA,SAAP,SAAgBM,EAAWC,EAAiB,OAC1C,GADyBA,IAAA,SAAAA,EAAA,GACrB,KAAK,OACP,OAAO,KAIT,KAAK,MAAQD,EAEb,IAAME,EAAK,KAAK,GACVL,EAAY,KAAK,UAuBvB,OAAIK,GAAM,OACR,KAAK,GAAK,KAAK,eAAeL,EAAWK,EAAID,CAAK,GAKpD,KAAK,QAAU,GAEf,KAAK,MAAQA,EAEb,KAAK,IAAKE,EAAA,KAAK,MAAE,MAAAA,IAAA,OAAAA,EAAI,KAAK,eAAeN,EAAW,KAAK,GAAII,CAAK,EAE3D,IACT,EAEUP,EAAA,UAAA,eAAV,SAAyBG,EAA2BO,EAAmBH,EAAiB,CAAjB,OAAAA,IAAA,SAAAA,EAAA,GAC9DI,EAAiB,YAAYR,EAAU,MAAM,KAAKA,EAAW,IAAI,EAAGI,CAAK,CAClF,EAEUP,EAAA,UAAA,eAAV,SAAyBY,EAA4BJ,EAAkBD,EAAwB,CAE7F,GAFqEA,IAAA,SAAAA,EAAA,GAEjEA,GAAS,MAAQ,KAAK,QAAUA,GAAS,KAAK,UAAY,GAC5D,OAAOC,EAILA,GAAM,MACRG,EAAiB,cAAcH,CAAE,CAIrC,EAMOR,EAAA,UAAA,QAAP,SAAeM,EAAUC,EAAa,CACpC,GAAI,KAAK,OACP,OAAO,IAAI,MAAM,8BAA8B,EAGjD,KAAK,QAAU,GACf,IAAMM,EAAQ,KAAK,SAASP,EAAOC,CAAK,EACxC,GAAIM,EACF,OAAOA,EACE,KAAK,UAAY,IAAS,KAAK,IAAM,OAc9C,KAAK,GAAK,KAAK,eAAe,KAAK,UAAW,KAAK,GAAI,IAAI,EAE/D,EAEUb,EAAA,UAAA,SAAV,SAAmBM,EAAUQ,EAAc,CACzC,IAAIC,EAAmB,GACnBC,EACJ,GAAI,CACF,KAAK,KAAKV,CAAK,QACRW,EAAP,CACAF,EAAU,GAIVC,EAAaC,GAAQ,IAAI,MAAM,oCAAoC,EAErE,GAAIF,EACF,YAAK,YAAW,EACTC,CAEX,EAEAhB,EAAA,UAAA,YAAA,UAAA,CACE,GAAI,CAAC,KAAK,OAAQ,CACV,IAAAS,EAAoB,KAAlBD,EAAEC,EAAA,GAAEN,EAASM,EAAA,UACbS,EAAYf,EAAS,QAE7B,KAAK,KAAO,KAAK,MAAQ,KAAK,UAAY,KAC1C,KAAK,QAAU,GAEfgB,EAAUD,EAAS,IAAI,EACnBV,GAAM,OACR,KAAK,GAAK,KAAK,eAAeL,EAAWK,EAAI,IAAI,GAGnD,KAAK,MAAQ,KACbP,EAAA,UAAM,YAAW,KAAA,IAAA,EAErB,EACFD,CAAA,EA9IoCoB,EAAM,ECgB1C,IAAAC,GAAA,UAAA,CAGE,SAAAA,EAAoBC,EAAoCC,EAAiC,CAAjCA,IAAA,SAAAA,EAAoBF,EAAU,KAAlE,KAAA,oBAAAC,EAClB,KAAK,IAAMC,CACb,CA6BO,OAAAF,EAAA,UAAA,SAAP,SAAmBG,EAAqDC,EAAmBC,EAAS,CAA5B,OAAAD,IAAA,SAAAA,EAAA,GAC/D,IAAI,KAAK,oBAAuB,KAAMD,CAAI,EAAE,SAASE,EAAOD,CAAK,CAC1E,EAnCcJ,EAAA,IAAoBM,EAAsB,IAoC1DN,GArCA,ECnBA,IAAAO,GA
AA,SAAAC,EAAA,CAAoCC,EAAAF,EAAAC,CAAA,EAkBlC,SAAAD,EAAYG,EAAgCC,EAAiC,CAAjCA,IAAA,SAAAA,EAAoBC,GAAU,KAA1E,IAAAC,EACEL,EAAA,KAAA,KAAME,EAAiBC,CAAG,GAAC,KAlBtB,OAAAE,EAAA,QAAmC,CAAA,EAOnCA,EAAA,QAAmB,IAY1B,CAEO,OAAAN,EAAA,UAAA,MAAP,SAAaO,EAAwB,CAC3B,IAAAC,EAAY,KAAI,QAExB,GAAI,KAAK,QAAS,CAChBA,EAAQ,KAAKD,CAAM,EACnB,OAGF,IAAIE,EACJ,KAAK,QAAU,GAEf,EACE,IAAKA,EAAQF,EAAO,QAAQA,EAAO,MAAOA,EAAO,KAAK,EACpD,YAEMA,EAASC,EAAQ,MAAK,GAIhC,GAFA,KAAK,QAAU,GAEXC,EAAO,CACT,KAAQF,EAASC,EAAQ,MAAK,GAC5BD,EAAO,YAAW,EAEpB,MAAME,EAEV,EACFT,CAAA,EAhDoCK,EAAS,EC6CtC,IAAMK,EAAiB,IAAIC,GAAeC,EAAW,EAK/CC,GAAQH,ECUd,IAAMI,EAAQ,IAAIC,EAAkB,SAACC,EAAU,CAAK,OAAAA,EAAW,SAAQ,CAAnB,CAAqB,EC9D1E,SAAUC,GAAYC,EAAU,CACpC,OAAOA,GAASC,EAAWD,EAAM,QAAQ,CAC3C,CCDA,SAASE,GAAQC,EAAQ,CACvB,OAAOA,EAAIA,EAAI,OAAS,EAC1B,CAEM,SAAUC,GAAkBC,EAAW,CAC3C,OAAOC,EAAWJ,GAAKG,CAAI,CAAC,EAAIA,EAAK,IAAG,EAAK,MAC/C,CAEM,SAAUE,EAAaF,EAAW,CACtC,OAAOG,GAAYN,GAAKG,CAAI,CAAC,EAAIA,EAAK,IAAG,EAAK,MAChD,CAEM,SAAUI,GAAUJ,EAAaK,EAAoB,CACzD,OAAO,OAAOR,GAAKG,CAAI,GAAM,SAAWA,EAAK,IAAG,EAAMK,CACxD,CClBO,IAAMC,EAAe,SAAIC,EAAM,CAAwB,OAAAA,GAAK,OAAOA,EAAE,QAAW,UAAY,OAAOA,GAAM,UAAlD,ECMxD,SAAUC,GAAUC,EAAU,CAClC,OAAOC,EAAWD,GAAK,KAAA,OAALA,EAAO,IAAI,CAC/B,CCHM,SAAUE,GAAoBC,EAAU,CAC5C,OAAOC,EAAWD,EAAME,EAAkB,CAC5C,CCLM,SAAUC,GAAmBC,EAAQ,CACzC,OAAO,OAAO,eAAiBC,EAAWD,GAAG,KAAA,OAAHA,EAAM,OAAO,cAAc,CACvE,CCAM,SAAUE,GAAiCC,EAAU,CAEzD,OAAO,IAAI,UACT,iBACEA,IAAU,MAAQ,OAAOA,GAAU,SAAW,oBAAsB,IAAIA,EAAK,KAAG,0HACwC,CAE9H,CCXM,SAAUC,IAAiB,CAC/B,OAAI,OAAO,QAAW,YAAc,CAAC,OAAO,SACnC,aAGF,OAAO,QAChB,CAEO,IAAMC,GAAWD,GAAiB,ECJnC,SAAUE,GAAWC,EAAU,CACnC,OAAOC,EAAWD,GAAK,KAAA,OAALA,EAAQE,GAAgB,CAC5C,CCHM,SAAiBC,GAAsCC,EAAqC,mGAC1FC,EAASD,EAAe,UAAS,2DAGX,MAAA,CAAA,EAAAE,GAAMD,EAAO,KAAI,CAAE,CAAA,gBAArCE,EAAkBC,EAAA,KAAA,EAAhBC,EAAKF,EAAA,MAAEG,EAAIH,EAAA,KACfG,iBAAA,CAAA,EAAA,CAAA,SACF,MAAA,CAAA,EAAAF,EAAA,KAAA,CAAA,qBAEIC,CAAM,CAAA,SAAZ,MAAA,CAAA,EAAAD,EAAA,KAAA,CAAA,SAAA,OAAAA,EAAA,KAAA,mCAGF,OAAAH,EAAO,YAAW,6BAIhB,SAAUM,GAAwBC,EAAQ,CAG9C,OAAOC,EAAWD,GAAG,KAAA,OAAHA,EAAK,SAAS,CAClC,CCPM,SAAUE,EAAaC,EAAyB,CACpD,GAAIA,aAAiBC,EACnB,OAAOD,EAET,GAAIA,GAAS,KAAM,CACjB,GAAIE,GAAoBF,CAAK,EAC3B,OAAOG,GAAsBH,CAAK,EAEpC,GAAII,EAAYJ,CAAK,EACnB,OAAOK,GAAcL,CAAK,EAE5B,GAAIM,GAAUN,CAAK,EACjB,OAAOO,GAAYP,CAAK,EAE1B,GAAIQ,GAAgBR,CAAK,EACvB,OAAOS,GAAkBT,CAAK,EAEhC,GAAIU,GAAWV,CAAK,EAClB,OAAOW,GAAaX,CAAK,EAE3B,GAAIY,GAAqBZ,CAAK,EAC5B,OAAOa,GAAuBb,CAAK,EAIvC,MAAMc,GAAiCd,CAAK,CAC9C,CAMM,SAAUG,GAAyBY,EAAQ,CAC/C,OAAO,IAAId,EAAW,SAACe,EAAyB,CAC9C,IAAMC,EAAMF,EAAIG,GAAkB,EAClC,GAAIC,EAAWF,EAAI,SAAS,EAC1B,OAAOA,EAAI,UAAUD,CAAU,EAGjC,MAAM,IAAI,UAAU,gEAAgE,CACtF,CAAC,CACH,CASM,SAAUX,GAAiBe,EAAmB,CAClD,OAAO,IAAInB,EAAW,SAACe,EAAyB,CAU9C,QAASK,EAAI,EAAGA,EAAID,EAAM,QAAU,CAACJ,EAAW,OAAQK,IACtDL,EAAW,KAAKI,EAAMC,EAAE,EAE1BL,EAAW,SAAQ,CACrB,CAAC,CACH,CAEM,SAAUT,GAAee,EAAuB,CACpD,OAAO,IAAIrB,EAAW,SAACe,EAAyB,CAC9CM,EACG,KACC,SAACC,EAAK,CACCP,EAAW,SACdA,EAAW,KAAKO,CAAK,EACrBP,EAAW,SAAQ,EAEvB,EACA,SAACQ,EAAQ,CAAK,OAAAR,EAAW,MAAMQ,CAAG,CAApB,CAAqB,EAEpC,KAAK,KAAMC,EAAoB,CACpC,CAAC,CACH,CAEM,SAAUd,GAAgBe,EAAqB,CACnD,OAAO,IAAIzB,EAAW,SAACe,EAAyB,aAC9C,QAAoBW,EAAAC,EAAAF,CAAQ,EAAAG,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAAzB,IAAMJ,EAAKM,EAAA,MAEd,GADAb,EAAW,KAAKO,CAAK,EACjBP,EAAW,OACb,yGAGJA,EAAW,SAAQ,CACrB,CAAC,CACH,CAEM,SAAUP,GAAqBqB,EAA+B,CAClE,OAAO,IAAI7B,EAAW,SAACe,EAAyB,CAC9Ce,GAAQD,EAAed,CAAU,EAAE,MAAM,SAACQ,EAAG,CAAK,OAAAR,EAAW,MAAMQ,CAAG,CAApB,CAAqB,CACzE,CAAC,CACH,CAEM,SAAUX,GAA0BmB,EAAqC,CAC7E,OAAOvB,GAAkBwB,GAAmCD,CAAc,CAAC,CAC7E,CAEA,SAAeD,GAAWD,EAAiCd,EAAyB,uIACxDkB,EAAAC,GAAAL,CAAa,gFAIrC,GAJe
P,EAAKa,EAAA,MACpBpB,EAAW,KAAKO,CAAK,EAGjBP,EAAW,OACb,MAAA,CAAA,CAAA,6RAGJ,OAAAA,EAAW,SAAQ,WChHf,SAAUqB,EACdC,EACAC,EACAC,EACAC,EACAC,EAAc,CADdD,IAAA,SAAAA,EAAA,GACAC,IAAA,SAAAA,EAAA,IAEA,IAAMC,EAAuBJ,EAAU,SAAS,UAAA,CAC9CC,EAAI,EACAE,EACFJ,EAAmB,IAAI,KAAK,SAAS,KAAMG,CAAK,CAAC,EAEjD,KAAK,YAAW,CAEpB,EAAGA,CAAK,EAIR,GAFAH,EAAmB,IAAIK,CAAoB,EAEvC,CAACD,EAKH,OAAOC,CAEX,CCeM,SAAUC,GAAaC,EAA0BC,EAAS,CAAT,OAAAA,IAAA,SAAAA,EAAA,GAC9CC,EAAQ,SAACC,EAAQC,EAAU,CAChCD,EAAO,UACLE,EACED,EACA,SAACE,EAAK,CAAK,OAAAC,EAAgBH,EAAYJ,EAAW,UAAA,CAAM,OAAAI,EAAW,KAAKE,CAAK,CAArB,EAAwBL,CAAK,CAA1E,EACX,UAAA,CAAM,OAAAM,EAAgBH,EAAYJ,EAAW,UAAA,CAAM,OAAAI,EAAW,SAAQ,CAAnB,EAAuBH,CAAK,CAAzE,EACN,SAACO,EAAG,CAAK,OAAAD,EAAgBH,EAAYJ,EAAW,UAAA,CAAM,OAAAI,EAAW,MAAMI,CAAG,CAApB,EAAuBP,CAAK,CAAzE,CAA0E,CACpF,CAEL,CAAC,CACH,CCPM,SAAUQ,GAAeC,EAA0BC,EAAiB,CAAjB,OAAAA,IAAA,SAAAA,EAAA,GAChDC,EAAQ,SAACC,EAAQC,EAAU,CAChCA,EAAW,IAAIJ,EAAU,SAAS,UAAA,CAAM,OAAAG,EAAO,UAAUC,CAAU,CAA3B,EAA8BH,CAAK,CAAC,CAC9E,CAAC,CACH,CC7DM,SAAUI,GAAsBC,EAA6BC,EAAwB,CACzF,OAAOC,EAAUF,CAAK,EAAE,KAAKG,GAAYF,CAAS,EAAGG,GAAUH,CAAS,CAAC,CAC3E,CCFM,SAAUI,GAAmBC,EAAuBC,EAAwB,CAChF,OAAOC,EAAUF,CAAK,EAAE,KAAKG,GAAYF,CAAS,EAAGG,GAAUH,CAAS,CAAC,CAC3E,CCJM,SAAUI,GAAiBC,EAAqBC,EAAwB,CAC5E,OAAO,IAAIC,EAAc,SAACC,EAAU,CAElC,IAAIC,EAAI,EAER,OAAOH,EAAU,SAAS,UAAA,CACpBG,IAAMJ,EAAM,OAGdG,EAAW,SAAQ,GAInBA,EAAW,KAAKH,EAAMI,IAAI,EAIrBD,EAAW,QACd,KAAK,SAAQ,EAGnB,CAAC,CACH,CAAC,CACH,CCfM,SAAUE,GAAoBC,EAAoBC,EAAwB,CAC9E,OAAO,IAAIC,EAAc,SAACC,EAAU,CAClC,IAAIC,EAKJ,OAAAC,EAAgBF,EAAYF,EAAW,UAAA,CAErCG,EAAYJ,EAAcI,IAAgB,EAE1CC,EACEF,EACAF,EACA,UAAA,OACMK,EACAC,EACJ,GAAI,CAEDC,EAAkBJ,EAAS,KAAI,EAA7BE,EAAKE,EAAA,MAAED,EAAIC,EAAA,WACPC,EAAP,CAEAN,EAAW,MAAMM,CAAG,EACpB,OAGEF,EAKFJ,EAAW,SAAQ,EAGnBA,EAAW,KAAKG,CAAK,CAEzB,EACA,EACA,EAAI,CAER,CAAC,EAMM,UAAA,CAAM,OAAAI,EAAWN,GAAQ,KAAA,OAARA,EAAU,MAAM,GAAKA,EAAS,OAAM,CAA/C,CACf,CAAC,CACH,CCvDM,SAAUO,GAAyBC,EAAyBC,EAAwB,CACxF,GAAI,CAACD,EACH,MAAM,IAAI,MAAM,yBAAyB,EAE3C,OAAO,IAAIE,EAAc,SAACC,EAAU,CAClCC,EAAgBD,EAAYF,EAAW,UAAA,CACrC,IAAMI,EAAWL,EAAM,OAAO,eAAc,EAC5CI,EACED,EACAF,EACA,UAAA,CACEI,EAAS,KAAI,EAAG,KAAK,SAACC,EAAM,CACtBA,EAAO,KAGTH,EAAW,SAAQ,EAEnBA,EAAW,KAAKG,EAAO,KAAK,CAEhC,CAAC,CACH,EACA,EACA,EAAI,CAER,CAAC,CACH,CAAC,CACH,CCzBM,SAAUC,GAA8BC,EAA8BC,EAAwB,CAClG,OAAOC,GAAsBC,GAAmCH,CAAK,EAAGC,CAAS,CACnF,CCoBM,SAAUG,GAAaC,EAA2BC,EAAwB,CAC9E,GAAID,GAAS,KAAM,CACjB,GAAIE,GAAoBF,CAAK,EAC3B,OAAOG,GAAmBH,EAAOC,CAAS,EAE5C,GAAIG,EAAYJ,CAAK,EACnB,OAAOK,GAAcL,EAAOC,CAAS,EAEvC,GAAIK,GAAUN,CAAK,EACjB,OAAOO,GAAgBP,EAAOC,CAAS,EAEzC,GAAIO,GAAgBR,CAAK,EACvB,OAAOS,GAAsBT,EAAOC,CAAS,EAE/C,GAAIS,GAAWV,CAAK,EAClB,OAAOW,GAAiBX,EAAOC,CAAS,EAE1C,GAAIW,GAAqBZ,CAAK,EAC5B,OAAOa,GAA2Bb,EAAOC,CAAS,EAGtD,MAAMa,GAAiCd,CAAK,CAC9C,CCoDM,SAAUe,EAAQC,EAA2BC,EAAyB,CAC1E,OAAOA,EAAYC,GAAUF,EAAOC,CAAS,EAAIE,EAAUH,CAAK,CAClE,CCxBM,SAAUI,IAAE,SAAIC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACpB,IAAMC,EAAYC,EAAaH,CAAI,EACnC,OAAOI,EAAKJ,EAAaE,CAAS,CACpC,CC3EM,SAAUG,GAAYC,EAAU,CACpC,OAAOA,aAAiB,MAAQ,CAAC,MAAMA,CAAY,CACrD,CCsCM,SAAUC,EAAUC,EAAyCC,EAAa,CAC9E,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAEhC,IAAIC,EAAQ,EAGZF,EAAO,UACLG,EAAyBF,EAAY,SAACG,EAAQ,CAG5CH,EAAW,KAAKJ,EAAQ,KAAKC,EAASM,EAAOF,GAAO,CAAC,CACvD,CAAC,CAAC,CAEN,CAAC,CACH,CC1DQ,IAAAG,GAAY,MAAK,QAEzB,SAASC,GAAkBC,EAA6BC,EAAW,CAC/D,OAAOH,GAAQG,CAAI,EAAID,EAAE,MAAA,OAAAE,EAAA,CAAA,EAAAC,EAAIF,CAAI,CAAA,CAAA,EAAID,EAAGC,CAAI,CAChD,CAMM,SAAUG,GAAuBJ,EAA2B,CAC9D,OAAOK,EAAI,SAAAJ,EAAI,CAAI,OAAAF,GAAYC,EAAIC,CAAI,CAApB,CAAqB,CAC5C,CCKM,SAAUK,GACdC,EACAC,EACAC,EACAC,EACAC,EACAC,EACAC,EACAC,EAAgC,CAGhC,IAAMC,EAAc,CAAA,EAEhB
C,EAAS,EAETC,EAAQ,EAERC,EAAa,GAKXC,EAAgB,UAAA,CAIhBD,GAAc,CAACH,EAAO,QAAU,CAACC,GACnCR,EAAW,SAAQ,CAEvB,EAGMY,EAAY,SAACC,EAAQ,CAAK,OAACL,EAASN,EAAaY,EAAWD,CAAK,EAAIN,EAAO,KAAKM,CAAK,CAA5D,EAE1BC,EAAa,SAACD,EAAQ,CAI1BT,GAAUJ,EAAW,KAAKa,CAAY,EAItCL,IAKA,IAAIO,EAAgB,GAGpBC,EAAUf,EAAQY,EAAOJ,GAAO,CAAC,EAAE,UACjCQ,EACEjB,EACA,SAACkB,EAAU,CAGTf,GAAY,MAAZA,EAAee,CAAU,EAErBd,EAGFQ,EAAUM,CAAiB,EAG3BlB,EAAW,KAAKkB,CAAU,CAE9B,EACA,UAAA,CAGEH,EAAgB,EAClB,EAEA,OACA,UAAA,CAIE,GAAIA,EAKF,GAAI,CAIFP,IAKA,qBACE,IAAMW,EAAgBZ,EAAO,MAAK,EAI9BF,EACFe,EAAgBpB,EAAYK,EAAmB,UAAA,CAAM,OAAAS,EAAWK,CAAa,CAAxB,CAAyB,EAE9EL,EAAWK,CAAa,GARrBZ,EAAO,QAAUC,EAASN,OAYjCS,EAAa,QACNU,EAAP,CACArB,EAAW,MAAMqB,CAAG,EAG1B,CAAC,CACF,CAEL,EAGA,OAAAtB,EAAO,UACLkB,EAAyBjB,EAAYY,EAAW,UAAA,CAE9CF,EAAa,GACbC,EAAa,CACf,CAAC,CAAC,EAKG,UAAA,CACLL,GAAmB,MAAnBA,EAAmB,CACrB,CACF,CClEM,SAAUgB,EACdC,EACAC,EACAC,EAA6B,CAE7B,OAFAA,IAAA,SAAAA,EAAA,KAEIC,EAAWF,CAAc,EAEpBF,EAAS,SAACK,EAAGC,EAAC,CAAK,OAAAC,EAAI,SAACC,EAAQC,EAAU,CAAK,OAAAP,EAAeG,EAAGG,EAAGF,EAAGG,CAAE,CAA1B,CAA2B,EAAEC,EAAUT,EAAQI,EAAGC,CAAC,CAAC,CAAC,CAAjF,EAAoFH,CAAU,GAC/G,OAAOD,GAAmB,WACnCC,EAAaD,GAGRS,EAAQ,SAACC,EAAQC,EAAU,CAAK,OAAAC,GAAeF,EAAQC,EAAYZ,EAASE,CAAU,CAAtD,CAAuD,EAChG,CChCM,SAAUY,GAAyCC,EAA6B,CAA7B,OAAAA,IAAA,SAAAA,EAAA,KAChDC,EAASC,EAAUF,CAAU,CACtC,CCNM,SAAUG,IAAS,CACvB,OAAOC,GAAS,CAAC,CACnB,CCmDM,SAAUC,IAAM,SAACC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACrB,OAAOC,GAAS,EAAGC,EAAKH,EAAMI,EAAaJ,CAAI,CAAC,CAAC,CACnD,CC1GA,IAAMK,GAA0B,CAAC,cAAe,gBAAgB,EAC1DC,GAAqB,CAAC,mBAAoB,qBAAqB,EAC/DC,GAAgB,CAAC,KAAM,KAAK,EA8N5B,SAAUC,EACdC,EACAC,EACAC,EACAC,EAAsC,CAMtC,GAJIC,EAAWF,CAAO,IACpBC,EAAiBD,EACjBA,EAAU,QAERC,EACF,OAAOJ,EAAaC,EAAQC,EAAWC,CAA+B,EAAE,KAAKG,GAAiBF,CAAc,CAAC,EAUzG,IAAAG,EAAAC,EAEJC,GAAcR,CAAM,EAChBH,GAAmB,IAAI,SAACY,EAAU,CAAK,OAAA,SAACC,EAAY,CAAK,OAAAV,EAAOS,GAAYR,EAAWS,EAASR,CAA+B,CAAtE,CAAlB,CAAyF,EAElIS,GAAwBX,CAAM,EAC5BJ,GAAwB,IAAIgB,GAAwBZ,EAAQC,CAAS,CAAC,EACtEY,GAA0Bb,CAAM,EAChCF,GAAc,IAAIc,GAAwBZ,EAAQC,CAAS,CAAC,EAC5D,CAAA,EAAE,CAAA,EATDa,EAAGR,EAAA,GAAES,EAAMT,EAAA,GAgBlB,GAAI,CAACQ,GACCE,EAAYhB,CAAM,EACpB,OAAOiB,EAAS,SAACC,EAAc,CAAK,OAAAnB,EAAUmB,EAAWjB,EAAWC,CAA+B,CAA/D,CAAgE,EAClGiB,EAAUnB,CAAM,CAAC,EAOvB,GAAI,CAACc,EACH,MAAM,IAAI,UAAU,sBAAsB,EAG5C,OAAO,IAAIM,EAAc,SAACC,EAAU,CAIlC,IAAMX,EAAU,UAAA,SAACY,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAAmB,OAAAF,EAAW,KAAK,EAAIC,EAAK,OAASA,EAAOA,EAAK,EAAE,CAAhD,EAEpC,OAAAR,EAAIJ,CAAO,EAEJ,UAAA,CAAM,OAAAK,EAAQL,CAAO,CAAf,CACf,CAAC,CACH,CASA,SAASE,GAAwBZ,EAAaC,EAAiB,CAC7D,OAAO,SAACQ,EAAkB,CAAK,OAAA,SAACC,EAAY,CAAK,OAAAV,EAAOS,GAAYR,EAAWS,CAAO,CAArC,CAAlB,CACjC,CAOA,SAASC,GAAwBX,EAAW,CAC1C,OAAOI,EAAWJ,EAAO,WAAW,GAAKI,EAAWJ,EAAO,cAAc,CAC3E,CAOA,SAASa,GAA0Bb,EAAW,CAC5C,OAAOI,EAAWJ,EAAO,EAAE,GAAKI,EAAWJ,EAAO,GAAG,CACvD,CAOA,SAASQ,GAAcR,EAAW,CAChC,OAAOI,EAAWJ,EAAO,gBAAgB,GAAKI,EAAWJ,EAAO,mBAAmB,CACrF,CCvMM,SAAUwB,EACdC,EACAC,EACAC,EAAyC,CAFzCF,IAAA,SAAAA,EAAA,GAEAE,IAAA,SAAAA,EAAAC,IAIA,IAAIC,EAAmB,GAEvB,OAAIH,GAAuB,OAIrBI,GAAYJ,CAAmB,EACjCC,EAAYD,EAIZG,EAAmBH,GAIhB,IAAIK,EAAW,SAACC,EAAU,CAI/B,IAAIC,EAAMC,GAAYT,CAAO,EAAI,CAACA,EAAUE,EAAW,IAAG,EAAKF,EAE3DQ,EAAM,IAERA,EAAM,GAIR,IAAIE,EAAI,EAGR,OAAOR,EAAU,SAAS,UAAA,CACnBK,EAAW,SAEdA,EAAW,KAAKG,GAAG,EAEf,GAAKN,EAGP,KAAK,SAAS,OAAWA,CAAgB,EAGzCG,EAAW,SAAQ,EAGzB,EAAGC,CAAG,CACR,CAAC,CACH,CCvIM,SAAUG,GAASC,EAAYC,EAAyC,CAArD,OAAAD,IAAA,SAAAA,EAAA,GAAYC,IAAA,SAAAA,EAAAC,GAC/BF,EAAS,IAEXA,EAAS,GAGJG,EAAMH,EAAQA,EAAQC,CAAS,CACxC,CCgCM,SAAUG,IAAK,SAACC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACpB,IAAMC,EA
AYC,EAAaH,CAAI,EAC7BI,EAAaC,GAAUL,EAAM,GAAQ,EACrCM,EAAUN,EAChB,OAAQM,EAAQ,OAGZA,EAAQ,SAAW,EAEnBC,EAAUD,EAAQ,EAAE,EAEpBE,GAASJ,CAAU,EAAEK,EAAKH,EAASJ,CAAS,CAAC,EAL7CQ,CAMN,CCjEO,IAAMC,GAAQ,IAAIC,EAAkBC,CAAI,ECwBzC,SAAUC,EAAUC,EAAiDC,EAAa,CACtF,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAEhC,IAAIC,EAAQ,EAIZF,EAAO,UAILG,EAAyBF,EAAY,SAACG,EAAK,CAAK,OAAAP,EAAU,KAAKC,EAASM,EAAOF,GAAO,GAAKD,EAAW,KAAKG,CAAK,CAAhE,CAAiE,CAAC,CAEtH,CAAC,CACH,CC3BM,SAAUC,EAAQC,EAAa,CACnC,OAAOA,GAAS,EAEZ,UAAA,CAAM,OAAAC,CAAA,EACNC,EAAQ,SAACC,EAAQC,EAAU,CACzB,IAAIC,EAAO,EACXF,EAAO,UACLG,EAAyBF,EAAY,SAACG,EAAK,CAIrC,EAAEF,GAAQL,IACZI,EAAW,KAAKG,CAAK,EAIjBP,GAASK,GACXD,EAAW,SAAQ,EAGzB,CAAC,CAAC,CAEN,CAAC,CACP,CC9BM,SAAUI,IAAc,CAC5B,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChCD,EAAO,UAAUE,EAAyBD,EAAYE,CAAI,CAAC,CAC7D,CAAC,CACH,CCCM,SAAUC,GAASC,EAAQ,CAC/B,OAAOC,EAAI,UAAA,CAAM,OAAAD,CAAA,CAAK,CACxB,CCyCM,SAAUE,GACdC,EACAC,EAAmC,CAEnC,OAAIA,EAEK,SAACC,EAAqB,CAC3B,OAAAC,GAAOF,EAAkB,KAAKG,EAAK,CAAC,EAAGC,GAAc,CAAE,EAAGH,EAAO,KAAKH,GAAUC,CAAqB,CAAC,CAAC,CAAvG,EAGGM,EAAS,SAACC,EAAOC,EAAK,CAAK,OAAAR,EAAsBO,EAAOC,CAAK,EAAE,KAAKJ,EAAK,CAAC,EAAGK,GAAMF,CAAK,CAAC,CAA9D,CAA+D,CACnG,CCtCM,SAAUG,GAASC,EAAoBC,EAAyC,CAAzCA,IAAA,SAAAA,EAAAC,GAC3C,IAAMC,EAAWC,EAAMJ,EAAKC,CAAS,EACrC,OAAOI,GAAU,UAAA,CAAM,OAAAF,CAAA,CAAQ,CACjC,CC0EM,SAAUG,GACdC,EACAC,EAA0D,CAA1D,OAAAA,IAAA,SAAAA,EAA+BC,GAK/BF,EAAaA,GAAU,KAAVA,EAAcG,GAEpBC,EAAQ,SAACC,EAAQC,EAAU,CAGhC,IAAIC,EAEAC,EAAQ,GAEZH,EAAO,UACLI,EAAyBH,EAAY,SAACI,EAAK,CAEzC,IAAMC,EAAaV,EAAYS,CAAK,GAKhCF,GAAS,CAACR,EAAYO,EAAaI,CAAU,KAM/CH,EAAQ,GACRD,EAAcI,EAGdL,EAAW,KAAKI,CAAK,EAEzB,CAAC,CAAC,CAEN,CAAC,CACH,CAEA,SAASP,GAAeS,EAAQC,EAAM,CACpC,OAAOD,IAAMC,CACf,CCrHM,SAAUC,GAAYC,EAAoB,CAC9C,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAGhC,GAAI,CACFD,EAAO,UAAUC,CAAU,UAE3BA,EAAW,IAAIH,CAAQ,EAE3B,CAAC,CACH,CCyCM,SAAUI,GAAUC,EAAqC,OACzDC,EAAQ,IACRC,EAEJ,OAAIF,GAAiB,OACf,OAAOA,GAAkB,UACxBG,EAA4BH,EAAa,MAAzCC,EAAKE,IAAA,OAAG,IAAQA,EAAED,EAAUF,EAAa,OAE5CC,EAAQD,GAILC,GAAS,EACZ,UAAA,CAAM,OAAAG,CAAA,EACNC,EAAQ,SAACC,EAAQC,EAAU,CACzB,IAAIC,EAAQ,EACRC,EAEEC,EAAc,UAAA,CAGlB,GAFAD,GAAS,MAATA,EAAW,YAAW,EACtBA,EAAY,KACRP,GAAS,KAAM,CACjB,IAAMS,EAAW,OAAOT,GAAU,SAAWU,EAAMV,CAAK,EAAIW,EAAUX,EAAMM,CAAK,CAAC,EAC5EM,EAAqBC,EAAyBR,EAAY,UAAA,CAC9DO,EAAmB,YAAW,EAC9BE,EAAiB,CACnB,CAAC,EACDL,EAAS,UAAUG,CAAkB,OAErCE,EAAiB,CAErB,EAEMA,EAAoB,UAAA,CACxB,IAAIC,EAAY,GAChBR,EAAYH,EAAO,UACjBS,EAAyBR,EAAY,OAAW,UAAA,CAC1C,EAAEC,EAAQP,EACRQ,EACFC,EAAW,EAEXO,EAAY,GAGdV,EAAW,SAAQ,CAEvB,CAAC,CAAC,EAGAU,GACFP,EAAW,CAEf,EAEAM,EAAiB,CACnB,CAAC,CACP,CCtFM,SAAUE,GACdC,EACAC,EAA6G,CAE7G,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAyD,KACzDC,EAAQ,EAERC,EAAa,GAIXC,EAAgB,UAAA,CAAM,OAAAD,GAAc,CAACF,GAAmBD,EAAW,SAAQ,CAArD,EAE5BD,EAAO,UACLM,EACEL,EACA,SAACM,EAAK,CAEJL,GAAe,MAAfA,EAAiB,YAAW,EAC5B,IAAIM,EAAa,EACXC,EAAaN,IAEnBO,EAAUb,EAAQU,EAAOE,CAAU,CAAC,EAAE,UACnCP,EAAkBI,EACjBL,EAIA,SAACU,EAAU,CAAK,OAAAV,EAAW,KAAKH,EAAiBA,EAAeS,EAAOI,EAAYF,EAAYD,GAAY,EAAIG,CAAU,CAAzG,EAChB,UAAA,CAIET,EAAkB,KAClBG,EAAa,CACf,CAAC,CACD,CAEN,EACA,UAAA,CACED,EAAa,GACbC,EAAa,CACf,CAAC,CACF,CAEL,CAAC,CACH,CCvFM,SAAUO,GAAaC,EAA8B,CACzD,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChCC,EAAUJ,CAAQ,EAAE,UAAUK,EAAyBF,EAAY,UAAA,CAAM,OAAAA,EAAW,SAAQ,CAAnB,EAAuBG,CAAI,CAAC,EACrG,CAACH,EAAW,QAAUD,EAAO,UAAUC,CAAU,CACnD,CAAC,CACH,CCwDM,SAAUI,GACdC,EACAC,EACAC,EAA8B,CAK9B,IAAMC,EACJC,EAAWJ,CAAc,GAAKC,GAASC,EAElC,CAAE,KAAMF,EAA2E,MAAKC,EAAE,SAAQC,CAAA,EACnGF,EAEN,OAAOG,EACHE,EAAQ,SAACC,EAAQC,EAAU,QACzBC,EAAAL,EAAY,aAAS,MAAAK,IAAA,QAAAA,EAAA,KAArBL,CAAW,EACX,IAAIM,EAAU,GACdH,EAAO,UACLI,EACEH,EACA,SAACI,EAAK,QACJH,EAAAL,EAAY,QAAI,MAAAK,IAAA,QAAAA,EAAA,KAAhBL
,EAAmBQ,CAAK,EACxBJ,EAAW,KAAKI,CAAK,CACvB,EACA,UAAA,OACEF,EAAU,IACVD,EAAAL,EAAY,YAAQ,MAAAK,IAAA,QAAAA,EAAA,KAApBL,CAAW,EACXI,EAAW,SAAQ,CACrB,EACA,SAACK,EAAG,OACFH,EAAU,IACVD,EAAAL,EAAY,SAAK,MAAAK,IAAA,QAAAA,EAAA,KAAjBL,EAAoBS,CAAG,EACvBL,EAAW,MAAMK,CAAG,CACtB,EACA,UAAA,SACMH,KACFD,EAAAL,EAAY,eAAW,MAAAK,IAAA,QAAAA,EAAA,KAAvBL,CAAW,IAEbU,EAAAV,EAAY,YAAQ,MAAAU,IAAA,QAAAA,EAAA,KAApBV,CAAW,CACb,CAAC,CACF,CAEL,CAAC,EAIDW,CACN,CCjGM,SAAUC,IAAc,SAAOC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACnC,IAAMC,EAAUC,GAAkBH,CAAM,EAExC,OAAOI,EAAQ,SAACC,EAAQC,EAAU,CAehC,QAdMC,EAAMP,EAAO,OACbQ,EAAc,IAAI,MAAMD,CAAG,EAI7BE,EAAWT,EAAO,IAAI,UAAA,CAAM,MAAA,EAAA,CAAK,EAGjCU,EAAQ,cAMHC,EAAC,CACRC,EAAUZ,EAAOW,EAAE,EAAE,UACnBE,EACEP,EACA,SAACQ,EAAK,CACJN,EAAYG,GAAKG,EACb,CAACJ,GAAS,CAACD,EAASE,KAEtBF,EAASE,GAAK,IAKbD,EAAQD,EAAS,MAAMM,CAAQ,KAAON,EAAW,MAEtD,EAGAO,CAAI,CACL,GAnBIL,EAAI,EAAGA,EAAIJ,EAAKI,MAAhBA,CAAC,EAwBVN,EAAO,UACLQ,EAAyBP,EAAY,SAACQ,EAAK,CACzC,GAAIJ,EAAO,CAET,IAAMO,EAAMC,EAAA,CAAIJ,CAAK,EAAAK,EAAKX,CAAW,CAAA,EACrCF,EAAW,KAAKJ,EAAUA,EAAO,MAAA,OAAAgB,EAAA,CAAA,EAAAC,EAAIF,CAAM,CAAA,CAAA,EAAIA,CAAM,EAEzD,CAAC,CAAC,CAEN,CAAC,CACH,CC9DA,IAAMG,GAAY,SAAS,cAAc,KAAK,EAC9C,SAAS,KAAK,YAAYA,EAAS,EAGnC,IAAMC,GAAS,SAAS,cAAc,oBAAoB,EAC1D,GAAIA,GAAQ,CACV,IAAMC,EAAS,SAAS,cAAc,QAAQ,EAC9CA,EAAO,UAAY,yEACfD,GAAO,eACTA,GAAO,cAAc,aAAaC,EAAQD,EAAM,EAGlD,IAAME,EAAM,IAAIC,GAAuB,CAAC,EACxCD,EACG,KACCE,GAAqB,CACvB,EACG,UAAUC,GAAM,CACf,eAAe,QAAQ,uCAAU,GAAGA,GAAI,EACxCJ,EAAO,OAAS,CAACI,CACnB,CAAC,EAGLH,EAAI,KAAK,KAAK,MAAM,eAAe,QAAQ,sCAAQ,GAAK,MAAM,CAAC,EAC/DI,EAAUL,EAAQ,OAAO,EACtB,KACCM,GAAeL,CAAG,CACpB,EACG,UAAU,CAAC,CAAC,CAAEG,CAAE,IAAMH,EAAI,KAAK,CAACG,CAAE,CAAC,EAGxCG,GAAS,GAAG,EACT,KACCC,GAAUP,EAAI,KAAKQ,EAAOL,GAAM,CAACA,CAAE,CAAC,CAAC,EACrCM,EAAK,EAAE,EACPC,GAAO,CAAE,MAAO,IAAMV,EAAI,KAAKQ,EAAOL,GAAMA,CAAE,CAAC,CAAE,CAAC,EAClDQ,EAAS,IAAM,CACb,IAAMC,EAAW,SAAS,cAAc,KAAK,EAC7C,OAAAA,EAAS,UAAY,uCACrBA,EAAS,WAAa,OACtBf,GAAU,YAAYe,CAAQ,EACvBC,GAAMC,GAAOC,GAAGH,CAAQ,CAAC,EAC7B,KACCI,GAAS,IAAMJ,EAAS,OAAO,CAAC,EAChCL,GAAUP,EAAI,KAAKQ,EAAOL,GAAM,CAACA,CAAE,CAAC,CAAC,EACrCc,GAAUC,GAAMd,EAAUc,EAAI,OAAO,EAClC,KACCC,GAAI,IAAMD,EAAG,UAAU,IAAI,4EAAgB,CAAC,EAC5CE,GAAM,GAAI,EACVD,GAAI,IAAMD,EAAG,UAAU,OAAO,4EAAgB,CAAC,CACjD,CACF,CACF,CACJ,CAAC,CACH,EACG,UAAU,CACjB", + "names": ["require_tslib", "__commonJSMin", "exports", "module", "__extends", "__assign", "__rest", "__decorate", "__param", "__metadata", "__awaiter", "__generator", "__exportStar", "__values", "__read", "__spread", "__spreadArrays", "__spreadArray", "__await", "__asyncGenerator", "__asyncDelegator", "__asyncValues", "__makeTemplateObject", "__importStar", "__importDefault", "__classPrivateFieldGet", "__classPrivateFieldSet", "__createBinding", "factory", "root", "createExporter", "previous", "id", "v", "exporter", "extendStatics", "d", "b", "p", "__", "t", "s", "n", "e", "i", "decorators", "target", "key", "desc", "c", "r", "paramIndex", "decorator", "metadataKey", "metadataValue", "thisArg", "_arguments", "P", "generator", "adopt", "value", "resolve", "reject", "fulfilled", "step", "rejected", "result", "body", "_", "y", "g", "verb", "op", "m", "o", "k", "k2", "ar", "error", "il", "j", "jl", "to", "from", "pack", "l", "q", "a", "resume", "settle", "fulfill", "f", "cooked", "raw", "__setModuleDefault", "mod", "receiver", "state", "kind", "import_tslib", "__extends", "__assign", "__rest", "__decorate", "__param", "__metadata", "__awaiter", "__generator", "__exportStar", "__createBinding", "__values", "__read", "__spread", "__spreadArrays", "__spreadArray", 
"__await", "__asyncGenerator", "__asyncDelegator", "__asyncValues", "__makeTemplateObject", "__importStar", "__importDefault", "__classPrivateFieldGet", "__classPrivateFieldSet", "tslib", "isFunction", "value", "createErrorClass", "createImpl", "_super", "instance", "ctorFunc", "UnsubscriptionError", "createErrorClass", "_super", "errors", "err", "i", "arrRemove", "arr", "item", "index", "Subscription", "initialTeardown", "errors", "_parentage", "_parentage_1", "__values", "_parentage_1_1", "parent_1", "initialFinalizer", "isFunction", "e", "UnsubscriptionError", "_finalizers", "_finalizers_1", "_finalizers_1_1", "finalizer", "execFinalizer", "err", "__spreadArray", "__read", "teardown", "_a", "parent", "arrRemove", "empty", "EMPTY_SUBSCRIPTION", "Subscription", "isSubscription", "value", "isFunction", "execFinalizer", "finalizer", "config", "timeoutProvider", "handler", "timeout", "args", "_i", "delegate", "__spreadArray", "__read", "handle", "reportUnhandledError", "err", "timeoutProvider", "onUnhandledError", "config", "noop", "COMPLETE_NOTIFICATION", "createNotification", "errorNotification", "error", "nextNotification", "value", "kind", "context", "errorContext", "cb", "config", "isRoot", "_a", "errorThrown", "error", "captureError", "err", "Subscriber", "_super", "__extends", "destination", "_this", "isSubscription", "EMPTY_OBSERVER", "next", "error", "complete", "SafeSubscriber", "value", "handleStoppedNotification", "nextNotification", "err", "errorNotification", "COMPLETE_NOTIFICATION", "Subscription", "_bind", "bind", "fn", "thisArg", "ConsumerObserver", "partialObserver", "value", "error", "handleUnhandledError", "err", "SafeSubscriber", "_super", "__extends", "observerOrNext", "complete", "_this", "isFunction", "context_1", "config", "Subscriber", "handleUnhandledError", "error", "config", "captureError", "reportUnhandledError", "defaultErrorHandler", "err", "handleStoppedNotification", "notification", "subscriber", "onStoppedNotification", "timeoutProvider", "EMPTY_OBSERVER", "noop", "observable", "identity", "x", "pipeFromArray", "fns", "identity", "input", "prev", "fn", "Observable", "subscribe", "operator", "observable", "observerOrNext", "error", "complete", "_this", "subscriber", "isSubscriber", "SafeSubscriber", "errorContext", "_a", "source", "sink", "err", "next", "promiseCtor", "getPromiseCtor", "resolve", "reject", "value", "operations", "_i", "pipeFromArray", "x", "getPromiseCtor", "promiseCtor", "_a", "config", "isObserver", "value", "isFunction", "isSubscriber", "Subscriber", "isSubscription", "hasLift", "source", "isFunction", "operate", "init", "liftedSource", "err", "createOperatorSubscriber", "destination", "onNext", "onComplete", "onError", "onFinalize", "OperatorSubscriber", "_super", "__extends", "shouldUnsubscribe", "_this", "value", "err", "closed_1", "_a", "Subscriber", "ObjectUnsubscribedError", "createErrorClass", "_super", "Subject", "_super", "__extends", "_this", "operator", "subject", "AnonymousSubject", "ObjectUnsubscribedError", "value", "errorContext", "_b", "__values", "_c", "observer", "err", "observers", "_a", "subscriber", "hasError", "isStopped", "EMPTY_SUBSCRIPTION", "Subscription", "arrRemove", "thrownError", "observable", "Observable", "destination", "source", "AnonymousSubject", "_super", "__extends", "destination", "source", "_this", "value", "_b", "_a", "err", "subscriber", "EMPTY_SUBSCRIPTION", "Subject", "dateTimestampProvider", "ReplaySubject", "_super", "__extends", "_bufferSize", "_windowTime", "_timestampProvider", 
"dateTimestampProvider", "_this", "value", "_a", "isStopped", "_buffer", "_infiniteTimeWindow", "subscriber", "subscription", "copy", "i", "adjustedBufferSize", "now", "last", "Subject", "Action", "_super", "__extends", "scheduler", "work", "state", "delay", "Subscription", "intervalProvider", "handler", "timeout", "args", "_i", "delegate", "__spreadArray", "__read", "handle", "AsyncAction", "_super", "__extends", "scheduler", "work", "_this", "state", "delay", "id", "_a", "_id", "intervalProvider", "_scheduler", "error", "_delay", "errored", "errorValue", "e", "actions", "arrRemove", "Action", "Scheduler", "schedulerActionCtor", "now", "work", "delay", "state", "dateTimestampProvider", "AsyncScheduler", "_super", "__extends", "SchedulerAction", "now", "Scheduler", "_this", "action", "actions", "error", "asyncScheduler", "AsyncScheduler", "AsyncAction", "async", "EMPTY", "Observable", "subscriber", "isScheduler", "value", "isFunction", "last", "arr", "popResultSelector", "args", "isFunction", "popScheduler", "isScheduler", "popNumber", "defaultValue", "isArrayLike", "x", "isPromise", "value", "isFunction", "isInteropObservable", "input", "isFunction", "observable", "isAsyncIterable", "obj", "isFunction", "createInvalidObservableTypeError", "input", "getSymbolIterator", "iterator", "isIterable", "input", "isFunction", "iterator", "readableStreamLikeToAsyncGenerator", "readableStream", "reader", "__await", "_a", "_b", "value", "done", "isReadableStreamLike", "obj", "isFunction", "innerFrom", "input", "Observable", "isInteropObservable", "fromInteropObservable", "isArrayLike", "fromArrayLike", "isPromise", "fromPromise", "isAsyncIterable", "fromAsyncIterable", "isIterable", "fromIterable", "isReadableStreamLike", "fromReadableStreamLike", "createInvalidObservableTypeError", "obj", "subscriber", "obs", "observable", "isFunction", "array", "i", "promise", "value", "err", "reportUnhandledError", "iterable", "iterable_1", "__values", "iterable_1_1", "asyncIterable", "process", "readableStream", "readableStreamLikeToAsyncGenerator", "asyncIterable_1", "__asyncValues", "asyncIterable_1_1", "executeSchedule", "parentSubscription", "scheduler", "work", "delay", "repeat", "scheduleSubscription", "observeOn", "scheduler", "delay", "operate", "source", "subscriber", "createOperatorSubscriber", "value", "executeSchedule", "err", "subscribeOn", "scheduler", "delay", "operate", "source", "subscriber", "scheduleObservable", "input", "scheduler", "innerFrom", "subscribeOn", "observeOn", "schedulePromise", "input", "scheduler", "innerFrom", "subscribeOn", "observeOn", "scheduleArray", "input", "scheduler", "Observable", "subscriber", "i", "scheduleIterable", "input", "scheduler", "Observable", "subscriber", "iterator", "executeSchedule", "value", "done", "_a", "err", "isFunction", "scheduleAsyncIterable", "input", "scheduler", "Observable", "subscriber", "executeSchedule", "iterator", "result", "scheduleReadableStreamLike", "input", "scheduler", "scheduleAsyncIterable", "readableStreamLikeToAsyncGenerator", "scheduled", "input", "scheduler", "isInteropObservable", "scheduleObservable", "isArrayLike", "scheduleArray", "isPromise", "schedulePromise", "isAsyncIterable", "scheduleAsyncIterable", "isIterable", "scheduleIterable", "isReadableStreamLike", "scheduleReadableStreamLike", "createInvalidObservableTypeError", "from", "input", "scheduler", "scheduled", "innerFrom", "of", "args", "_i", "scheduler", "popScheduler", "from", "isValidDate", "value", "map", "project", "thisArg", "operate", "source", 
"subscriber", "index", "createOperatorSubscriber", "value", "isArray", "callOrApply", "fn", "args", "__spreadArray", "__read", "mapOneOrManyArgs", "map", "mergeInternals", "source", "subscriber", "project", "concurrent", "onBeforeNext", "expand", "innerSubScheduler", "additionalFinalizer", "buffer", "active", "index", "isComplete", "checkComplete", "outerNext", "value", "doInnerSub", "innerComplete", "innerFrom", "createOperatorSubscriber", "innerValue", "bufferedValue", "executeSchedule", "err", "mergeMap", "project", "resultSelector", "concurrent", "isFunction", "a", "i", "map", "b", "ii", "innerFrom", "operate", "source", "subscriber", "mergeInternals", "mergeAll", "concurrent", "mergeMap", "identity", "concatAll", "mergeAll", "concat", "args", "_i", "concatAll", "from", "popScheduler", "nodeEventEmitterMethods", "eventTargetMethods", "jqueryMethods", "fromEvent", "target", "eventName", "options", "resultSelector", "isFunction", "mapOneOrManyArgs", "_a", "__read", "isEventTarget", "methodName", "handler", "isNodeStyleEventEmitter", "toCommonHandlerRegistry", "isJQueryStyleEventEmitter", "add", "remove", "isArrayLike", "mergeMap", "subTarget", "innerFrom", "Observable", "subscriber", "args", "_i", "timer", "dueTime", "intervalOrScheduler", "scheduler", "async", "intervalDuration", "isScheduler", "Observable", "subscriber", "due", "isValidDate", "n", "interval", "period", "scheduler", "asyncScheduler", "timer", "merge", "args", "_i", "scheduler", "popScheduler", "concurrent", "popNumber", "sources", "innerFrom", "mergeAll", "from", "EMPTY", "NEVER", "Observable", "noop", "filter", "predicate", "thisArg", "operate", "source", "subscriber", "index", "createOperatorSubscriber", "value", "take", "count", "EMPTY", "operate", "source", "subscriber", "seen", "createOperatorSubscriber", "value", "ignoreElements", "operate", "source", "subscriber", "createOperatorSubscriber", "noop", "mapTo", "value", "map", "delayWhen", "delayDurationSelector", "subscriptionDelay", "source", "concat", "take", "ignoreElements", "mergeMap", "value", "index", "mapTo", "delay", "due", "scheduler", "asyncScheduler", "duration", "timer", "delayWhen", "distinctUntilChanged", "comparator", "keySelector", "identity", "defaultCompare", "operate", "source", "subscriber", "previousKey", "first", "createOperatorSubscriber", "value", "currentKey", "a", "b", "finalize", "callback", "operate", "source", "subscriber", "repeat", "countOrConfig", "count", "delay", "_a", "EMPTY", "operate", "source", "subscriber", "soFar", "sourceSub", "resubscribe", "notifier", "timer", "innerFrom", "notifierSubscriber_1", "createOperatorSubscriber", "subscribeToSource", "syncUnsub", "switchMap", "project", "resultSelector", "operate", "source", "subscriber", "innerSubscriber", "index", "isComplete", "checkComplete", "createOperatorSubscriber", "value", "innerIndex", "outerIndex", "innerFrom", "innerValue", "takeUntil", "notifier", "operate", "source", "subscriber", "innerFrom", "createOperatorSubscriber", "noop", "tap", "observerOrNext", "error", "complete", "tapObserver", "isFunction", "operate", "source", "subscriber", "_a", "isUnsub", "createOperatorSubscriber", "value", "err", "_b", "identity", "withLatestFrom", "inputs", "_i", "project", "popResultSelector", "operate", "source", "subscriber", "len", "otherValues", "hasValue", "ready", "i", "innerFrom", "createOperatorSubscriber", "value", "identity", "noop", "values", "__spreadArray", "__read", "container", "header", "button", "on$", "ReplaySubject", "distinctUntilChanged", "on", "fromEvent", 
"withLatestFrom", "interval", "takeUntil", "filter", "take", "repeat", "mergeMap", "instance", "merge", "NEVER", "of", "finalize", "switchMap", "el", "tap", "delay"] +} diff --git a/assets/javascripts/lunr/min/lunr.ar.min.js b/assets/javascripts/lunr/min/lunr.ar.min.js new file mode 100644 index 00000000..9b06c26c --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ar.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.ar=function(){this.pipeline.reset(),this.pipeline.add(e.ar.trimmer,e.ar.stopWordFilter,e.ar.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ar.stemmer))},e.ar.wordCharacters="ء-ٛٱـ",e.ar.trimmer=e.trimmerSupport.generateTrimmer(e.ar.wordCharacters),e.Pipeline.registerFunction(e.ar.trimmer,"trimmer-ar"),e.ar.stemmer=function(){var e=this;return e.result=!1,e.preRemoved=!1,e.sufRemoved=!1,e.pre={pre1:"ف ك ب و س ل ن ا ي ت",pre2:"ال لل",pre3:"بال وال فال تال كال ولل",pre4:"فبال كبال وبال وكال"},e.suf={suf1:"ه ك ت ن ا ي",suf2:"نك نه ها وك يا اه ون ين تن تم نا وا ان كم كن ني نن ما هم هن تك ته ات يه",suf3:"تين كهم نيه نهم ونه وها يهم ونا ونك وني وهم تكم تنا تها تني تهم كما كها ناه نكم هنا تان يها",suf4:"كموه ناها ونني ونهم تكما تموه تكاه كماه ناكم ناهم نيها وننا"},e.patterns=JSON.parse('{"pt43":[{"pt":[{"c":"ا","l":1}]},{"pt":[{"c":"ا,ت,ن,ي","l":0}],"mPt":[{"c":"ف","l":0,"m":1},{"c":"ع","l":1,"m":2},{"c":"ل","l":2,"m":3}]},{"pt":[{"c":"و","l":2}],"mPt":[{"c":"ف","l":0,"m":0},{"c":"ع","l":1,"m":1},{"c":"ل","l":2,"m":3}]},{"pt":[{"c":"ا","l":2}]},{"pt":[{"c":"ي","l":2}],"mPt":[{"c":"ف","l":0,"m":0},{"c":"ع","l":1,"m":1},{"c":"ا","l":2},{"c":"ل","l":3,"m":3}]},{"pt":[{"c":"م","l":0}]}],"pt53":[{"pt":[{"c":"ت","l":0},{"c":"ا","l":2}]},{"pt":[{"c":"ا,ن,ت,ي","l":0},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ت","l":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":0},{"c":"ا","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ع","l":2,"m":3},{"c":"ل","l":3,"m":4},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":0},{"c":"ا","l":3}],"mPt":[{"c":"ف","l":0,"m":1},{"c":"ع","l":1,"m":2},{"c":"ل","l":2,"m":4}]},{"pt":[{"c":"ا","l":3},{"c":"ن","l":4}]},{"pt":[{"c":"ت","l":0},{"c":"ي","l":3}]},{"pt":[{"c":"م","l":0},{"c":"و","l":3}]},{"pt":[{"c":"ا","l":1},{"c":"و","l":3}]},{"pt":[{"c":"و","l":1},{"c":"ا","l":2}]},{"pt":[{"c":"م","l":0},{"c":"ا","l":3}]},{"pt":[{"c":"م","l":0},{"c":"ي","l":3}]},{"pt":[{"c":"ا","l":2},{"c":"ن","l":3}]},{"pt":[{"c":"م","l":0},{"c":"ن","l":1}],"mPt":[{"c":"ا","l":0},{"c":"ن","l":1},{"c":"ف","l":2,"m":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"م","l":0},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ت","l":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"م","l":0},{"c":"ا","l":2}]},{"pt":[{"c":"م","l":1},{"c":"ا","l":3}]},{"pt":[{"c":"ي,ت,ا,ن","l":0},{"c":"ت","l":1}],"mPt":[{"c":"ف","l":0,"m":2},{"c":"ع","l":1,"m":3},{"c":"ا","l":2},{"c":"ل","l":3,"m":4}]},{"pt":[{"c":"ت,ي,ا,ن","l":0},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ت","l":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":
4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":2},{"c":"ي","l":3}]},{"pt":[{"c":"ا,ي,ت,ن","l":0},{"c":"ن","l":1}],"mPt":[{"c":"ا","l":0},{"c":"ن","l":1},{"c":"ف","l":2,"m":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":3},{"c":"ء","l":4}]}],"pt63":[{"pt":[{"c":"ا","l":0},{"c":"ت","l":2},{"c":"ا","l":4}]},{"pt":[{"c":"ا,ت,ن,ي","l":0},{"c":"س","l":1},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"س","l":1},{"c":"ت","l":2},{"c":"ف","l":3,"m":3},{"c":"ع","l":4,"m":4},{"c":"ا","l":5},{"c":"ل","l":6,"m":5}]},{"pt":[{"c":"ا,ن,ت,ي","l":0},{"c":"و","l":3}]},{"pt":[{"c":"م","l":0},{"c":"س","l":1},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"س","l":1},{"c":"ت","l":2},{"c":"ف","l":3,"m":3},{"c":"ع","l":4,"m":4},{"c":"ا","l":5},{"c":"ل","l":6,"m":5}]},{"pt":[{"c":"ي","l":1},{"c":"ي","l":3},{"c":"ا","l":4},{"c":"ء","l":5}]},{"pt":[{"c":"ا","l":0},{"c":"ن","l":1},{"c":"ا","l":4}]}],"pt54":[{"pt":[{"c":"ت","l":0}]},{"pt":[{"c":"ا,ي,ت,ن","l":0}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ع","l":2,"m":2},{"c":"ل","l":3,"m":3},{"c":"ر","l":4,"m":4},{"c":"ا","l":5},{"c":"ر","l":6,"m":4}]},{"pt":[{"c":"م","l":0}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ع","l":2,"m":2},{"c":"ل","l":3,"m":3},{"c":"ر","l":4,"m":4},{"c":"ا","l":5},{"c":"ر","l":6,"m":4}]},{"pt":[{"c":"ا","l":2}]},{"pt":[{"c":"ا","l":0},{"c":"ن","l":2}]}],"pt64":[{"pt":[{"c":"ا","l":0},{"c":"ا","l":4}]},{"pt":[{"c":"م","l":0},{"c":"ت","l":1}]}],"pt73":[{"pt":[{"c":"ا","l":0},{"c":"س","l":1},{"c":"ت","l":2},{"c":"ا","l":5}]}],"pt75":[{"pt":[{"c":"ا","l":0},{"c":"ا","l":5}]}]}'),e.execArray=["cleanWord","removeDiacritics","cleanAlef","removeStopWords","normalizeHamzaAndAlef","removeStartWaw","removePre432","removeEndTaa","wordCheck"],e.stem=function(){var r=0;for(e.result=!1,e.preRemoved=!1,e.sufRemoved=!1;r=0)return!0},e.normalizeHamzaAndAlef=function(){return e.word=e.word.replace("ؤ","ء"),e.word=e.word.replace("ئ","ء"),e.word=e.word.replace(/([\u0627])\1+/gi,"ا"),!1},e.removeEndTaa=function(){return!(e.word.length>2)||(e.word=e.word.replace(/[\u0627]$/,""),e.word=e.word.replace("ة",""),!1)},e.removeStartWaw=function(){return e.word.length>3&&"و"==e.word[0]&&"و"==e.word[1]&&(e.word=e.word.slice(1)),!1},e.removePre432=function(){var r=e.word;if(e.word.length>=7){var t=new RegExp("^("+e.pre.pre4.split(" ").join("|")+")");e.word=e.word.replace(t,"")}if(e.word==r&&e.word.length>=6){var c=new RegExp("^("+e.pre.pre3.split(" ").join("|")+")");e.word=e.word.replace(c,"")}if(e.word==r&&e.word.length>=5){var l=new RegExp("^("+e.pre.pre2.split(" ").join("|")+")");e.word=e.word.replace(l,"")}return r!=e.word&&(e.preRemoved=!0),!1},e.patternCheck=function(r){for(var t=0;t3){var t=new RegExp("^("+e.pre.pre1.split(" ").join("|")+")");e.word=e.word.replace(t,"")}return r!=e.word&&(e.preRemoved=!0),!1},e.removeSuf1=function(){var r=e.word;if(0==e.sufRemoved&&e.word.length>3){var t=new RegExp("("+e.suf.suf1.split(" ").join("|")+")$");e.word=e.word.replace(t,"")}return r!=e.word&&(e.sufRemoved=!0),!1},e.removeSuf432=function(){var r=e.word;if(e.word.length>=6){var t=new RegExp("("+e.suf.suf4.split(" ").join("|")+")$");e.word=e.word.replace(t,"")}if(e.word==r&&e.word.length>=5){var c=new RegExp("("+e.suf.suf3.split(" ").join("|")+")$");e.word=e.word.replace(c,"")}if(e.word==r&&e.word.length>=4){var l=new RegExp("("+e.suf.suf2.split(" ").join("|")+")$");e.word=e.word.replace(l,"")}return r!=e.word&&(e.sufRemoved=!0),!1},e.wordCheck=function(){for(var 
r=(e.word,[e.removeSuf432,e.removeSuf1,e.removePre1]),t=0,c=!1;e.word.length>=7&&!e.result&&t=f.limit)return;f.cursor++}for(;!f.out_grouping(w,97,248);){if(f.cursor>=f.limit)return;f.cursor++}d=f.cursor,d=d&&(r=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,e=f.find_among_b(c,32),f.limit_backward=r,e))switch(f.bra=f.cursor,e){case 1:f.slice_del();break;case 2:f.in_grouping_b(p,97,229)&&f.slice_del()}}function t(){var e,r=f.limit-f.cursor;f.cursor>=d&&(e=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,f.find_among_b(l,4)?(f.bra=f.cursor,f.limit_backward=e,f.cursor=f.limit-r,f.cursor>f.limit_backward&&(f.cursor--,f.bra=f.cursor,f.slice_del())):f.limit_backward=e)}function s(){var e,r,i,n=f.limit-f.cursor;if(f.ket=f.cursor,f.eq_s_b(2,"st")&&(f.bra=f.cursor,f.eq_s_b(2,"ig")&&f.slice_del()),f.cursor=f.limit-n,f.cursor>=d&&(r=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,e=f.find_among_b(m,5),f.limit_backward=r,e))switch(f.bra=f.cursor,e){case 1:f.slice_del(),i=f.limit-f.cursor,t(),f.cursor=f.limit-i;break;case 2:f.slice_from("løs")}}function o(){var e;f.cursor>=d&&(e=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,f.out_grouping_b(w,97,248)?(f.bra=f.cursor,u=f.slice_to(u),f.limit_backward=e,f.eq_v_b(u)&&f.slice_del()):f.limit_backward=e)}var a,d,u,c=[new r("hed",-1,1),new r("ethed",0,1),new r("ered",-1,1),new r("e",-1,1),new r("erede",3,1),new r("ende",3,1),new r("erende",5,1),new r("ene",3,1),new r("erne",3,1),new r("ere",3,1),new r("en",-1,1),new r("heden",10,1),new r("eren",10,1),new r("er",-1,1),new r("heder",13,1),new r("erer",13,1),new r("s",-1,2),new r("heds",16,1),new r("es",16,1),new r("endes",18,1),new r("erendes",19,1),new r("enes",18,1),new r("ernes",18,1),new r("eres",18,1),new r("ens",16,1),new r("hedens",24,1),new r("erens",24,1),new r("ers",16,1),new r("ets",16,1),new r("erets",28,1),new r("et",-1,1),new r("eret",30,1)],l=[new r("gd",-1,-1),new r("dt",-1,-1),new r("gt",-1,-1),new r("kt",-1,-1)],m=[new r("ig",-1,1),new r("lig",0,1),new r("elig",1,1),new r("els",-1,1),new r("løst",-1,2)],w=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,48,0,128],p=[239,254,42,3,0,0,0,0,0,0,0,0,0,0,0,0,16],f=new i;this.setCurrent=function(e){f.setCurrent(e)},this.getCurrent=function(){return f.getCurrent()},this.stem=function(){var r=f.cursor;return e(),f.limit_backward=r,f.cursor=f.limit,n(),f.cursor=f.limit,t(),f.cursor=f.limit,s(),f.cursor=f.limit,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.da.stemmer,"stemmer-da"),e.da.stopWordFilter=e.generateStopWordFilter("ad af alle alt anden at blev blive bliver da de dem den denne der deres det dette dig din disse dog du efter eller en end er et for fra ham han hans har havde have hende hendes her hos hun hvad hvis hvor i ikke ind jeg jer jo kunne man mange med meget men mig min mine mit mod ned noget nogle nu når og også om op os over på selv sig sin sine sit skal skulle som sådan thi til ud under var vi vil ville vor være været".split(" ")),e.Pipeline.registerFunction(e.da.stopWordFilter,"stopWordFilter-da")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.de.min.js b/assets/javascripts/lunr/min/lunr.de.min.js new file mode 100644 index 00000000..f3b5c108 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.de.min.js @@ -0,0 +1,18 @@ +/*! 
+ * Lunr languages, `German` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.de=function(){this.pipeline.reset(),this.pipeline.add(e.de.trimmer,e.de.stopWordFilter,e.de.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.de.stemmer))},e.de.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.de.trimmer=e.trimmerSupport.generateTrimmer(e.de.wordCharacters),e.Pipeline.registerFunction(e.de.trimmer,"trimmer-de"),e.de.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){function e(e,r,n){return!(!v.eq_s(1,e)||(v.ket=v.cursor,!v.in_grouping(p,97,252)))&&(v.slice_from(r),v.cursor=n,!0)}function i(){for(var r,n,i,s,t=v.cursor;;)if(r=v.cursor,v.bra=r,v.eq_s(1,"ß"))v.ket=v.cursor,v.slice_from("ss");else{if(r>=v.limit)break;v.cursor=r+1}for(v.cursor=t;;)for(n=v.cursor;;){if(i=v.cursor,v.in_grouping(p,97,252)){if(s=v.cursor,v.bra=s,e("u","U",i))break;if(v.cursor=s,e("y","Y",i))break}if(i>=v.limit)return void(v.cursor=n);v.cursor=i+1}}function s(){for(;!v.in_grouping(p,97,252);){if(v.cursor>=v.limit)return!0;v.cursor++}for(;!v.out_grouping(p,97,252);){if(v.cursor>=v.limit)return!0;v.cursor++}return!1}function t(){m=v.limit,l=m;var e=v.cursor+3;0<=e&&e<=v.limit&&(d=e,s()||(m=v.cursor,m=v.limit)return;v.cursor++}}}function c(){return m<=v.cursor}function u(){return l<=v.cursor}function a(){var e,r,n,i,s=v.limit-v.cursor;if(v.ket=v.cursor,(e=v.find_among_b(w,7))&&(v.bra=v.cursor,c()))switch(e){case 1:v.slice_del();break;case 2:v.slice_del(),v.ket=v.cursor,v.eq_s_b(1,"s")&&(v.bra=v.cursor,v.eq_s_b(3,"nis")&&v.slice_del());break;case 3:v.in_grouping_b(g,98,116)&&v.slice_del()}if(v.cursor=v.limit-s,v.ket=v.cursor,(e=v.find_among_b(f,4))&&(v.bra=v.cursor,c()))switch(e){case 1:v.slice_del();break;case 2:if(v.in_grouping_b(k,98,116)){var t=v.cursor-3;v.limit_backward<=t&&t<=v.limit&&(v.cursor=t,v.slice_del())}}if(v.cursor=v.limit-s,v.ket=v.cursor,(e=v.find_among_b(_,8))&&(v.bra=v.cursor,u()))switch(e){case 1:v.slice_del(),v.ket=v.cursor,v.eq_s_b(2,"ig")&&(v.bra=v.cursor,r=v.limit-v.cursor,v.eq_s_b(1,"e")||(v.cursor=v.limit-r,u()&&v.slice_del()));break;case 2:n=v.limit-v.cursor,v.eq_s_b(1,"e")||(v.cursor=v.limit-n,v.slice_del());break;case 3:if(v.slice_del(),v.ket=v.cursor,i=v.limit-v.cursor,!v.eq_s_b(2,"er")&&(v.cursor=v.limit-i,!v.eq_s_b(2,"en")))break;v.bra=v.cursor,c()&&v.slice_del();break;case 4:v.slice_del(),v.ket=v.cursor,e=v.find_among_b(b,2),e&&(v.bra=v.cursor,u()&&1==e&&v.slice_del())}}var d,l,m,h=[new r("",-1,6),new r("U",0,2),new r("Y",0,1),new r("ä",0,3),new r("ö",0,4),new r("ü",0,5)],w=[new r("e",-1,2),new r("em",-1,1),new r("en",-1,2),new r("ern",-1,1),new r("er",-1,1),new r("s",-1,3),new r("es",5,2)],f=[new r("en",-1,1),new r("er",-1,1),new r("st",-1,2),new r("est",2,1)],b=[new 
r("ig",-1,1),new r("lich",-1,1)],_=[new r("end",-1,1),new r("ig",-1,2),new r("ung",-1,1),new r("lich",-1,3),new r("isch",-1,2),new r("ik",-1,2),new r("heit",-1,3),new r("keit",-1,4)],p=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32,8],g=[117,30,5],k=[117,30,4],v=new n;this.setCurrent=function(e){v.setCurrent(e)},this.getCurrent=function(){return v.getCurrent()},this.stem=function(){var e=v.cursor;return i(),v.cursor=e,t(),v.limit_backward=e,v.cursor=v.limit,a(),v.cursor=v.limit_backward,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.de.stemmer,"stemmer-de"),e.de.stopWordFilter=e.generateStopWordFilter("aber alle allem allen aller alles als also am an ander andere anderem anderen anderer anderes anderm andern anderr anders auch auf aus bei bin bis bist da damit dann das dasselbe dazu daß dein deine deinem deinen deiner deines dem demselben den denn denselben der derer derselbe derselben des desselben dessen dich die dies diese dieselbe dieselben diesem diesen dieser dieses dir doch dort du durch ein eine einem einen einer eines einig einige einigem einigen einiger einiges einmal er es etwas euch euer eure eurem euren eurer eures für gegen gewesen hab habe haben hat hatte hatten hier hin hinter ich ihm ihn ihnen ihr ihre ihrem ihren ihrer ihres im in indem ins ist jede jedem jeden jeder jedes jene jenem jenen jener jenes jetzt kann kein keine keinem keinen keiner keines können könnte machen man manche manchem manchen mancher manches mein meine meinem meinen meiner meines mich mir mit muss musste nach nicht nichts noch nun nur ob oder ohne sehr sein seine seinem seinen seiner seines selbst sich sie sind so solche solchem solchen solcher solches soll sollte sondern sonst um und uns unse unsem unsen unser unses unter viel vom von vor war waren warst was weg weil weiter welche welchem welchen welcher welches wenn werde werden wie wieder will wir wird wirst wo wollen wollte während würde würden zu zum zur zwar zwischen über".split(" ")),e.Pipeline.registerFunction(e.de.stopWordFilter,"stopWordFilter-de")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.du.min.js b/assets/javascripts/lunr/min/lunr.du.min.js new file mode 100644 index 00000000..49a0f3f0 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.du.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Dutch` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");console.warn('[Lunr Languages] Please use the "nl" instead of the "du". 
The "nl" code is the standard code for Dutch language, and "du" will be removed in the next major versions.'),e.du=function(){this.pipeline.reset(),this.pipeline.add(e.du.trimmer,e.du.stopWordFilter,e.du.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.du.stemmer))},e.du.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.du.trimmer=e.trimmerSupport.generateTrimmer(e.du.wordCharacters),e.Pipeline.registerFunction(e.du.trimmer,"trimmer-du"),e.du.stemmer=function(){var r=e.stemmerSupport.Among,i=e.stemmerSupport.SnowballProgram,n=new function(){function e(){for(var e,r,i,o=C.cursor;;){if(C.bra=C.cursor,e=C.find_among(b,11))switch(C.ket=C.cursor,e){case 1:C.slice_from("a");continue;case 2:C.slice_from("e");continue;case 3:C.slice_from("i");continue;case 4:C.slice_from("o");continue;case 5:C.slice_from("u");continue;case 6:if(C.cursor>=C.limit)break;C.cursor++;continue}break}for(C.cursor=o,C.bra=o,C.eq_s(1,"y")?(C.ket=C.cursor,C.slice_from("Y")):C.cursor=o;;)if(r=C.cursor,C.in_grouping(q,97,232)){if(i=C.cursor,C.bra=i,C.eq_s(1,"i"))C.ket=C.cursor,C.in_grouping(q,97,232)&&(C.slice_from("I"),C.cursor=r);else if(C.cursor=i,C.eq_s(1,"y"))C.ket=C.cursor,C.slice_from("Y"),C.cursor=r;else if(n(r))break}else if(n(r))break}function n(e){return C.cursor=e,e>=C.limit||(C.cursor++,!1)}function o(){_=C.limit,f=_,t()||(_=C.cursor,_<3&&(_=3),t()||(f=C.cursor))}function t(){for(;!C.in_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}for(;!C.out_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}return!1}function s(){for(var e;;)if(C.bra=C.cursor,e=C.find_among(p,3))switch(C.ket=C.cursor,e){case 1:C.slice_from("y");break;case 2:C.slice_from("i");break;case 3:if(C.cursor>=C.limit)return;C.cursor++}}function u(){return _<=C.cursor}function c(){return f<=C.cursor}function a(){var e=C.limit-C.cursor;C.find_among_b(g,3)&&(C.cursor=C.limit-e,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del()))}function l(){var e;w=!1,C.ket=C.cursor,C.eq_s_b(1,"e")&&(C.bra=C.cursor,u()&&(e=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-e,C.slice_del(),w=!0,a())))}function m(){var e;u()&&(e=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-e,C.eq_s_b(3,"gem")||(C.cursor=C.limit-e,C.slice_del(),a())))}function d(){var e,r,i,n,o,t,s=C.limit-C.cursor;if(C.ket=C.cursor,e=C.find_among_b(h,5))switch(C.bra=C.cursor,e){case 1:u()&&C.slice_from("heid");break;case 2:m();break;case 3:u()&&C.out_grouping_b(z,97,232)&&C.slice_del()}if(C.cursor=C.limit-s,l(),C.cursor=C.limit-s,C.ket=C.cursor,C.eq_s_b(4,"heid")&&(C.bra=C.cursor,c()&&(r=C.limit-C.cursor,C.eq_s_b(1,"c")||(C.cursor=C.limit-r,C.slice_del(),C.ket=C.cursor,C.eq_s_b(2,"en")&&(C.bra=C.cursor,m())))),C.cursor=C.limit-s,C.ket=C.cursor,e=C.find_among_b(k,6))switch(C.bra=C.cursor,e){case 1:if(c()){if(C.slice_del(),i=C.limit-C.cursor,C.ket=C.cursor,C.eq_s_b(2,"ig")&&(C.bra=C.cursor,c()&&(n=C.limit-C.cursor,!C.eq_s_b(1,"e")))){C.cursor=C.limit-n,C.slice_del();break}C.cursor=C.limit-i,a()}break;case 2:c()&&(o=C.limit-C.cursor,C.eq_s_b(1,"e")||(C.cursor=C.limit-o,C.slice_del()));break;case 3:c()&&(C.slice_del(),l());break;case 4:c()&&C.slice_del();break;case 5:c()&&w&&C.slice_del()}C.cursor=C.limit-s,C.out_grouping_b(j,73,232)&&(t=C.limit-C.cursor,C.find_among_b(v,4)&&C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-t,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del())))}var f,_,w,b=[new r("",-1,6),new 
r("á",0,1),new r("ä",0,1),new r("é",0,2),new r("ë",0,2),new r("í",0,3),new r("ï",0,3),new r("ó",0,4),new r("ö",0,4),new r("ú",0,5),new r("ü",0,5)],p=[new r("",-1,3),new r("I",0,2),new r("Y",0,1)],g=[new r("dd",-1,-1),new r("kk",-1,-1),new r("tt",-1,-1)],h=[new r("ene",-1,2),new r("se",-1,3),new r("en",-1,2),new r("heden",2,1),new r("s",-1,3)],k=[new r("end",-1,1),new r("ig",-1,2),new r("ing",-1,1),new r("lijk",-1,3),new r("baar",-1,4),new r("bar",-1,5)],v=[new r("aa",-1,-1),new r("ee",-1,-1),new r("oo",-1,-1),new r("uu",-1,-1)],q=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],j=[1,0,0,17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],z=[17,67,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],C=new i;this.setCurrent=function(e){C.setCurrent(e)},this.getCurrent=function(){return C.getCurrent()},this.stem=function(){var r=C.cursor;return e(),C.cursor=r,o(),C.limit_backward=r,C.cursor=C.limit,d(),C.cursor=C.limit_backward,s(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.du.stemmer,"stemmer-du"),e.du.stopWordFilter=e.generateStopWordFilter(" aan al alles als altijd andere ben bij daar dan dat de der deze die dit doch doen door dus een eens en er ge geen geweest haar had heb hebben heeft hem het hier hij hoe hun iemand iets ik in is ja je kan kon kunnen maar me meer men met mij mijn moet na naar niet niets nog nu of om omdat onder ons ook op over reeds te tegen toch toen tot u uit uw van veel voor want waren was wat werd wezen wie wil worden wordt zal ze zelf zich zij zijn zo zonder zou".split(" ")),e.Pipeline.registerFunction(e.du.stopWordFilter,"stopWordFilter-du")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.es.min.js b/assets/javascripts/lunr/min/lunr.es.min.js new file mode 100644 index 00000000..2989d342 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.es.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Spanish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,s){"function"==typeof define&&define.amd?define(s):"object"==typeof exports?module.exports=s():s()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.es=function(){this.pipeline.reset(),this.pipeline.add(e.es.trimmer,e.es.stopWordFilter,e.es.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.es.stemmer))},e.es.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.es.trimmer=e.trimmerSupport.generateTrimmer(e.es.wordCharacters),e.Pipeline.registerFunction(e.es.trimmer,"trimmer-es"),e.es.stemmer=function(){var s=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,n=new function(){function e(){if(A.out_grouping(x,97,252)){for(;!A.in_grouping(x,97,252);){if(A.cursor>=A.limit)return!0;A.cursor++}return!1}return!0}function n(){if(A.in_grouping(x,97,252)){var s=A.cursor;if(e()){if(A.cursor=s,!A.in_grouping(x,97,252))return!0;for(;!A.out_grouping(x,97,252);){if(A.cursor>=A.limit)return!0;A.cursor++}}return!1}return!0}function i(){var s,r=A.cursor;if(n()){if(A.cursor=r,!A.out_grouping(x,97,252))return;if(s=A.cursor,e()){if(A.cursor=s,!A.in_grouping(x,97,252)||A.cursor>=A.limit)return;A.cursor++}}g=A.cursor}function a(){for(;!A.in_grouping(x,97,252);){if(A.cursor>=A.limit)return!1;A.cursor++}for(;!A.out_grouping(x,97,252);){if(A.cursor>=A.limit)return!1;A.cursor++}return!0}function t(){var e=A.cursor;g=A.limit,p=g,v=g,i(),A.cursor=e,a()&&(p=A.cursor,a()&&(v=A.cursor))}function o(){for(var e;;){if(A.bra=A.cursor,e=A.find_among(k,6))switch(A.ket=A.cursor,e){case 1:A.slice_from("a");continue;case 2:A.slice_from("e");continue;case 3:A.slice_from("i");continue;case 4:A.slice_from("o");continue;case 5:A.slice_from("u");continue;case 6:if(A.cursor>=A.limit)break;A.cursor++;continue}break}}function u(){return g<=A.cursor}function w(){return p<=A.cursor}function c(){return v<=A.cursor}function m(){var e;if(A.ket=A.cursor,A.find_among_b(y,13)&&(A.bra=A.cursor,(e=A.find_among_b(q,11))&&u()))switch(e){case 1:A.bra=A.cursor,A.slice_from("iendo");break;case 2:A.bra=A.cursor,A.slice_from("ando");break;case 3:A.bra=A.cursor,A.slice_from("ar");break;case 4:A.bra=A.cursor,A.slice_from("er");break;case 5:A.bra=A.cursor,A.slice_from("ir");break;case 6:A.slice_del();break;case 7:A.eq_s_b(1,"u")&&A.slice_del()}}function l(e,s){if(!c())return!0;A.slice_del(),A.ket=A.cursor;var r=A.find_among_b(e,s);return r&&(A.bra=A.cursor,1==r&&c()&&A.slice_del()),!1}function d(e){return!c()||(A.slice_del(),A.ket=A.cursor,A.eq_s_b(2,e)&&(A.bra=A.cursor,c()&&A.slice_del()),!1)}function b(){var e;if(A.ket=A.cursor,e=A.find_among_b(S,46)){switch(A.bra=A.cursor,e){case 1:if(!c())return!1;A.slice_del();break;case 2:if(d("ic"))return!1;break;case 3:if(!c())return!1;A.slice_from("log");break;case 4:if(!c())return!1;A.slice_from("u");break;case 5:if(!c())return!1;A.slice_from("ente");break;case 6:if(!w())return!1;A.slice_del(),A.ket=A.cursor,e=A.find_among_b(C,4),e&&(A.bra=A.cursor,c()&&(A.slice_del(),1==e&&(A.ket=A.cursor,A.eq_s_b(2,"at")&&(A.bra=A.cursor,c()&&A.slice_del()))));break;case 7:if(l(P,3))return!1;break;case 8:if(l(F,3))return!1;break;case 9:if(d("at"))return!1}return!0}return!1}function f(){var e,s;if(A.cursor>=g&&(s=A.limit_backward,A.limit_backward=g,A.ket=A.cursor,e=A.find_among_b(W,12),A.limit_backward=s,e)){if(A.bra=A.cursor,1==e){if(!A.eq_s_b(1,"u"))return!1;A.slice_del()}return!0}return!1}function _(){var e,s,r,n;if(A.cursor>=g&&(s=A.limit_backward,A.limit_backward=g,A.ket=A.cursor,e=A.find_among_b(L,96),A.limit_backward=s,e))switch(A.bra=A.cursor,e){case 
1:r=A.limit-A.cursor,A.eq_s_b(1,"u")?(n=A.limit-A.cursor,A.eq_s_b(1,"g")?A.cursor=A.limit-n:A.cursor=A.limit-r):A.cursor=A.limit-r,A.bra=A.cursor;case 2:A.slice_del()}}function h(){var e,s;if(A.ket=A.cursor,e=A.find_among_b(z,8))switch(A.bra=A.cursor,e){case 1:u()&&A.slice_del();break;case 2:u()&&(A.slice_del(),A.ket=A.cursor,A.eq_s_b(1,"u")&&(A.bra=A.cursor,s=A.limit-A.cursor,A.eq_s_b(1,"g")&&(A.cursor=A.limit-s,u()&&A.slice_del())))}}var v,p,g,k=[new s("",-1,6),new s("á",0,1),new s("é",0,2),new s("í",0,3),new s("ó",0,4),new s("ú",0,5)],y=[new s("la",-1,-1),new s("sela",0,-1),new s("le",-1,-1),new s("me",-1,-1),new s("se",-1,-1),new s("lo",-1,-1),new s("selo",5,-1),new s("las",-1,-1),new s("selas",7,-1),new s("les",-1,-1),new s("los",-1,-1),new s("selos",10,-1),new s("nos",-1,-1)],q=[new s("ando",-1,6),new s("iendo",-1,6),new s("yendo",-1,7),new s("ándo",-1,2),new s("iéndo",-1,1),new s("ar",-1,6),new s("er",-1,6),new s("ir",-1,6),new s("ár",-1,3),new s("ér",-1,4),new s("ír",-1,5)],C=[new s("ic",-1,-1),new s("ad",-1,-1),new s("os",-1,-1),new s("iv",-1,1)],P=[new s("able",-1,1),new s("ible",-1,1),new s("ante",-1,1)],F=[new s("ic",-1,1),new s("abil",-1,1),new s("iv",-1,1)],S=[new s("ica",-1,1),new s("ancia",-1,2),new s("encia",-1,5),new s("adora",-1,2),new s("osa",-1,1),new s("ista",-1,1),new s("iva",-1,9),new s("anza",-1,1),new s("logía",-1,3),new s("idad",-1,8),new s("able",-1,1),new s("ible",-1,1),new s("ante",-1,2),new s("mente",-1,7),new s("amente",13,6),new s("ación",-1,2),new s("ución",-1,4),new s("ico",-1,1),new s("ismo",-1,1),new s("oso",-1,1),new s("amiento",-1,1),new s("imiento",-1,1),new s("ivo",-1,9),new s("ador",-1,2),new s("icas",-1,1),new s("ancias",-1,2),new s("encias",-1,5),new s("adoras",-1,2),new s("osas",-1,1),new s("istas",-1,1),new s("ivas",-1,9),new s("anzas",-1,1),new s("logías",-1,3),new s("idades",-1,8),new s("ables",-1,1),new s("ibles",-1,1),new s("aciones",-1,2),new s("uciones",-1,4),new s("adores",-1,2),new s("antes",-1,2),new s("icos",-1,1),new s("ismos",-1,1),new s("osos",-1,1),new s("amientos",-1,1),new s("imientos",-1,1),new s("ivos",-1,9)],W=[new s("ya",-1,1),new s("ye",-1,1),new s("yan",-1,1),new s("yen",-1,1),new s("yeron",-1,1),new s("yendo",-1,1),new s("yo",-1,1),new s("yas",-1,1),new s("yes",-1,1),new s("yais",-1,1),new s("yamos",-1,1),new s("yó",-1,1)],L=[new s("aba",-1,2),new s("ada",-1,2),new s("ida",-1,2),new s("ara",-1,2),new s("iera",-1,2),new s("ía",-1,2),new s("aría",5,2),new s("ería",5,2),new s("iría",5,2),new s("ad",-1,2),new s("ed",-1,2),new s("id",-1,2),new s("ase",-1,2),new s("iese",-1,2),new s("aste",-1,2),new s("iste",-1,2),new s("an",-1,2),new s("aban",16,2),new s("aran",16,2),new s("ieran",16,2),new s("ían",16,2),new s("arían",20,2),new s("erían",20,2),new s("irían",20,2),new s("en",-1,1),new s("asen",24,2),new s("iesen",24,2),new s("aron",-1,2),new s("ieron",-1,2),new s("arán",-1,2),new s("erán",-1,2),new s("irán",-1,2),new s("ado",-1,2),new s("ido",-1,2),new s("ando",-1,2),new s("iendo",-1,2),new s("ar",-1,2),new s("er",-1,2),new s("ir",-1,2),new s("as",-1,2),new s("abas",39,2),new s("adas",39,2),new s("idas",39,2),new s("aras",39,2),new s("ieras",39,2),new s("ías",39,2),new s("arías",45,2),new s("erías",45,2),new s("irías",45,2),new s("es",-1,1),new s("ases",49,2),new s("ieses",49,2),new s("abais",-1,2),new s("arais",-1,2),new s("ierais",-1,2),new s("íais",-1,2),new s("aríais",55,2),new s("eríais",55,2),new s("iríais",55,2),new s("aseis",-1,2),new s("ieseis",-1,2),new s("asteis",-1,2),new s("isteis",-1,2),new s("áis",-1,2),new 
s("éis",-1,1),new s("aréis",64,2),new s("eréis",64,2),new s("iréis",64,2),new s("ados",-1,2),new s("idos",-1,2),new s("amos",-1,2),new s("ábamos",70,2),new s("áramos",70,2),new s("iéramos",70,2),new s("íamos",70,2),new s("aríamos",74,2),new s("eríamos",74,2),new s("iríamos",74,2),new s("emos",-1,1),new s("aremos",78,2),new s("eremos",78,2),new s("iremos",78,2),new s("ásemos",78,2),new s("iésemos",78,2),new s("imos",-1,2),new s("arás",-1,2),new s("erás",-1,2),new s("irás",-1,2),new s("ís",-1,2),new s("ará",-1,2),new s("erá",-1,2),new s("irá",-1,2),new s("aré",-1,2),new s("eré",-1,2),new s("iré",-1,2),new s("ió",-1,2)],z=[new s("a",-1,1),new s("e",-1,2),new s("o",-1,1),new s("os",-1,1),new s("á",-1,1),new s("é",-1,2),new s("í",-1,1),new s("ó",-1,1)],x=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,1,17,4,10],A=new r;this.setCurrent=function(e){A.setCurrent(e)},this.getCurrent=function(){return A.getCurrent()},this.stem=function(){var e=A.cursor;return t(),A.limit_backward=e,A.cursor=A.limit,m(),A.cursor=A.limit,b()||(A.cursor=A.limit,f()||(A.cursor=A.limit,_())),A.cursor=A.limit,h(),A.cursor=A.limit_backward,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.es.stemmer,"stemmer-es"),e.es.stopWordFilter=e.generateStopWordFilter("a al algo algunas algunos ante antes como con contra cual cuando de del desde donde durante e el ella ellas ellos en entre era erais eran eras eres es esa esas ese eso esos esta estaba estabais estaban estabas estad estada estadas estado estados estamos estando estar estaremos estará estarán estarás estaré estaréis estaría estaríais estaríamos estarían estarías estas este estemos esto estos estoy estuve estuviera estuvierais estuvieran estuvieras estuvieron estuviese estuvieseis estuviesen estuvieses estuvimos estuviste estuvisteis estuviéramos estuviésemos estuvo está estábamos estáis están estás esté estéis estén estés fue fuera fuerais fueran fueras fueron fuese fueseis fuesen fueses fui fuimos fuiste fuisteis fuéramos fuésemos ha habida habidas habido habidos habiendo habremos habrá habrán habrás habré habréis habría habríais habríamos habrían habrías habéis había habíais habíamos habían habías han has hasta hay haya hayamos hayan hayas hayáis he hemos hube hubiera hubierais hubieran hubieras hubieron hubiese hubieseis hubiesen hubieses hubimos hubiste hubisteis hubiéramos hubiésemos hubo la las le les lo los me mi mis mucho muchos muy más mí mía mías mío míos nada ni no nos nosotras nosotros nuestra nuestras nuestro nuestros o os otra otras otro otros para pero poco por porque que quien quienes qué se sea seamos sean seas seremos será serán serás seré seréis sería seríais seríamos serían serías seáis sido siendo sin sobre sois somos son soy su sus suya suyas suyo suyos sí también tanto te tendremos tendrá tendrán tendrás tendré tendréis tendría tendríais tendríamos tendrían tendrías tened tenemos tenga tengamos tengan tengas tengo tengáis tenida tenidas tenido tenidos teniendo tenéis tenía teníais teníamos tenían tenías ti tiene tienen tienes todo todos tu tus tuve tuviera tuvierais tuvieran tuvieras tuvieron tuviese tuvieseis tuviesen tuvieses tuvimos tuviste tuvisteis tuviéramos tuviésemos tuvo tuya tuyas tuyo tuyos tú un una uno unos vosotras vosotros vuestra vuestras vuestro vuestros y ya yo él éramos".split(" ")),e.Pipeline.registerFunction(e.es.stopWordFilter,"stopWordFilter-es")}}); \ No newline at end of file 
diff --git a/assets/javascripts/lunr/min/lunr.fi.min.js b/assets/javascripts/lunr/min/lunr.fi.min.js new file mode 100644 index 00000000..29f5dfce --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.fi.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Finnish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(i,e){"function"==typeof define&&define.amd?define(e):"object"==typeof exports?module.exports=e():e()(i.lunr)}(this,function(){return function(i){if(void 0===i)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===i.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");i.fi=function(){this.pipeline.reset(),this.pipeline.add(i.fi.trimmer,i.fi.stopWordFilter,i.fi.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(i.fi.stemmer))},i.fi.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",i.fi.trimmer=i.trimmerSupport.generateTrimmer(i.fi.wordCharacters),i.Pipeline.registerFunction(i.fi.trimmer,"trimmer-fi"),i.fi.stemmer=function(){var e=i.stemmerSupport.Among,r=i.stemmerSupport.SnowballProgram,n=new function(){function i(){f=A.limit,d=f,n()||(f=A.cursor,n()||(d=A.cursor))}function n(){for(var i;;){if(i=A.cursor,A.in_grouping(W,97,246))break;if(A.cursor=i,i>=A.limit)return!0;A.cursor++}for(A.cursor=i;!A.out_grouping(W,97,246);){if(A.cursor>=A.limit)return!0;A.cursor++}return!1}function t(){return d<=A.cursor}function s(){var i,e;if(A.cursor>=f)if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,i=A.find_among_b(h,10)){switch(A.bra=A.cursor,A.limit_backward=e,i){case 1:if(!A.in_grouping_b(x,97,246))return;break;case 2:if(!t())return}A.slice_del()}else A.limit_backward=e}function o(){var i,e,r;if(A.cursor>=f)if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,i=A.find_among_b(v,9))switch(A.bra=A.cursor,A.limit_backward=e,i){case 1:r=A.limit-A.cursor,A.eq_s_b(1,"k")||(A.cursor=A.limit-r,A.slice_del());break;case 2:A.slice_del(),A.ket=A.cursor,A.eq_s_b(3,"kse")&&(A.bra=A.cursor,A.slice_from("ksi"));break;case 3:A.slice_del();break;case 4:A.find_among_b(p,6)&&A.slice_del();break;case 5:A.find_among_b(g,6)&&A.slice_del();break;case 6:A.find_among_b(j,2)&&A.slice_del()}else A.limit_backward=e}function l(){return A.find_among_b(q,7)}function a(){return A.eq_s_b(1,"i")&&A.in_grouping_b(L,97,246)}function u(){var i,e,r;if(A.cursor>=f)if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,i=A.find_among_b(C,30)){switch(A.bra=A.cursor,A.limit_backward=e,i){case 1:if(!A.eq_s_b(1,"a"))return;break;case 2:case 9:if(!A.eq_s_b(1,"e"))return;break;case 3:if(!A.eq_s_b(1,"i"))return;break;case 4:if(!A.eq_s_b(1,"o"))return;break;case 5:if(!A.eq_s_b(1,"ä"))return;break;case 6:if(!A.eq_s_b(1,"ö"))return;break;case 7:if(r=A.limit-A.cursor,!l()&&(A.cursor=A.limit-r,!A.eq_s_b(2,"ie"))){A.cursor=A.limit-r;break}if(A.cursor=A.limit-r,A.cursor<=A.limit_backward){A.cursor=A.limit-r;break}A.cursor--,A.bra=A.cursor;break;case 8:if(!A.in_grouping_b(W,97,246)||!A.out_grouping_b(W,97,246))return}A.slice_del(),k=!0}else A.limit_backward=e}function c(){var 
i,e,r;if(A.cursor>=d)if(e=A.limit_backward,A.limit_backward=d,A.ket=A.cursor,i=A.find_among_b(P,14)){if(A.bra=A.cursor,A.limit_backward=e,1==i){if(r=A.limit-A.cursor,A.eq_s_b(2,"po"))return;A.cursor=A.limit-r}A.slice_del()}else A.limit_backward=e}function m(){var i;A.cursor>=f&&(i=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,A.find_among_b(F,2)?(A.bra=A.cursor,A.limit_backward=i,A.slice_del()):A.limit_backward=i)}function w(){var i,e,r,n,t,s;if(A.cursor>=f){if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,A.eq_s_b(1,"t")&&(A.bra=A.cursor,r=A.limit-A.cursor,A.in_grouping_b(W,97,246)&&(A.cursor=A.limit-r,A.slice_del(),A.limit_backward=e,n=A.limit-A.cursor,A.cursor>=d&&(A.cursor=d,t=A.limit_backward,A.limit_backward=A.cursor,A.cursor=A.limit-n,A.ket=A.cursor,i=A.find_among_b(S,2))))){if(A.bra=A.cursor,A.limit_backward=t,1==i){if(s=A.limit-A.cursor,A.eq_s_b(2,"po"))return;A.cursor=A.limit-s}return void A.slice_del()}A.limit_backward=e}}function _(){var i,e,r,n;if(A.cursor>=f){for(i=A.limit_backward,A.limit_backward=f,e=A.limit-A.cursor,l()&&(A.cursor=A.limit-e,A.ket=A.cursor,A.cursor>A.limit_backward&&(A.cursor--,A.bra=A.cursor,A.slice_del())),A.cursor=A.limit-e,A.ket=A.cursor,A.in_grouping_b(y,97,228)&&(A.bra=A.cursor,A.out_grouping_b(W,97,246)&&A.slice_del()),A.cursor=A.limit-e,A.ket=A.cursor,A.eq_s_b(1,"j")&&(A.bra=A.cursor,r=A.limit-A.cursor,A.eq_s_b(1,"o")?A.slice_del():(A.cursor=A.limit-r,A.eq_s_b(1,"u")&&A.slice_del())),A.cursor=A.limit-e,A.ket=A.cursor,A.eq_s_b(1,"o")&&(A.bra=A.cursor,A.eq_s_b(1,"j")&&A.slice_del()),A.cursor=A.limit-e,A.limit_backward=i;;){if(n=A.limit-A.cursor,A.out_grouping_b(W,97,246)){A.cursor=A.limit-n;break}if(A.cursor=A.limit-n,A.cursor<=A.limit_backward)return;A.cursor--}A.ket=A.cursor,A.cursor>A.limit_backward&&(A.cursor--,A.bra=A.cursor,b=A.slice_to(),A.eq_v_b(b)&&A.slice_del())}}var k,b,d,f,h=[new e("pa",-1,1),new e("sti",-1,2),new e("kaan",-1,1),new e("han",-1,1),new e("kin",-1,1),new e("hän",-1,1),new e("kään",-1,1),new e("ko",-1,1),new e("pä",-1,1),new e("kö",-1,1)],p=[new e("lla",-1,-1),new e("na",-1,-1),new e("ssa",-1,-1),new e("ta",-1,-1),new e("lta",3,-1),new e("sta",3,-1)],g=[new e("llä",-1,-1),new e("nä",-1,-1),new e("ssä",-1,-1),new e("tä",-1,-1),new e("ltä",3,-1),new e("stä",3,-1)],j=[new e("lle",-1,-1),new e("ine",-1,-1)],v=[new e("nsa",-1,3),new e("mme",-1,3),new e("nne",-1,3),new e("ni",-1,2),new e("si",-1,1),new e("an",-1,4),new e("en",-1,6),new e("än",-1,5),new e("nsä",-1,3)],q=[new e("aa",-1,-1),new e("ee",-1,-1),new e("ii",-1,-1),new e("oo",-1,-1),new e("uu",-1,-1),new e("ää",-1,-1),new e("öö",-1,-1)],C=[new e("a",-1,8),new e("lla",0,-1),new e("na",0,-1),new e("ssa",0,-1),new e("ta",0,-1),new e("lta",4,-1),new e("sta",4,-1),new e("tta",4,9),new e("lle",-1,-1),new e("ine",-1,-1),new e("ksi",-1,-1),new e("n",-1,7),new e("han",11,1),new e("den",11,-1,a),new e("seen",11,-1,l),new e("hen",11,2),new e("tten",11,-1,a),new e("hin",11,3),new e("siin",11,-1,a),new e("hon",11,4),new e("hän",11,5),new e("hön",11,6),new e("ä",-1,8),new e("llä",22,-1),new e("nä",22,-1),new e("ssä",22,-1),new e("tä",22,-1),new e("ltä",26,-1),new e("stä",26,-1),new e("ttä",26,9)],P=[new e("eja",-1,-1),new e("mma",-1,1),new e("imma",1,-1),new e("mpa",-1,1),new e("impa",3,-1),new e("mmi",-1,1),new e("immi",5,-1),new e("mpi",-1,1),new e("impi",7,-1),new e("ejä",-1,-1),new e("mmä",-1,1),new e("immä",10,-1),new e("mpä",-1,1),new e("impä",12,-1)],F=[new e("i",-1,-1),new e("j",-1,-1)],S=[new e("mma",-1,1),new 
e("imma",0,-1)],y=[17,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8],W=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32],L=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32],x=[17,97,24,1,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32],A=new r;this.setCurrent=function(i){A.setCurrent(i)},this.getCurrent=function(){return A.getCurrent()},this.stem=function(){var e=A.cursor;return i(),k=!1,A.limit_backward=e,A.cursor=A.limit,s(),A.cursor=A.limit,o(),A.cursor=A.limit,u(),A.cursor=A.limit,c(),A.cursor=A.limit,k?(m(),A.cursor=A.limit):(A.cursor=A.limit,w(),A.cursor=A.limit),_(),!0}};return function(i){return"function"==typeof i.update?i.update(function(i){return n.setCurrent(i),n.stem(),n.getCurrent()}):(n.setCurrent(i),n.stem(),n.getCurrent())}}(),i.Pipeline.registerFunction(i.fi.stemmer,"stemmer-fi"),i.fi.stopWordFilter=i.generateStopWordFilter("ei eivät emme en et ette että he heidän heidät heihin heille heillä heiltä heissä heistä heitä hän häneen hänelle hänellä häneltä hänen hänessä hänestä hänet häntä itse ja johon joiden joihin joiksi joilla joille joilta joina joissa joista joita joka joksi jolla jolle jolta jona jonka jos jossa josta jota jotka kanssa keiden keihin keiksi keille keillä keiltä keinä keissä keistä keitä keneen keneksi kenelle kenellä keneltä kenen kenenä kenessä kenestä kenet ketkä ketkä ketä koska kuin kuka kun me meidän meidät meihin meille meillä meiltä meissä meistä meitä mihin miksi mikä mille millä miltä minkä minkä minua minulla minulle minulta minun minussa minusta minut minuun minä minä missä mistä mitkä mitä mukaan mutta ne niiden niihin niiksi niille niillä niiltä niin niin niinä niissä niistä niitä noiden noihin noiksi noilla noille noilta noin noina noissa noista noita nuo nyt näiden näihin näiksi näille näillä näiltä näinä näissä näistä näitä nämä ole olemme olen olet olette oli olimme olin olisi olisimme olisin olisit olisitte olisivat olit olitte olivat olla olleet ollut on ovat poikki se sekä sen siihen siinä siitä siksi sille sillä sillä siltä sinua sinulla sinulle sinulta sinun sinussa sinusta sinut sinuun sinä sinä sitä tai te teidän teidät teihin teille teillä teiltä teissä teistä teitä tuo tuohon tuoksi tuolla tuolle tuolta tuon tuona tuossa tuosta tuota tähän täksi tälle tällä tältä tämä tämän tänä tässä tästä tätä vaan vai vaikka yli".split(" ")),i.Pipeline.registerFunction(i.fi.stopWordFilter,"stopWordFilter-fi")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.fr.min.js b/assets/javascripts/lunr/min/lunr.fr.min.js new file mode 100644 index 00000000..68cd0094 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.fr.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `French` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.fr=function(){this.pipeline.reset(),this.pipeline.add(e.fr.trimmer,e.fr.stopWordFilter,e.fr.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.fr.stemmer))},e.fr.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.fr.trimmer=e.trimmerSupport.generateTrimmer(e.fr.wordCharacters),e.Pipeline.registerFunction(e.fr.trimmer,"trimmer-fr"),e.fr.stemmer=function(){var r=e.stemmerSupport.Among,s=e.stemmerSupport.SnowballProgram,i=new function(){function e(e,r,s){return!(!W.eq_s(1,e)||(W.ket=W.cursor,!W.in_grouping(F,97,251)))&&(W.slice_from(r),W.cursor=s,!0)}function i(e,r,s){return!!W.eq_s(1,e)&&(W.ket=W.cursor,W.slice_from(r),W.cursor=s,!0)}function n(){for(var r,s;;){if(r=W.cursor,W.in_grouping(F,97,251)){if(W.bra=W.cursor,s=W.cursor,e("u","U",r))continue;if(W.cursor=s,e("i","I",r))continue;if(W.cursor=s,i("y","Y",r))continue}if(W.cursor=r,W.bra=r,!e("y","Y",r)){if(W.cursor=r,W.eq_s(1,"q")&&(W.bra=W.cursor,i("u","U",r)))continue;if(W.cursor=r,r>=W.limit)return;W.cursor++}}}function t(){for(;!W.in_grouping(F,97,251);){if(W.cursor>=W.limit)return!0;W.cursor++}for(;!W.out_grouping(F,97,251);){if(W.cursor>=W.limit)return!0;W.cursor++}return!1}function u(){var e=W.cursor;if(q=W.limit,g=q,p=q,W.in_grouping(F,97,251)&&W.in_grouping(F,97,251)&&W.cursor=W.limit){W.cursor=q;break}W.cursor++}while(!W.in_grouping(F,97,251))}q=W.cursor,W.cursor=e,t()||(g=W.cursor,t()||(p=W.cursor))}function o(){for(var e,r;;){if(r=W.cursor,W.bra=r,!(e=W.find_among(h,4)))break;switch(W.ket=W.cursor,e){case 1:W.slice_from("i");break;case 2:W.slice_from("u");break;case 3:W.slice_from("y");break;case 4:if(W.cursor>=W.limit)return;W.cursor++}}}function c(){return q<=W.cursor}function a(){return g<=W.cursor}function l(){return p<=W.cursor}function w(){var e,r;if(W.ket=W.cursor,e=W.find_among_b(C,43)){switch(W.bra=W.cursor,e){case 1:if(!l())return!1;W.slice_del();break;case 2:if(!l())return!1;W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"ic")&&(W.bra=W.cursor,l()?W.slice_del():W.slice_from("iqU"));break;case 3:if(!l())return!1;W.slice_from("log");break;case 4:if(!l())return!1;W.slice_from("u");break;case 5:if(!l())return!1;W.slice_from("ent");break;case 6:if(!c())return!1;if(W.slice_del(),W.ket=W.cursor,e=W.find_among_b(z,6))switch(W.bra=W.cursor,e){case 1:l()&&(W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"at")&&(W.bra=W.cursor,l()&&W.slice_del()));break;case 2:l()?W.slice_del():a()&&W.slice_from("eux");break;case 3:l()&&W.slice_del();break;case 4:c()&&W.slice_from("i")}break;case 7:if(!l())return!1;if(W.slice_del(),W.ket=W.cursor,e=W.find_among_b(y,3))switch(W.bra=W.cursor,e){case 1:l()?W.slice_del():W.slice_from("abl");break;case 2:l()?W.slice_del():W.slice_from("iqU");break;case 3:l()&&W.slice_del()}break;case 8:if(!l())return!1;if(W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"at")&&(W.bra=W.cursor,l()&&(W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"ic")))){W.bra=W.cursor,l()?W.slice_del():W.slice_from("iqU");break}break;case 9:W.slice_from("eau");break;case 10:if(!a())return!1;W.slice_from("al");break;case 11:if(l())W.slice_del();else{if(!a())return!1;W.slice_from("eux")}break;case 12:if(!a()||!W.out_grouping_b(F,97,251))return!1;W.slice_del();break;case 13:return c()&&W.slice_from("ant"),!1;case 14:return c()&&W.slice_from("ent"),!1;case 15:return r=W.limit-W.cursor,W.in_grouping_b(F,97,251)&&c()&&(W.cursor=W.limit-r,W.slice_del()),!1}return!0}return!1}function f(){var 
e,r;if(W.cursor=q){if(s=W.limit_backward,W.limit_backward=q,W.ket=W.cursor,e=W.find_among_b(P,7))switch(W.bra=W.cursor,e){case 1:if(l()){if(i=W.limit-W.cursor,!W.eq_s_b(1,"s")&&(W.cursor=W.limit-i,!W.eq_s_b(1,"t")))break;W.slice_del()}break;case 2:W.slice_from("i");break;case 3:W.slice_del();break;case 4:W.eq_s_b(2,"gu")&&W.slice_del()}W.limit_backward=s}}function b(){var e=W.limit-W.cursor;W.find_among_b(U,5)&&(W.cursor=W.limit-e,W.ket=W.cursor,W.cursor>W.limit_backward&&(W.cursor--,W.bra=W.cursor,W.slice_del()))}function d(){for(var e,r=1;W.out_grouping_b(F,97,251);)r--;if(r<=0){if(W.ket=W.cursor,e=W.limit-W.cursor,!W.eq_s_b(1,"é")&&(W.cursor=W.limit-e,!W.eq_s_b(1,"è")))return;W.bra=W.cursor,W.slice_from("e")}}function k(){if(!w()&&(W.cursor=W.limit,!f()&&(W.cursor=W.limit,!m())))return W.cursor=W.limit,void _();W.cursor=W.limit,W.ket=W.cursor,W.eq_s_b(1,"Y")?(W.bra=W.cursor,W.slice_from("i")):(W.cursor=W.limit,W.eq_s_b(1,"ç")&&(W.bra=W.cursor,W.slice_from("c")))}var p,g,q,v=[new r("col",-1,-1),new r("par",-1,-1),new r("tap",-1,-1)],h=[new r("",-1,4),new r("I",0,1),new r("U",0,2),new r("Y",0,3)],z=[new r("iqU",-1,3),new r("abl",-1,3),new r("Ièr",-1,4),new r("ièr",-1,4),new r("eus",-1,2),new r("iv",-1,1)],y=[new r("ic",-1,2),new r("abil",-1,1),new r("iv",-1,3)],C=[new r("iqUe",-1,1),new r("atrice",-1,2),new r("ance",-1,1),new r("ence",-1,5),new r("logie",-1,3),new r("able",-1,1),new r("isme",-1,1),new r("euse",-1,11),new r("iste",-1,1),new r("ive",-1,8),new r("if",-1,8),new r("usion",-1,4),new r("ation",-1,2),new r("ution",-1,4),new r("ateur",-1,2),new r("iqUes",-1,1),new r("atrices",-1,2),new r("ances",-1,1),new r("ences",-1,5),new r("logies",-1,3),new r("ables",-1,1),new r("ismes",-1,1),new r("euses",-1,11),new r("istes",-1,1),new r("ives",-1,8),new r("ifs",-1,8),new r("usions",-1,4),new r("ations",-1,2),new r("utions",-1,4),new r("ateurs",-1,2),new r("ments",-1,15),new r("ements",30,6),new r("issements",31,12),new r("ités",-1,7),new r("ment",-1,15),new r("ement",34,6),new r("issement",35,12),new r("amment",34,13),new r("emment",34,14),new r("aux",-1,10),new r("eaux",39,9),new r("eux",-1,1),new r("ité",-1,7)],x=[new r("ira",-1,1),new r("ie",-1,1),new r("isse",-1,1),new r("issante",-1,1),new r("i",-1,1),new r("irai",4,1),new r("ir",-1,1),new r("iras",-1,1),new r("ies",-1,1),new r("îmes",-1,1),new r("isses",-1,1),new r("issantes",-1,1),new r("îtes",-1,1),new r("is",-1,1),new r("irais",13,1),new r("issais",13,1),new r("irions",-1,1),new r("issions",-1,1),new r("irons",-1,1),new r("issons",-1,1),new r("issants",-1,1),new r("it",-1,1),new r("irait",21,1),new r("issait",21,1),new r("issant",-1,1),new r("iraIent",-1,1),new r("issaIent",-1,1),new r("irent",-1,1),new r("issent",-1,1),new r("iront",-1,1),new r("ît",-1,1),new r("iriez",-1,1),new r("issiez",-1,1),new r("irez",-1,1),new r("issez",-1,1)],I=[new r("a",-1,3),new r("era",0,2),new r("asse",-1,3),new r("ante",-1,3),new r("ée",-1,2),new r("ai",-1,3),new r("erai",5,2),new r("er",-1,2),new r("as",-1,3),new r("eras",8,2),new r("âmes",-1,3),new r("asses",-1,3),new r("antes",-1,3),new r("âtes",-1,3),new r("ées",-1,2),new r("ais",-1,3),new r("erais",15,2),new r("ions",-1,1),new r("erions",17,2),new r("assions",17,3),new r("erons",-1,2),new r("ants",-1,3),new r("és",-1,2),new r("ait",-1,3),new r("erait",23,2),new r("ant",-1,3),new r("aIent",-1,3),new r("eraIent",26,2),new r("èrent",-1,2),new r("assent",-1,3),new r("eront",-1,2),new r("ât",-1,3),new r("ez",-1,2),new r("iez",32,2),new r("eriez",33,2),new r("assiez",33,3),new r("erez",32,2),new 
r("é",-1,2)],P=[new r("e",-1,3),new r("Ière",0,2),new r("ière",0,2),new r("ion",-1,1),new r("Ier",-1,2),new r("ier",-1,2),new r("ë",-1,4)],U=[new r("ell",-1,-1),new r("eill",-1,-1),new r("enn",-1,-1),new r("onn",-1,-1),new r("ett",-1,-1)],F=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,128,130,103,8,5],S=[1,65,20,0,0,0,0,0,0,0,0,0,0,0,0,0,128],W=new s;this.setCurrent=function(e){W.setCurrent(e)},this.getCurrent=function(){return W.getCurrent()},this.stem=function(){var e=W.cursor;return n(),W.cursor=e,u(),W.limit_backward=e,W.cursor=W.limit,k(),W.cursor=W.limit,b(),W.cursor=W.limit,d(),W.cursor=W.limit_backward,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.fr.stemmer,"stemmer-fr"),e.fr.stopWordFilter=e.generateStopWordFilter("ai aie aient aies ait as au aura aurai auraient aurais aurait auras aurez auriez aurions aurons auront aux avaient avais avait avec avez aviez avions avons ayant ayez ayons c ce ceci celà ces cet cette d dans de des du elle en es est et eu eue eues eurent eus eusse eussent eusses eussiez eussions eut eux eûmes eût eûtes furent fus fusse fussent fusses fussiez fussions fut fûmes fût fûtes ici il ils j je l la le les leur leurs lui m ma mais me mes moi mon même n ne nos notre nous on ont ou par pas pour qu que quel quelle quelles quels qui s sa sans se sera serai seraient serais serait seras serez seriez serions serons seront ses soi soient sois soit sommes son sont soyez soyons suis sur t ta te tes toi ton tu un une vos votre vous y à étaient étais était étant étiez étions été étée étées étés êtes".split(" ")),e.Pipeline.registerFunction(e.fr.stopWordFilter,"stopWordFilter-fr")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.hi.min.js b/assets/javascripts/lunr/min/lunr.hi.min.js new file mode 100644 index 00000000..7dbc4140 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.hi.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.hi=function(){this.pipeline.reset(),this.pipeline.add(e.hi.trimmer,e.hi.stopWordFilter,e.hi.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.hi.stemmer))},e.hi.wordCharacters="ऀ-ःऄ-एऐ-टठ-यर-िी-ॏॐ-य़ॠ-९॰-ॿa-zA-Za-zA-Z0-90-9",e.hi.trimmer=e.trimmerSupport.generateTrimmer(e.hi.wordCharacters),e.Pipeline.registerFunction(e.hi.trimmer,"trimmer-hi"),e.hi.stopWordFilter=e.generateStopWordFilter("अत अपना अपनी अपने अभी अंदर आदि आप इत्यादि इन इनका इन्हीं इन्हें इन्हों इस इसका इसकी इसके इसमें इसी इसे उन उनका उनकी उनके उनको उन्हीं उन्हें उन्हों उस उसके उसी उसे एक एवं एस ऐसे और कई कर करता करते करना करने करें कहते कहा का काफ़ी कि कितना किन्हें किन्हों किया किर किस किसी किसे की कुछ कुल के को कोई कौन कौनसा गया घर जब जहाँ जा जितना जिन जिन्हें जिन्हों जिस जिसे जीधर जैसा जैसे जो तक तब तरह तिन तिन्हें तिन्हों तिस तिसे तो था थी थे दबारा दिया दुसरा दूसरे दो द्वारा न नके नहीं ना निहायत नीचे ने पर पहले पूरा पे फिर बनी बही बहुत बाद बाला बिलकुल भी भीतर मगर मानो मे में यदि यह यहाँ यही या यिह ये रखें रहा रहे ऱ्वासा लिए लिये लेकिन व वग़ैरह वर्ग वह वहाँ वहीं वाले वुह वे वो सकता सकते सबसे सभी साथ साबुत साभ सारा से सो संग ही हुआ हुई हुए है हैं हो होता होती होते होना होने".split(" ")),e.hi.stemmer=function(){return function(e){return"function"==typeof e.update?e.update(function(e){return e}):e}}();var r=e.wordcut;r.init(),e.hi.tokenizer=function(i){if(!arguments.length||null==i||void 0==i)return[];if(Array.isArray(i))return i.map(function(r){return isLunr2?new e.Token(r.toLowerCase()):r.toLowerCase()});var t=i.toString().toLowerCase().replace(/^\s+/,"");return r.cut(t).split("|")},e.Pipeline.registerFunction(e.hi.stemmer,"stemmer-hi"),e.Pipeline.registerFunction(e.hi.stopWordFilter,"stopWordFilter-hi")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.hu.min.js b/assets/javascripts/lunr/min/lunr.hu.min.js new file mode 100644 index 00000000..ed9d909f --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.hu.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Hungarian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,n){"function"==typeof define&&define.amd?define(n):"object"==typeof exports?module.exports=n():n()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.hu=function(){this.pipeline.reset(),this.pipeline.add(e.hu.trimmer,e.hu.stopWordFilter,e.hu.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.hu.stemmer))},e.hu.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.hu.trimmer=e.trimmerSupport.generateTrimmer(e.hu.wordCharacters),e.Pipeline.registerFunction(e.hu.trimmer,"trimmer-hu"),e.hu.stemmer=function(){var n=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,i=new function(){function e(){var e,n=L.cursor;if(d=L.limit,L.in_grouping(W,97,252))for(;;){if(e=L.cursor,L.out_grouping(W,97,252))return L.cursor=e,L.find_among(g,8)||(L.cursor=e,e=L.limit)return void(d=e);L.cursor++}if(L.cursor=n,L.out_grouping(W,97,252)){for(;!L.in_grouping(W,97,252);){if(L.cursor>=L.limit)return;L.cursor++}d=L.cursor}}function i(){return d<=L.cursor}function a(){var e;if(L.ket=L.cursor,(e=L.find_among_b(h,2))&&(L.bra=L.cursor,i()))switch(e){case 1:L.slice_from("a");break;case 2:L.slice_from("e")}}function t(){var e=L.limit-L.cursor;return!!L.find_among_b(p,23)&&(L.cursor=L.limit-e,!0)}function s(){if(L.cursor>L.limit_backward){L.cursor--,L.ket=L.cursor;var e=L.cursor-1;L.limit_backward<=e&&e<=L.limit&&(L.cursor=e,L.bra=e,L.slice_del())}}function c(){var e;if(L.ket=L.cursor,(e=L.find_among_b(_,2))&&(L.bra=L.cursor,i())){if((1==e||2==e)&&!t())return;L.slice_del(),s()}}function o(){L.ket=L.cursor,L.find_among_b(v,44)&&(L.bra=L.cursor,i()&&(L.slice_del(),a()))}function w(){var e;if(L.ket=L.cursor,(e=L.find_among_b(z,3))&&(L.bra=L.cursor,i()))switch(e){case 1:L.slice_from("e");break;case 2:case 3:L.slice_from("a")}}function l(){var e;if(L.ket=L.cursor,(e=L.find_among_b(y,6))&&(L.bra=L.cursor,i()))switch(e){case 1:case 2:L.slice_del();break;case 3:L.slice_from("a");break;case 4:L.slice_from("e")}}function u(){var e;if(L.ket=L.cursor,(e=L.find_among_b(j,2))&&(L.bra=L.cursor,i())){if((1==e||2==e)&&!t())return;L.slice_del(),s()}}function m(){var e;if(L.ket=L.cursor,(e=L.find_among_b(C,7))&&(L.bra=L.cursor,i()))switch(e){case 1:L.slice_from("a");break;case 2:L.slice_from("e");break;case 3:case 4:case 5:case 6:case 7:L.slice_del()}}function k(){var e;if(L.ket=L.cursor,(e=L.find_among_b(P,12))&&(L.bra=L.cursor,i()))switch(e){case 1:case 4:case 7:case 9:L.slice_del();break;case 2:case 5:case 8:L.slice_from("e");break;case 3:case 6:L.slice_from("a")}}function f(){var e;if(L.ket=L.cursor,(e=L.find_among_b(F,31))&&(L.bra=L.cursor,i()))switch(e){case 1:case 4:case 7:case 8:case 9:case 12:case 13:case 16:case 17:case 18:L.slice_del();break;case 2:case 5:case 10:case 14:case 19:L.slice_from("a");break;case 3:case 6:case 11:case 15:case 20:L.slice_from("e")}}function b(){var e;if(L.ket=L.cursor,(e=L.find_among_b(S,42))&&(L.bra=L.cursor,i()))switch(e){case 1:case 4:case 5:case 6:case 9:case 10:case 11:case 14:case 15:case 16:case 17:case 20:case 21:case 24:case 25:case 26:case 29:L.slice_del();break;case 2:case 7:case 12:case 18:case 22:case 27:L.slice_from("a");break;case 3:case 8:case 13:case 19:case 23:case 28:L.slice_from("e")}}var d,g=[new n("cs",-1,-1),new n("dzs",-1,-1),new n("gy",-1,-1),new n("ly",-1,-1),new n("ny",-1,-1),new n("sz",-1,-1),new n("ty",-1,-1),new n("zs",-1,-1)],h=[new n("á",-1,1),new n("é",-1,2)],p=[new n("bb",-1,-1),new n("cc",-1,-1),new n("dd",-1,-1),new n("ff",-1,-1),new n("gg",-1,-1),new n("jj",-1,-1),new n("kk",-1,-1),new n("ll",-1,-1),new n("mm",-1,-1),new n("nn",-1,-1),new n("pp",-1,-1),new 
n("rr",-1,-1),new n("ccs",-1,-1),new n("ss",-1,-1),new n("zzs",-1,-1),new n("tt",-1,-1),new n("vv",-1,-1),new n("ggy",-1,-1),new n("lly",-1,-1),new n("nny",-1,-1),new n("tty",-1,-1),new n("ssz",-1,-1),new n("zz",-1,-1)],_=[new n("al",-1,1),new n("el",-1,2)],v=[new n("ba",-1,-1),new n("ra",-1,-1),new n("be",-1,-1),new n("re",-1,-1),new n("ig",-1,-1),new n("nak",-1,-1),new n("nek",-1,-1),new n("val",-1,-1),new n("vel",-1,-1),new n("ul",-1,-1),new n("nál",-1,-1),new n("nél",-1,-1),new n("ból",-1,-1),new n("ról",-1,-1),new n("tól",-1,-1),new n("bõl",-1,-1),new n("rõl",-1,-1),new n("tõl",-1,-1),new n("ül",-1,-1),new n("n",-1,-1),new n("an",19,-1),new n("ban",20,-1),new n("en",19,-1),new n("ben",22,-1),new n("képpen",22,-1),new n("on",19,-1),new n("ön",19,-1),new n("képp",-1,-1),new n("kor",-1,-1),new n("t",-1,-1),new n("at",29,-1),new n("et",29,-1),new n("ként",29,-1),new n("anként",32,-1),new n("enként",32,-1),new n("onként",32,-1),new n("ot",29,-1),new n("ért",29,-1),new n("öt",29,-1),new n("hez",-1,-1),new n("hoz",-1,-1),new n("höz",-1,-1),new n("vá",-1,-1),new n("vé",-1,-1)],z=[new n("án",-1,2),new n("én",-1,1),new n("ánként",-1,3)],y=[new n("stul",-1,2),new n("astul",0,1),new n("ástul",0,3),new n("stül",-1,2),new n("estül",3,1),new n("éstül",3,4)],j=[new n("á",-1,1),new n("é",-1,2)],C=[new n("k",-1,7),new n("ak",0,4),new n("ek",0,6),new n("ok",0,5),new n("ák",0,1),new n("ék",0,2),new n("ök",0,3)],P=[new n("éi",-1,7),new n("áéi",0,6),new n("ééi",0,5),new n("é",-1,9),new n("ké",3,4),new n("aké",4,1),new n("eké",4,1),new n("oké",4,1),new n("áké",4,3),new n("éké",4,2),new n("öké",4,1),new n("éé",3,8)],F=[new n("a",-1,18),new n("ja",0,17),new n("d",-1,16),new n("ad",2,13),new n("ed",2,13),new n("od",2,13),new n("ád",2,14),new n("éd",2,15),new n("öd",2,13),new n("e",-1,18),new n("je",9,17),new n("nk",-1,4),new n("unk",11,1),new n("ánk",11,2),new n("énk",11,3),new n("ünk",11,1),new n("uk",-1,8),new n("juk",16,7),new n("ájuk",17,5),new n("ük",-1,8),new n("jük",19,7),new n("éjük",20,6),new n("m",-1,12),new n("am",22,9),new n("em",22,9),new n("om",22,9),new n("ám",22,10),new n("ém",22,11),new n("o",-1,18),new n("á",-1,19),new n("é",-1,20)],S=[new n("id",-1,10),new n("aid",0,9),new n("jaid",1,6),new n("eid",0,9),new n("jeid",3,6),new n("áid",0,7),new n("éid",0,8),new n("i",-1,15),new n("ai",7,14),new n("jai",8,11),new n("ei",7,14),new n("jei",10,11),new n("ái",7,12),new n("éi",7,13),new n("itek",-1,24),new n("eitek",14,21),new n("jeitek",15,20),new n("éitek",14,23),new n("ik",-1,29),new n("aik",18,26),new n("jaik",19,25),new n("eik",18,26),new n("jeik",21,25),new n("áik",18,27),new n("éik",18,28),new n("ink",-1,20),new n("aink",25,17),new n("jaink",26,16),new n("eink",25,17),new n("jeink",28,16),new n("áink",25,18),new n("éink",25,19),new n("aitok",-1,21),new n("jaitok",32,20),new n("áitok",-1,22),new n("im",-1,5),new n("aim",35,4),new n("jaim",36,1),new n("eim",35,4),new n("jeim",38,1),new n("áim",35,2),new n("éim",35,3)],W=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,1,17,52,14],L=new r;this.setCurrent=function(e){L.setCurrent(e)},this.getCurrent=function(){return L.getCurrent()},this.stem=function(){var n=L.cursor;return e(),L.limit_backward=n,L.cursor=L.limit,c(),L.cursor=L.limit,o(),L.cursor=L.limit,w(),L.cursor=L.limit,l(),L.cursor=L.limit,u(),L.cursor=L.limit,k(),L.cursor=L.limit,f(),L.cursor=L.limit,b(),L.cursor=L.limit,m(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return 
i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.hu.stemmer,"stemmer-hu"),e.hu.stopWordFilter=e.generateStopWordFilter("a abban ahhoz ahogy ahol aki akik akkor alatt amely amelyek amelyekben amelyeket amelyet amelynek ami amikor amit amolyan amíg annak arra arról az azok azon azonban azt aztán azután azzal azért be belül benne bár cikk cikkek cikkeket csak de e ebben eddig egy egyes egyetlen egyik egyre egyéb egész ehhez ekkor el ellen elsõ elég elõ elõször elõtt emilyen ennek erre ez ezek ezen ezt ezzel ezért fel felé hanem hiszen hogy hogyan igen ill ill. illetve ilyen ilyenkor ismét ison itt jobban jó jól kell kellett keressünk keresztül ki kívül között közül legalább legyen lehet lehetett lenne lenni lesz lett maga magát majd majd meg mellett mely melyek mert mi mikor milyen minden mindenki mindent mindig mint mintha mit mivel miért most már más másik még míg nagy nagyobb nagyon ne nekem neki nem nincs néha néhány nélkül olyan ott pedig persze rá s saját sem semmi sok sokat sokkal szemben szerint szinte számára talán tehát teljes tovább továbbá több ugyanis utolsó után utána vagy vagyis vagyok valaki valami valamint való van vannak vele vissza viszont volna volt voltak voltam voltunk által általában át én éppen és így õ õk õket össze úgy új újabb újra".split(" ")),e.Pipeline.registerFunction(e.hu.stopWordFilter,"stopWordFilter-hu")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.it.min.js b/assets/javascripts/lunr/min/lunr.it.min.js new file mode 100644 index 00000000..344b6a3c --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.it.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Italian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.it=function(){this.pipeline.reset(),this.pipeline.add(e.it.trimmer,e.it.stopWordFilter,e.it.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.it.stemmer))},e.it.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.it.trimmer=e.trimmerSupport.generateTrimmer(e.it.wordCharacters),e.Pipeline.registerFunction(e.it.trimmer,"trimmer-it"),e.it.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){function e(e,r,n){return!(!x.eq_s(1,e)||(x.ket=x.cursor,!x.in_grouping(L,97,249)))&&(x.slice_from(r),x.cursor=n,!0)}function i(){for(var r,n,i,o,t=x.cursor;;){if(x.bra=x.cursor,r=x.find_among(h,7))switch(x.ket=x.cursor,r){case 1:x.slice_from("à");continue;case 2:x.slice_from("è");continue;case 3:x.slice_from("ì");continue;case 4:x.slice_from("ò");continue;case 5:x.slice_from("ù");continue;case 6:x.slice_from("qU");continue;case 7:if(x.cursor>=x.limit)break;x.cursor++;continue}break}for(x.cursor=t;;)for(n=x.cursor;;){if(i=x.cursor,x.in_grouping(L,97,249)){if(x.bra=x.cursor,o=x.cursor,e("u","U",i))break;if(x.cursor=o,e("i","I",i))break}if(x.cursor=i,x.cursor>=x.limit)return void(x.cursor=n);x.cursor++}}function o(e){if(x.cursor=e,!x.in_grouping(L,97,249))return!1;for(;!x.out_grouping(L,97,249);){if(x.cursor>=x.limit)return!1;x.cursor++}return!0}function t(){if(x.in_grouping(L,97,249)){var e=x.cursor;if(x.out_grouping(L,97,249)){for(;!x.in_grouping(L,97,249);){if(x.cursor>=x.limit)return o(e);x.cursor++}return!0}return o(e)}return!1}function s(){var e,r=x.cursor;if(!t()){if(x.cursor=r,!x.out_grouping(L,97,249))return;if(e=x.cursor,x.out_grouping(L,97,249)){for(;!x.in_grouping(L,97,249);){if(x.cursor>=x.limit)return x.cursor=e,void(x.in_grouping(L,97,249)&&x.cursor=x.limit)return;x.cursor++}k=x.cursor}function a(){for(;!x.in_grouping(L,97,249);){if(x.cursor>=x.limit)return!1;x.cursor++}for(;!x.out_grouping(L,97,249);){if(x.cursor>=x.limit)return!1;x.cursor++}return!0}function u(){var e=x.cursor;k=x.limit,p=k,g=k,s(),x.cursor=e,a()&&(p=x.cursor,a()&&(g=x.cursor))}function c(){for(var e;;){if(x.bra=x.cursor,!(e=x.find_among(q,3)))break;switch(x.ket=x.cursor,e){case 1:x.slice_from("i");break;case 2:x.slice_from("u");break;case 3:if(x.cursor>=x.limit)return;x.cursor++}}}function w(){return k<=x.cursor}function l(){return p<=x.cursor}function m(){return g<=x.cursor}function f(){var e;if(x.ket=x.cursor,x.find_among_b(C,37)&&(x.bra=x.cursor,(e=x.find_among_b(z,5))&&w()))switch(e){case 1:x.slice_del();break;case 2:x.slice_from("e")}}function v(){var e;if(x.ket=x.cursor,!(e=x.find_among_b(S,51)))return!1;switch(x.bra=x.cursor,e){case 1:if(!m())return!1;x.slice_del();break;case 2:if(!m())return!1;x.slice_del(),x.ket=x.cursor,x.eq_s_b(2,"ic")&&(x.bra=x.cursor,m()&&x.slice_del());break;case 3:if(!m())return!1;x.slice_from("log");break;case 4:if(!m())return!1;x.slice_from("u");break;case 5:if(!m())return!1;x.slice_from("ente");break;case 6:if(!w())return!1;x.slice_del();break;case 7:if(!l())return!1;x.slice_del(),x.ket=x.cursor,e=x.find_among_b(P,4),e&&(x.bra=x.cursor,m()&&(x.slice_del(),1==e&&(x.ket=x.cursor,x.eq_s_b(2,"at")&&(x.bra=x.cursor,m()&&x.slice_del()))));break;case 8:if(!m())return!1;x.slice_del(),x.ket=x.cursor,e=x.find_among_b(F,3),e&&(x.bra=x.cursor,1==e&&m()&&x.slice_del());break;case 
9:if(!m())return!1;x.slice_del(),x.ket=x.cursor,x.eq_s_b(2,"at")&&(x.bra=x.cursor,m()&&(x.slice_del(),x.ket=x.cursor,x.eq_s_b(2,"ic")&&(x.bra=x.cursor,m()&&x.slice_del())))}return!0}function b(){var e,r;x.cursor>=k&&(r=x.limit_backward,x.limit_backward=k,x.ket=x.cursor,e=x.find_among_b(W,87),e&&(x.bra=x.cursor,1==e&&x.slice_del()),x.limit_backward=r)}function d(){var e=x.limit-x.cursor;if(x.ket=x.cursor,x.in_grouping_b(y,97,242)&&(x.bra=x.cursor,w()&&(x.slice_del(),x.ket=x.cursor,x.eq_s_b(1,"i")&&(x.bra=x.cursor,w()))))return void x.slice_del();x.cursor=x.limit-e}function _(){d(),x.ket=x.cursor,x.eq_s_b(1,"h")&&(x.bra=x.cursor,x.in_grouping_b(U,99,103)&&w()&&x.slice_del())}var g,p,k,h=[new r("",-1,7),new r("qu",0,6),new r("á",0,1),new r("é",0,2),new r("í",0,3),new r("ó",0,4),new r("ú",0,5)],q=[new r("",-1,3),new r("I",0,1),new r("U",0,2)],C=[new r("la",-1,-1),new r("cela",0,-1),new r("gliela",0,-1),new r("mela",0,-1),new r("tela",0,-1),new r("vela",0,-1),new r("le",-1,-1),new r("cele",6,-1),new r("gliele",6,-1),new r("mele",6,-1),new r("tele",6,-1),new r("vele",6,-1),new r("ne",-1,-1),new r("cene",12,-1),new r("gliene",12,-1),new r("mene",12,-1),new r("sene",12,-1),new r("tene",12,-1),new r("vene",12,-1),new r("ci",-1,-1),new r("li",-1,-1),new r("celi",20,-1),new r("glieli",20,-1),new r("meli",20,-1),new r("teli",20,-1),new r("veli",20,-1),new r("gli",20,-1),new r("mi",-1,-1),new r("si",-1,-1),new r("ti",-1,-1),new r("vi",-1,-1),new r("lo",-1,-1),new r("celo",31,-1),new r("glielo",31,-1),new r("melo",31,-1),new r("telo",31,-1),new r("velo",31,-1)],z=[new r("ando",-1,1),new r("endo",-1,1),new r("ar",-1,2),new r("er",-1,2),new r("ir",-1,2)],P=[new r("ic",-1,-1),new r("abil",-1,-1),new r("os",-1,-1),new r("iv",-1,1)],F=[new r("ic",-1,1),new r("abil",-1,1),new r("iv",-1,1)],S=[new r("ica",-1,1),new r("logia",-1,3),new r("osa",-1,1),new r("ista",-1,1),new r("iva",-1,9),new r("anza",-1,1),new r("enza",-1,5),new r("ice",-1,1),new r("atrice",7,1),new r("iche",-1,1),new r("logie",-1,3),new r("abile",-1,1),new r("ibile",-1,1),new r("usione",-1,4),new r("azione",-1,2),new r("uzione",-1,4),new r("atore",-1,2),new r("ose",-1,1),new r("ante",-1,1),new r("mente",-1,1),new r("amente",19,7),new r("iste",-1,1),new r("ive",-1,9),new r("anze",-1,1),new r("enze",-1,5),new r("ici",-1,1),new r("atrici",25,1),new r("ichi",-1,1),new r("abili",-1,1),new r("ibili",-1,1),new r("ismi",-1,1),new r("usioni",-1,4),new r("azioni",-1,2),new r("uzioni",-1,4),new r("atori",-1,2),new r("osi",-1,1),new r("anti",-1,1),new r("amenti",-1,6),new r("imenti",-1,6),new r("isti",-1,1),new r("ivi",-1,9),new r("ico",-1,1),new r("ismo",-1,1),new r("oso",-1,1),new r("amento",-1,6),new r("imento",-1,6),new r("ivo",-1,9),new r("ità",-1,8),new r("istà",-1,1),new r("istè",-1,1),new r("istì",-1,1)],W=[new r("isca",-1,1),new r("enda",-1,1),new r("ata",-1,1),new r("ita",-1,1),new r("uta",-1,1),new r("ava",-1,1),new r("eva",-1,1),new r("iva",-1,1),new r("erebbe",-1,1),new r("irebbe",-1,1),new r("isce",-1,1),new r("ende",-1,1),new r("are",-1,1),new r("ere",-1,1),new r("ire",-1,1),new r("asse",-1,1),new r("ate",-1,1),new r("avate",16,1),new r("evate",16,1),new r("ivate",16,1),new r("ete",-1,1),new r("erete",20,1),new r("irete",20,1),new r("ite",-1,1),new r("ereste",-1,1),new r("ireste",-1,1),new r("ute",-1,1),new r("erai",-1,1),new r("irai",-1,1),new r("isci",-1,1),new r("endi",-1,1),new r("erei",-1,1),new r("irei",-1,1),new r("assi",-1,1),new r("ati",-1,1),new r("iti",-1,1),new r("eresti",-1,1),new r("iresti",-1,1),new r("uti",-1,1),new 
r("avi",-1,1),new r("evi",-1,1),new r("ivi",-1,1),new r("isco",-1,1),new r("ando",-1,1),new r("endo",-1,1),new r("Yamo",-1,1),new r("iamo",-1,1),new r("avamo",-1,1),new r("evamo",-1,1),new r("ivamo",-1,1),new r("eremo",-1,1),new r("iremo",-1,1),new r("assimo",-1,1),new r("ammo",-1,1),new r("emmo",-1,1),new r("eremmo",54,1),new r("iremmo",54,1),new r("immo",-1,1),new r("ano",-1,1),new r("iscano",58,1),new r("avano",58,1),new r("evano",58,1),new r("ivano",58,1),new r("eranno",-1,1),new r("iranno",-1,1),new r("ono",-1,1),new r("iscono",65,1),new r("arono",65,1),new r("erono",65,1),new r("irono",65,1),new r("erebbero",-1,1),new r("irebbero",-1,1),new r("assero",-1,1),new r("essero",-1,1),new r("issero",-1,1),new r("ato",-1,1),new r("ito",-1,1),new r("uto",-1,1),new r("avo",-1,1),new r("evo",-1,1),new r("ivo",-1,1),new r("ar",-1,1),new r("ir",-1,1),new r("erà",-1,1),new r("irà",-1,1),new r("erò",-1,1),new r("irò",-1,1)],L=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,128,128,8,2,1],y=[17,65,0,0,0,0,0,0,0,0,0,0,0,0,0,128,128,8,2],U=[17],x=new n;this.setCurrent=function(e){x.setCurrent(e)},this.getCurrent=function(){return x.getCurrent()},this.stem=function(){var e=x.cursor;return i(),x.cursor=e,u(),x.limit_backward=e,x.cursor=x.limit,f(),x.cursor=x.limit,v()||(x.cursor=x.limit,b()),x.cursor=x.limit,_(),x.cursor=x.limit_backward,c(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.it.stemmer,"stemmer-it"),e.it.stopWordFilter=e.generateStopWordFilter("a abbia abbiamo abbiano abbiate ad agl agli ai al all alla alle allo anche avemmo avendo avesse avessero avessi avessimo aveste avesti avete aveva avevamo avevano avevate avevi avevo avrai avranno avrebbe avrebbero avrei avremmo avremo avreste avresti avrete avrà avrò avuta avute avuti avuto c che chi ci coi col come con contro cui da dagl dagli dai dal dall dalla dalle dallo degl degli dei del dell della delle dello di dov dove e ebbe ebbero ebbi ed era erano eravamo eravate eri ero essendo faccia facciamo facciano facciate faccio facemmo facendo facesse facessero facessi facessimo faceste facesti faceva facevamo facevano facevate facevi facevo fai fanno farai faranno farebbe farebbero farei faremmo faremo fareste faresti farete farà farò fece fecero feci fosse fossero fossi fossimo foste fosti fu fui fummo furono gli ha hai hanno ho i il in io l la le lei li lo loro lui ma mi mia mie miei mio ne negl negli nei nel nell nella nelle nello noi non nostra nostre nostri nostro o per perché più quale quanta quante quanti quanto quella quelle quelli quello questa queste questi questo sarai saranno sarebbe sarebbero sarei saremmo saremo sareste saresti sarete sarà sarò se sei si sia siamo siano siate siete sono sta stai stando stanno starai staranno starebbe starebbero starei staremmo staremo stareste staresti starete starà starò stava stavamo stavano stavate stavi stavo stemmo stesse stessero stessi stessimo steste stesti stette stettero stetti stia stiamo stiano stiate sto su sua sue sugl sugli sui sul sull sulla sulle sullo suo suoi ti tra tu tua tue tuo tuoi tutti tutto un una uno vi voi vostra vostre vostri vostro è".split(" ")),e.Pipeline.registerFunction(e.it.stopWordFilter,"stopWordFilter-it")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.ja.min.js b/assets/javascripts/lunr/min/lunr.ja.min.js new file mode 100644 index 00000000..5f254ebe --- /dev/null +++ 
b/assets/javascripts/lunr/min/lunr.ja.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var r="2"==e.version[0];e.ja=function(){this.pipeline.reset(),this.pipeline.add(e.ja.trimmer,e.ja.stopWordFilter,e.ja.stemmer),r?this.tokenizer=e.ja.tokenizer:(e.tokenizer&&(e.tokenizer=e.ja.tokenizer),this.tokenizerFn&&(this.tokenizerFn=e.ja.tokenizer))};var t=new e.TinySegmenter;e.ja.tokenizer=function(i){var n,o,s,p,a,u,m,l,c,f;if(!arguments.length||null==i||void 0==i)return[];if(Array.isArray(i))return i.map(function(t){return r?new e.Token(t.toLowerCase()):t.toLowerCase()});for(o=i.toString().toLowerCase().replace(/^\s+/,""),n=o.length-1;n>=0;n--)if(/\S/.test(o.charAt(n))){o=o.substring(0,n+1);break}for(a=[],s=o.length,c=0,l=0;c<=s;c++)if(u=o.charAt(c),m=c-l,u.match(/\s/)||c==s){if(m>0)for(p=t.segment(o.slice(l,c)).filter(function(e){return!!e}),f=l,n=0;n=C.limit)break;C.cursor++;continue}break}for(C.cursor=o,C.bra=o,C.eq_s(1,"y")?(C.ket=C.cursor,C.slice_from("Y")):C.cursor=o;;)if(e=C.cursor,C.in_grouping(q,97,232)){if(i=C.cursor,C.bra=i,C.eq_s(1,"i"))C.ket=C.cursor,C.in_grouping(q,97,232)&&(C.slice_from("I"),C.cursor=e);else if(C.cursor=i,C.eq_s(1,"y"))C.ket=C.cursor,C.slice_from("Y"),C.cursor=e;else if(n(e))break}else if(n(e))break}function n(r){return C.cursor=r,r>=C.limit||(C.cursor++,!1)}function o(){_=C.limit,d=_,t()||(_=C.cursor,_<3&&(_=3),t()||(d=C.cursor))}function t(){for(;!C.in_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}for(;!C.out_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}return!1}function s(){for(var r;;)if(C.bra=C.cursor,r=C.find_among(p,3))switch(C.ket=C.cursor,r){case 1:C.slice_from("y");break;case 2:C.slice_from("i");break;case 3:if(C.cursor>=C.limit)return;C.cursor++}}function u(){return _<=C.cursor}function c(){return d<=C.cursor}function a(){var r=C.limit-C.cursor;C.find_among_b(g,3)&&(C.cursor=C.limit-r,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del()))}function l(){var r;w=!1,C.ket=C.cursor,C.eq_s_b(1,"e")&&(C.bra=C.cursor,u()&&(r=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-r,C.slice_del(),w=!0,a())))}function m(){var r;u()&&(r=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-r,C.eq_s_b(3,"gem")||(C.cursor=C.limit-r,C.slice_del(),a())))}function f(){var r,e,i,n,o,t,s=C.limit-C.cursor;if(C.ket=C.cursor,r=C.find_among_b(h,5))switch(C.bra=C.cursor,r){case 1:u()&&C.slice_from("heid");break;case 2:m();break;case 3:u()&&C.out_grouping_b(j,97,232)&&C.slice_del()}if(C.cursor=C.limit-s,l(),C.cursor=C.limit-s,C.ket=C.cursor,C.eq_s_b(4,"heid")&&(C.bra=C.cursor,c()&&(e=C.limit-C.cursor,C.eq_s_b(1,"c")||(C.cursor=C.limit-e,C.slice_del(),C.ket=C.cursor,C.eq_s_b(2,"en")&&(C.bra=C.cursor,m())))),C.cursor=C.limit-s,C.ket=C.cursor,r=C.find_among_b(k,6))switch(C.bra=C.cursor,r){case 1:if(c()){if(C.slice_del(),i=C.limit-C.cursor,C.ket=C.cursor,C.eq_s_b(2,"ig")&&(C.bra=C.cursor,c()&&(n=C.limit-C.cursor,!C.eq_s_b(1,"e")))){C.cursor=C.limit-n,C.slice_del();break}C.cursor=C.limit-i,a()}break;case 2:c()&&(o=C.limit-C.cursor,C.eq_s_b(1,"e")||(C.cursor=C.limit-o,C.slice_del()));break;case 
3:c()&&(C.slice_del(),l());break;case 4:c()&&C.slice_del();break;case 5:c()&&w&&C.slice_del()}C.cursor=C.limit-s,C.out_grouping_b(z,73,232)&&(t=C.limit-C.cursor,C.find_among_b(v,4)&&C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-t,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del())))}var d,_,w,b=[new e("",-1,6),new e("á",0,1),new e("ä",0,1),new e("é",0,2),new e("ë",0,2),new e("í",0,3),new e("ï",0,3),new e("ó",0,4),new e("ö",0,4),new e("ú",0,5),new e("ü",0,5)],p=[new e("",-1,3),new e("I",0,2),new e("Y",0,1)],g=[new e("dd",-1,-1),new e("kk",-1,-1),new e("tt",-1,-1)],h=[new e("ene",-1,2),new e("se",-1,3),new e("en",-1,2),new e("heden",2,1),new e("s",-1,3)],k=[new e("end",-1,1),new e("ig",-1,2),new e("ing",-1,1),new e("lijk",-1,3),new e("baar",-1,4),new e("bar",-1,5)],v=[new e("aa",-1,-1),new e("ee",-1,-1),new e("oo",-1,-1),new e("uu",-1,-1)],q=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],z=[1,0,0,17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],j=[17,67,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],C=new i;this.setCurrent=function(r){C.setCurrent(r)},this.getCurrent=function(){return C.getCurrent()},this.stem=function(){var e=C.cursor;return r(),C.cursor=e,o(),C.limit_backward=e,C.cursor=C.limit,f(),C.cursor=C.limit_backward,s(),!0}};return function(r){return"function"==typeof r.update?r.update(function(r){return n.setCurrent(r),n.stem(),n.getCurrent()}):(n.setCurrent(r),n.stem(),n.getCurrent())}}(),r.Pipeline.registerFunction(r.nl.stemmer,"stemmer-nl"),r.nl.stopWordFilter=r.generateStopWordFilter(" aan al alles als altijd andere ben bij daar dan dat de der deze die dit doch doen door dus een eens en er ge geen geweest haar had heb hebben heeft hem het hier hij hoe hun iemand iets ik in is ja je kan kon kunnen maar me meer men met mij mijn moet na naar niet niets nog nu of om omdat onder ons ook op over reeds te tegen toch toen tot u uit uw van veel voor want waren was wat werd wezen wie wil worden wordt zal ze zelf zich zij zijn zo zonder zou".split(" ")),r.Pipeline.registerFunction(r.nl.stopWordFilter,"stopWordFilter-nl")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.no.min.js b/assets/javascripts/lunr/min/lunr.no.min.js new file mode 100644 index 00000000..92bc7e4e --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.no.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Norwegian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.no=function(){this.pipeline.reset(),this.pipeline.add(e.no.trimmer,e.no.stopWordFilter,e.no.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.no.stemmer))},e.no.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.no.trimmer=e.trimmerSupport.generateTrimmer(e.no.wordCharacters),e.Pipeline.registerFunction(e.no.trimmer,"trimmer-no"),e.no.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){function e(){var e,r=w.cursor+3;if(a=w.limit,0<=r||r<=w.limit){for(s=r;;){if(e=w.cursor,w.in_grouping(d,97,248)){w.cursor=e;break}if(e>=w.limit)return;w.cursor=e+1}for(;!w.out_grouping(d,97,248);){if(w.cursor>=w.limit)return;w.cursor++}a=w.cursor,a=a&&(r=w.limit_backward,w.limit_backward=a,w.ket=w.cursor,e=w.find_among_b(m,29),w.limit_backward=r,e))switch(w.bra=w.cursor,e){case 1:w.slice_del();break;case 2:n=w.limit-w.cursor,w.in_grouping_b(c,98,122)?w.slice_del():(w.cursor=w.limit-n,w.eq_s_b(1,"k")&&w.out_grouping_b(d,97,248)&&w.slice_del());break;case 3:w.slice_from("er")}}function t(){var e,r=w.limit-w.cursor;w.cursor>=a&&(e=w.limit_backward,w.limit_backward=a,w.ket=w.cursor,w.find_among_b(u,2)?(w.bra=w.cursor,w.limit_backward=e,w.cursor=w.limit-r,w.cursor>w.limit_backward&&(w.cursor--,w.bra=w.cursor,w.slice_del())):w.limit_backward=e)}function o(){var e,r;w.cursor>=a&&(r=w.limit_backward,w.limit_backward=a,w.ket=w.cursor,e=w.find_among_b(l,11),e?(w.bra=w.cursor,w.limit_backward=r,1==e&&w.slice_del()):w.limit_backward=r)}var s,a,m=[new r("a",-1,1),new r("e",-1,1),new r("ede",1,1),new r("ande",1,1),new r("ende",1,1),new r("ane",1,1),new r("ene",1,1),new r("hetene",6,1),new r("erte",1,3),new r("en",-1,1),new r("heten",9,1),new r("ar",-1,1),new r("er",-1,1),new r("heter",12,1),new r("s",-1,2),new r("as",14,1),new r("es",14,1),new r("edes",16,1),new r("endes",16,1),new r("enes",16,1),new r("hetenes",19,1),new r("ens",14,1),new r("hetens",21,1),new r("ers",14,1),new r("ets",14,1),new r("et",-1,1),new r("het",25,1),new r("ert",-1,3),new r("ast",-1,1)],u=[new r("dt",-1,-1),new r("vt",-1,-1)],l=[new r("leg",-1,1),new r("eleg",0,1),new r("ig",-1,1),new r("eig",2,1),new r("lig",2,1),new r("elig",4,1),new r("els",-1,1),new r("lov",-1,1),new r("elov",7,1),new r("slov",7,1),new r("hetslov",9,1)],d=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,48,0,128],c=[119,125,149,1],w=new n;this.setCurrent=function(e){w.setCurrent(e)},this.getCurrent=function(){return w.getCurrent()},this.stem=function(){var r=w.cursor;return e(),w.limit_backward=r,w.cursor=w.limit,i(),w.cursor=w.limit,t(),w.cursor=w.limit,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.no.stemmer,"stemmer-no"),e.no.stopWordFilter=e.generateStopWordFilter("alle at av bare begge ble blei bli blir blitt både båe da de deg dei deim deira deires dem den denne der dere deres det dette di din disse ditt du dykk dykkar då eg ein eit eitt eller elles en enn er et ett etter for fordi fra før ha hadde han hans har hennar henne hennes her hjå ho hoe honom hoss hossen hun hva hvem hver hvilke hvilken hvis hvor hvordan hvorfor i ikke ikkje ikkje ingen ingi inkje inn inni ja jeg kan kom korleis korso kun kunne kva kvar kvarhelst kven kvi kvifor man mange me med medan meg meget mellom men mi min mine mitt mot mykje ned no noe noen 
noka noko nokon nokor nokre nå når og også om opp oss over på samme seg selv si si sia sidan siden sin sine sitt sjøl skal skulle slik so som som somme somt så sånn til um upp ut uten var vart varte ved vere verte vi vil ville vore vors vort vår være være vært å".split(" ")),e.Pipeline.registerFunction(e.no.stopWordFilter,"stopWordFilter-no")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.pt.min.js b/assets/javascripts/lunr/min/lunr.pt.min.js new file mode 100644 index 00000000..6c16996d --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.pt.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Portuguese` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.pt=function(){this.pipeline.reset(),this.pipeline.add(e.pt.trimmer,e.pt.stopWordFilter,e.pt.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.pt.stemmer))},e.pt.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.pt.trimmer=e.trimmerSupport.generateTrimmer(e.pt.wordCharacters),e.Pipeline.registerFunction(e.pt.trimmer,"trimmer-pt"),e.pt.stemmer=function(){var r=e.stemmerSupport.Among,s=e.stemmerSupport.SnowballProgram,n=new function(){function e(){for(var e;;){if(z.bra=z.cursor,e=z.find_among(k,3))switch(z.ket=z.cursor,e){case 1:z.slice_from("a~");continue;case 2:z.slice_from("o~");continue;case 3:if(z.cursor>=z.limit)break;z.cursor++;continue}break}}function n(){if(z.out_grouping(y,97,250)){for(;!z.in_grouping(y,97,250);){if(z.cursor>=z.limit)return!0;z.cursor++}return!1}return!0}function i(){if(z.in_grouping(y,97,250))for(;!z.out_grouping(y,97,250);){if(z.cursor>=z.limit)return!1;z.cursor++}return g=z.cursor,!0}function o(){var e,r,s=z.cursor;if(z.in_grouping(y,97,250))if(e=z.cursor,n()){if(z.cursor=e,i())return}else g=z.cursor;if(z.cursor=s,z.out_grouping(y,97,250)){if(r=z.cursor,n()){if(z.cursor=r,!z.in_grouping(y,97,250)||z.cursor>=z.limit)return;z.cursor++}g=z.cursor}}function t(){for(;!z.in_grouping(y,97,250);){if(z.cursor>=z.limit)return!1;z.cursor++}for(;!z.out_grouping(y,97,250);){if(z.cursor>=z.limit)return!1;z.cursor++}return!0}function a(){var e=z.cursor;g=z.limit,b=g,h=g,o(),z.cursor=e,t()&&(b=z.cursor,t()&&(h=z.cursor))}function u(){for(var e;;){if(z.bra=z.cursor,e=z.find_among(q,3))switch(z.ket=z.cursor,e){case 1:z.slice_from("ã");continue;case 2:z.slice_from("õ");continue;case 3:if(z.cursor>=z.limit)break;z.cursor++;continue}break}}function w(){return g<=z.cursor}function m(){return b<=z.cursor}function c(){return h<=z.cursor}function l(){var e;if(z.ket=z.cursor,!(e=z.find_among_b(F,45)))return!1;switch(z.bra=z.cursor,e){case 1:if(!c())return!1;z.slice_del();break;case 2:if(!c())return!1;z.slice_from("log");break;case 3:if(!c())return!1;z.slice_from("u");break;case 4:if(!c())return!1;z.slice_from("ente");break;case 
5:if(!m())return!1;z.slice_del(),z.ket=z.cursor,e=z.find_among_b(j,4),e&&(z.bra=z.cursor,c()&&(z.slice_del(),1==e&&(z.ket=z.cursor,z.eq_s_b(2,"at")&&(z.bra=z.cursor,c()&&z.slice_del()))));break;case 6:if(!c())return!1;z.slice_del(),z.ket=z.cursor,e=z.find_among_b(C,3),e&&(z.bra=z.cursor,1==e&&c()&&z.slice_del());break;case 7:if(!c())return!1;z.slice_del(),z.ket=z.cursor,e=z.find_among_b(P,3),e&&(z.bra=z.cursor,1==e&&c()&&z.slice_del());break;case 8:if(!c())return!1;z.slice_del(),z.ket=z.cursor,z.eq_s_b(2,"at")&&(z.bra=z.cursor,c()&&z.slice_del());break;case 9:if(!w()||!z.eq_s_b(1,"e"))return!1;z.slice_from("ir")}return!0}function f(){var e,r;if(z.cursor>=g){if(r=z.limit_backward,z.limit_backward=g,z.ket=z.cursor,e=z.find_among_b(S,120))return z.bra=z.cursor,1==e&&z.slice_del(),z.limit_backward=r,!0;z.limit_backward=r}return!1}function d(){var e;z.ket=z.cursor,(e=z.find_among_b(W,7))&&(z.bra=z.cursor,1==e&&w()&&z.slice_del())}function v(e,r){if(z.eq_s_b(1,e)){z.bra=z.cursor;var s=z.limit-z.cursor;if(z.eq_s_b(1,r))return z.cursor=z.limit-s,w()&&z.slice_del(),!1}return!0}function p(){var e;if(z.ket=z.cursor,e=z.find_among_b(L,4))switch(z.bra=z.cursor,e){case 1:w()&&(z.slice_del(),z.ket=z.cursor,z.limit-z.cursor,v("u","g")&&v("i","c"));break;case 2:z.slice_from("c")}}function _(){if(!l()&&(z.cursor=z.limit,!f()))return z.cursor=z.limit,void d();z.cursor=z.limit,z.ket=z.cursor,z.eq_s_b(1,"i")&&(z.bra=z.cursor,z.eq_s_b(1,"c")&&(z.cursor=z.limit,w()&&z.slice_del()))}var h,b,g,k=[new r("",-1,3),new r("ã",0,1),new r("õ",0,2)],q=[new r("",-1,3),new r("a~",0,1),new r("o~",0,2)],j=[new r("ic",-1,-1),new r("ad",-1,-1),new r("os",-1,-1),new r("iv",-1,1)],C=[new r("ante",-1,1),new r("avel",-1,1),new r("ível",-1,1)],P=[new r("ic",-1,1),new r("abil",-1,1),new r("iv",-1,1)],F=[new r("ica",-1,1),new r("ância",-1,1),new r("ência",-1,4),new r("ira",-1,9),new r("adora",-1,1),new r("osa",-1,1),new r("ista",-1,1),new r("iva",-1,8),new r("eza",-1,1),new r("logía",-1,2),new r("idade",-1,7),new r("ante",-1,1),new r("mente",-1,6),new r("amente",12,5),new r("ável",-1,1),new r("ível",-1,1),new r("ución",-1,3),new r("ico",-1,1),new r("ismo",-1,1),new r("oso",-1,1),new r("amento",-1,1),new r("imento",-1,1),new r("ivo",-1,8),new r("aça~o",-1,1),new r("ador",-1,1),new r("icas",-1,1),new r("ências",-1,4),new r("iras",-1,9),new r("adoras",-1,1),new r("osas",-1,1),new r("istas",-1,1),new r("ivas",-1,8),new r("ezas",-1,1),new r("logías",-1,2),new r("idades",-1,7),new r("uciones",-1,3),new r("adores",-1,1),new r("antes",-1,1),new r("aço~es",-1,1),new r("icos",-1,1),new r("ismos",-1,1),new r("osos",-1,1),new r("amentos",-1,1),new r("imentos",-1,1),new r("ivos",-1,8)],S=[new r("ada",-1,1),new r("ida",-1,1),new r("ia",-1,1),new r("aria",2,1),new r("eria",2,1),new r("iria",2,1),new r("ara",-1,1),new r("era",-1,1),new r("ira",-1,1),new r("ava",-1,1),new r("asse",-1,1),new r("esse",-1,1),new r("isse",-1,1),new r("aste",-1,1),new r("este",-1,1),new r("iste",-1,1),new r("ei",-1,1),new r("arei",16,1),new r("erei",16,1),new r("irei",16,1),new r("am",-1,1),new r("iam",20,1),new r("ariam",21,1),new r("eriam",21,1),new r("iriam",21,1),new r("aram",20,1),new r("eram",20,1),new r("iram",20,1),new r("avam",20,1),new r("em",-1,1),new r("arem",29,1),new r("erem",29,1),new r("irem",29,1),new r("assem",29,1),new r("essem",29,1),new r("issem",29,1),new r("ado",-1,1),new r("ido",-1,1),new r("ando",-1,1),new r("endo",-1,1),new r("indo",-1,1),new r("ara~o",-1,1),new r("era~o",-1,1),new r("ira~o",-1,1),new r("ar",-1,1),new r("er",-1,1),new 
r("ir",-1,1),new r("as",-1,1),new r("adas",47,1),new r("idas",47,1),new r("ias",47,1),new r("arias",50,1),new r("erias",50,1),new r("irias",50,1),new r("aras",47,1),new r("eras",47,1),new r("iras",47,1),new r("avas",47,1),new r("es",-1,1),new r("ardes",58,1),new r("erdes",58,1),new r("irdes",58,1),new r("ares",58,1),new r("eres",58,1),new r("ires",58,1),new r("asses",58,1),new r("esses",58,1),new r("isses",58,1),new r("astes",58,1),new r("estes",58,1),new r("istes",58,1),new r("is",-1,1),new r("ais",71,1),new r("eis",71,1),new r("areis",73,1),new r("ereis",73,1),new r("ireis",73,1),new r("áreis",73,1),new r("éreis",73,1),new r("íreis",73,1),new r("ásseis",73,1),new r("ésseis",73,1),new r("ísseis",73,1),new r("áveis",73,1),new r("íeis",73,1),new r("aríeis",84,1),new r("eríeis",84,1),new r("iríeis",84,1),new r("ados",-1,1),new r("idos",-1,1),new r("amos",-1,1),new r("áramos",90,1),new r("éramos",90,1),new r("íramos",90,1),new r("ávamos",90,1),new r("íamos",90,1),new r("aríamos",95,1),new r("eríamos",95,1),new r("iríamos",95,1),new r("emos",-1,1),new r("aremos",99,1),new r("eremos",99,1),new r("iremos",99,1),new r("ássemos",99,1),new r("êssemos",99,1),new r("íssemos",99,1),new r("imos",-1,1),new r("armos",-1,1),new r("ermos",-1,1),new r("irmos",-1,1),new r("ámos",-1,1),new r("arás",-1,1),new r("erás",-1,1),new r("irás",-1,1),new r("eu",-1,1),new r("iu",-1,1),new r("ou",-1,1),new r("ará",-1,1),new r("erá",-1,1),new r("irá",-1,1)],W=[new r("a",-1,1),new r("i",-1,1),new r("o",-1,1),new r("os",-1,1),new r("á",-1,1),new r("í",-1,1),new r("ó",-1,1)],L=[new r("e",-1,1),new r("ç",-1,2),new r("é",-1,1),new r("ê",-1,1)],y=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,3,19,12,2],z=new s;this.setCurrent=function(e){z.setCurrent(e)},this.getCurrent=function(){return z.getCurrent()},this.stem=function(){var r=z.cursor;return e(),z.cursor=r,a(),z.limit_backward=r,z.cursor=z.limit,_(),z.cursor=z.limit,p(),z.cursor=z.limit_backward,u(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.pt.stemmer,"stemmer-pt"),e.pt.stopWordFilter=e.generateStopWordFilter("a ao aos aquela aquelas aquele aqueles aquilo as até com como da das de dela delas dele deles depois do dos e ela elas ele eles em entre era eram essa essas esse esses esta estamos estas estava estavam este esteja estejam estejamos estes esteve estive estivemos estiver estivera estiveram estiverem estivermos estivesse estivessem estivéramos estivéssemos estou está estávamos estão eu foi fomos for fora foram forem formos fosse fossem fui fôramos fôssemos haja hajam hajamos havemos hei houve houvemos houver houvera houveram houverei houverem houveremos houveria houveriam houvermos houverá houverão houveríamos houvesse houvessem houvéramos houvéssemos há hão isso isto já lhe lhes mais mas me mesmo meu meus minha minhas muito na nas nem no nos nossa nossas nosso nossos num numa não nós o os ou para pela pelas pelo pelos por qual quando que quem se seja sejam sejamos sem serei seremos seria seriam será serão seríamos seu seus somos sou sua suas são só também te tem temos tenha tenham tenhamos tenho terei teremos teria teriam terá terão teríamos teu teus teve tinha tinham tive tivemos tiver tivera tiveram tiverem tivermos tivesse tivessem tivéramos tivéssemos tu tua tuas tém tínhamos um uma você vocês vos à às éramos".split(" ")),e.Pipeline.registerFunction(e.pt.stopWordFilter,"stopWordFilter-pt")}}); \ No newline at end of 
file diff --git a/assets/javascripts/lunr/min/lunr.ro.min.js b/assets/javascripts/lunr/min/lunr.ro.min.js new file mode 100644 index 00000000..72771401 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ro.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Romanian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,i){"function"==typeof define&&define.amd?define(i):"object"==typeof exports?module.exports=i():i()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.ro=function(){this.pipeline.reset(),this.pipeline.add(e.ro.trimmer,e.ro.stopWordFilter,e.ro.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ro.stemmer))},e.ro.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.ro.trimmer=e.trimmerSupport.generateTrimmer(e.ro.wordCharacters),e.Pipeline.registerFunction(e.ro.trimmer,"trimmer-ro"),e.ro.stemmer=function(){var i=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,n=new function(){function e(e,i){L.eq_s(1,e)&&(L.ket=L.cursor,L.in_grouping(W,97,259)&&L.slice_from(i))}function n(){for(var i,r;;){if(i=L.cursor,L.in_grouping(W,97,259)&&(r=L.cursor,L.bra=r,e("u","U"),L.cursor=r,e("i","I")),L.cursor=i,L.cursor>=L.limit)break;L.cursor++}}function t(){if(L.out_grouping(W,97,259)){for(;!L.in_grouping(W,97,259);){if(L.cursor>=L.limit)return!0;L.cursor++}return!1}return!0}function a(){if(L.in_grouping(W,97,259))for(;!L.out_grouping(W,97,259);){if(L.cursor>=L.limit)return!0;L.cursor++}return!1}function o(){var e,i,r=L.cursor;if(L.in_grouping(W,97,259)){if(e=L.cursor,!t())return void(h=L.cursor);if(L.cursor=e,!a())return void(h=L.cursor)}L.cursor=r,L.out_grouping(W,97,259)&&(i=L.cursor,t()&&(L.cursor=i,L.in_grouping(W,97,259)&&L.cursor=L.limit)return!1;L.cursor++}for(;!L.out_grouping(W,97,259);){if(L.cursor>=L.limit)return!1;L.cursor++}return!0}function c(){var e=L.cursor;h=L.limit,k=h,g=h,o(),L.cursor=e,u()&&(k=L.cursor,u()&&(g=L.cursor))}function s(){for(var e;;){if(L.bra=L.cursor,e=L.find_among(z,3))switch(L.ket=L.cursor,e){case 1:L.slice_from("i");continue;case 2:L.slice_from("u");continue;case 3:if(L.cursor>=L.limit)break;L.cursor++;continue}break}}function w(){return h<=L.cursor}function m(){return k<=L.cursor}function l(){return g<=L.cursor}function f(){var e,i;if(L.ket=L.cursor,(e=L.find_among_b(C,16))&&(L.bra=L.cursor,m()))switch(e){case 1:L.slice_del();break;case 2:L.slice_from("a");break;case 3:L.slice_from("e");break;case 4:L.slice_from("i");break;case 5:i=L.limit-L.cursor,L.eq_s_b(2,"ab")||(L.cursor=L.limit-i,L.slice_from("i"));break;case 6:L.slice_from("at");break;case 7:L.slice_from("aţi")}}function p(){var e,i=L.limit-L.cursor;if(L.ket=L.cursor,(e=L.find_among_b(P,46))&&(L.bra=L.cursor,m())){switch(e){case 1:L.slice_from("abil");break;case 2:L.slice_from("ibil");break;case 3:L.slice_from("iv");break;case 4:L.slice_from("ic");break;case 5:L.slice_from("at");break;case 6:L.slice_from("it")}return _=!0,L.cursor=L.limit-i,!0}return!1}function d(){var 
e,i;for(_=!1;;)if(i=L.limit-L.cursor,!p()){L.cursor=L.limit-i;break}if(L.ket=L.cursor,(e=L.find_among_b(F,62))&&(L.bra=L.cursor,l())){switch(e){case 1:L.slice_del();break;case 2:L.eq_s_b(1,"ţ")&&(L.bra=L.cursor,L.slice_from("t"));break;case 3:L.slice_from("ist")}_=!0}}function b(){var e,i,r;if(L.cursor>=h){if(i=L.limit_backward,L.limit_backward=h,L.ket=L.cursor,e=L.find_among_b(q,94))switch(L.bra=L.cursor,e){case 1:if(r=L.limit-L.cursor,!L.out_grouping_b(W,97,259)&&(L.cursor=L.limit-r,!L.eq_s_b(1,"u")))break;case 2:L.slice_del()}L.limit_backward=i}}function v(){var e;L.ket=L.cursor,(e=L.find_among_b(S,5))&&(L.bra=L.cursor,w()&&1==e&&L.slice_del())}var _,g,k,h,z=[new i("",-1,3),new i("I",0,1),new i("U",0,2)],C=[new i("ea",-1,3),new i("aţia",-1,7),new i("aua",-1,2),new i("iua",-1,4),new i("aţie",-1,7),new i("ele",-1,3),new i("ile",-1,5),new i("iile",6,4),new i("iei",-1,4),new i("atei",-1,6),new i("ii",-1,4),new i("ului",-1,1),new i("ul",-1,1),new i("elor",-1,3),new i("ilor",-1,4),new i("iilor",14,4)],P=[new i("icala",-1,4),new i("iciva",-1,4),new i("ativa",-1,5),new i("itiva",-1,6),new i("icale",-1,4),new i("aţiune",-1,5),new i("iţiune",-1,6),new i("atoare",-1,5),new i("itoare",-1,6),new i("ătoare",-1,5),new i("icitate",-1,4),new i("abilitate",-1,1),new i("ibilitate",-1,2),new i("ivitate",-1,3),new i("icive",-1,4),new i("ative",-1,5),new i("itive",-1,6),new i("icali",-1,4),new i("atori",-1,5),new i("icatori",18,4),new i("itori",-1,6),new i("ători",-1,5),new i("icitati",-1,4),new i("abilitati",-1,1),new i("ivitati",-1,3),new i("icivi",-1,4),new i("ativi",-1,5),new i("itivi",-1,6),new i("icităi",-1,4),new i("abilităi",-1,1),new i("ivităi",-1,3),new i("icităţi",-1,4),new i("abilităţi",-1,1),new i("ivităţi",-1,3),new i("ical",-1,4),new i("ator",-1,5),new i("icator",35,4),new i("itor",-1,6),new i("ător",-1,5),new i("iciv",-1,4),new i("ativ",-1,5),new i("itiv",-1,6),new i("icală",-1,4),new i("icivă",-1,4),new i("ativă",-1,5),new i("itivă",-1,6)],F=[new i("ica",-1,1),new i("abila",-1,1),new i("ibila",-1,1),new i("oasa",-1,1),new i("ata",-1,1),new i("ita",-1,1),new i("anta",-1,1),new i("ista",-1,3),new i("uta",-1,1),new i("iva",-1,1),new i("ic",-1,1),new i("ice",-1,1),new i("abile",-1,1),new i("ibile",-1,1),new i("isme",-1,3),new i("iune",-1,2),new i("oase",-1,1),new i("ate",-1,1),new i("itate",17,1),new i("ite",-1,1),new i("ante",-1,1),new i("iste",-1,3),new i("ute",-1,1),new i("ive",-1,1),new i("ici",-1,1),new i("abili",-1,1),new i("ibili",-1,1),new i("iuni",-1,2),new i("atori",-1,1),new i("osi",-1,1),new i("ati",-1,1),new i("itati",30,1),new i("iti",-1,1),new i("anti",-1,1),new i("isti",-1,3),new i("uti",-1,1),new i("işti",-1,3),new i("ivi",-1,1),new i("ităi",-1,1),new i("oşi",-1,1),new i("ităţi",-1,1),new i("abil",-1,1),new i("ibil",-1,1),new i("ism",-1,3),new i("ator",-1,1),new i("os",-1,1),new i("at",-1,1),new i("it",-1,1),new i("ant",-1,1),new i("ist",-1,3),new i("ut",-1,1),new i("iv",-1,1),new i("ică",-1,1),new i("abilă",-1,1),new i("ibilă",-1,1),new i("oasă",-1,1),new i("ată",-1,1),new i("ită",-1,1),new i("antă",-1,1),new i("istă",-1,3),new i("ută",-1,1),new i("ivă",-1,1)],q=[new i("ea",-1,1),new i("ia",-1,1),new i("esc",-1,1),new i("ăsc",-1,1),new i("ind",-1,1),new i("ând",-1,1),new i("are",-1,1),new i("ere",-1,1),new i("ire",-1,1),new i("âre",-1,1),new i("se",-1,2),new i("ase",10,1),new i("sese",10,2),new i("ise",10,1),new i("use",10,1),new i("âse",10,1),new i("eşte",-1,1),new i("ăşte",-1,1),new i("eze",-1,1),new i("ai",-1,1),new i("eai",19,1),new i("iai",19,1),new i("sei",-1,2),new 
i("eşti",-1,1),new i("ăşti",-1,1),new i("ui",-1,1),new i("ezi",-1,1),new i("âi",-1,1),new i("aşi",-1,1),new i("seşi",-1,2),new i("aseşi",29,1),new i("seseşi",29,2),new i("iseşi",29,1),new i("useşi",29,1),new i("âseşi",29,1),new i("işi",-1,1),new i("uşi",-1,1),new i("âşi",-1,1),new i("aţi",-1,2),new i("eaţi",38,1),new i("iaţi",38,1),new i("eţi",-1,2),new i("iţi",-1,2),new i("âţi",-1,2),new i("arăţi",-1,1),new i("serăţi",-1,2),new i("aserăţi",45,1),new i("seserăţi",45,2),new i("iserăţi",45,1),new i("userăţi",45,1),new i("âserăţi",45,1),new i("irăţi",-1,1),new i("urăţi",-1,1),new i("ârăţi",-1,1),new i("am",-1,1),new i("eam",54,1),new i("iam",54,1),new i("em",-1,2),new i("asem",57,1),new i("sesem",57,2),new i("isem",57,1),new i("usem",57,1),new i("âsem",57,1),new i("im",-1,2),new i("âm",-1,2),new i("ăm",-1,2),new i("arăm",65,1),new i("serăm",65,2),new i("aserăm",67,1),new i("seserăm",67,2),new i("iserăm",67,1),new i("userăm",67,1),new i("âserăm",67,1),new i("irăm",65,1),new i("urăm",65,1),new i("ârăm",65,1),new i("au",-1,1),new i("eau",76,1),new i("iau",76,1),new i("indu",-1,1),new i("ându",-1,1),new i("ez",-1,1),new i("ească",-1,1),new i("ară",-1,1),new i("seră",-1,2),new i("aseră",84,1),new i("seseră",84,2),new i("iseră",84,1),new i("useră",84,1),new i("âseră",84,1),new i("iră",-1,1),new i("ură",-1,1),new i("âră",-1,1),new i("ează",-1,1)],S=[new i("a",-1,1),new i("e",-1,1),new i("ie",1,1),new i("i",-1,1),new i("ă",-1,1)],W=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,2,32,0,0,4],L=new r;this.setCurrent=function(e){L.setCurrent(e)},this.getCurrent=function(){return L.getCurrent()},this.stem=function(){var e=L.cursor;return n(),L.cursor=e,c(),L.limit_backward=e,L.cursor=L.limit,f(),L.cursor=L.limit,d(),L.cursor=L.limit,_||(L.cursor=L.limit,b(),L.cursor=L.limit),v(),L.cursor=L.limit_backward,s(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.ro.stemmer,"stemmer-ro"),e.ro.stopWordFilter=e.generateStopWordFilter("acea aceasta această aceea acei aceia acel acela acele acelea acest acesta aceste acestea aceşti aceştia acolo acord acum ai aia aibă aici al ale alea altceva altcineva am ar are asemenea asta astea astăzi asupra au avea avem aveţi azi aş aşadar aţi bine bucur bună ca care caut ce cel ceva chiar cinci cine cineva contra cu cum cumva curând curînd când cât câte câtva câţi cînd cît cîte cîtva cîţi că căci cărei căror cărui către da dacă dar datorită dată dau de deci deja deoarece departe deşi din dinaintea dintr- dintre doi doilea două drept după dă ea ei el ele eram este eu eşti face fata fi fie fiecare fii fim fiu fiţi frumos fără graţie halbă iar ieri la le li lor lui lângă lîngă mai mea mei mele mereu meu mi mie mine mult multă mulţi mulţumesc mâine mîine mă ne nevoie nici nicăieri nimeni nimeri nimic nişte noastre noastră noi noroc nostru nouă noştri nu opt ori oricare orice oricine oricum oricând oricât oricînd oricît oriunde patra patru patrulea pe pentru peste pic poate pot prea prima primul prin puţin puţina puţină până pînă rog sa sale sau se spate spre sub sunt suntem sunteţi sută sînt sîntem sînteţi să săi său ta tale te timp tine toate toată tot totuşi toţi trei treia treilea tu tăi tău un una unde undeva unei uneia unele uneori unii unor unora unu unui unuia unul vi voastre voastră voi vostru vouă voştri vreme vreo vreun vă zece zero zi zice îi îl îmi împotriva în înainte înaintea încotro încât încît între întrucât întrucît îţi 
ăla ălea ăsta ăstea ăştia şapte şase şi ştiu ţi ţie".split(" ")),e.Pipeline.registerFunction(e.ro.stopWordFilter,"stopWordFilter-ro")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.ru.min.js b/assets/javascripts/lunr/min/lunr.ru.min.js new file mode 100644 index 00000000..186cc485 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ru.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Russian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,n){"function"==typeof define&&define.amd?define(n):"object"==typeof exports?module.exports=n():n()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.ru=function(){this.pipeline.reset(),this.pipeline.add(e.ru.trimmer,e.ru.stopWordFilter,e.ru.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ru.stemmer))},e.ru.wordCharacters="Ѐ-҄҇-ԯᴫᵸⷠ-ⷿꙀ-ꚟ︮︯",e.ru.trimmer=e.trimmerSupport.generateTrimmer(e.ru.wordCharacters),e.Pipeline.registerFunction(e.ru.trimmer,"trimmer-ru"),e.ru.stemmer=function(){var n=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,t=new function(){function e(){for(;!W.in_grouping(S,1072,1103);){if(W.cursor>=W.limit)return!1;W.cursor++}return!0}function t(){for(;!W.out_grouping(S,1072,1103);){if(W.cursor>=W.limit)return!1;W.cursor++}return!0}function w(){b=W.limit,_=b,e()&&(b=W.cursor,t()&&e()&&t()&&(_=W.cursor))}function i(){return _<=W.cursor}function u(e,n){var r,t;if(W.ket=W.cursor,r=W.find_among_b(e,n)){switch(W.bra=W.cursor,r){case 1:if(t=W.limit-W.cursor,!W.eq_s_b(1,"а")&&(W.cursor=W.limit-t,!W.eq_s_b(1,"я")))return!1;case 2:W.slice_del()}return!0}return!1}function o(){return u(h,9)}function s(e,n){var r;return W.ket=W.cursor,!!(r=W.find_among_b(e,n))&&(W.bra=W.cursor,1==r&&W.slice_del(),!0)}function c(){return s(g,26)}function m(){return!!c()&&(u(C,8),!0)}function f(){return s(k,2)}function l(){return u(P,46)}function a(){s(v,36)}function p(){var e;W.ket=W.cursor,(e=W.find_among_b(F,2))&&(W.bra=W.cursor,i()&&1==e&&W.slice_del())}function d(){var e;if(W.ket=W.cursor,e=W.find_among_b(q,4))switch(W.bra=W.cursor,e){case 1:if(W.slice_del(),W.ket=W.cursor,!W.eq_s_b(1,"н"))break;W.bra=W.cursor;case 2:if(!W.eq_s_b(1,"н"))break;case 3:W.slice_del()}}var _,b,h=[new n("в",-1,1),new n("ив",0,2),new n("ыв",0,2),new n("вши",-1,1),new n("ивши",3,2),new n("ывши",3,2),new n("вшись",-1,1),new n("ившись",6,2),new n("ывшись",6,2)],g=[new n("ее",-1,1),new n("ие",-1,1),new n("ое",-1,1),new n("ые",-1,1),new n("ими",-1,1),new n("ыми",-1,1),new n("ей",-1,1),new n("ий",-1,1),new n("ой",-1,1),new n("ый",-1,1),new n("ем",-1,1),new n("им",-1,1),new n("ом",-1,1),new n("ым",-1,1),new n("его",-1,1),new n("ого",-1,1),new n("ему",-1,1),new n("ому",-1,1),new n("их",-1,1),new n("ых",-1,1),new n("ею",-1,1),new n("ою",-1,1),new n("ую",-1,1),new n("юю",-1,1),new n("ая",-1,1),new n("яя",-1,1)],C=[new n("ем",-1,1),new n("нн",-1,1),new n("вш",-1,1),new n("ивш",2,2),new n("ывш",2,2),new n("щ",-1,1),new n("ющ",5,1),new n("ующ",6,2)],k=[new n("сь",-1,1),new 
n("ся",-1,1)],P=[new n("ла",-1,1),new n("ила",0,2),new n("ыла",0,2),new n("на",-1,1),new n("ена",3,2),new n("ете",-1,1),new n("ите",-1,2),new n("йте",-1,1),new n("ейте",7,2),new n("уйте",7,2),new n("ли",-1,1),new n("или",10,2),new n("ыли",10,2),new n("й",-1,1),new n("ей",13,2),new n("уй",13,2),new n("л",-1,1),new n("ил",16,2),new n("ыл",16,2),new n("ем",-1,1),new n("им",-1,2),new n("ым",-1,2),new n("н",-1,1),new n("ен",22,2),new n("ло",-1,1),new n("ило",24,2),new n("ыло",24,2),new n("но",-1,1),new n("ено",27,2),new n("нно",27,1),new n("ет",-1,1),new n("ует",30,2),new n("ит",-1,2),new n("ыт",-1,2),new n("ют",-1,1),new n("уют",34,2),new n("ят",-1,2),new n("ны",-1,1),new n("ены",37,2),new n("ть",-1,1),new n("ить",39,2),new n("ыть",39,2),new n("ешь",-1,1),new n("ишь",-1,2),new n("ю",-1,2),new n("ую",44,2)],v=[new n("а",-1,1),new n("ев",-1,1),new n("ов",-1,1),new n("е",-1,1),new n("ие",3,1),new n("ье",3,1),new n("и",-1,1),new n("еи",6,1),new n("ии",6,1),new n("ами",6,1),new n("ями",6,1),new n("иями",10,1),new n("й",-1,1),new n("ей",12,1),new n("ией",13,1),new n("ий",12,1),new n("ой",12,1),new n("ам",-1,1),new n("ем",-1,1),new n("ием",18,1),new n("ом",-1,1),new n("ям",-1,1),new n("иям",21,1),new n("о",-1,1),new n("у",-1,1),new n("ах",-1,1),new n("ях",-1,1),new n("иях",26,1),new n("ы",-1,1),new n("ь",-1,1),new n("ю",-1,1),new n("ию",30,1),new n("ью",30,1),new n("я",-1,1),new n("ия",33,1),new n("ья",33,1)],F=[new n("ост",-1,1),new n("ость",-1,1)],q=[new n("ейше",-1,1),new n("н",-1,2),new n("ейш",-1,1),new n("ь",-1,3)],S=[33,65,8,232],W=new r;this.setCurrent=function(e){W.setCurrent(e)},this.getCurrent=function(){return W.getCurrent()},this.stem=function(){return w(),W.cursor=W.limit,!(W.cursor=i&&(e-=i,t[e>>3]&1<<(7&e)))return this.cursor++,!0}return!1},in_grouping_b:function(t,i,s){if(this.cursor>this.limit_backward){var e=r.charCodeAt(this.cursor-1);if(e<=s&&e>=i&&(e-=i,t[e>>3]&1<<(7&e)))return this.cursor--,!0}return!1},out_grouping:function(t,i,s){if(this.cursors||e>3]&1<<(7&e)))return this.cursor++,!0}return!1},out_grouping_b:function(t,i,s){if(this.cursor>this.limit_backward){var e=r.charCodeAt(this.cursor-1);if(e>s||e>3]&1<<(7&e)))return this.cursor--,!0}return!1},eq_s:function(t,i){if(this.limit-this.cursor>1),f=0,l=o0||e==s||c)break;c=!0}}for(;;){var _=t[s];if(o>=_.s_size){if(this.cursor=n+_.s_size,!_.method)return _.result;var b=_.method();if(this.cursor=n+_.s_size,b)return _.result}if((s=_.substring_i)<0)return 0}},find_among_b:function(t,i){for(var s=0,e=i,n=this.cursor,u=this.limit_backward,o=0,h=0,c=!1;;){for(var a=s+(e-s>>1),f=0,l=o=0;m--){if(n-l==u){f=-1;break}if(f=r.charCodeAt(n-1-l)-_.s[m])break;l++}if(f<0?(e=a,h=l):(s=a,o=l),e-s<=1){if(s>0||e==s||c)break;c=!0}}for(;;){var _=t[s];if(o>=_.s_size){if(this.cursor=n-_.s_size,!_.method)return _.result;var b=_.method();if(this.cursor=n-_.s_size,b)return _.result}if((s=_.substring_i)<0)return 0}},replace_s:function(t,i,s){var e=s.length-(i-t),n=r.substring(0,t),u=r.substring(i);return r=n+s+u,this.limit+=e,this.cursor>=i?this.cursor+=e:this.cursor>t&&(this.cursor=t),e},slice_check:function(){if(this.bra<0||this.bra>this.ket||this.ket>this.limit||this.limit>r.length)throw"faulty slice operation"},slice_from:function(r){this.slice_check(),this.replace_s(this.bra,this.ket,r)},slice_del:function(){this.slice_from("")},insert:function(r,t,i){var s=this.replace_s(r,t,i);r<=this.bra&&(this.bra+=s),r<=this.ket&&(this.ket+=s)},slice_to:function(){return this.slice_check(),r.substring(this.bra,this.ket)},eq_v_b:function(r){return 
this.eq_s_b(r.length,r)}}}},r.trimmerSupport={generateTrimmer:function(r){var t=new RegExp("^[^"+r+"]+"),i=new RegExp("[^"+r+"]+$");return function(r){return"function"==typeof r.update?r.update(function(r){return r.replace(t,"").replace(i,"")}):r.replace(t,"").replace(i,"")}}}}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.sv.min.js b/assets/javascripts/lunr/min/lunr.sv.min.js new file mode 100644 index 00000000..3e5eb640 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.sv.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Swedish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.sv=function(){this.pipeline.reset(),this.pipeline.add(e.sv.trimmer,e.sv.stopWordFilter,e.sv.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.sv.stemmer))},e.sv.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.sv.trimmer=e.trimmerSupport.generateTrimmer(e.sv.wordCharacters),e.Pipeline.registerFunction(e.sv.trimmer,"trimmer-sv"),e.sv.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,t=new function(){function e(){var e,r=w.cursor+3;if(o=w.limit,0<=r||r<=w.limit){for(a=r;;){if(e=w.cursor,w.in_grouping(l,97,246)){w.cursor=e;break}if(w.cursor=e,w.cursor>=w.limit)return;w.cursor++}for(;!w.out_grouping(l,97,246);){if(w.cursor>=w.limit)return;w.cursor++}o=w.cursor,o=o&&(w.limit_backward=o,w.cursor=w.limit,w.ket=w.cursor,e=w.find_among_b(u,37),w.limit_backward=r,e))switch(w.bra=w.cursor,e){case 1:w.slice_del();break;case 2:w.in_grouping_b(d,98,121)&&w.slice_del()}}function i(){var e=w.limit_backward;w.cursor>=o&&(w.limit_backward=o,w.cursor=w.limit,w.find_among_b(c,7)&&(w.cursor=w.limit,w.ket=w.cursor,w.cursor>w.limit_backward&&(w.bra=--w.cursor,w.slice_del())),w.limit_backward=e)}function s(){var e,r;if(w.cursor>=o){if(r=w.limit_backward,w.limit_backward=o,w.cursor=w.limit,w.ket=w.cursor,e=w.find_among_b(m,5))switch(w.bra=w.cursor,e){case 1:w.slice_del();break;case 2:w.slice_from("lös");break;case 3:w.slice_from("full")}w.limit_backward=r}}var a,o,u=[new r("a",-1,1),new r("arna",0,1),new r("erna",0,1),new r("heterna",2,1),new r("orna",0,1),new r("ad",-1,1),new r("e",-1,1),new r("ade",6,1),new r("ande",6,1),new r("arne",6,1),new r("are",6,1),new r("aste",6,1),new r("en",-1,1),new r("anden",12,1),new r("aren",12,1),new r("heten",12,1),new r("ern",-1,1),new r("ar",-1,1),new r("er",-1,1),new r("heter",18,1),new r("or",-1,1),new r("s",-1,2),new r("as",21,1),new r("arnas",22,1),new r("ernas",22,1),new r("ornas",22,1),new r("es",21,1),new r("ades",26,1),new r("andes",26,1),new r("ens",21,1),new r("arens",29,1),new r("hetens",29,1),new r("erns",21,1),new r("at",-1,1),new r("andet",-1,1),new r("het",-1,1),new r("ast",-1,1)],c=[new r("dd",-1,-1),new r("gd",-1,-1),new r("nn",-1,-1),new 
r("dt",-1,-1),new r("gt",-1,-1),new r("kt",-1,-1),new r("tt",-1,-1)],m=[new r("ig",-1,1),new r("lig",0,1),new r("els",-1,1),new r("fullt",-1,3),new r("löst",-1,2)],l=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,24,0,32],d=[119,127,149],w=new n;this.setCurrent=function(e){w.setCurrent(e)},this.getCurrent=function(){return w.getCurrent()},this.stem=function(){var r=w.cursor;return e(),w.limit_backward=r,w.cursor=w.limit,t(),w.cursor=w.limit,i(),w.cursor=w.limit,s(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return t.setCurrent(e),t.stem(),t.getCurrent()}):(t.setCurrent(e),t.stem(),t.getCurrent())}}(),e.Pipeline.registerFunction(e.sv.stemmer,"stemmer-sv"),e.sv.stopWordFilter=e.generateStopWordFilter("alla allt att av blev bli blir blivit de dem den denna deras dess dessa det detta dig din dina ditt du där då efter ej eller en er era ert ett från för ha hade han hans har henne hennes hon honom hur här i icke ingen inom inte jag ju kan kunde man med mellan men mig min mina mitt mot mycket ni nu när någon något några och om oss på samma sedan sig sin sina sitta själv skulle som så sådan sådana sådant till under upp ut utan vad var vara varför varit varje vars vart vem vi vid vilka vilkas vilken vilket vår våra vårt än är åt över".split(" ")),e.Pipeline.registerFunction(e.sv.stopWordFilter,"stopWordFilter-sv")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.ta.min.js b/assets/javascripts/lunr/min/lunr.ta.min.js new file mode 100644 index 00000000..a644bed2 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ta.min.js @@ -0,0 +1 @@ +!function(e,t){"function"==typeof define&&define.amd?define(t):"object"==typeof exports?module.exports=t():t()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.ta=function(){this.pipeline.reset(),this.pipeline.add(e.ta.trimmer,e.ta.stopWordFilter,e.ta.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ta.stemmer))},e.ta.wordCharacters="஀-உஊ-ஏஐ-ஙச-ட஠-னப-யர-ஹ஺-ிீ-௉ொ-௏ௐ-௙௚-௟௠-௩௪-௯௰-௹௺-௿a-zA-Za-zA-Z0-90-9",e.ta.trimmer=e.trimmerSupport.generateTrimmer(e.ta.wordCharacters),e.Pipeline.registerFunction(e.ta.trimmer,"trimmer-ta"),e.ta.stopWordFilter=e.generateStopWordFilter("அங்கு அங்கே அது அதை அந்த அவர் அவர்கள் அவள் அவன் அவை ஆக ஆகவே ஆகையால் ஆதலால் ஆதலினால் ஆனாலும் ஆனால் இங்கு இங்கே இது இதை இந்த இப்படி இவர் இவர்கள் இவள் இவன் இவை இவ்வளவு உனக்கு உனது உன் உன்னால் எங்கு எங்கே எது எதை எந்த எப்படி எவர் எவர்கள் எவள் எவன் எவை எவ்வளவு எனக்கு எனது எனவே என் என்ன என்னால் ஏது ஏன் தனது தன்னால் தானே தான் நாங்கள் நாம் நான் நீ நீங்கள்".split(" ")),e.ta.stemmer=function(){return function(e){return"function"==typeof e.update?e.update(function(e){return e}):e}}();var t=e.wordcut;t.init(),e.ta.tokenizer=function(r){if(!arguments.length||null==r||void 0==r)return[];if(Array.isArray(r))return r.map(function(t){return isLunr2?new e.Token(t.toLowerCase()):t.toLowerCase()});var i=r.toString().toLowerCase().replace(/^\s+/,"");return t.cut(i).split("|")},e.Pipeline.registerFunction(e.ta.stemmer,"stemmer-ta"),e.Pipeline.registerFunction(e.ta.stopWordFilter,"stopWordFilter-ta")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.th.min.js b/assets/javascripts/lunr/min/lunr.th.min.js new file mode 100644 index 00000000..dee3aac6 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.th.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var r="2"==e.version[0];e.th=function(){this.pipeline.reset(),this.pipeline.add(e.th.trimmer),r?this.tokenizer=e.th.tokenizer:(e.tokenizer&&(e.tokenizer=e.th.tokenizer),this.tokenizerFn&&(this.tokenizerFn=e.th.tokenizer))},e.th.wordCharacters="[฀-๿]",e.th.trimmer=e.trimmerSupport.generateTrimmer(e.th.wordCharacters),e.Pipeline.registerFunction(e.th.trimmer,"trimmer-th");var t=e.wordcut;t.init(),e.th.tokenizer=function(i){if(!arguments.length||null==i||void 0==i)return[];if(Array.isArray(i))return i.map(function(t){return r?new e.Token(t):t});var n=i.toString().replace(/^\s+/,"");return t.cut(n).split("|")}}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.tr.min.js b/assets/javascripts/lunr/min/lunr.tr.min.js new file mode 100644 index 00000000..563f6ec1 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.tr.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Turkish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(r,i){"function"==typeof define&&define.amd?define(i):"object"==typeof exports?module.exports=i():i()(r.lunr)}(this,function(){return function(r){if(void 0===r)throw new Error("Lunr is not present. 
Please include / require Lunr before this script.");if(void 0===r.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");r.tr=function(){this.pipeline.reset(),this.pipeline.add(r.tr.trimmer,r.tr.stopWordFilter,r.tr.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(r.tr.stemmer))},r.tr.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",r.tr.trimmer=r.trimmerSupport.generateTrimmer(r.tr.wordCharacters),r.Pipeline.registerFunction(r.tr.trimmer,"trimmer-tr"),r.tr.stemmer=function(){var i=r.stemmerSupport.Among,e=r.stemmerSupport.SnowballProgram,n=new function(){function r(r,i,e){for(;;){var n=Dr.limit-Dr.cursor;if(Dr.in_grouping_b(r,i,e)){Dr.cursor=Dr.limit-n;break}if(Dr.cursor=Dr.limit-n,Dr.cursor<=Dr.limit_backward)return!1;Dr.cursor--}return!0}function n(){var i,e;i=Dr.limit-Dr.cursor,r(Wr,97,305);for(var n=0;nDr.limit_backward&&(Dr.cursor--,e=Dr.limit-Dr.cursor,i()))?(Dr.cursor=Dr.limit-e,!0):(Dr.cursor=Dr.limit-n,r()?(Dr.cursor=Dr.limit-n,!1):(Dr.cursor=Dr.limit-n,!(Dr.cursor<=Dr.limit_backward)&&(Dr.cursor--,!!i()&&(Dr.cursor=Dr.limit-n,!0))))}function u(r){return t(r,function(){return Dr.in_grouping_b(Wr,97,305)})}function o(){return u(function(){return Dr.eq_s_b(1,"n")})}function s(){return u(function(){return Dr.eq_s_b(1,"s")})}function c(){return u(function(){return Dr.eq_s_b(1,"y")})}function l(){return t(function(){return Dr.in_grouping_b(Lr,105,305)},function(){return Dr.out_grouping_b(Wr,97,305)})}function a(){return Dr.find_among_b(ur,10)&&l()}function m(){return n()&&Dr.in_grouping_b(Lr,105,305)&&s()}function d(){return Dr.find_among_b(or,2)}function f(){return n()&&Dr.in_grouping_b(Lr,105,305)&&c()}function b(){return n()&&Dr.find_among_b(sr,4)}function w(){return n()&&Dr.find_among_b(cr,4)&&o()}function _(){return n()&&Dr.find_among_b(lr,2)&&c()}function k(){return n()&&Dr.find_among_b(ar,2)}function p(){return n()&&Dr.find_among_b(mr,4)}function g(){return n()&&Dr.find_among_b(dr,2)}function y(){return n()&&Dr.find_among_b(fr,4)}function z(){return n()&&Dr.find_among_b(br,2)}function v(){return n()&&Dr.find_among_b(wr,2)&&c()}function h(){return Dr.eq_s_b(2,"ki")}function q(){return n()&&Dr.find_among_b(_r,2)&&o()}function C(){return n()&&Dr.find_among_b(kr,4)&&c()}function P(){return n()&&Dr.find_among_b(pr,4)}function F(){return n()&&Dr.find_among_b(gr,4)&&c()}function S(){return Dr.find_among_b(yr,4)}function W(){return n()&&Dr.find_among_b(zr,2)}function L(){return n()&&Dr.find_among_b(vr,4)}function x(){return n()&&Dr.find_among_b(hr,8)}function A(){return Dr.find_among_b(qr,2)}function E(){return n()&&Dr.find_among_b(Cr,32)&&c()}function j(){return Dr.find_among_b(Pr,8)&&c()}function T(){return n()&&Dr.find_among_b(Fr,4)&&c()}function Z(){return Dr.eq_s_b(3,"ken")&&c()}function B(){var r=Dr.limit-Dr.cursor;return!(T()||(Dr.cursor=Dr.limit-r,E()||(Dr.cursor=Dr.limit-r,j()||(Dr.cursor=Dr.limit-r,Z()))))}function D(){if(A()){var r=Dr.limit-Dr.cursor;if(S()||(Dr.cursor=Dr.limit-r,W()||(Dr.cursor=Dr.limit-r,C()||(Dr.cursor=Dr.limit-r,P()||(Dr.cursor=Dr.limit-r,F()||(Dr.cursor=Dr.limit-r))))),T())return!1}return!0}function G(){if(W()){Dr.bra=Dr.cursor,Dr.slice_del();var r=Dr.limit-Dr.cursor;return Dr.ket=Dr.cursor,x()||(Dr.cursor=Dr.limit-r,E()||(Dr.cursor=Dr.limit-r,j()||(Dr.cursor=Dr.limit-r,T()||(Dr.cursor=Dr.limit-r)))),nr=!1,!1}return!0}function H(){if(!L())return!0;var 
r=Dr.limit-Dr.cursor;return!E()&&(Dr.cursor=Dr.limit-r,!j())}function I(){var r,i=Dr.limit-Dr.cursor;return!(S()||(Dr.cursor=Dr.limit-i,F()||(Dr.cursor=Dr.limit-i,P()||(Dr.cursor=Dr.limit-i,C()))))||(Dr.bra=Dr.cursor,Dr.slice_del(),r=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,T()||(Dr.cursor=Dr.limit-r),!1)}function J(){var r,i=Dr.limit-Dr.cursor;if(Dr.ket=Dr.cursor,nr=!0,B()&&(Dr.cursor=Dr.limit-i,D()&&(Dr.cursor=Dr.limit-i,G()&&(Dr.cursor=Dr.limit-i,H()&&(Dr.cursor=Dr.limit-i,I()))))){if(Dr.cursor=Dr.limit-i,!x())return;Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,r=Dr.limit-Dr.cursor,S()||(Dr.cursor=Dr.limit-r,W()||(Dr.cursor=Dr.limit-r,C()||(Dr.cursor=Dr.limit-r,P()||(Dr.cursor=Dr.limit-r,F()||(Dr.cursor=Dr.limit-r))))),T()||(Dr.cursor=Dr.limit-r)}Dr.bra=Dr.cursor,Dr.slice_del()}function K(){var r,i,e,n;if(Dr.ket=Dr.cursor,h()){if(r=Dr.limit-Dr.cursor,p())return Dr.bra=Dr.cursor,Dr.slice_del(),i=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,W()?(Dr.bra=Dr.cursor,Dr.slice_del(),K()):(Dr.cursor=Dr.limit-i,a()&&(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()))),!0;if(Dr.cursor=Dr.limit-r,w()){if(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,e=Dr.limit-Dr.cursor,d())Dr.bra=Dr.cursor,Dr.slice_del();else{if(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,!a()&&(Dr.cursor=Dr.limit-e,!m()&&(Dr.cursor=Dr.limit-e,!K())))return!0;Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K())}return!0}if(Dr.cursor=Dr.limit-r,g()){if(n=Dr.limit-Dr.cursor,d())Dr.bra=Dr.cursor,Dr.slice_del();else if(Dr.cursor=Dr.limit-n,m())Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K());else if(Dr.cursor=Dr.limit-n,!K())return!1;return!0}}return!1}function M(r){if(Dr.ket=Dr.cursor,!g()&&(Dr.cursor=Dr.limit-r,!k()))return!1;var i=Dr.limit-Dr.cursor;if(d())Dr.bra=Dr.cursor,Dr.slice_del();else if(Dr.cursor=Dr.limit-i,m())Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K());else if(Dr.cursor=Dr.limit-i,!K())return!1;return!0}function N(r){if(Dr.ket=Dr.cursor,!z()&&(Dr.cursor=Dr.limit-r,!b()))return!1;var i=Dr.limit-Dr.cursor;return!(!m()&&(Dr.cursor=Dr.limit-i,!d()))&&(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()),!0)}function O(){var r,i=Dr.limit-Dr.cursor;return Dr.ket=Dr.cursor,!(!w()&&(Dr.cursor=Dr.limit-i,!v()))&&(Dr.bra=Dr.cursor,Dr.slice_del(),r=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,!(!W()||(Dr.bra=Dr.cursor,Dr.slice_del(),!K()))||(Dr.cursor=Dr.limit-r,Dr.ket=Dr.cursor,!(a()||(Dr.cursor=Dr.limit-r,m()||(Dr.cursor=Dr.limit-r,K())))||(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()),!0)))}function Q(){var r,i,e=Dr.limit-Dr.cursor;if(Dr.ket=Dr.cursor,!p()&&(Dr.cursor=Dr.limit-e,!f()&&(Dr.cursor=Dr.limit-e,!_())))return!1;if(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,r=Dr.limit-Dr.cursor,a())Dr.bra=Dr.cursor,Dr.slice_del(),i=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,W()||(Dr.cursor=Dr.limit-i);else if(Dr.cursor=Dr.limit-r,!W())return!0;return Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,K(),!0}function R(){var r,i,e=Dr.limit-Dr.cursor;if(Dr.ket=Dr.cursor,W())return Dr.bra=Dr.cursor,Dr.slice_del(),void 
K();if(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,q())if(Dr.bra=Dr.cursor,Dr.slice_del(),r=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,d())Dr.bra=Dr.cursor,Dr.slice_del();else{if(Dr.cursor=Dr.limit-r,Dr.ket=Dr.cursor,!a()&&(Dr.cursor=Dr.limit-r,!m())){if(Dr.cursor=Dr.limit-r,Dr.ket=Dr.cursor,!W())return;if(Dr.bra=Dr.cursor,Dr.slice_del(),!K())return}Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K())}else if(Dr.cursor=Dr.limit-e,!M(e)&&(Dr.cursor=Dr.limit-e,!N(e))){if(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,y())return Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,i=Dr.limit-Dr.cursor,void(a()?(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K())):(Dr.cursor=Dr.limit-i,W()?(Dr.bra=Dr.cursor,Dr.slice_del(),K()):(Dr.cursor=Dr.limit-i,K())));if(Dr.cursor=Dr.limit-e,!O()){if(Dr.cursor=Dr.limit-e,d())return Dr.bra=Dr.cursor,void Dr.slice_del();Dr.cursor=Dr.limit-e,K()||(Dr.cursor=Dr.limit-e,Q()||(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,(a()||(Dr.cursor=Dr.limit-e,m()))&&(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()))))}}}function U(){var r;if(Dr.ket=Dr.cursor,r=Dr.find_among_b(Sr,4))switch(Dr.bra=Dr.cursor,r){case 1:Dr.slice_from("p");break;case 2:Dr.slice_from("ç");break;case 3:Dr.slice_from("t");break;case 4:Dr.slice_from("k")}}function V(){for(;;){var r=Dr.limit-Dr.cursor;if(Dr.in_grouping_b(Wr,97,305)){Dr.cursor=Dr.limit-r;break}if(Dr.cursor=Dr.limit-r,Dr.cursor<=Dr.limit_backward)return!1;Dr.cursor--}return!0}function X(r,i,e){if(Dr.cursor=Dr.limit-r,V()){var n=Dr.limit-Dr.cursor;if(!Dr.eq_s_b(1,i)&&(Dr.cursor=Dr.limit-n,!Dr.eq_s_b(1,e)))return!0;Dr.cursor=Dr.limit-r;var t=Dr.cursor;return Dr.insert(Dr.cursor,Dr.cursor,e),Dr.cursor=t,!1}return!0}function Y(){var r=Dr.limit-Dr.cursor;(Dr.eq_s_b(1,"d")||(Dr.cursor=Dr.limit-r,Dr.eq_s_b(1,"g")))&&X(r,"a","ı")&&X(r,"e","i")&&X(r,"o","u")&&X(r,"ö","ü")}function $(){for(var r,i=Dr.cursor,e=2;;){for(r=Dr.cursor;!Dr.in_grouping(Wr,97,305);){if(Dr.cursor>=Dr.limit)return Dr.cursor=r,!(e>0)&&(Dr.cursor=i,!0);Dr.cursor++}e--}}function rr(r,i,e){for(;!Dr.eq_s(i,e);){if(Dr.cursor>=Dr.limit)return!0;Dr.cursor++}return(tr=i)!=Dr.limit||(Dr.cursor=r,!1)}function ir(){var r=Dr.cursor;return!rr(r,2,"ad")||(Dr.cursor=r,!rr(r,5,"soyad"))}function er(){var r=Dr.cursor;return!ir()&&(Dr.limit_backward=r,Dr.cursor=Dr.limit,Y(),Dr.cursor=Dr.limit,U(),!0)}var nr,tr,ur=[new i("m",-1,-1),new i("n",-1,-1),new i("miz",-1,-1),new i("niz",-1,-1),new i("muz",-1,-1),new i("nuz",-1,-1),new i("müz",-1,-1),new i("nüz",-1,-1),new i("mız",-1,-1),new i("nız",-1,-1)],or=[new i("leri",-1,-1),new i("ları",-1,-1)],sr=[new i("ni",-1,-1),new i("nu",-1,-1),new i("nü",-1,-1),new i("nı",-1,-1)],cr=[new i("in",-1,-1),new i("un",-1,-1),new i("ün",-1,-1),new i("ın",-1,-1)],lr=[new i("a",-1,-1),new i("e",-1,-1)],ar=[new i("na",-1,-1),new i("ne",-1,-1)],mr=[new i("da",-1,-1),new i("ta",-1,-1),new i("de",-1,-1),new i("te",-1,-1)],dr=[new i("nda",-1,-1),new i("nde",-1,-1)],fr=[new i("dan",-1,-1),new i("tan",-1,-1),new i("den",-1,-1),new i("ten",-1,-1)],br=[new i("ndan",-1,-1),new i("nden",-1,-1)],wr=[new i("la",-1,-1),new i("le",-1,-1)],_r=[new i("ca",-1,-1),new i("ce",-1,-1)],kr=[new i("im",-1,-1),new i("um",-1,-1),new i("üm",-1,-1),new i("ım",-1,-1)],pr=[new i("sin",-1,-1),new i("sun",-1,-1),new i("sün",-1,-1),new i("sın",-1,-1)],gr=[new i("iz",-1,-1),new i("uz",-1,-1),new i("üz",-1,-1),new i("ız",-1,-1)],yr=[new i("siniz",-1,-1),new i("sunuz",-1,-1),new i("sünüz",-1,-1),new 
i("sınız",-1,-1)],zr=[new i("lar",-1,-1),new i("ler",-1,-1)],vr=[new i("niz",-1,-1),new i("nuz",-1,-1),new i("nüz",-1,-1),new i("nız",-1,-1)],hr=[new i("dir",-1,-1),new i("tir",-1,-1),new i("dur",-1,-1),new i("tur",-1,-1),new i("dür",-1,-1),new i("tür",-1,-1),new i("dır",-1,-1),new i("tır",-1,-1)],qr=[new i("casına",-1,-1),new i("cesine",-1,-1)],Cr=[new i("di",-1,-1),new i("ti",-1,-1),new i("dik",-1,-1),new i("tik",-1,-1),new i("duk",-1,-1),new i("tuk",-1,-1),new i("dük",-1,-1),new i("tük",-1,-1),new i("dık",-1,-1),new i("tık",-1,-1),new i("dim",-1,-1),new i("tim",-1,-1),new i("dum",-1,-1),new i("tum",-1,-1),new i("düm",-1,-1),new i("tüm",-1,-1),new i("dım",-1,-1),new i("tım",-1,-1),new i("din",-1,-1),new i("tin",-1,-1),new i("dun",-1,-1),new i("tun",-1,-1),new i("dün",-1,-1),new i("tün",-1,-1),new i("dın",-1,-1),new i("tın",-1,-1),new i("du",-1,-1),new i("tu",-1,-1),new i("dü",-1,-1),new i("tü",-1,-1),new i("dı",-1,-1),new i("tı",-1,-1)],Pr=[new i("sa",-1,-1),new i("se",-1,-1),new i("sak",-1,-1),new i("sek",-1,-1),new i("sam",-1,-1),new i("sem",-1,-1),new i("san",-1,-1),new i("sen",-1,-1)],Fr=[new i("miş",-1,-1),new i("muş",-1,-1),new i("müş",-1,-1),new i("mış",-1,-1)],Sr=[new i("b",-1,1),new i("c",-1,2),new i("d",-1,3),new i("ğ",-1,4)],Wr=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,32,8,0,0,0,0,0,0,1],Lr=[1,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,0,0,0,0,0,0,1],xr=[1,64,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],Ar=[17,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,130],Er=[1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],jr=[17],Tr=[65],Zr=[65],Br=[["a",xr,97,305],["e",Ar,101,252],["ı",Er,97,305],["i",jr,101,105],["o",Tr,111,117],["ö",Zr,246,252],["u",Tr,111,117]],Dr=new e;this.setCurrent=function(r){Dr.setCurrent(r)},this.getCurrent=function(){return Dr.getCurrent()},this.stem=function(){return!!($()&&(Dr.limit_backward=Dr.cursor,Dr.cursor=Dr.limit,J(),Dr.cursor=Dr.limit,nr&&(R(),Dr.cursor=Dr.limit_backward,er())))}};return function(r){return"function"==typeof r.update?r.update(function(r){return n.setCurrent(r),n.stem(),n.getCurrent()}):(n.setCurrent(r),n.stem(),n.getCurrent())}}(),r.Pipeline.registerFunction(r.tr.stemmer,"stemmer-tr"),r.tr.stopWordFilter=r.generateStopWordFilter("acaba altmış altı ama ancak arada aslında ayrıca bana bazı belki ben benden beni benim beri beş bile bin bir biri birkaç birkez birçok birşey birşeyi biz bizden bize bizi bizim bu buna bunda bundan bunlar bunları bunların bunu bunun burada böyle böylece da daha dahi de defa değil diye diğer doksan dokuz dolayı dolayısıyla dört edecek eden ederek edilecek ediliyor edilmesi ediyor elli en etmesi etti ettiği ettiğini eğer gibi göre halen hangi hatta hem henüz hep hepsi her herhangi herkesin hiç hiçbir iki ile ilgili ise itibaren itibariyle için işte kadar karşın katrilyon kendi kendilerine kendini kendisi kendisine kendisini kez ki kim kimden kime kimi kimse kırk milyar milyon mu mü mı nasıl ne neden nedenle nerde nerede nereye niye niçin o olan olarak oldu olduklarını olduğu olduğunu olmadı olmadığı olmak olması olmayan olmaz olsa olsun olup olur olursa oluyor on ona ondan onlar onlardan onları onların onu onun otuz oysa pek rağmen sadece sanki sekiz seksen sen senden seni senin siz sizden sizi sizin tarafından trilyon tüm var vardı ve veya ya yani yapacak yapmak yaptı yaptıkları yaptığı yaptığını yapılan yapılması yapıyor yedi yerine yetmiş yine yirmi yoksa yüz zaten çok çünkü öyle üzere üç şey şeyden şeyi şeyler şu şuna şunda şundan şunları şunu şöyle".split(" 
")),r.Pipeline.registerFunction(r.tr.stopWordFilter,"stopWordFilter-tr")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.vi.min.js b/assets/javascripts/lunr/min/lunr.vi.min.js new file mode 100644 index 00000000..22aed28c --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.vi.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.vi=function(){this.pipeline.reset(),this.pipeline.add(e.vi.stopWordFilter,e.vi.trimmer)},e.vi.wordCharacters="[A-Za-ẓ̀͐́͑̉̃̓ÂâÊêÔôĂ-ăĐ-đƠ-ơƯ-ư]",e.vi.trimmer=e.trimmerSupport.generateTrimmer(e.vi.wordCharacters),e.Pipeline.registerFunction(e.vi.trimmer,"trimmer-vi"),e.vi.stopWordFilter=e.generateStopWordFilter("là cái nhưng mà".split(" "))}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.zh.min.js b/assets/javascripts/lunr/min/lunr.zh.min.js new file mode 100644 index 00000000..9838ef96 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.zh.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r(require("@node-rs/jieba")):r()(e.lunr)}(this,function(e){return function(r,t){if(void 0===r)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===r.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var i="2"==r.version[0];r.zh=function(){this.pipeline.reset(),this.pipeline.add(r.zh.trimmer,r.zh.stopWordFilter,r.zh.stemmer),i?this.tokenizer=r.zh.tokenizer:(r.tokenizer&&(r.tokenizer=r.zh.tokenizer),this.tokenizerFn&&(this.tokenizerFn=r.zh.tokenizer))},r.zh.tokenizer=function(n){if(!arguments.length||null==n||void 0==n)return[];if(Array.isArray(n))return n.map(function(e){return i?new r.Token(e.toLowerCase()):e.toLowerCase()});t&&e.load(t);var o=n.toString().trim().toLowerCase(),s=[];e.cut(o,!0).forEach(function(e){s=s.concat(e.split(" "))}),s=s.filter(function(e){return!!e});var u=0;return s.map(function(e,t){if(i){var n=o.indexOf(e,u),s={};return s.position=[n,e.length],s.index=t,u=n,new r.Token(e,s)}return e})},r.zh.wordCharacters="\\w一-龥",r.zh.trimmer=r.trimmerSupport.generateTrimmer(r.zh.wordCharacters),r.Pipeline.registerFunction(r.zh.trimmer,"trimmer-zh"),r.zh.stemmer=function(){return function(e){return e}}(),r.Pipeline.registerFunction(r.zh.stemmer,"stemmer-zh"),r.zh.stopWordFilter=r.generateStopWordFilter("的 一 不 在 人 有 是 为 以 于 上 他 而 后 之 来 及 了 因 下 可 到 由 这 与 也 此 但 并 个 其 已 无 小 我 们 起 最 再 今 去 好 只 又 或 很 亦 某 把 那 你 乃 它 吧 被 比 别 趁 当 从 到 得 打 凡 儿 尔 该 各 给 跟 和 何 还 即 几 既 看 据 距 靠 啦 了 另 么 每 们 嘛 拿 哪 那 您 凭 且 却 让 仍 啥 如 若 使 谁 虽 随 同 所 她 哇 嗡 往 哪 些 向 沿 哟 用 于 咱 则 怎 曾 至 致 着 诸 自".split(" ")),r.Pipeline.registerFunction(r.zh.stopWordFilter,"stopWordFilter-zh")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/tinyseg.js b/assets/javascripts/lunr/tinyseg.js new file mode 100644 index 00000000..167fa6dd --- /dev/null +++ b/assets/javascripts/lunr/tinyseg.js @@ -0,0 +1,206 @@ +/** + * export the module via AMD, CommonJS or as a browser global + * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js + */ +;(function (root, 
factory) { + if (typeof define === 'function' && define.amd) { + // AMD. Register as an anonymous module. + define(factory) + } else if (typeof exports === 'object') { + /** + * Node. Does not work with strict CommonJS, but + * only CommonJS-like environments that support module.exports, + * like Node. + */ + module.exports = factory() + } else { + // Browser globals (root is window) + factory()(root.lunr); + } +}(this, function () { + /** + * Just return a value to define the module export. + * This example returns an object, but the module + * can return a function as the exported value. + */ + + return function(lunr) { + // TinySegmenter 0.1 -- Super compact Japanese tokenizer in Javascript + // (c) 2008 Taku Kudo + // TinySegmenter is freely distributable under the terms of a new BSD licence. + // For details, see http://chasen.org/~taku/software/TinySegmenter/LICENCE.txt + + function TinySegmenter() { + var patterns = { + "[一二三四五六七八九十百千万億兆]":"M", + "[一-龠々〆ヵヶ]":"H", + "[ぁ-ん]":"I", + "[ァ-ヴーア-ン゙ー]":"K", + "[a-zA-Za-zA-Z]":"A", + "[0-90-9]":"N" + } + this.chartype_ = []; + for (var i in patterns) { + var regexp = new RegExp(i); + this.chartype_.push([regexp, patterns[i]]); + } + + this.BIAS__ = -332 + this.BC1__ = {"HH":6,"II":2461,"KH":406,"OH":-1378}; + this.BC2__ = {"AA":-3267,"AI":2744,"AN":-878,"HH":-4070,"HM":-1711,"HN":4012,"HO":3761,"IA":1327,"IH":-1184,"II":-1332,"IK":1721,"IO":5492,"KI":3831,"KK":-8741,"MH":-3132,"MK":3334,"OO":-2920}; + this.BC3__ = {"HH":996,"HI":626,"HK":-721,"HN":-1307,"HO":-836,"IH":-301,"KK":2762,"MK":1079,"MM":4034,"OA":-1652,"OH":266}; + this.BP1__ = {"BB":295,"OB":304,"OO":-125,"UB":352}; + this.BP2__ = {"BO":60,"OO":-1762}; + this.BQ1__ = {"BHH":1150,"BHM":1521,"BII":-1158,"BIM":886,"BMH":1208,"BNH":449,"BOH":-91,"BOO":-2597,"OHI":451,"OIH":-296,"OKA":1851,"OKH":-1020,"OKK":904,"OOO":2965}; + this.BQ2__ = {"BHH":118,"BHI":-1159,"BHM":466,"BIH":-919,"BKK":-1720,"BKO":864,"OHH":-1139,"OHM":-181,"OIH":153,"UHI":-1146}; + this.BQ3__ = {"BHH":-792,"BHI":2664,"BII":-299,"BKI":419,"BMH":937,"BMM":8335,"BNN":998,"BOH":775,"OHH":2174,"OHM":439,"OII":280,"OKH":1798,"OKI":-793,"OKO":-2242,"OMH":-2402,"OOO":11699}; + this.BQ4__ = {"BHH":-3895,"BIH":3761,"BII":-4654,"BIK":1348,"BKK":-1806,"BMI":-3385,"BOO":-12396,"OAH":926,"OHH":266,"OHK":-2036,"ONN":-973}; + this.BW1__ = {",と":660,",同":727,"B1あ":1404,"B1同":542,"、と":660,"、同":727,"」と":1682,"あっ":1505,"いう":1743,"いっ":-2055,"いる":672,"うし":-4817,"うん":665,"から":3472,"がら":600,"こう":-790,"こと":2083,"こん":-1262,"さら":-4143,"さん":4573,"した":2641,"して":1104,"すで":-3399,"そこ":1977,"それ":-871,"たち":1122,"ため":601,"った":3463,"つい":-802,"てい":805,"てき":1249,"でき":1127,"です":3445,"では":844,"とい":-4915,"とみ":1922,"どこ":3887,"ない":5713,"なっ":3015,"など":7379,"なん":-1113,"にし":2468,"には":1498,"にも":1671,"に対":-912,"の一":-501,"の中":741,"ませ":2448,"まで":1711,"まま":2600,"まる":-2155,"やむ":-1947,"よっ":-2565,"れた":2369,"れで":-913,"をし":1860,"を見":731,"亡く":-1886,"京都":2558,"取り":-2784,"大き":-2604,"大阪":1497,"平方":-2314,"引き":-1336,"日本":-195,"本当":-2423,"毎日":-2113,"目指":-724,"B1あ":1404,"B1同":542,"」と":1682}; + this.BW2__ = 
{"..":-11822,"11":-669,"――":-5730,"−−":-13175,"いう":-1609,"うか":2490,"かし":-1350,"かも":-602,"から":-7194,"かれ":4612,"がい":853,"がら":-3198,"きた":1941,"くな":-1597,"こと":-8392,"この":-4193,"させ":4533,"され":13168,"さん":-3977,"しい":-1819,"しか":-545,"した":5078,"して":972,"しな":939,"その":-3744,"たい":-1253,"たた":-662,"ただ":-3857,"たち":-786,"たと":1224,"たは":-939,"った":4589,"って":1647,"っと":-2094,"てい":6144,"てき":3640,"てく":2551,"ては":-3110,"ても":-3065,"でい":2666,"でき":-1528,"でし":-3828,"です":-4761,"でも":-4203,"とい":1890,"とこ":-1746,"とと":-2279,"との":720,"とみ":5168,"とも":-3941,"ない":-2488,"なが":-1313,"など":-6509,"なの":2614,"なん":3099,"にお":-1615,"にし":2748,"にな":2454,"によ":-7236,"に対":-14943,"に従":-4688,"に関":-11388,"のか":2093,"ので":-7059,"のに":-6041,"のの":-6125,"はい":1073,"はが":-1033,"はず":-2532,"ばれ":1813,"まし":-1316,"まで":-6621,"まれ":5409,"めて":-3153,"もい":2230,"もの":-10713,"らか":-944,"らし":-1611,"らに":-1897,"りし":651,"りま":1620,"れた":4270,"れて":849,"れば":4114,"ろう":6067,"われ":7901,"を通":-11877,"んだ":728,"んな":-4115,"一人":602,"一方":-1375,"一日":970,"一部":-1051,"上が":-4479,"会社":-1116,"出て":2163,"分の":-7758,"同党":970,"同日":-913,"大阪":-2471,"委員":-1250,"少な":-1050,"年度":-8669,"年間":-1626,"府県":-2363,"手権":-1982,"新聞":-4066,"日新":-722,"日本":-7068,"日米":3372,"曜日":-601,"朝鮮":-2355,"本人":-2697,"東京":-1543,"然と":-1384,"社会":-1276,"立て":-990,"第に":-1612,"米国":-4268,"11":-669}; + this.BW3__ = {"あた":-2194,"あり":719,"ある":3846,"い.":-1185,"い。":-1185,"いい":5308,"いえ":2079,"いく":3029,"いた":2056,"いっ":1883,"いる":5600,"いわ":1527,"うち":1117,"うと":4798,"えと":1454,"か.":2857,"か。":2857,"かけ":-743,"かっ":-4098,"かに":-669,"から":6520,"かり":-2670,"が,":1816,"が、":1816,"がき":-4855,"がけ":-1127,"がっ":-913,"がら":-4977,"がり":-2064,"きた":1645,"けど":1374,"こと":7397,"この":1542,"ころ":-2757,"さい":-714,"さを":976,"し,":1557,"し、":1557,"しい":-3714,"した":3562,"して":1449,"しな":2608,"しま":1200,"す.":-1310,"す。":-1310,"する":6521,"ず,":3426,"ず、":3426,"ずに":841,"そう":428,"た.":8875,"た。":8875,"たい":-594,"たの":812,"たり":-1183,"たる":-853,"だ.":4098,"だ。":4098,"だっ":1004,"った":-4748,"って":300,"てい":6240,"てお":855,"ても":302,"です":1437,"でに":-1482,"では":2295,"とう":-1387,"とし":2266,"との":541,"とも":-3543,"どう":4664,"ない":1796,"なく":-903,"など":2135,"に,":-1021,"に、":-1021,"にし":1771,"にな":1906,"には":2644,"の,":-724,"の、":-724,"の子":-1000,"は,":1337,"は、":1337,"べき":2181,"まし":1113,"ます":6943,"まっ":-1549,"まで":6154,"まれ":-793,"らし":1479,"られ":6820,"るる":3818,"れ,":854,"れ、":854,"れた":1850,"れて":1375,"れば":-3246,"れる":1091,"われ":-605,"んだ":606,"んで":798,"カ月":990,"会議":860,"入り":1232,"大会":2217,"始め":1681,"市":965,"新聞":-5055,"日,":974,"日、":974,"社会":2024,"カ月":990}; + this.TC1__ = {"AAA":1093,"HHH":1029,"HHM":580,"HII":998,"HOH":-390,"HOM":-331,"IHI":1169,"IOH":-142,"IOI":-1015,"IOM":467,"MMH":187,"OOI":-1832}; + this.TC2__ = {"HHO":2088,"HII":-1023,"HMM":-1154,"IHI":-1965,"KKH":703,"OII":-2649}; + this.TC3__ = {"AAA":-294,"HHH":346,"HHI":-341,"HII":-1088,"HIK":731,"HOH":-1486,"IHH":128,"IHI":-3041,"IHO":-1935,"IIH":-825,"IIM":-1035,"IOI":-542,"KHH":-1216,"KKA":491,"KKH":-1217,"KOK":-1009,"MHH":-2694,"MHM":-457,"MHO":123,"MMH":-471,"NNH":-1689,"NNO":662,"OHO":-3393}; + this.TC4__ = {"HHH":-203,"HHI":1344,"HHK":365,"HHM":-122,"HHN":182,"HHO":669,"HIH":804,"HII":679,"HOH":446,"IHH":695,"IHO":-2324,"IIH":321,"III":1497,"IIO":656,"IOO":54,"KAK":4845,"KKA":3386,"KKK":3065,"MHH":-405,"MHI":201,"MMH":-241,"MMM":661,"MOM":841}; + this.TQ1__ = {"BHHH":-227,"BHHI":316,"BHIH":-132,"BIHH":60,"BIII":1595,"BNHH":-744,"BOHH":225,"BOOO":-908,"OAKK":482,"OHHH":281,"OHIH":249,"OIHI":200,"OIIH":-68}; + this.TQ2__ = {"BIHH":-1401,"BIII":-1033,"BKAK":-543,"BOOO":-5591}; + this.TQ3__ = 
{"BHHH":478,"BHHM":-1073,"BHIH":222,"BHII":-504,"BIIH":-116,"BIII":-105,"BMHI":-863,"BMHM":-464,"BOMH":620,"OHHH":346,"OHHI":1729,"OHII":997,"OHMH":481,"OIHH":623,"OIIH":1344,"OKAK":2792,"OKHH":587,"OKKA":679,"OOHH":110,"OOII":-685}; + this.TQ4__ = {"BHHH":-721,"BHHM":-3604,"BHII":-966,"BIIH":-607,"BIII":-2181,"OAAA":-2763,"OAKK":180,"OHHH":-294,"OHHI":2446,"OHHO":480,"OHIH":-1573,"OIHH":1935,"OIHI":-493,"OIIH":626,"OIII":-4007,"OKAK":-8156}; + this.TW1__ = {"につい":-4681,"東京都":2026}; + this.TW2__ = {"ある程":-2049,"いった":-1256,"ころが":-2434,"しょう":3873,"その後":-4430,"だって":-1049,"ていた":1833,"として":-4657,"ともに":-4517,"もので":1882,"一気に":-792,"初めて":-1512,"同時に":-8097,"大きな":-1255,"対して":-2721,"社会党":-3216}; + this.TW3__ = {"いただ":-1734,"してい":1314,"として":-4314,"につい":-5483,"にとっ":-5989,"に当た":-6247,"ので,":-727,"ので、":-727,"のもの":-600,"れから":-3752,"十二月":-2287}; + this.TW4__ = {"いう.":8576,"いう。":8576,"からな":-2348,"してい":2958,"たが,":1516,"たが、":1516,"ている":1538,"という":1349,"ました":5543,"ません":1097,"ようと":-4258,"よると":5865}; + this.UC1__ = {"A":484,"K":93,"M":645,"O":-505}; + this.UC2__ = {"A":819,"H":1059,"I":409,"M":3987,"N":5775,"O":646}; + this.UC3__ = {"A":-1370,"I":2311}; + this.UC4__ = {"A":-2643,"H":1809,"I":-1032,"K":-3450,"M":3565,"N":3876,"O":6646}; + this.UC5__ = {"H":313,"I":-1238,"K":-799,"M":539,"O":-831}; + this.UC6__ = {"H":-506,"I":-253,"K":87,"M":247,"O":-387}; + this.UP1__ = {"O":-214}; + this.UP2__ = {"B":69,"O":935}; + this.UP3__ = {"B":189}; + this.UQ1__ = {"BH":21,"BI":-12,"BK":-99,"BN":142,"BO":-56,"OH":-95,"OI":477,"OK":410,"OO":-2422}; + this.UQ2__ = {"BH":216,"BI":113,"OK":1759}; + this.UQ3__ = {"BA":-479,"BH":42,"BI":1913,"BK":-7198,"BM":3160,"BN":6427,"BO":14761,"OI":-827,"ON":-3212}; + this.UW1__ = {",":156,"、":156,"「":-463,"あ":-941,"う":-127,"が":-553,"き":121,"こ":505,"で":-201,"と":-547,"ど":-123,"に":-789,"の":-185,"は":-847,"も":-466,"や":-470,"よ":182,"ら":-292,"り":208,"れ":169,"を":-446,"ん":-137,"・":-135,"主":-402,"京":-268,"区":-912,"午":871,"国":-460,"大":561,"委":729,"市":-411,"日":-141,"理":361,"生":-408,"県":-386,"都":-718,"「":-463,"・":-135}; + this.UW2__ = {",":-829,"、":-829,"〇":892,"「":-645,"」":3145,"あ":-538,"い":505,"う":134,"お":-502,"か":1454,"が":-856,"く":-412,"こ":1141,"さ":878,"ざ":540,"し":1529,"す":-675,"せ":300,"そ":-1011,"た":188,"だ":1837,"つ":-949,"て":-291,"で":-268,"と":-981,"ど":1273,"な":1063,"に":-1764,"の":130,"は":-409,"ひ":-1273,"べ":1261,"ま":600,"も":-1263,"や":-402,"よ":1639,"り":-579,"る":-694,"れ":571,"を":-2516,"ん":2095,"ア":-587,"カ":306,"キ":568,"ッ":831,"三":-758,"不":-2150,"世":-302,"中":-968,"主":-861,"事":492,"人":-123,"会":978,"保":362,"入":548,"初":-3025,"副":-1566,"北":-3414,"区":-422,"大":-1769,"天":-865,"太":-483,"子":-1519,"学":760,"実":1023,"小":-2009,"市":-813,"年":-1060,"強":1067,"手":-1519,"揺":-1033,"政":1522,"文":-1355,"新":-1682,"日":-1815,"明":-1462,"最":-630,"朝":-1843,"本":-1650,"東":-931,"果":-665,"次":-2378,"民":-180,"気":-1740,"理":752,"発":529,"目":-1584,"相":-242,"県":-1165,"立":-763,"第":810,"米":509,"自":-1353,"行":838,"西":-744,"見":-3874,"調":1010,"議":1198,"込":3041,"開":1758,"間":-1257,"「":-645,"」":3145,"ッ":831,"ア":-587,"カ":306,"キ":568}; + this.UW3__ = 
{",":4889,"1":-800,"−":-1723,"、":4889,"々":-2311,"〇":5827,"」":2670,"〓":-3573,"あ":-2696,"い":1006,"う":2342,"え":1983,"お":-4864,"か":-1163,"が":3271,"く":1004,"け":388,"げ":401,"こ":-3552,"ご":-3116,"さ":-1058,"し":-395,"す":584,"せ":3685,"そ":-5228,"た":842,"ち":-521,"っ":-1444,"つ":-1081,"て":6167,"で":2318,"と":1691,"ど":-899,"な":-2788,"に":2745,"の":4056,"は":4555,"ひ":-2171,"ふ":-1798,"へ":1199,"ほ":-5516,"ま":-4384,"み":-120,"め":1205,"も":2323,"や":-788,"よ":-202,"ら":727,"り":649,"る":5905,"れ":2773,"わ":-1207,"を":6620,"ん":-518,"ア":551,"グ":1319,"ス":874,"ッ":-1350,"ト":521,"ム":1109,"ル":1591,"ロ":2201,"ン":278,"・":-3794,"一":-1619,"下":-1759,"世":-2087,"両":3815,"中":653,"主":-758,"予":-1193,"二":974,"人":2742,"今":792,"他":1889,"以":-1368,"低":811,"何":4265,"作":-361,"保":-2439,"元":4858,"党":3593,"全":1574,"公":-3030,"六":755,"共":-1880,"円":5807,"再":3095,"分":457,"初":2475,"別":1129,"前":2286,"副":4437,"力":365,"動":-949,"務":-1872,"化":1327,"北":-1038,"区":4646,"千":-2309,"午":-783,"協":-1006,"口":483,"右":1233,"各":3588,"合":-241,"同":3906,"和":-837,"員":4513,"国":642,"型":1389,"場":1219,"外":-241,"妻":2016,"学":-1356,"安":-423,"実":-1008,"家":1078,"小":-513,"少":-3102,"州":1155,"市":3197,"平":-1804,"年":2416,"広":-1030,"府":1605,"度":1452,"建":-2352,"当":-3885,"得":1905,"思":-1291,"性":1822,"戸":-488,"指":-3973,"政":-2013,"教":-1479,"数":3222,"文":-1489,"新":1764,"日":2099,"旧":5792,"昨":-661,"時":-1248,"曜":-951,"最":-937,"月":4125,"期":360,"李":3094,"村":364,"東":-805,"核":5156,"森":2438,"業":484,"氏":2613,"民":-1694,"決":-1073,"法":1868,"海":-495,"無":979,"物":461,"特":-3850,"生":-273,"用":914,"町":1215,"的":7313,"直":-1835,"省":792,"県":6293,"知":-1528,"私":4231,"税":401,"立":-960,"第":1201,"米":7767,"系":3066,"約":3663,"級":1384,"統":-4229,"総":1163,"線":1255,"者":6457,"能":725,"自":-2869,"英":785,"見":1044,"調":-562,"財":-733,"費":1777,"車":1835,"軍":1375,"込":-1504,"通":-1136,"選":-681,"郎":1026,"郡":4404,"部":1200,"金":2163,"長":421,"開":-1432,"間":1302,"関":-1282,"雨":2009,"電":-1045,"非":2066,"駅":1620,"1":-800,"」":2670,"・":-3794,"ッ":-1350,"ア":551,"グ":1319,"ス":874,"ト":521,"ム":1109,"ル":1591,"ロ":2201,"ン":278}; + this.UW4__ = 
{",":3930,".":3508,"―":-4841,"、":3930,"。":3508,"〇":4999,"「":1895,"」":3798,"〓":-5156,"あ":4752,"い":-3435,"う":-640,"え":-2514,"お":2405,"か":530,"が":6006,"き":-4482,"ぎ":-3821,"く":-3788,"け":-4376,"げ":-4734,"こ":2255,"ご":1979,"さ":2864,"し":-843,"じ":-2506,"す":-731,"ず":1251,"せ":181,"そ":4091,"た":5034,"だ":5408,"ち":-3654,"っ":-5882,"つ":-1659,"て":3994,"で":7410,"と":4547,"な":5433,"に":6499,"ぬ":1853,"ね":1413,"の":7396,"は":8578,"ば":1940,"ひ":4249,"び":-4134,"ふ":1345,"へ":6665,"べ":-744,"ほ":1464,"ま":1051,"み":-2082,"む":-882,"め":-5046,"も":4169,"ゃ":-2666,"や":2795,"ょ":-1544,"よ":3351,"ら":-2922,"り":-9726,"る":-14896,"れ":-2613,"ろ":-4570,"わ":-1783,"を":13150,"ん":-2352,"カ":2145,"コ":1789,"セ":1287,"ッ":-724,"ト":-403,"メ":-1635,"ラ":-881,"リ":-541,"ル":-856,"ン":-3637,"・":-4371,"ー":-11870,"一":-2069,"中":2210,"予":782,"事":-190,"井":-1768,"人":1036,"以":544,"会":950,"体":-1286,"作":530,"側":4292,"先":601,"党":-2006,"共":-1212,"内":584,"円":788,"初":1347,"前":1623,"副":3879,"力":-302,"動":-740,"務":-2715,"化":776,"区":4517,"協":1013,"参":1555,"合":-1834,"和":-681,"員":-910,"器":-851,"回":1500,"国":-619,"園":-1200,"地":866,"場":-1410,"塁":-2094,"士":-1413,"多":1067,"大":571,"子":-4802,"学":-1397,"定":-1057,"寺":-809,"小":1910,"屋":-1328,"山":-1500,"島":-2056,"川":-2667,"市":2771,"年":374,"庁":-4556,"後":456,"性":553,"感":916,"所":-1566,"支":856,"改":787,"政":2182,"教":704,"文":522,"方":-856,"日":1798,"時":1829,"最":845,"月":-9066,"木":-485,"来":-442,"校":-360,"業":-1043,"氏":5388,"民":-2716,"気":-910,"沢":-939,"済":-543,"物":-735,"率":672,"球":-1267,"生":-1286,"産":-1101,"田":-2900,"町":1826,"的":2586,"目":922,"省":-3485,"県":2997,"空":-867,"立":-2112,"第":788,"米":2937,"系":786,"約":2171,"経":1146,"統":-1169,"総":940,"線":-994,"署":749,"者":2145,"能":-730,"般":-852,"行":-792,"規":792,"警":-1184,"議":-244,"谷":-1000,"賞":730,"車":-1481,"軍":1158,"輪":-1433,"込":-3370,"近":929,"道":-1291,"選":2596,"郎":-4866,"都":1192,"野":-1100,"銀":-2213,"長":357,"間":-2344,"院":-2297,"際":-2604,"電":-878,"領":-1659,"題":-792,"館":-1984,"首":1749,"高":2120,"「":1895,"」":3798,"・":-4371,"ッ":-724,"ー":-11870,"カ":2145,"コ":1789,"セ":1287,"ト":-403,"メ":-1635,"ラ":-881,"リ":-541,"ル":-856,"ン":-3637}; + this.UW5__ = {",":465,".":-299,"1":-514,"E2":-32768,"]":-2762,"、":465,"。":-299,"「":363,"あ":1655,"い":331,"う":-503,"え":1199,"お":527,"か":647,"が":-421,"き":1624,"ぎ":1971,"く":312,"げ":-983,"さ":-1537,"し":-1371,"す":-852,"だ":-1186,"ち":1093,"っ":52,"つ":921,"て":-18,"で":-850,"と":-127,"ど":1682,"な":-787,"に":-1224,"の":-635,"は":-578,"べ":1001,"み":502,"め":865,"ゃ":3350,"ょ":854,"り":-208,"る":429,"れ":504,"わ":419,"を":-1264,"ん":327,"イ":241,"ル":451,"ン":-343,"中":-871,"京":722,"会":-1153,"党":-654,"務":3519,"区":-901,"告":848,"員":2104,"大":-1296,"学":-548,"定":1785,"嵐":-1304,"市":-2991,"席":921,"年":1763,"思":872,"所":-814,"挙":1618,"新":-1682,"日":218,"月":-4353,"査":932,"格":1356,"機":-1508,"氏":-1347,"田":240,"町":-3912,"的":-3149,"相":1319,"省":-1052,"県":-4003,"研":-997,"社":-278,"空":-813,"統":1955,"者":-2233,"表":663,"語":-1073,"議":1219,"選":-1018,"郎":-368,"長":786,"間":1191,"題":2368,"館":-689,"1":-514,"E2":-32768,"「":363,"イ":241,"ル":451,"ン":-343}; + this.UW6__ = {",":227,".":808,"1":-270,"E1":306,"、":227,"。":808,"あ":-307,"う":189,"か":241,"が":-73,"く":-121,"こ":-200,"じ":1782,"す":383,"た":-428,"っ":573,"て":-1014,"で":101,"と":-105,"な":-253,"に":-149,"の":-417,"は":-236,"も":-206,"り":187,"る":-135,"を":195,"ル":-673,"ン":-496,"一":-277,"中":201,"件":-800,"会":624,"前":302,"区":1792,"員":-1212,"委":798,"学":-960,"市":887,"広":-695,"後":535,"業":-697,"相":753,"社":-507,"福":974,"空":-822,"者":1811,"連":463,"郎":1082,"1":-270,"E1":306,"ル":-673,"ン":-496}; + + return this; + } + TinySegmenter.prototype.ctype_ = function(str) { + for (var i in this.chartype_) { + if 
(str.match(this.chartype_[i][0])) { + return this.chartype_[i][1]; + } + } + return "O"; + } + + TinySegmenter.prototype.ts_ = function(v) { + if (v) { return v; } + return 0; + } + + TinySegmenter.prototype.segment = function(input) { + if (input == null || input == undefined || input == "") { + return []; + } + var result = []; + var seg = ["B3","B2","B1"]; + var ctype = ["O","O","O"]; + var o = input.split(""); + for (i = 0; i < o.length; ++i) { + seg.push(o[i]); + ctype.push(this.ctype_(o[i])) + } + seg.push("E1"); + seg.push("E2"); + seg.push("E3"); + ctype.push("O"); + ctype.push("O"); + ctype.push("O"); + var word = seg[3]; + var p1 = "U"; + var p2 = "U"; + var p3 = "U"; + for (var i = 4; i < seg.length - 3; ++i) { + var score = this.BIAS__; + var w1 = seg[i-3]; + var w2 = seg[i-2]; + var w3 = seg[i-1]; + var w4 = seg[i]; + var w5 = seg[i+1]; + var w6 = seg[i+2]; + var c1 = ctype[i-3]; + var c2 = ctype[i-2]; + var c3 = ctype[i-1]; + var c4 = ctype[i]; + var c5 = ctype[i+1]; + var c6 = ctype[i+2]; + score += this.ts_(this.UP1__[p1]); + score += this.ts_(this.UP2__[p2]); + score += this.ts_(this.UP3__[p3]); + score += this.ts_(this.BP1__[p1 + p2]); + score += this.ts_(this.BP2__[p2 + p3]); + score += this.ts_(this.UW1__[w1]); + score += this.ts_(this.UW2__[w2]); + score += this.ts_(this.UW3__[w3]); + score += this.ts_(this.UW4__[w4]); + score += this.ts_(this.UW5__[w5]); + score += this.ts_(this.UW6__[w6]); + score += this.ts_(this.BW1__[w2 + w3]); + score += this.ts_(this.BW2__[w3 + w4]); + score += this.ts_(this.BW3__[w4 + w5]); + score += this.ts_(this.TW1__[w1 + w2 + w3]); + score += this.ts_(this.TW2__[w2 + w3 + w4]); + score += this.ts_(this.TW3__[w3 + w4 + w5]); + score += this.ts_(this.TW4__[w4 + w5 + w6]); + score += this.ts_(this.UC1__[c1]); + score += this.ts_(this.UC2__[c2]); + score += this.ts_(this.UC3__[c3]); + score += this.ts_(this.UC4__[c4]); + score += this.ts_(this.UC5__[c5]); + score += this.ts_(this.UC6__[c6]); + score += this.ts_(this.BC1__[c2 + c3]); + score += this.ts_(this.BC2__[c3 + c4]); + score += this.ts_(this.BC3__[c4 + c5]); + score += this.ts_(this.TC1__[c1 + c2 + c3]); + score += this.ts_(this.TC2__[c2 + c3 + c4]); + score += this.ts_(this.TC3__[c3 + c4 + c5]); + score += this.ts_(this.TC4__[c4 + c5 + c6]); + // score += this.ts_(this.TC5__[c4 + c5 + c6]); + score += this.ts_(this.UQ1__[p1 + c1]); + score += this.ts_(this.UQ2__[p2 + c2]); + score += this.ts_(this.UQ3__[p3 + c3]); + score += this.ts_(this.BQ1__[p2 + c2 + c3]); + score += this.ts_(this.BQ2__[p2 + c3 + c4]); + score += this.ts_(this.BQ3__[p3 + c2 + c3]); + score += this.ts_(this.BQ4__[p3 + c3 + c4]); + score += this.ts_(this.TQ1__[p2 + c1 + c2 + c3]); + score += this.ts_(this.TQ2__[p2 + c2 + c3 + c4]); + score += this.ts_(this.TQ3__[p3 + c1 + c2 + c3]); + score += this.ts_(this.TQ4__[p3 + c2 + c3 + c4]); + var p = "O"; + if (score > 0) { + result.push(word); + word = ""; + p = "B"; + } + p1 = p2; + p2 = p3; + p3 = p; + word += seg[i]; + } + result.push(word); + + return result; + } + + lunr.TinySegmenter = TinySegmenter; + }; + +})); \ No newline at end of file diff --git a/assets/javascripts/lunr/wordcut.js b/assets/javascripts/lunr/wordcut.js new file mode 100644 index 00000000..146f4b44 --- /dev/null +++ b/assets/javascripts/lunr/wordcut.js @@ -0,0 +1,6708 @@ +(function(f){if(typeof exports==="object"&&typeof module!=="undefined"){module.exports=f()}else if(typeof define==="function"&&define.amd){define([],f)}else{var g;if(typeof window!=="undefined"){g=window}else if(typeof 
global!=="undefined"){g=global}else if(typeof self!=="undefined"){g=self}else{g=this}(g.lunr || (g.lunr = {})).wordcut = f()}})(function(){var define,module,exports;return (function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o 1; + }) + this.addWords(words, false) + } + if(finalize){ + this.finalizeDict(); + } + }, + + dictSeek: function (l, r, ch, strOffset, pos) { + var ans = null; + while (l <= r) { + var m = Math.floor((l + r) / 2), + dict_item = this.dict[m], + len = dict_item.length; + if (len <= strOffset) { + l = m + 1; + } else { + var ch_ = dict_item[strOffset]; + if (ch_ < ch) { + l = m + 1; + } else if (ch_ > ch) { + r = m - 1; + } else { + ans = m; + if (pos == LEFT) { + r = m - 1; + } else { + l = m + 1; + } + } + } + } + return ans; + }, + + isFinal: function (acceptor) { + return this.dict[acceptor.l].length == acceptor.strOffset; + }, + + createAcceptor: function () { + return { + l: 0, + r: this.dict.length - 1, + strOffset: 0, + isFinal: false, + dict: this, + transit: function (ch) { + return this.dict.transit(this, ch); + }, + isError: false, + tag: "DICT", + w: 1, + type: "DICT" + }; + }, + + transit: function (acceptor, ch) { + var l = this.dictSeek(acceptor.l, + acceptor.r, + ch, + acceptor.strOffset, + LEFT); + if (l !== null) { + var r = this.dictSeek(l, + acceptor.r, + ch, + acceptor.strOffset, + RIGHT); + acceptor.l = l; + acceptor.r = r; + acceptor.strOffset++; + acceptor.isFinal = this.isFinal(acceptor); + } else { + acceptor.isError = true; + } + return acceptor; + }, + + sortuniq: function(a){ + return a.sort().filter(function(item, pos, arr){ + return !pos || item != arr[pos - 1]; + }) + }, + + flatten: function(a){ + //[[1,2],[3]] -> [1,2,3] + return [].concat.apply([], a); + } +}; +module.exports = WordcutDict; + +}).call(this,"/dist/tmp") +},{"glob":16,"path":22}],3:[function(require,module,exports){ +var WordRule = { + createAcceptor: function(tag) { + if (tag["WORD_RULE"]) + return null; + + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + var lch = ch.toLowerCase(); + if (lch >= "a" && lch <= "z") { + this.isFinal = true; + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "WORD_RULE", + type: "WORD_RULE", + w: 1}; + } +}; + +var NumberRule = { + createAcceptor: function(tag) { + if (tag["NUMBER_RULE"]) + return null; + + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + if (ch >= "0" && ch <= "9") { + this.isFinal = true; + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "NUMBER_RULE", + type: "NUMBER_RULE", + w: 1}; + } +}; + +var SpaceRule = { + tag: "SPACE_RULE", + createAcceptor: function(tag) { + + if (tag["SPACE_RULE"]) + return null; + + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + if (ch == " " || ch == "\t" || ch == "\r" || ch == "\n" || + ch == "\u00A0" || ch=="\u2003"//nbsp and emsp + ) { + this.isFinal = true; + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: SpaceRule.tag, + w: 1, + type: "SPACE_RULE"}; + } +} + +var SingleSymbolRule = { + tag: "SINSYM", + createAcceptor: 
function(tag) { + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + if (this.strOffset == 0 && ch.match(/^[\@\(\)\/\,\-\."`]$/)) { + this.isFinal = true; + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "SINSYM", + w: 1, + type: "SINSYM"}; + } +} + + +var LatinRules = [WordRule, SpaceRule, SingleSymbolRule, NumberRule]; + +module.exports = LatinRules; + +},{}],4:[function(require,module,exports){ +var _ = require("underscore") + , WordcutCore = require("./wordcut_core"); +var PathInfoBuilder = { + + /* + buildByPartAcceptors: function(path, acceptors, i) { + var + var genInfos = partAcceptors.reduce(function(genInfos, acceptor) { + + }, []); + + return genInfos; + } + */ + + buildByAcceptors: function(path, finalAcceptors, i) { + var self = this; + var infos = finalAcceptors.map(function(acceptor) { + var p = i - acceptor.strOffset + 1 + , _info = path[p]; + + var info = {p: p, + mw: _info.mw + (acceptor.mw === undefined ? 0 : acceptor.mw), + w: acceptor.w + _info.w, + unk: (acceptor.unk ? acceptor.unk : 0) + _info.unk, + type: acceptor.type}; + + if (acceptor.type == "PART") { + for(var j = p + 1; j <= i; j++) { + path[j].merge = p; + } + info.merge = p; + } + + return info; + }); + return infos.filter(function(info) { return info; }); + }, + + fallback: function(path, leftBoundary, text, i) { + var _info = path[leftBoundary]; + if (text[i].match(/[\u0E48-\u0E4E]/)) { + if (leftBoundary != 0) + leftBoundary = path[leftBoundary].p; + return {p: leftBoundary, + mw: 0, + w: 1 + _info.w, + unk: 1 + _info.unk, + type: "UNK"}; +/* } else if(leftBoundary > 0 && path[leftBoundary].type !== "UNK") { + leftBoundary = path[leftBoundary].p; + return {p: leftBoundary, + w: 1 + _info.w, + unk: 1 + _info.unk, + type: "UNK"}; */ + } else { + return {p: leftBoundary, + mw: _info.mw, + w: 1 + _info.w, + unk: 1 + _info.unk, + type: "UNK"}; + } + }, + + build: function(path, finalAcceptors, i, leftBoundary, text) { + var basicPathInfos = this.buildByAcceptors(path, finalAcceptors, i); + if (basicPathInfos.length > 0) { + return basicPathInfos; + } else { + return [this.fallback(path, leftBoundary, text, i)]; + } + } +}; + +module.exports = function() { + return _.clone(PathInfoBuilder); +} + +},{"./wordcut_core":8,"underscore":25}],5:[function(require,module,exports){ +var _ = require("underscore"); + + +var PathSelector = { + selectPath: function(paths) { + var path = paths.reduce(function(selectedPath, path) { + if (selectedPath == null) { + return path; + } else { + if (path.unk < selectedPath.unk) + return path; + if (path.unk == selectedPath.unk) { + if (path.mw < selectedPath.mw) + return path + if (path.mw == selectedPath.mw) { + if (path.w < selectedPath.w) + return path; + } + } + return selectedPath; + } + }, null); + return path; + }, + + createPath: function() { + return [{p:null, w:0, unk:0, type: "INIT", mw:0}]; + } +}; + +module.exports = function() { + return _.clone(PathSelector); +}; + +},{"underscore":25}],6:[function(require,module,exports){ +function isMatch(pat, offset, ch) { + if (pat.length <= offset) + return false; + var _ch = pat[offset]; + return _ch == ch || + (_ch.match(/[กข]/) && ch.match(/[ก-ฮ]/)) || + (_ch.match(/[มบ]/) && ch.match(/[ก-ฮ]/)) || + (_ch.match(/\u0E49/) && ch.match(/[\u0E48-\u0E4B]/)); +} + +var Rule0 = { + pat: "เหก็ม", + createAcceptor: function(tag) { + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + if (isMatch(Rule0.pat, this.strOffset,ch)) { + this.isFinal = 
(this.strOffset + 1 == Rule0.pat.length); + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "THAI_RULE", + type: "THAI_RULE", + w: 1}; + } +}; + +var PartRule = { + createAcceptor: function(tag) { + return {strOffset: 0, + patterns: [ + "แก", "เก", "ก้", "กก์", "กา", "กี", "กิ", "กืก" + ], + isFinal: false, + transit: function(ch) { + var offset = this.strOffset; + this.patterns = this.patterns.filter(function(pat) { + return isMatch(pat, offset, ch); + }); + + if (this.patterns.length > 0) { + var len = 1 + offset; + this.isFinal = this.patterns.some(function(pat) { + return pat.length == len; + }); + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "PART", + type: "PART", + unk: 1, + w: 1}; + } +}; + +var ThaiRules = [Rule0, PartRule]; + +module.exports = ThaiRules; + +},{}],7:[function(require,module,exports){ +var sys = require("sys") + , WordcutDict = require("./dict") + , WordcutCore = require("./wordcut_core") + , PathInfoBuilder = require("./path_info_builder") + , PathSelector = require("./path_selector") + , Acceptors = require("./acceptors") + , latinRules = require("./latin_rules") + , thaiRules = require("./thai_rules") + , _ = require("underscore"); + + +var Wordcut = Object.create(WordcutCore); +Wordcut.defaultPathInfoBuilder = PathInfoBuilder; +Wordcut.defaultPathSelector = PathSelector; +Wordcut.defaultAcceptors = Acceptors; +Wordcut.defaultLatinRules = latinRules; +Wordcut.defaultThaiRules = thaiRules; +Wordcut.defaultDict = WordcutDict; + + +Wordcut.initNoDict = function(dict_path) { + var self = this; + self.pathInfoBuilder = new self.defaultPathInfoBuilder; + self.pathSelector = new self.defaultPathSelector; + self.acceptors = new self.defaultAcceptors; + self.defaultLatinRules.forEach(function(rule) { + self.acceptors.creators.push(rule); + }); + self.defaultThaiRules.forEach(function(rule) { + self.acceptors.creators.push(rule); + }); +}; + +Wordcut.init = function(dict_path, withDefault, additionalWords) { + withDefault = withDefault || false; + this.initNoDict(); + var dict = _.clone(this.defaultDict); + dict.init(dict_path, withDefault, additionalWords); + this.acceptors.creators.push(dict); +}; + +module.exports = Wordcut; + +},{"./acceptors":1,"./dict":2,"./latin_rules":3,"./path_info_builder":4,"./path_selector":5,"./thai_rules":6,"./wordcut_core":8,"sys":28,"underscore":25}],8:[function(require,module,exports){ +var WordcutCore = { + + buildPath: function(text) { + var self = this + , path = self.pathSelector.createPath() + , leftBoundary = 0; + self.acceptors.reset(); + for (var i = 0; i < text.length; i++) { + var ch = text[i]; + self.acceptors.transit(ch); + + var possiblePathInfos = self + .pathInfoBuilder + .build(path, + self.acceptors.getFinalAcceptors(), + i, + leftBoundary, + text); + var selectedPath = self.pathSelector.selectPath(possiblePathInfos) + + path.push(selectedPath); + if (selectedPath.type !== "UNK") { + leftBoundary = i; + } + } + return path; + }, + + pathToRanges: function(path) { + var e = path.length - 1 + , ranges = []; + + while (e > 0) { + var info = path[e] + , s = info.p; + + if (info.merge !== undefined && ranges.length > 0) { + var r = ranges[ranges.length - 1]; + r.s = info.merge; + s = r.s; + } else { + ranges.push({s:s, e:e}); + } + e = s; + } + return ranges.reverse(); + }, + + rangesToText: function(text, ranges, delimiter) { + return ranges.map(function(r) { + return text.substring(r.s, r.e); + }).join(delimiter); + 
}, + + cut: function(text, delimiter) { + var path = this.buildPath(text) + , ranges = this.pathToRanges(path); + return this + .rangesToText(text, ranges, + (delimiter === undefined ? "|" : delimiter)); + }, + + cutIntoRanges: function(text, noText) { + var path = this.buildPath(text) + , ranges = this.pathToRanges(path); + + if (!noText) { + ranges.forEach(function(r) { + r.text = text.substring(r.s, r.e); + }); + } + return ranges; + }, + + cutIntoArray: function(text) { + var path = this.buildPath(text) + , ranges = this.pathToRanges(path); + + return ranges.map(function(r) { + return text.substring(r.s, r.e) + }); + } +}; + +module.exports = WordcutCore; + +},{}],9:[function(require,module,exports){ +// http://wiki.commonjs.org/wiki/Unit_Testing/1.0 +// +// THIS IS NOT TESTED NOR LIKELY TO WORK OUTSIDE V8! +// +// Originally from narwhal.js (http://narwhaljs.org) +// Copyright (c) 2009 Thomas Robinson <280north.com> +// +// Permission is hereby granted, free of charge, to any person obtaining a copy +// of this software and associated documentation files (the 'Software'), to +// deal in the Software without restriction, including without limitation the +// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or +// sell copies of the Software, and to permit persons to whom the Software is +// furnished to do so, subject to the following conditions: +// +// The above copyright notice and this permission notice shall be included in +// all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +// AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +// ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +// when used in node, this will actually load the util module we depend on +// versus loading the builtin util module as happens otherwise +// this is a bug in node module loading as far as I am concerned +var util = require('util/'); + +var pSlice = Array.prototype.slice; +var hasOwn = Object.prototype.hasOwnProperty; + +// 1. The assert module provides functions that throw +// AssertionError's when particular conditions are not met. The +// assert module must conform to the following interface. + +var assert = module.exports = ok; + +// 2. The AssertionError is defined in assert. 
+// new assert.AssertionError({ message: message, +// actual: actual, +// expected: expected }) + +assert.AssertionError = function AssertionError(options) { + this.name = 'AssertionError'; + this.actual = options.actual; + this.expected = options.expected; + this.operator = options.operator; + if (options.message) { + this.message = options.message; + this.generatedMessage = false; + } else { + this.message = getMessage(this); + this.generatedMessage = true; + } + var stackStartFunction = options.stackStartFunction || fail; + + if (Error.captureStackTrace) { + Error.captureStackTrace(this, stackStartFunction); + } + else { + // non v8 browsers so we can have a stacktrace + var err = new Error(); + if (err.stack) { + var out = err.stack; + + // try to strip useless frames + var fn_name = stackStartFunction.name; + var idx = out.indexOf('\n' + fn_name); + if (idx >= 0) { + // once we have located the function frame + // we need to strip out everything before it (and its line) + var next_line = out.indexOf('\n', idx + 1); + out = out.substring(next_line + 1); + } + + this.stack = out; + } + } +}; + +// assert.AssertionError instanceof Error +util.inherits(assert.AssertionError, Error); + +function replacer(key, value) { + if (util.isUndefined(value)) { + return '' + value; + } + if (util.isNumber(value) && !isFinite(value)) { + return value.toString(); + } + if (util.isFunction(value) || util.isRegExp(value)) { + return value.toString(); + } + return value; +} + +function truncate(s, n) { + if (util.isString(s)) { + return s.length < n ? s : s.slice(0, n); + } else { + return s; + } +} + +function getMessage(self) { + return truncate(JSON.stringify(self.actual, replacer), 128) + ' ' + + self.operator + ' ' + + truncate(JSON.stringify(self.expected, replacer), 128); +} + +// At present only the three keys mentioned above are used and +// understood by the spec. Implementations or sub modules can pass +// other keys to the AssertionError's constructor - they will be +// ignored. + +// 3. All of the following functions must throw an AssertionError +// when a corresponding condition is not met, with a message that +// may be undefined if not provided. All assertion methods provide +// both the actual and expected values to the assertion error for +// display purposes. + +function fail(actual, expected, message, operator, stackStartFunction) { + throw new assert.AssertionError({ + message: message, + actual: actual, + expected: expected, + operator: operator, + stackStartFunction: stackStartFunction + }); +} + +// EXTENSION! allows for well behaved errors defined elsewhere. +assert.fail = fail; + +// 4. Pure assertion tests whether a value is truthy, as determined +// by !!guard. +// assert.ok(guard, message_opt); +// This statement is equivalent to assert.equal(true, !!guard, +// message_opt);. To test strictly for the value true, use +// assert.strictEqual(true, guard, message_opt);. + +function ok(value, message) { + if (!value) fail(value, true, message, '==', assert.ok); +} +assert.ok = ok; + +// 5. The equality assertion tests shallow, coercive equality with +// ==. +// assert.equal(actual, expected, message_opt); + +assert.equal = function equal(actual, expected, message) { + if (actual != expected) fail(actual, expected, message, '==', assert.equal); +}; + +// 6. 
The non-equality assertion tests for whether two objects are not equal +// with != assert.notEqual(actual, expected, message_opt); + +assert.notEqual = function notEqual(actual, expected, message) { + if (actual == expected) { + fail(actual, expected, message, '!=', assert.notEqual); + } +}; + +// 7. The equivalence assertion tests a deep equality relation. +// assert.deepEqual(actual, expected, message_opt); + +assert.deepEqual = function deepEqual(actual, expected, message) { + if (!_deepEqual(actual, expected)) { + fail(actual, expected, message, 'deepEqual', assert.deepEqual); + } +}; + +function _deepEqual(actual, expected) { + // 7.1. All identical values are equivalent, as determined by ===. + if (actual === expected) { + return true; + + } else if (util.isBuffer(actual) && util.isBuffer(expected)) { + if (actual.length != expected.length) return false; + + for (var i = 0; i < actual.length; i++) { + if (actual[i] !== expected[i]) return false; + } + + return true; + + // 7.2. If the expected value is a Date object, the actual value is + // equivalent if it is also a Date object that refers to the same time. + } else if (util.isDate(actual) && util.isDate(expected)) { + return actual.getTime() === expected.getTime(); + + // 7.3 If the expected value is a RegExp object, the actual value is + // equivalent if it is also a RegExp object with the same source and + // properties (`global`, `multiline`, `lastIndex`, `ignoreCase`). + } else if (util.isRegExp(actual) && util.isRegExp(expected)) { + return actual.source === expected.source && + actual.global === expected.global && + actual.multiline === expected.multiline && + actual.lastIndex === expected.lastIndex && + actual.ignoreCase === expected.ignoreCase; + + // 7.4. Other pairs that do not both pass typeof value == 'object', + // equivalence is determined by ==. + } else if (!util.isObject(actual) && !util.isObject(expected)) { + return actual == expected; + + // 7.5 For all other Object pairs, including Array objects, equivalence is + // determined by having the same number of owned properties (as verified + // with Object.prototype.hasOwnProperty.call), the same set of keys + // (although not necessarily the same order), equivalent values for every + // corresponding key, and an identical 'prototype' property. Note: this + // accounts for both named and indexed properties on Arrays. + } else { + return objEquiv(actual, expected); + } +} + +function isArguments(object) { + return Object.prototype.toString.call(object) == '[object Arguments]'; +} + +function objEquiv(a, b) { + if (util.isNullOrUndefined(a) || util.isNullOrUndefined(b)) + return false; + // an identical 'prototype' property. 
+ if (a.prototype !== b.prototype) return false; + // if one is a primitive, the other must be same + if (util.isPrimitive(a) || util.isPrimitive(b)) { + return a === b; + } + var aIsArgs = isArguments(a), + bIsArgs = isArguments(b); + if ((aIsArgs && !bIsArgs) || (!aIsArgs && bIsArgs)) + return false; + if (aIsArgs) { + a = pSlice.call(a); + b = pSlice.call(b); + return _deepEqual(a, b); + } + var ka = objectKeys(a), + kb = objectKeys(b), + key, i; + // having the same number of owned properties (keys incorporates + // hasOwnProperty) + if (ka.length != kb.length) + return false; + //the same set of keys (although not necessarily the same order), + ka.sort(); + kb.sort(); + //~~~cheap key test + for (i = ka.length - 1; i >= 0; i--) { + if (ka[i] != kb[i]) + return false; + } + //equivalent values for every corresponding key, and + //~~~possibly expensive deep test + for (i = ka.length - 1; i >= 0; i--) { + key = ka[i]; + if (!_deepEqual(a[key], b[key])) return false; + } + return true; +} + +// 8. The non-equivalence assertion tests for any deep inequality. +// assert.notDeepEqual(actual, expected, message_opt); + +assert.notDeepEqual = function notDeepEqual(actual, expected, message) { + if (_deepEqual(actual, expected)) { + fail(actual, expected, message, 'notDeepEqual', assert.notDeepEqual); + } +}; + +// 9. The strict equality assertion tests strict equality, as determined by ===. +// assert.strictEqual(actual, expected, message_opt); + +assert.strictEqual = function strictEqual(actual, expected, message) { + if (actual !== expected) { + fail(actual, expected, message, '===', assert.strictEqual); + } +}; + +// 10. The strict non-equality assertion tests for strict inequality, as +// determined by !==. assert.notStrictEqual(actual, expected, message_opt); + +assert.notStrictEqual = function notStrictEqual(actual, expected, message) { + if (actual === expected) { + fail(actual, expected, message, '!==', assert.notStrictEqual); + } +}; + +function expectedException(actual, expected) { + if (!actual || !expected) { + return false; + } + + if (Object.prototype.toString.call(expected) == '[object RegExp]') { + return expected.test(actual); + } else if (actual instanceof expected) { + return true; + } else if (expected.call({}, actual) === true) { + return true; + } + + return false; +} + +function _throws(shouldThrow, block, expected, message) { + var actual; + + if (util.isString(expected)) { + message = expected; + expected = null; + } + + try { + block(); + } catch (e) { + actual = e; + } + + message = (expected && expected.name ? ' (' + expected.name + ').' : '.') + + (message ? ' ' + message : '.'); + + if (shouldThrow && !actual) { + fail(actual, expected, 'Missing expected exception' + message); + } + + if (!shouldThrow && expectedException(actual, expected)) { + fail(actual, expected, 'Got unwanted exception' + message); + } + + if ((shouldThrow && actual && expected && + !expectedException(actual, expected)) || (!shouldThrow && actual)) { + throw actual; + } +} + +// 11. Expected to throw an error: +// assert.throws(block, Error_opt, message_opt); + +assert.throws = function(block, /*optional*/error, /*optional*/message) { + _throws.apply(this, [true].concat(pSlice.call(arguments))); +}; + +// EXTENSION! This is annoying to write outside this module. 
+assert.doesNotThrow = function(block, /*optional*/message) { + _throws.apply(this, [false].concat(pSlice.call(arguments))); +}; + +assert.ifError = function(err) { if (err) {throw err;}}; + +var objectKeys = Object.keys || function (obj) { + var keys = []; + for (var key in obj) { + if (hasOwn.call(obj, key)) keys.push(key); + } + return keys; +}; + +},{"util/":28}],10:[function(require,module,exports){ +'use strict'; +module.exports = balanced; +function balanced(a, b, str) { + if (a instanceof RegExp) a = maybeMatch(a, str); + if (b instanceof RegExp) b = maybeMatch(b, str); + + var r = range(a, b, str); + + return r && { + start: r[0], + end: r[1], + pre: str.slice(0, r[0]), + body: str.slice(r[0] + a.length, r[1]), + post: str.slice(r[1] + b.length) + }; +} + +function maybeMatch(reg, str) { + var m = str.match(reg); + return m ? m[0] : null; +} + +balanced.range = range; +function range(a, b, str) { + var begs, beg, left, right, result; + var ai = str.indexOf(a); + var bi = str.indexOf(b, ai + 1); + var i = ai; + + if (ai >= 0 && bi > 0) { + begs = []; + left = str.length; + + while (i >= 0 && !result) { + if (i == ai) { + begs.push(i); + ai = str.indexOf(a, i + 1); + } else if (begs.length == 1) { + result = [ begs.pop(), bi ]; + } else { + beg = begs.pop(); + if (beg < left) { + left = beg; + right = bi; + } + + bi = str.indexOf(b, i + 1); + } + + i = ai < bi && ai >= 0 ? ai : bi; + } + + if (begs.length) { + result = [ left, right ]; + } + } + + return result; +} + +},{}],11:[function(require,module,exports){ +var concatMap = require('concat-map'); +var balanced = require('balanced-match'); + +module.exports = expandTop; + +var escSlash = '\0SLASH'+Math.random()+'\0'; +var escOpen = '\0OPEN'+Math.random()+'\0'; +var escClose = '\0CLOSE'+Math.random()+'\0'; +var escComma = '\0COMMA'+Math.random()+'\0'; +var escPeriod = '\0PERIOD'+Math.random()+'\0'; + +function numeric(str) { + return parseInt(str, 10) == str + ? parseInt(str, 10) + : str.charCodeAt(0); +} + +function escapeBraces(str) { + return str.split('\\\\').join(escSlash) + .split('\\{').join(escOpen) + .split('\\}').join(escClose) + .split('\\,').join(escComma) + .split('\\.').join(escPeriod); +} + +function unescapeBraces(str) { + return str.split(escSlash).join('\\') + .split(escOpen).join('{') + .split(escClose).join('}') + .split(escComma).join(',') + .split(escPeriod).join('.'); +} + + +// Basically just str.split(","), but handling cases +// where we have nested braced sections, which should be +// treated as individual members, like {a,{b,c},d} +function parseCommaParts(str) { + if (!str) + return ['']; + + var parts = []; + var m = balanced('{', '}', str); + + if (!m) + return str.split(','); + + var pre = m.pre; + var body = m.body; + var post = m.post; + var p = pre.split(','); + + p[p.length-1] += '{' + body + '}'; + var postParts = parseCommaParts(post); + if (post.length) { + p[p.length-1] += postParts.shift(); + p.push.apply(p, postParts); + } + + parts.push.apply(parts, p); + + return parts; +} + +function expandTop(str) { + if (!str) + return []; + + // I don't know why Bash 4.3 does this, but it does. + // Anything starting with {} will have the first two bytes preserved + // but *only* at the top level, so {},a}b will not expand to anything, + // but a{},b}c will be expanded to [a}c,abc]. 
+ // One could argue that this is a bug in Bash, but since the goal of + // this module is to match Bash's rules, we escape a leading {} + if (str.substr(0, 2) === '{}') { + str = '\\{\\}' + str.substr(2); + } + + return expand(escapeBraces(str), true).map(unescapeBraces); +} + +function identity(e) { + return e; +} + +function embrace(str) { + return '{' + str + '}'; +} +function isPadded(el) { + return /^-?0\d/.test(el); +} + +function lte(i, y) { + return i <= y; +} +function gte(i, y) { + return i >= y; +} + +function expand(str, isTop) { + var expansions = []; + + var m = balanced('{', '}', str); + if (!m || /\$$/.test(m.pre)) return [str]; + + var isNumericSequence = /^-?\d+\.\.-?\d+(?:\.\.-?\d+)?$/.test(m.body); + var isAlphaSequence = /^[a-zA-Z]\.\.[a-zA-Z](?:\.\.-?\d+)?$/.test(m.body); + var isSequence = isNumericSequence || isAlphaSequence; + var isOptions = m.body.indexOf(',') >= 0; + if (!isSequence && !isOptions) { + // {a},b} + if (m.post.match(/,.*\}/)) { + str = m.pre + '{' + m.body + escClose + m.post; + return expand(str); + } + return [str]; + } + + var n; + if (isSequence) { + n = m.body.split(/\.\./); + } else { + n = parseCommaParts(m.body); + if (n.length === 1) { + // x{{a,b}}y ==> x{a}y x{b}y + n = expand(n[0], false).map(embrace); + if (n.length === 1) { + var post = m.post.length + ? expand(m.post, false) + : ['']; + return post.map(function(p) { + return m.pre + n[0] + p; + }); + } + } + } + + // at this point, n is the parts, and we know it's not a comma set + // with a single entry. + + // no need to expand pre, since it is guaranteed to be free of brace-sets + var pre = m.pre; + var post = m.post.length + ? expand(m.post, false) + : ['']; + + var N; + + if (isSequence) { + var x = numeric(n[0]); + var y = numeric(n[1]); + var width = Math.max(n[0].length, n[1].length) + var incr = n.length == 3 + ? Math.abs(numeric(n[2])) + : 1; + var test = lte; + var reverse = y < x; + if (reverse) { + incr *= -1; + test = gte; + } + var pad = n.some(isPadded); + + N = []; + + for (var i = x; test(i, y); i += incr) { + var c; + if (isAlphaSequence) { + c = String.fromCharCode(i); + if (c === '\\') + c = ''; + } else { + c = String(i); + if (pad) { + var need = width - c.length; + if (need > 0) { + var z = new Array(need + 1).join('0'); + if (i < 0) + c = '-' + z + c.slice(1); + else + c = z + c; + } + } + } + N.push(c); + } + } else { + N = concatMap(n, function(el) { return expand(el, false) }); + } + + for (var j = 0; j < N.length; j++) { + for (var k = 0; k < post.length; k++) { + var expansion = pre + N[j] + post[k]; + if (!isTop || isSequence || expansion) + expansions.push(expansion); + } + } + + return expansions; +} + + +},{"balanced-match":10,"concat-map":13}],12:[function(require,module,exports){ + +},{}],13:[function(require,module,exports){ +module.exports = function (xs, fn) { + var res = []; + for (var i = 0; i < xs.length; i++) { + var x = fn(xs[i], i); + if (isArray(x)) res.push.apply(res, x); + else res.push(x); + } + return res; +}; + +var isArray = Array.isArray || function (xs) { + return Object.prototype.toString.call(xs) === '[object Array]'; +}; + +},{}],14:[function(require,module,exports){ +// Copyright Joyent, Inc. and other Node contributors. 
+// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +function EventEmitter() { + this._events = this._events || {}; + this._maxListeners = this._maxListeners || undefined; +} +module.exports = EventEmitter; + +// Backwards-compat with node 0.10.x +EventEmitter.EventEmitter = EventEmitter; + +EventEmitter.prototype._events = undefined; +EventEmitter.prototype._maxListeners = undefined; + +// By default EventEmitters will print a warning if more than 10 listeners are +// added to it. This is a useful default which helps finding memory leaks. +EventEmitter.defaultMaxListeners = 10; + +// Obviously not all Emitters should be limited to 10. This function allows +// that to be increased. Set to zero for unlimited. +EventEmitter.prototype.setMaxListeners = function(n) { + if (!isNumber(n) || n < 0 || isNaN(n)) + throw TypeError('n must be a positive number'); + this._maxListeners = n; + return this; +}; + +EventEmitter.prototype.emit = function(type) { + var er, handler, len, args, i, listeners; + + if (!this._events) + this._events = {}; + + // If there is no 'error' event listener then throw. + if (type === 'error') { + if (!this._events.error || + (isObject(this._events.error) && !this._events.error.length)) { + er = arguments[1]; + if (er instanceof Error) { + throw er; // Unhandled 'error' event + } + throw TypeError('Uncaught, unspecified "error" event.'); + } + } + + handler = this._events[type]; + + if (isUndefined(handler)) + return false; + + if (isFunction(handler)) { + switch (arguments.length) { + // fast cases + case 1: + handler.call(this); + break; + case 2: + handler.call(this, arguments[1]); + break; + case 3: + handler.call(this, arguments[1], arguments[2]); + break; + // slower + default: + len = arguments.length; + args = new Array(len - 1); + for (i = 1; i < len; i++) + args[i - 1] = arguments[i]; + handler.apply(this, args); + } + } else if (isObject(handler)) { + len = arguments.length; + args = new Array(len - 1); + for (i = 1; i < len; i++) + args[i - 1] = arguments[i]; + + listeners = handler.slice(); + len = listeners.length; + for (i = 0; i < len; i++) + listeners[i].apply(this, args); + } + + return true; +}; + +EventEmitter.prototype.addListener = function(type, listener) { + var m; + + if (!isFunction(listener)) + throw TypeError('listener must be a function'); + + if (!this._events) + this._events = {}; + + // To avoid recursion in the case that type === "newListener"! Before + // adding it to the listeners, first emit "newListener". 
+ if (this._events.newListener) + this.emit('newListener', type, + isFunction(listener.listener) ? + listener.listener : listener); + + if (!this._events[type]) + // Optimize the case of one listener. Don't need the extra array object. + this._events[type] = listener; + else if (isObject(this._events[type])) + // If we've already got an array, just append. + this._events[type].push(listener); + else + // Adding the second element, need to change to array. + this._events[type] = [this._events[type], listener]; + + // Check for listener leak + if (isObject(this._events[type]) && !this._events[type].warned) { + var m; + if (!isUndefined(this._maxListeners)) { + m = this._maxListeners; + } else { + m = EventEmitter.defaultMaxListeners; + } + + if (m && m > 0 && this._events[type].length > m) { + this._events[type].warned = true; + console.error('(node) warning: possible EventEmitter memory ' + + 'leak detected. %d listeners added. ' + + 'Use emitter.setMaxListeners() to increase limit.', + this._events[type].length); + if (typeof console.trace === 'function') { + // not supported in IE 10 + console.trace(); + } + } + } + + return this; +}; + +EventEmitter.prototype.on = EventEmitter.prototype.addListener; + +EventEmitter.prototype.once = function(type, listener) { + if (!isFunction(listener)) + throw TypeError('listener must be a function'); + + var fired = false; + + function g() { + this.removeListener(type, g); + + if (!fired) { + fired = true; + listener.apply(this, arguments); + } + } + + g.listener = listener; + this.on(type, g); + + return this; +}; + +// emits a 'removeListener' event iff the listener was removed +EventEmitter.prototype.removeListener = function(type, listener) { + var list, position, length, i; + + if (!isFunction(listener)) + throw TypeError('listener must be a function'); + + if (!this._events || !this._events[type]) + return this; + + list = this._events[type]; + length = list.length; + position = -1; + + if (list === listener || + (isFunction(list.listener) && list.listener === listener)) { + delete this._events[type]; + if (this._events.removeListener) + this.emit('removeListener', type, listener); + + } else if (isObject(list)) { + for (i = length; i-- > 0;) { + if (list[i] === listener || + (list[i].listener && list[i].listener === listener)) { + position = i; + break; + } + } + + if (position < 0) + return this; + + if (list.length === 1) { + list.length = 0; + delete this._events[type]; + } else { + list.splice(position, 1); + } + + if (this._events.removeListener) + this.emit('removeListener', type, listener); + } + + return this; +}; + +EventEmitter.prototype.removeAllListeners = function(type) { + var key, listeners; + + if (!this._events) + return this; + + // not listening for removeListener, no need to emit + if (!this._events.removeListener) { + if (arguments.length === 0) + this._events = {}; + else if (this._events[type]) + delete this._events[type]; + return this; + } + + // emit removeListener for all listeners on all events + if (arguments.length === 0) { + for (key in this._events) { + if (key === 'removeListener') continue; + this.removeAllListeners(key); + } + this.removeAllListeners('removeListener'); + this._events = {}; + return this; + } + + listeners = this._events[type]; + + if (isFunction(listeners)) { + this.removeListener(type, listeners); + } else { + // LIFO order + while (listeners.length) + this.removeListener(type, listeners[listeners.length - 1]); + } + delete this._events[type]; + + return this; +}; + 
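+// Illustrative sketch (not part of the upstream events module): typical use of
+// the EventEmitter API defined above. The 'build' event name and its payload
+// are hypothetical, chosen only to show the call pattern.
+//
+//   var emitter = new EventEmitter();
+//   emitter.on('build', function (page) {        // register a listener
+//     console.log('rendered', page);
+//   });
+//   emitter.emit('build', 'about/index.html');   // returns true: a listener ran
+//   emitter.removeAllListeners('build');         // drop every 'build' listener
+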
+EventEmitter.prototype.listeners = function(type) { + var ret; + if (!this._events || !this._events[type]) + ret = []; + else if (isFunction(this._events[type])) + ret = [this._events[type]]; + else + ret = this._events[type].slice(); + return ret; +}; + +EventEmitter.listenerCount = function(emitter, type) { + var ret; + if (!emitter._events || !emitter._events[type]) + ret = 0; + else if (isFunction(emitter._events[type])) + ret = 1; + else + ret = emitter._events[type].length; + return ret; +}; + +function isFunction(arg) { + return typeof arg === 'function'; +} + +function isNumber(arg) { + return typeof arg === 'number'; +} + +function isObject(arg) { + return typeof arg === 'object' && arg !== null; +} + +function isUndefined(arg) { + return arg === void 0; +} + +},{}],15:[function(require,module,exports){ +(function (process){ +exports.alphasort = alphasort +exports.alphasorti = alphasorti +exports.setopts = setopts +exports.ownProp = ownProp +exports.makeAbs = makeAbs +exports.finish = finish +exports.mark = mark +exports.isIgnored = isIgnored +exports.childrenIgnored = childrenIgnored + +function ownProp (obj, field) { + return Object.prototype.hasOwnProperty.call(obj, field) +} + +var path = require("path") +var minimatch = require("minimatch") +var isAbsolute = require("path-is-absolute") +var Minimatch = minimatch.Minimatch + +function alphasorti (a, b) { + return a.toLowerCase().localeCompare(b.toLowerCase()) +} + +function alphasort (a, b) { + return a.localeCompare(b) +} + +function setupIgnores (self, options) { + self.ignore = options.ignore || [] + + if (!Array.isArray(self.ignore)) + self.ignore = [self.ignore] + + if (self.ignore.length) { + self.ignore = self.ignore.map(ignoreMap) + } +} + +function ignoreMap (pattern) { + var gmatcher = null + if (pattern.slice(-3) === '/**') { + var gpattern = pattern.replace(/(\/\*\*)+$/, '') + gmatcher = new Minimatch(gpattern) + } + + return { + matcher: new Minimatch(pattern), + gmatcher: gmatcher + } +} + +function setopts (self, pattern, options) { + if (!options) + options = {} + + // base-matching: just use globstar for that. 
+ if (options.matchBase && -1 === pattern.indexOf("/")) { + if (options.noglobstar) { + throw new Error("base matching requires globstar") + } + pattern = "**/" + pattern + } + + self.silent = !!options.silent + self.pattern = pattern + self.strict = options.strict !== false + self.realpath = !!options.realpath + self.realpathCache = options.realpathCache || Object.create(null) + self.follow = !!options.follow + self.dot = !!options.dot + self.mark = !!options.mark + self.nodir = !!options.nodir + if (self.nodir) + self.mark = true + self.sync = !!options.sync + self.nounique = !!options.nounique + self.nonull = !!options.nonull + self.nosort = !!options.nosort + self.nocase = !!options.nocase + self.stat = !!options.stat + self.noprocess = !!options.noprocess + + self.maxLength = options.maxLength || Infinity + self.cache = options.cache || Object.create(null) + self.statCache = options.statCache || Object.create(null) + self.symlinks = options.symlinks || Object.create(null) + + setupIgnores(self, options) + + self.changedCwd = false + var cwd = process.cwd() + if (!ownProp(options, "cwd")) + self.cwd = cwd + else { + self.cwd = options.cwd + self.changedCwd = path.resolve(options.cwd) !== cwd + } + + self.root = options.root || path.resolve(self.cwd, "/") + self.root = path.resolve(self.root) + if (process.platform === "win32") + self.root = self.root.replace(/\\/g, "/") + + self.nomount = !!options.nomount + + // disable comments and negation unless the user explicitly + // passes in false as the option. + options.nonegate = options.nonegate === false ? false : true + options.nocomment = options.nocomment === false ? false : true + deprecationWarning(options) + + self.minimatch = new Minimatch(pattern, options) + self.options = self.minimatch.options +} + +// TODO(isaacs): remove entirely in v6 +// exported to reset in tests +exports.deprecationWarned +function deprecationWarning(options) { + if (!options.nonegate || !options.nocomment) { + if (process.noDeprecation !== true && !exports.deprecationWarned) { + var msg = 'glob WARNING: comments and negation will be disabled in v6' + if (process.throwDeprecation) + throw new Error(msg) + else if (process.traceDeprecation) + console.trace(msg) + else + console.error(msg) + + exports.deprecationWarned = true + } + } +} + +function finish (self) { + var nou = self.nounique + var all = nou ? [] : Object.create(null) + + for (var i = 0, l = self.matches.length; i < l; i ++) { + var matches = self.matches[i] + if (!matches || Object.keys(matches).length === 0) { + if (self.nonull) { + // do like the shell, and spit out the literal glob + var literal = self.minimatch.globSet[i] + if (nou) + all.push(literal) + else + all[literal] = true + } + } else { + // had matches + var m = Object.keys(matches) + if (nou) + all.push.apply(all, m) + else + m.forEach(function (m) { + all[m] = true + }) + } + } + + if (!nou) + all = Object.keys(all) + + if (!self.nosort) + all = all.sort(self.nocase ? 
alphasorti : alphasort) + + // at *some* point we statted all of these + if (self.mark) { + for (var i = 0; i < all.length; i++) { + all[i] = self._mark(all[i]) + } + if (self.nodir) { + all = all.filter(function (e) { + return !(/\/$/.test(e)) + }) + } + } + + if (self.ignore.length) + all = all.filter(function(m) { + return !isIgnored(self, m) + }) + + self.found = all +} + +function mark (self, p) { + var abs = makeAbs(self, p) + var c = self.cache[abs] + var m = p + if (c) { + var isDir = c === 'DIR' || Array.isArray(c) + var slash = p.slice(-1) === '/' + + if (isDir && !slash) + m += '/' + else if (!isDir && slash) + m = m.slice(0, -1) + + if (m !== p) { + var mabs = makeAbs(self, m) + self.statCache[mabs] = self.statCache[abs] + self.cache[mabs] = self.cache[abs] + } + } + + return m +} + +// lotta situps... +function makeAbs (self, f) { + var abs = f + if (f.charAt(0) === '/') { + abs = path.join(self.root, f) + } else if (isAbsolute(f) || f === '') { + abs = f + } else if (self.changedCwd) { + abs = path.resolve(self.cwd, f) + } else { + abs = path.resolve(f) + } + return abs +} + + +// Return true, if pattern ends with globstar '**', for the accompanying parent directory. +// Ex:- If node_modules/** is the pattern, add 'node_modules' to ignore list along with it's contents +function isIgnored (self, path) { + if (!self.ignore.length) + return false + + return self.ignore.some(function(item) { + return item.matcher.match(path) || !!(item.gmatcher && item.gmatcher.match(path)) + }) +} + +function childrenIgnored (self, path) { + if (!self.ignore.length) + return false + + return self.ignore.some(function(item) { + return !!(item.gmatcher && item.gmatcher.match(path)) + }) +} + +}).call(this,require('_process')) +},{"_process":24,"minimatch":20,"path":22,"path-is-absolute":23}],16:[function(require,module,exports){ +(function (process){ +// Approach: +// +// 1. Get the minimatch set +// 2. For each pattern in the set, PROCESS(pattern, false) +// 3. Store matches per-set, then uniq them +// +// PROCESS(pattern, inGlobStar) +// Get the first [n] items from pattern that are all strings +// Join these together. This is PREFIX. +// If there is no more remaining, then stat(PREFIX) and +// add to matches if it succeeds. END. +// +// If inGlobStar and PREFIX is symlink and points to dir +// set ENTRIES = [] +// else readdir(PREFIX) as ENTRIES +// If fail, END +// +// with ENTRIES +// If pattern[n] is GLOBSTAR +// // handle the case where the globstar match is empty +// // by pruning it out, and testing the resulting pattern +// PROCESS(pattern[0..n] + pattern[n+1 .. $], false) +// // handle other cases. +// for ENTRY in ENTRIES (not dotfiles) +// // attach globstar + tail onto the entry +// // Mark that this entry is a globstar match +// PROCESS(pattern[0..n] + ENTRY + pattern[n .. $], true) +// +// else // not globstar +// for ENTRY in ENTRIES (not dotfiles, unless pattern[n] is dot) +// Test ENTRY against pattern[n] +// If fails, continue +// If passes, PROCESS(pattern[0..n] + item + pattern[n+1 .. $]) +// +// Caveat: +// Cache all stats and readdirs results to minimize syscall. Since all +// we ever care about is existence and directory-ness, we can just keep +// `true` for files, and [children,...] for directories, or `false` for +// things that don't exist. 
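+
+// Illustrative sketch (not part of upstream node-glob): how the async and sync
+// entry points defined below are typically called. The 'docs/**/*.html' pattern
+// and the { nodir: true } option are hypothetical examples.
+//
+//   glob('docs/**/*.html', { nodir: true }, function (er, files) {
+//     if (er) throw er;
+//     console.log(files);                  // array of matching file paths
+//   });
+//   var same = glob.sync('docs/**/*.html', { nodir: true });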
+
+module.exports = glob
+
+var fs = require('fs')
+var minimatch = require('minimatch')
+var Minimatch = minimatch.Minimatch
+var inherits = require('inherits')
+var EE = require('events').EventEmitter
+var path = require('path')
+var assert = require('assert')
+var isAbsolute = require('path-is-absolute')
+var globSync = require('./sync.js')
+var common = require('./common.js')
+var alphasort = common.alphasort
+var alphasorti = common.alphasorti
+var setopts = common.setopts
+var ownProp = common.ownProp
+var inflight = require('inflight')
+var util = require('util')
+var childrenIgnored = common.childrenIgnored
+var isIgnored = common.isIgnored
+
+var once = require('once')
+
+function glob (pattern, options, cb) {
+  if (typeof options === 'function') cb = options, options = {}
+  if (!options) options = {}
+
+  if (options.sync) {
+    if (cb)
+      throw new TypeError('callback provided to sync glob')
+    return globSync(pattern, options)
+  }
+
+  return new Glob(pattern, options, cb)
+}
+
+glob.sync = globSync
+var GlobSync = glob.GlobSync = globSync.GlobSync
+
+// old api surface
+glob.glob = glob
+
+glob.hasMagic = function (pattern, options_) {
+  var options = util._extend({}, options_)
+  options.noprocess = true
+
+  var g = new Glob(pattern, options)
+  var set = g.minimatch.set
+  if (set.length > 1)
+    return true
+
+  for (var j = 0; j < set[0].length; j++) {
+    if (typeof set[0][j] !== 'string')
+      return true
+  }
+
+  return false
+}
+
+glob.Glob = Glob
+inherits(Glob, EE)
+function Glob (pattern, options, cb) {
+  if (typeof options === 'function') {
+    cb = options
+    options = null
+  }
+
+  if (options && options.sync) {
+    if (cb)
+      throw new TypeError('callback provided to sync glob')
+    return new GlobSync(pattern, options)
+  }
+
+  if (!(this instanceof Glob))
+    return new Glob(pattern, options, cb)
+
+  setopts(this, pattern, options)
+  this._didRealPath = false
+
+  // process each pattern in the minimatch set
+  var n = this.minimatch.set.length
+
+  // The matches are stored as {<filename>: true,...} so that
+  // duplicates are automagically pruned.
+  // Later, we do an Object.keys() on these.
+  // Keep them as a list so we can fill in when nonull is set.
+ this.matches = new Array(n) + + if (typeof cb === 'function') { + cb = once(cb) + this.on('error', cb) + this.on('end', function (matches) { + cb(null, matches) + }) + } + + var self = this + var n = this.minimatch.set.length + this._processing = 0 + this.matches = new Array(n) + + this._emitQueue = [] + this._processQueue = [] + this.paused = false + + if (this.noprocess) + return this + + if (n === 0) + return done() + + for (var i = 0; i < n; i ++) { + this._process(this.minimatch.set[i], i, false, done) + } + + function done () { + --self._processing + if (self._processing <= 0) + self._finish() + } +} + +Glob.prototype._finish = function () { + assert(this instanceof Glob) + if (this.aborted) + return + + if (this.realpath && !this._didRealpath) + return this._realpath() + + common.finish(this) + this.emit('end', this.found) +} + +Glob.prototype._realpath = function () { + if (this._didRealpath) + return + + this._didRealpath = true + + var n = this.matches.length + if (n === 0) + return this._finish() + + var self = this + for (var i = 0; i < this.matches.length; i++) + this._realpathSet(i, next) + + function next () { + if (--n === 0) + self._finish() + } +} + +Glob.prototype._realpathSet = function (index, cb) { + var matchset = this.matches[index] + if (!matchset) + return cb() + + var found = Object.keys(matchset) + var self = this + var n = found.length + + if (n === 0) + return cb() + + var set = this.matches[index] = Object.create(null) + found.forEach(function (p, i) { + // If there's a problem with the stat, then it means that + // one or more of the links in the realpath couldn't be + // resolved. just return the abs value in that case. + p = self._makeAbs(p) + fs.realpath(p, self.realpathCache, function (er, real) { + if (!er) + set[real] = true + else if (er.syscall === 'stat') + set[p] = true + else + self.emit('error', er) // srsly wtf right here + + if (--n === 0) { + self.matches[index] = set + cb() + } + }) + }) +} + +Glob.prototype._mark = function (p) { + return common.mark(this, p) +} + +Glob.prototype._makeAbs = function (f) { + return common.makeAbs(this, f) +} + +Glob.prototype.abort = function () { + this.aborted = true + this.emit('abort') +} + +Glob.prototype.pause = function () { + if (!this.paused) { + this.paused = true + this.emit('pause') + } +} + +Glob.prototype.resume = function () { + if (this.paused) { + this.emit('resume') + this.paused = false + if (this._emitQueue.length) { + var eq = this._emitQueue.slice(0) + this._emitQueue.length = 0 + for (var i = 0; i < eq.length; i ++) { + var e = eq[i] + this._emitMatch(e[0], e[1]) + } + } + if (this._processQueue.length) { + var pq = this._processQueue.slice(0) + this._processQueue.length = 0 + for (var i = 0; i < pq.length; i ++) { + var p = pq[i] + this._processing-- + this._process(p[0], p[1], p[2], p[3]) + } + } + } +} + +Glob.prototype._process = function (pattern, index, inGlobStar, cb) { + assert(this instanceof Glob) + assert(typeof cb === 'function') + + if (this.aborted) + return + + this._processing++ + if (this.paused) { + this._processQueue.push([pattern, index, inGlobStar, cb]) + return + } + + //console.error('PROCESS %d', this._processing, pattern) + + // Get the first [n] parts of pattern that are all strings. + var n = 0 + while (typeof pattern[n] === 'string') { + n ++ + } + // now n is the index of the first one that is *not* a string. 
+ + // see if there's anything else + var prefix + switch (n) { + // if not, then this is rather simple + case pattern.length: + this._processSimple(pattern.join('/'), index, cb) + return + + case 0: + // pattern *starts* with some non-trivial item. + // going to readdir(cwd), but not include the prefix in matches. + prefix = null + break + + default: + // pattern has some string bits in the front. + // whatever it starts with, whether that's 'absolute' like /foo/bar, + // or 'relative' like '../baz' + prefix = pattern.slice(0, n).join('/') + break + } + + var remain = pattern.slice(n) + + // get the list of entries. + var read + if (prefix === null) + read = '.' + else if (isAbsolute(prefix) || isAbsolute(pattern.join('/'))) { + if (!prefix || !isAbsolute(prefix)) + prefix = '/' + prefix + read = prefix + } else + read = prefix + + var abs = this._makeAbs(read) + + //if ignored, skip _processing + if (childrenIgnored(this, read)) + return cb() + + var isGlobStar = remain[0] === minimatch.GLOBSTAR + if (isGlobStar) + this._processGlobStar(prefix, read, abs, remain, index, inGlobStar, cb) + else + this._processReaddir(prefix, read, abs, remain, index, inGlobStar, cb) +} + +Glob.prototype._processReaddir = function (prefix, read, abs, remain, index, inGlobStar, cb) { + var self = this + this._readdir(abs, inGlobStar, function (er, entries) { + return self._processReaddir2(prefix, read, abs, remain, index, inGlobStar, entries, cb) + }) +} + +Glob.prototype._processReaddir2 = function (prefix, read, abs, remain, index, inGlobStar, entries, cb) { + + // if the abs isn't a dir, then nothing can match! + if (!entries) + return cb() + + // It will only match dot entries if it starts with a dot, or if + // dot is set. Stuff like @(.foo|.bar) isn't allowed. + var pn = remain[0] + var negate = !!this.minimatch.negate + var rawGlob = pn._glob + var dotOk = this.dot || rawGlob.charAt(0) === '.' + + var matchedEntries = [] + for (var i = 0; i < entries.length; i++) { + var e = entries[i] + if (e.charAt(0) !== '.' || dotOk) { + var m + if (negate && !prefix) { + m = !e.match(pn) + } else { + m = e.match(pn) + } + if (m) + matchedEntries.push(e) + } + } + + //console.error('prd2', prefix, entries, remain[0]._glob, matchedEntries) + + var len = matchedEntries.length + // If there are no matched entries, then nothing matches. + if (len === 0) + return cb() + + // if this is the last remaining pattern bit, then no need for + // an additional stat *unless* the user has specified mark or + // stat explicitly. We know they exist, since readdir returned + // them. + + if (remain.length === 1 && !this.mark && !this.stat) { + if (!this.matches[index]) + this.matches[index] = Object.create(null) + + for (var i = 0; i < len; i ++) { + var e = matchedEntries[i] + if (prefix) { + if (prefix !== '/') + e = prefix + '/' + e + else + e = prefix + e + } + + if (e.charAt(0) === '/' && !this.nomount) { + e = path.join(this.root, e) + } + this._emitMatch(index, e) + } + // This was the last one, and no stats were needed + return cb() + } + + // now test all matched entries as stand-ins for that part + // of the pattern. 
+ remain.shift() + for (var i = 0; i < len; i ++) { + var e = matchedEntries[i] + var newPattern + if (prefix) { + if (prefix !== '/') + e = prefix + '/' + e + else + e = prefix + e + } + this._process([e].concat(remain), index, inGlobStar, cb) + } + cb() +} + +Glob.prototype._emitMatch = function (index, e) { + if (this.aborted) + return + + if (this.matches[index][e]) + return + + if (isIgnored(this, e)) + return + + if (this.paused) { + this._emitQueue.push([index, e]) + return + } + + var abs = this._makeAbs(e) + + if (this.nodir) { + var c = this.cache[abs] + if (c === 'DIR' || Array.isArray(c)) + return + } + + if (this.mark) + e = this._mark(e) + + this.matches[index][e] = true + + var st = this.statCache[abs] + if (st) + this.emit('stat', e, st) + + this.emit('match', e) +} + +Glob.prototype._readdirInGlobStar = function (abs, cb) { + if (this.aborted) + return + + // follow all symlinked directories forever + // just proceed as if this is a non-globstar situation + if (this.follow) + return this._readdir(abs, false, cb) + + var lstatkey = 'lstat\0' + abs + var self = this + var lstatcb = inflight(lstatkey, lstatcb_) + + if (lstatcb) + fs.lstat(abs, lstatcb) + + function lstatcb_ (er, lstat) { + if (er) + return cb() + + var isSym = lstat.isSymbolicLink() + self.symlinks[abs] = isSym + + // If it's not a symlink or a dir, then it's definitely a regular file. + // don't bother doing a readdir in that case. + if (!isSym && !lstat.isDirectory()) { + self.cache[abs] = 'FILE' + cb() + } else + self._readdir(abs, false, cb) + } +} + +Glob.prototype._readdir = function (abs, inGlobStar, cb) { + if (this.aborted) + return + + cb = inflight('readdir\0'+abs+'\0'+inGlobStar, cb) + if (!cb) + return + + //console.error('RD %j %j', +inGlobStar, abs) + if (inGlobStar && !ownProp(this.symlinks, abs)) + return this._readdirInGlobStar(abs, cb) + + if (ownProp(this.cache, abs)) { + var c = this.cache[abs] + if (!c || c === 'FILE') + return cb() + + if (Array.isArray(c)) + return cb(null, c) + } + + var self = this + fs.readdir(abs, readdirCb(this, abs, cb)) +} + +function readdirCb (self, abs, cb) { + return function (er, entries) { + if (er) + self._readdirError(abs, er, cb) + else + self._readdirEntries(abs, entries, cb) + } +} + +Glob.prototype._readdirEntries = function (abs, entries, cb) { + if (this.aborted) + return + + // if we haven't asked to stat everything, then just + // assume that everything in there exists, so we can avoid + // having to stat it a second time. + if (!this.mark && !this.stat) { + for (var i = 0; i < entries.length; i ++) { + var e = entries[i] + if (abs === '/') + e = abs + e + else + e = abs + '/' + e + this.cache[e] = true + } + } + + this.cache[abs] = entries + return cb(null, entries) +} + +Glob.prototype._readdirError = function (f, er, cb) { + if (this.aborted) + return + + // handle errors, and cache the information + switch (er.code) { + case 'ENOTSUP': // https://github.com/isaacs/node-glob/issues/205 + case 'ENOTDIR': // totally normal. means it *does* exist. + this.cache[this._makeAbs(f)] = 'FILE' + break + + case 'ENOENT': // not terribly unusual + case 'ELOOP': + case 'ENAMETOOLONG': + case 'UNKNOWN': + this.cache[this._makeAbs(f)] = false + break + + default: // some unusual error. Treat as failure. 
+ this.cache[this._makeAbs(f)] = false + if (this.strict) { + this.emit('error', er) + // If the error is handled, then we abort + // if not, we threw out of here + this.abort() + } + if (!this.silent) + console.error('glob error', er) + break + } + + return cb() +} + +Glob.prototype._processGlobStar = function (prefix, read, abs, remain, index, inGlobStar, cb) { + var self = this + this._readdir(abs, inGlobStar, function (er, entries) { + self._processGlobStar2(prefix, read, abs, remain, index, inGlobStar, entries, cb) + }) +} + + +Glob.prototype._processGlobStar2 = function (prefix, read, abs, remain, index, inGlobStar, entries, cb) { + //console.error('pgs2', prefix, remain[0], entries) + + // no entries means not a dir, so it can never have matches + // foo.txt/** doesn't match foo.txt + if (!entries) + return cb() + + // test without the globstar, and with every child both below + // and replacing the globstar. + var remainWithoutGlobStar = remain.slice(1) + var gspref = prefix ? [ prefix ] : [] + var noGlobStar = gspref.concat(remainWithoutGlobStar) + + // the noGlobStar pattern exits the inGlobStar state + this._process(noGlobStar, index, false, cb) + + var isSym = this.symlinks[abs] + var len = entries.length + + // If it's a symlink, and we're in a globstar, then stop + if (isSym && inGlobStar) + return cb() + + for (var i = 0; i < len; i++) { + var e = entries[i] + if (e.charAt(0) === '.' && !this.dot) + continue + + // these two cases enter the inGlobStar state + var instead = gspref.concat(entries[i], remainWithoutGlobStar) + this._process(instead, index, true, cb) + + var below = gspref.concat(entries[i], remain) + this._process(below, index, true, cb) + } + + cb() +} + +Glob.prototype._processSimple = function (prefix, index, cb) { + // XXX review this. Shouldn't it be doing the mounting etc + // before doing stat? kinda weird? + var self = this + this._stat(prefix, function (er, exists) { + self._processSimple2(prefix, index, er, exists, cb) + }) +} +Glob.prototype._processSimple2 = function (prefix, index, er, exists, cb) { + + //console.error('ps2', prefix, exists) + + if (!this.matches[index]) + this.matches[index] = Object.create(null) + + // If it doesn't exist, then just mark the lack of results + if (!exists) + return cb() + + if (prefix && isAbsolute(prefix) && !this.nomount) { + var trail = /[\/\\]$/.test(prefix) + if (prefix.charAt(0) === '/') { + prefix = path.join(this.root, prefix) + } else { + prefix = path.resolve(this.root, prefix) + if (trail) + prefix += '/' + } + } + + if (process.platform === 'win32') + prefix = prefix.replace(/\\/g, '/') + + // Mark this as a match + this._emitMatch(index, prefix) + cb() +} + +// Returns either 'DIR', 'FILE', or false +Glob.prototype._stat = function (f, cb) { + var abs = this._makeAbs(f) + var needDir = f.slice(-1) === '/' + + if (f.length > this.maxLength) + return cb() + + if (!this.stat && ownProp(this.cache, abs)) { + var c = this.cache[abs] + + if (Array.isArray(c)) + c = 'DIR' + + // It exists, but maybe not how we need it + if (!needDir || c === 'DIR') + return cb(null, c) + + if (needDir && c === 'FILE') + return cb() + + // otherwise we have to stat, because maybe c=true + // if we know it exists, but not what it is. + } + + var exists + var stat = this.statCache[abs] + if (stat !== undefined) { + if (stat === false) + return cb(null, stat) + else { + var type = stat.isDirectory() ? 
'DIR' : 'FILE' + if (needDir && type === 'FILE') + return cb() + else + return cb(null, type, stat) + } + } + + var self = this + var statcb = inflight('stat\0' + abs, lstatcb_) + if (statcb) + fs.lstat(abs, statcb) + + function lstatcb_ (er, lstat) { + if (lstat && lstat.isSymbolicLink()) { + // If it's a symlink, then treat it as the target, unless + // the target does not exist, then treat it as a file. + return fs.stat(abs, function (er, stat) { + if (er) + self._stat2(f, abs, null, lstat, cb) + else + self._stat2(f, abs, er, stat, cb) + }) + } else { + self._stat2(f, abs, er, lstat, cb) + } + } +} + +Glob.prototype._stat2 = function (f, abs, er, stat, cb) { + if (er) { + this.statCache[abs] = false + return cb() + } + + var needDir = f.slice(-1) === '/' + this.statCache[abs] = stat + + if (abs.slice(-1) === '/' && !stat.isDirectory()) + return cb(null, false, stat) + + var c = stat.isDirectory() ? 'DIR' : 'FILE' + this.cache[abs] = this.cache[abs] || c + + if (needDir && c !== 'DIR') + return cb() + + return cb(null, c, stat) +} + +}).call(this,require('_process')) +},{"./common.js":15,"./sync.js":17,"_process":24,"assert":9,"events":14,"fs":12,"inflight":18,"inherits":19,"minimatch":20,"once":21,"path":22,"path-is-absolute":23,"util":28}],17:[function(require,module,exports){ +(function (process){ +module.exports = globSync +globSync.GlobSync = GlobSync + +var fs = require('fs') +var minimatch = require('minimatch') +var Minimatch = minimatch.Minimatch +var Glob = require('./glob.js').Glob +var util = require('util') +var path = require('path') +var assert = require('assert') +var isAbsolute = require('path-is-absolute') +var common = require('./common.js') +var alphasort = common.alphasort +var alphasorti = common.alphasorti +var setopts = common.setopts +var ownProp = common.ownProp +var childrenIgnored = common.childrenIgnored + +function globSync (pattern, options) { + if (typeof options === 'function' || arguments.length === 3) + throw new TypeError('callback provided to sync glob\n'+ + 'See: https://github.com/isaacs/node-glob/issues/167') + + return new GlobSync(pattern, options).found +} + +function GlobSync (pattern, options) { + if (!pattern) + throw new Error('must provide pattern') + + if (typeof options === 'function' || arguments.length === 3) + throw new TypeError('callback provided to sync glob\n'+ + 'See: https://github.com/isaacs/node-glob/issues/167') + + if (!(this instanceof GlobSync)) + return new GlobSync(pattern, options) + + setopts(this, pattern, options) + + if (this.noprocess) + return this + + var n = this.minimatch.set.length + this.matches = new Array(n) + for (var i = 0; i < n; i ++) { + this._process(this.minimatch.set[i], i, false) + } + this._finish() +} + +GlobSync.prototype._finish = function () { + assert(this instanceof GlobSync) + if (this.realpath) { + var self = this + this.matches.forEach(function (matchset, index) { + var set = self.matches[index] = Object.create(null) + for (var p in matchset) { + try { + p = self._makeAbs(p) + var real = fs.realpathSync(p, self.realpathCache) + set[real] = true + } catch (er) { + if (er.syscall === 'stat') + set[self._makeAbs(p)] = true + else + throw er + } + } + }) + } + common.finish(this) +} + + +GlobSync.prototype._process = function (pattern, index, inGlobStar) { + assert(this instanceof GlobSync) + + // Get the first [n] parts of pattern that are all strings. + var n = 0 + while (typeof pattern[n] === 'string') { + n ++ + } + // now n is the index of the first one that is *not* a string. 
+ + // See if there's anything else + var prefix + switch (n) { + // if not, then this is rather simple + case pattern.length: + this._processSimple(pattern.join('/'), index) + return + + case 0: + // pattern *starts* with some non-trivial item. + // going to readdir(cwd), but not include the prefix in matches. + prefix = null + break + + default: + // pattern has some string bits in the front. + // whatever it starts with, whether that's 'absolute' like /foo/bar, + // or 'relative' like '../baz' + prefix = pattern.slice(0, n).join('/') + break + } + + var remain = pattern.slice(n) + + // get the list of entries. + var read + if (prefix === null) + read = '.' + else if (isAbsolute(prefix) || isAbsolute(pattern.join('/'))) { + if (!prefix || !isAbsolute(prefix)) + prefix = '/' + prefix + read = prefix + } else + read = prefix + + var abs = this._makeAbs(read) + + //if ignored, skip processing + if (childrenIgnored(this, read)) + return + + var isGlobStar = remain[0] === minimatch.GLOBSTAR + if (isGlobStar) + this._processGlobStar(prefix, read, abs, remain, index, inGlobStar) + else + this._processReaddir(prefix, read, abs, remain, index, inGlobStar) +} + + +GlobSync.prototype._processReaddir = function (prefix, read, abs, remain, index, inGlobStar) { + var entries = this._readdir(abs, inGlobStar) + + // if the abs isn't a dir, then nothing can match! + if (!entries) + return + + // It will only match dot entries if it starts with a dot, or if + // dot is set. Stuff like @(.foo|.bar) isn't allowed. + var pn = remain[0] + var negate = !!this.minimatch.negate + var rawGlob = pn._glob + var dotOk = this.dot || rawGlob.charAt(0) === '.' + + var matchedEntries = [] + for (var i = 0; i < entries.length; i++) { + var e = entries[i] + if (e.charAt(0) !== '.' || dotOk) { + var m + if (negate && !prefix) { + m = !e.match(pn) + } else { + m = e.match(pn) + } + if (m) + matchedEntries.push(e) + } + } + + var len = matchedEntries.length + // If there are no matched entries, then nothing matches. + if (len === 0) + return + + // if this is the last remaining pattern bit, then no need for + // an additional stat *unless* the user has specified mark or + // stat explicitly. We know they exist, since readdir returned + // them. + + if (remain.length === 1 && !this.mark && !this.stat) { + if (!this.matches[index]) + this.matches[index] = Object.create(null) + + for (var i = 0; i < len; i ++) { + var e = matchedEntries[i] + if (prefix) { + if (prefix.slice(-1) !== '/') + e = prefix + '/' + e + else + e = prefix + e + } + + if (e.charAt(0) === '/' && !this.nomount) { + e = path.join(this.root, e) + } + this.matches[index][e] = true + } + // This was the last one, and no stats were needed + return + } + + // now test all matched entries as stand-ins for that part + // of the pattern. 
+ remain.shift() + for (var i = 0; i < len; i ++) { + var e = matchedEntries[i] + var newPattern + if (prefix) + newPattern = [prefix, e] + else + newPattern = [e] + this._process(newPattern.concat(remain), index, inGlobStar) + } +} + + +GlobSync.prototype._emitMatch = function (index, e) { + var abs = this._makeAbs(e) + if (this.mark) + e = this._mark(e) + + if (this.matches[index][e]) + return + + if (this.nodir) { + var c = this.cache[this._makeAbs(e)] + if (c === 'DIR' || Array.isArray(c)) + return + } + + this.matches[index][e] = true + if (this.stat) + this._stat(e) +} + + +GlobSync.prototype._readdirInGlobStar = function (abs) { + // follow all symlinked directories forever + // just proceed as if this is a non-globstar situation + if (this.follow) + return this._readdir(abs, false) + + var entries + var lstat + var stat + try { + lstat = fs.lstatSync(abs) + } catch (er) { + // lstat failed, doesn't exist + return null + } + + var isSym = lstat.isSymbolicLink() + this.symlinks[abs] = isSym + + // If it's not a symlink or a dir, then it's definitely a regular file. + // don't bother doing a readdir in that case. + if (!isSym && !lstat.isDirectory()) + this.cache[abs] = 'FILE' + else + entries = this._readdir(abs, false) + + return entries +} + +GlobSync.prototype._readdir = function (abs, inGlobStar) { + var entries + + if (inGlobStar && !ownProp(this.symlinks, abs)) + return this._readdirInGlobStar(abs) + + if (ownProp(this.cache, abs)) { + var c = this.cache[abs] + if (!c || c === 'FILE') + return null + + if (Array.isArray(c)) + return c + } + + try { + return this._readdirEntries(abs, fs.readdirSync(abs)) + } catch (er) { + this._readdirError(abs, er) + return null + } +} + +GlobSync.prototype._readdirEntries = function (abs, entries) { + // if we haven't asked to stat everything, then just + // assume that everything in there exists, so we can avoid + // having to stat it a second time. + if (!this.mark && !this.stat) { + for (var i = 0; i < entries.length; i ++) { + var e = entries[i] + if (abs === '/') + e = abs + e + else + e = abs + '/' + e + this.cache[e] = true + } + } + + this.cache[abs] = entries + + // mark and cache dir-ness + return entries +} + +GlobSync.prototype._readdirError = function (f, er) { + // handle errors, and cache the information + switch (er.code) { + case 'ENOTSUP': // https://github.com/isaacs/node-glob/issues/205 + case 'ENOTDIR': // totally normal. means it *does* exist. + this.cache[this._makeAbs(f)] = 'FILE' + break + + case 'ENOENT': // not terribly unusual + case 'ELOOP': + case 'ENAMETOOLONG': + case 'UNKNOWN': + this.cache[this._makeAbs(f)] = false + break + + default: // some unusual error. Treat as failure. + this.cache[this._makeAbs(f)] = false + if (this.strict) + throw er + if (!this.silent) + console.error('glob error', er) + break + } +} + +GlobSync.prototype._processGlobStar = function (prefix, read, abs, remain, index, inGlobStar) { + + var entries = this._readdir(abs, inGlobStar) + + // no entries means not a dir, so it can never have matches + // foo.txt/** doesn't match foo.txt + if (!entries) + return + + // test without the globstar, and with every child both below + // and replacing the globstar. + var remainWithoutGlobStar = remain.slice(1) + var gspref = prefix ? 
[ prefix ] : [] + var noGlobStar = gspref.concat(remainWithoutGlobStar) + + // the noGlobStar pattern exits the inGlobStar state + this._process(noGlobStar, index, false) + + var len = entries.length + var isSym = this.symlinks[abs] + + // If it's a symlink, and we're in a globstar, then stop + if (isSym && inGlobStar) + return + + for (var i = 0; i < len; i++) { + var e = entries[i] + if (e.charAt(0) === '.' && !this.dot) + continue + + // these two cases enter the inGlobStar state + var instead = gspref.concat(entries[i], remainWithoutGlobStar) + this._process(instead, index, true) + + var below = gspref.concat(entries[i], remain) + this._process(below, index, true) + } +} + +GlobSync.prototype._processSimple = function (prefix, index) { + // XXX review this. Shouldn't it be doing the mounting etc + // before doing stat? kinda weird? + var exists = this._stat(prefix) + + if (!this.matches[index]) + this.matches[index] = Object.create(null) + + // If it doesn't exist, then just mark the lack of results + if (!exists) + return + + if (prefix && isAbsolute(prefix) && !this.nomount) { + var trail = /[\/\\]$/.test(prefix) + if (prefix.charAt(0) === '/') { + prefix = path.join(this.root, prefix) + } else { + prefix = path.resolve(this.root, prefix) + if (trail) + prefix += '/' + } + } + + if (process.platform === 'win32') + prefix = prefix.replace(/\\/g, '/') + + // Mark this as a match + this.matches[index][prefix] = true +} + +// Returns either 'DIR', 'FILE', or false +GlobSync.prototype._stat = function (f) { + var abs = this._makeAbs(f) + var needDir = f.slice(-1) === '/' + + if (f.length > this.maxLength) + return false + + if (!this.stat && ownProp(this.cache, abs)) { + var c = this.cache[abs] + + if (Array.isArray(c)) + c = 'DIR' + + // It exists, but maybe not how we need it + if (!needDir || c === 'DIR') + return c + + if (needDir && c === 'FILE') + return false + + // otherwise we have to stat, because maybe c=true + // if we know it exists, but not what it is. + } + + var exists + var stat = this.statCache[abs] + if (!stat) { + var lstat + try { + lstat = fs.lstatSync(abs) + } catch (er) { + return false + } + + if (lstat.isSymbolicLink()) { + try { + stat = fs.statSync(abs) + } catch (er) { + stat = lstat + } + } else { + stat = lstat + } + } + + this.statCache[abs] = stat + + var c = stat.isDirectory() ? 'DIR' : 'FILE' + this.cache[abs] = this.cache[abs] || c + + if (needDir && c !== 'DIR') + return false + + return c +} + +GlobSync.prototype._mark = function (p) { + return common.mark(this, p) +} + +GlobSync.prototype._makeAbs = function (f) { + return common.makeAbs(this, f) +} + +}).call(this,require('_process')) +},{"./common.js":15,"./glob.js":16,"_process":24,"assert":9,"fs":12,"minimatch":20,"path":22,"path-is-absolute":23,"util":28}],18:[function(require,module,exports){ +(function (process){ +var wrappy = require('wrappy') +var reqs = Object.create(null) +var once = require('once') + +module.exports = wrappy(inflight) + +function inflight (key, cb) { + if (reqs[key]) { + reqs[key].push(cb) + return null + } else { + reqs[key] = [cb] + return makeres(key) + } +} + +function makeres (key) { + return once(function RES () { + var cbs = reqs[key] + var len = cbs.length + var args = slice(arguments) + + // XXX It's somewhat ambiguous whether a new callback added in this + // pass should be queued for later execution if something in the + // list of callbacks throws, or if it should just be discarded. 
+ // However, it's such an edge case that it hardly matters, and either + // choice is likely as surprising as the other. + // As it happens, we do go ahead and schedule it for later execution. + try { + for (var i = 0; i < len; i++) { + cbs[i].apply(null, args) + } + } finally { + if (cbs.length > len) { + // added more in the interim. + // de-zalgo, just in case, but don't call again. + cbs.splice(0, len) + process.nextTick(function () { + RES.apply(null, args) + }) + } else { + delete reqs[key] + } + } + }) +} + +function slice (args) { + var length = args.length + var array = [] + + for (var i = 0; i < length; i++) array[i] = args[i] + return array +} + +}).call(this,require('_process')) +},{"_process":24,"once":21,"wrappy":29}],19:[function(require,module,exports){ +if (typeof Object.create === 'function') { + // implementation from standard node.js 'util' module + module.exports = function inherits(ctor, superCtor) { + ctor.super_ = superCtor + ctor.prototype = Object.create(superCtor.prototype, { + constructor: { + value: ctor, + enumerable: false, + writable: true, + configurable: true + } + }); + }; +} else { + // old school shim for old browsers + module.exports = function inherits(ctor, superCtor) { + ctor.super_ = superCtor + var TempCtor = function () {} + TempCtor.prototype = superCtor.prototype + ctor.prototype = new TempCtor() + ctor.prototype.constructor = ctor + } +} + +},{}],20:[function(require,module,exports){ +module.exports = minimatch +minimatch.Minimatch = Minimatch + +var path = { sep: '/' } +try { + path = require('path') +} catch (er) {} + +var GLOBSTAR = minimatch.GLOBSTAR = Minimatch.GLOBSTAR = {} +var expand = require('brace-expansion') + +var plTypes = { + '!': { open: '(?:(?!(?:', close: '))[^/]*?)'}, + '?': { open: '(?:', close: ')?' }, + '+': { open: '(?:', close: ')+' }, + '*': { open: '(?:', close: ')*' }, + '@': { open: '(?:', close: ')' } +} + +// any single thing other than / +// don't need to escape / when using new RegExp() +var qmark = '[^/]' + +// * => any number of characters +var star = qmark + '*?' + +// ** when dots are allowed. Anything goes, except .. and . +// not (^ or / followed by one or two dots followed by $ or /), +// followed by anything, any number of times. +var twoStarDot = '(?:(?!(?:\\\/|^)(?:\\.{1,2})($|\\\/)).)*?' + +// not a ^ or / followed by a dot, +// followed by anything, any number of times. +var twoStarNoDot = '(?:(?!(?:\\\/|^)\\.).)*?' + +// characters that need to be escaped in RegExp. +var reSpecials = charSet('().*{}+?[]^$\\!') + +// "abc" -> { a:true, b:true, c:true } +function charSet (s) { + return s.split('').reduce(function (set, c) { + set[c] = true + return set + }, {}) +} + +// normalizes slashes. 
+var slashSplit = /\/+/ + +minimatch.filter = filter +function filter (pattern, options) { + options = options || {} + return function (p, i, list) { + return minimatch(p, pattern, options) + } +} + +function ext (a, b) { + a = a || {} + b = b || {} + var t = {} + Object.keys(b).forEach(function (k) { + t[k] = b[k] + }) + Object.keys(a).forEach(function (k) { + t[k] = a[k] + }) + return t +} + +minimatch.defaults = function (def) { + if (!def || !Object.keys(def).length) return minimatch + + var orig = minimatch + + var m = function minimatch (p, pattern, options) { + return orig.minimatch(p, pattern, ext(def, options)) + } + + m.Minimatch = function Minimatch (pattern, options) { + return new orig.Minimatch(pattern, ext(def, options)) + } + + return m +} + +Minimatch.defaults = function (def) { + if (!def || !Object.keys(def).length) return Minimatch + return minimatch.defaults(def).Minimatch +} + +function minimatch (p, pattern, options) { + if (typeof pattern !== 'string') { + throw new TypeError('glob pattern string required') + } + + if (!options) options = {} + + // shortcut: comments match nothing. + if (!options.nocomment && pattern.charAt(0) === '#') { + return false + } + + // "" only matches "" + if (pattern.trim() === '') return p === '' + + return new Minimatch(pattern, options).match(p) +} + +function Minimatch (pattern, options) { + if (!(this instanceof Minimatch)) { + return new Minimatch(pattern, options) + } + + if (typeof pattern !== 'string') { + throw new TypeError('glob pattern string required') + } + + if (!options) options = {} + pattern = pattern.trim() + + // windows support: need to use /, not \ + if (path.sep !== '/') { + pattern = pattern.split(path.sep).join('/') + } + + this.options = options + this.set = [] + this.pattern = pattern + this.regexp = null + this.negate = false + this.comment = false + this.empty = false + + // make the set of regexps etc. + this.make() +} + +Minimatch.prototype.debug = function () {} + +Minimatch.prototype.make = make +function make () { + // don't do it more than once. + if (this._made) return + + var pattern = this.pattern + var options = this.options + + // empty patterns and comments match nothing. + if (!options.nocomment && pattern.charAt(0) === '#') { + this.comment = true + return + } + if (!pattern) { + this.empty = true + return + } + + // step 1: figure out negation, etc. + this.parseNegate() + + // step 2: expand braces + var set = this.globSet = this.braceExpand() + + if (options.debug) this.debug = console.error + + this.debug(this.pattern, set) + + // step 3: now we have a set, so turn each one into a series of path-portion + // matching patterns. + // These will be regexps, except in the case of "**", which is + // set to the GLOBSTAR object for globstar behavior, + // and will not contain any / characters + set = this.globParts = set.map(function (s) { + return s.split(slashSplit) + }) + + this.debug(this.pattern, set) + + // glob --> regexps + set = set.map(function (s, si, set) { + return s.map(this.parse, this) + }, this) + + this.debug(this.pattern, set) + + // filter out everything that didn't compile properly. 
+ set = set.filter(function (s) { + return s.indexOf(false) === -1 + }) + + this.debug(this.pattern, set) + + this.set = set +} + +Minimatch.prototype.parseNegate = parseNegate +function parseNegate () { + var pattern = this.pattern + var negate = false + var options = this.options + var negateOffset = 0 + + if (options.nonegate) return + + for (var i = 0, l = pattern.length + ; i < l && pattern.charAt(i) === '!' + ; i++) { + negate = !negate + negateOffset++ + } + + if (negateOffset) this.pattern = pattern.substr(negateOffset) + this.negate = negate +} + +// Brace expansion: +// a{b,c}d -> abd acd +// a{b,}c -> abc ac +// a{0..3}d -> a0d a1d a2d a3d +// a{b,c{d,e}f}g -> abg acdfg acefg +// a{b,c}d{e,f}g -> abdeg acdeg abdeg abdfg +// +// Invalid sets are not expanded. +// a{2..}b -> a{2..}b +// a{b}c -> a{b}c +minimatch.braceExpand = function (pattern, options) { + return braceExpand(pattern, options) +} + +Minimatch.prototype.braceExpand = braceExpand + +function braceExpand (pattern, options) { + if (!options) { + if (this instanceof Minimatch) { + options = this.options + } else { + options = {} + } + } + + pattern = typeof pattern === 'undefined' + ? this.pattern : pattern + + if (typeof pattern === 'undefined') { + throw new TypeError('undefined pattern') + } + + if (options.nobrace || + !pattern.match(/\{.*\}/)) { + // shortcut. no need to expand. + return [pattern] + } + + return expand(pattern) +} + +// parse a component of the expanded set. +// At this point, no pattern may contain "/" in it +// so we're going to return a 2d array, where each entry is the full +// pattern, split on '/', and then turned into a regular expression. +// A regexp is made at the end which joins each array with an +// escaped /, and another full one which joins each regexp with |. +// +// Following the lead of Bash 4.1, note that "**" only has special meaning +// when it is the *only* thing in a path portion. Otherwise, any series +// of * is equivalent to a single *. Globstar behavior is enabled by +// default, and can be disabled by setting options.noglobstar. +Minimatch.prototype.parse = parse +var SUBPARSE = {} +function parse (pattern, isSub) { + if (pattern.length > 1024 * 64) { + throw new TypeError('pattern is too long') + } + + var options = this.options + + // shortcuts + if (!options.noglobstar && pattern === '**') return GLOBSTAR + if (pattern === '') return '' + + var re = '' + var hasMagic = !!options.nocase + var escaping = false + // ? => one single character + var patternListStack = [] + var negativeLists = [] + var stateChar + var inClass = false + var reClassStart = -1 + var classStart = -1 + // . and .. never match anything that doesn't start with ., + // even when options.dot is set. + var patternStart = pattern.charAt(0) === '.' ? '' // anything + // not (start or / followed by . or .. followed by / or end) + : options.dot ? '(?!(?:^|\\\/)\\.{1,2}(?:$|\\\/))' + : '(?!\\.)' + var self = this + + function clearStateChar () { + if (stateChar) { + // we had some state-tracking character + // that wasn't consumed by this pass. + switch (stateChar) { + case '*': + re += star + hasMagic = true + break + case '?': + re += qmark + hasMagic = true + break + default: + re += '\\' + stateChar + break + } + self.debug('clearStateChar %j %j', stateChar, re) + stateChar = false + } + } + + for (var i = 0, len = pattern.length, c + ; (i < len) && (c = pattern.charAt(i)) + ; i++) { + this.debug('%s\t%s %s %j', pattern, i, re, c) + + // skip over any that are escaped. 
+ if (escaping && reSpecials[c]) { + re += '\\' + c + escaping = false + continue + } + + switch (c) { + case '/': + // completely not allowed, even escaped. + // Should already be path-split by now. + return false + + case '\\': + clearStateChar() + escaping = true + continue + + // the various stateChar values + // for the "extglob" stuff. + case '?': + case '*': + case '+': + case '@': + case '!': + this.debug('%s\t%s %s %j <-- stateChar', pattern, i, re, c) + + // all of those are literals inside a class, except that + // the glob [!a] means [^a] in regexp + if (inClass) { + this.debug(' in class') + if (c === '!' && i === classStart + 1) c = '^' + re += c + continue + } + + // if we already have a stateChar, then it means + // that there was something like ** or +? in there. + // Handle the stateChar, then proceed with this one. + self.debug('call clearStateChar %j', stateChar) + clearStateChar() + stateChar = c + // if extglob is disabled, then +(asdf|foo) isn't a thing. + // just clear the statechar *now*, rather than even diving into + // the patternList stuff. + if (options.noext) clearStateChar() + continue + + case '(': + if (inClass) { + re += '(' + continue + } + + if (!stateChar) { + re += '\\(' + continue + } + + patternListStack.push({ + type: stateChar, + start: i - 1, + reStart: re.length, + open: plTypes[stateChar].open, + close: plTypes[stateChar].close + }) + // negation is (?:(?!js)[^/]*) + re += stateChar === '!' ? '(?:(?!(?:' : '(?:' + this.debug('plType %j %j', stateChar, re) + stateChar = false + continue + + case ')': + if (inClass || !patternListStack.length) { + re += '\\)' + continue + } + + clearStateChar() + hasMagic = true + var pl = patternListStack.pop() + // negation is (?:(?!js)[^/]*) + // The others are (?:) + re += pl.close + if (pl.type === '!') { + negativeLists.push(pl) + } + pl.reEnd = re.length + continue + + case '|': + if (inClass || !patternListStack.length || escaping) { + re += '\\|' + escaping = false + continue + } + + clearStateChar() + re += '|' + continue + + // these are mostly the same in regexp and glob + case '[': + // swallow any state-tracking char before the [ + clearStateChar() + + if (inClass) { + re += '\\' + c + continue + } + + inClass = true + classStart = i + reClassStart = re.length + re += c + continue + + case ']': + // a right bracket shall lose its special + // meaning and represent itself in + // a bracket expression if it occurs + // first in the list. -- POSIX.2 2.8.3.2 + if (i === classStart + 1 || !inClass) { + re += '\\' + c + escaping = false + continue + } + + // handle the case where we left a class open. + // "[z-a]" is valid, equivalent to "\[z-a\]" + if (inClass) { + // split where the last [ was, make sure we don't have + // an invalid re. if so, re-walk the contents of the + // would-be class to re-translate any characters that + // were passed through as-is + // TODO: It would probably be faster to determine this + // without a try/catch and a new RegExp, but it's tricky + // to do safely. For now, this is safe and works. + var cs = pattern.substring(classStart + 1, i) + try { + RegExp('[' + cs + ']') + } catch (er) { + // not a valid class! + var sp = this.parse(cs, SUBPARSE) + re = re.substr(0, reClassStart) + '\\[' + sp[0] + '\\]' + hasMagic = hasMagic || sp[1] + inClass = false + continue + } + } + + // finish up the class. 
+ hasMagic = true + inClass = false + re += c + continue + + default: + // swallow any state char that wasn't consumed + clearStateChar() + + if (escaping) { + // no need + escaping = false + } else if (reSpecials[c] + && !(c === '^' && inClass)) { + re += '\\' + } + + re += c + + } // switch + } // for + + // handle the case where we left a class open. + // "[abc" is valid, equivalent to "\[abc" + if (inClass) { + // split where the last [ was, and escape it + // this is a huge pita. We now have to re-walk + // the contents of the would-be class to re-translate + // any characters that were passed through as-is + cs = pattern.substr(classStart + 1) + sp = this.parse(cs, SUBPARSE) + re = re.substr(0, reClassStart) + '\\[' + sp[0] + hasMagic = hasMagic || sp[1] + } + + // handle the case where we had a +( thing at the *end* + // of the pattern. + // each pattern list stack adds 3 chars, and we need to go through + // and escape any | chars that were passed through as-is for the regexp. + // Go through and escape them, taking care not to double-escape any + // | chars that were already escaped. + for (pl = patternListStack.pop(); pl; pl = patternListStack.pop()) { + var tail = re.slice(pl.reStart + pl.open.length) + this.debug('setting tail', re, pl) + // maybe some even number of \, then maybe 1 \, followed by a | + tail = tail.replace(/((?:\\{2}){0,64})(\\?)\|/g, function (_, $1, $2) { + if (!$2) { + // the | isn't already escaped, so escape it. + $2 = '\\' + } + + // need to escape all those slashes *again*, without escaping the + // one that we need for escaping the | character. As it works out, + // escaping an even number of slashes can be done by simply repeating + // it exactly after itself. That's why this trick works. + // + // I am sorry that you have to see this. + return $1 + $1 + $2 + '|' + }) + + this.debug('tail=%j\n %s', tail, tail, pl, re) + var t = pl.type === '*' ? star + : pl.type === '?' ? qmark + : '\\' + pl.type + + hasMagic = true + re = re.slice(0, pl.reStart) + t + '\\(' + tail + } + + // handle trailing things that only matter at the very end. + clearStateChar() + if (escaping) { + // trailing \\ + re += '\\\\' + } + + // only need to apply the nodot start if the re starts with + // something that could conceivably capture a dot + var addPatternStart = false + switch (re.charAt(0)) { + case '.': + case '[': + case '(': addPatternStart = true + } + + // Hack to work around lack of negative lookbehind in JS + // A pattern like: *.!(x).!(y|z) needs to ensure that a name + // like 'a.xyz.yz' doesn't match. So, the first negative + // lookahead, has to look ALL the way ahead, to the end of + // the pattern. + for (var n = negativeLists.length - 1; n > -1; n--) { + var nl = negativeLists[n] + + var nlBefore = re.slice(0, nl.reStart) + var nlFirst = re.slice(nl.reStart, nl.reEnd - 8) + var nlLast = re.slice(nl.reEnd - 8, nl.reEnd) + var nlAfter = re.slice(nl.reEnd) + + nlLast += nlAfter + + // Handle nested stuff like *(*.js|!(*.json)), where open parens + // mean that we should *not* include the ) in the bit that is considered + // "after" the negated section. 
+ var openParensBefore = nlBefore.split('(').length - 1 + var cleanAfter = nlAfter + for (i = 0; i < openParensBefore; i++) { + cleanAfter = cleanAfter.replace(/\)[+*?]?/, '') + } + nlAfter = cleanAfter + + var dollar = '' + if (nlAfter === '' && isSub !== SUBPARSE) { + dollar = '$' + } + var newRe = nlBefore + nlFirst + nlAfter + dollar + nlLast + re = newRe + } + + // if the re is not "" at this point, then we need to make sure + // it doesn't match against an empty path part. + // Otherwise a/* will match a/, which it should not. + if (re !== '' && hasMagic) { + re = '(?=.)' + re + } + + if (addPatternStart) { + re = patternStart + re + } + + // parsing just a piece of a larger pattern. + if (isSub === SUBPARSE) { + return [re, hasMagic] + } + + // skip the regexp for non-magical patterns + // unescape anything in it, though, so that it'll be + // an exact match against a file etc. + if (!hasMagic) { + return globUnescape(pattern) + } + + var flags = options.nocase ? 'i' : '' + try { + var regExp = new RegExp('^' + re + '$', flags) + } catch (er) { + // If it was an invalid regular expression, then it can't match + // anything. This trick looks for a character after the end of + // the string, which is of course impossible, except in multi-line + // mode, but it's not a /m regex. + return new RegExp('$.') + } + + regExp._glob = pattern + regExp._src = re + + return regExp +} + +minimatch.makeRe = function (pattern, options) { + return new Minimatch(pattern, options || {}).makeRe() +} + +Minimatch.prototype.makeRe = makeRe +function makeRe () { + if (this.regexp || this.regexp === false) return this.regexp + + // at this point, this.set is a 2d array of partial + // pattern strings, or "**". + // + // It's better to use .match(). This function shouldn't + // be used, really, but it's pretty convenient sometimes, + // when you just want to work with a regex. + var set = this.set + + if (!set.length) { + this.regexp = false + return this.regexp + } + var options = this.options + + var twoStar = options.noglobstar ? star + : options.dot ? twoStarDot + : twoStarNoDot + var flags = options.nocase ? 'i' : '' + + var re = set.map(function (pattern) { + return pattern.map(function (p) { + return (p === GLOBSTAR) ? twoStar + : (typeof p === 'string') ? regExpEscape(p) + : p._src + }).join('\\\/') + }).join('|') + + // must match entire pattern + // ending in a * or ** will make it less strict. + re = '^(?:' + re + ')$' + + // can match anything, as long as it's not this. + if (this.negate) re = '^(?!' + re + ').*$' + + try { + this.regexp = new RegExp(re, flags) + } catch (ex) { + this.regexp = false + } + return this.regexp +} + +minimatch.match = function (list, pattern, options) { + options = options || {} + var mm = new Minimatch(pattern, options) + list = list.filter(function (f) { + return mm.match(f) + }) + if (mm.options.nonull && !list.length) { + list.push(pattern) + } + return list +} + +Minimatch.prototype.match = match +function match (f, partial) { + this.debug('match', f, this.pattern) + // short-circuit in the case of busted things. + // comments, etc. + if (this.comment) return false + if (this.empty) return f === '' + + if (f === '/' && partial) return true + + var options = this.options + + // windows: need to use /, not \ + if (path.sep !== '/') { + f = f.split(path.sep).join('/') + } + + // treat the test path as a set of pathparts. 
+ f = f.split(slashSplit) + this.debug(this.pattern, 'split', f) + + // just ONE of the pattern sets in this.set needs to match + // in order for it to be valid. If negating, then just one + // match means that we have failed. + // Either way, return on the first hit. + + var set = this.set + this.debug(this.pattern, 'set', set) + + // Find the basename of the path by looking for the last non-empty segment + var filename + var i + for (i = f.length - 1; i >= 0; i--) { + filename = f[i] + if (filename) break + } + + for (i = 0; i < set.length; i++) { + var pattern = set[i] + var file = f + if (options.matchBase && pattern.length === 1) { + file = [filename] + } + var hit = this.matchOne(file, pattern, partial) + if (hit) { + if (options.flipNegate) return true + return !this.negate + } + } + + // didn't get any hits. this is success if it's a negative + // pattern, failure otherwise. + if (options.flipNegate) return false + return this.negate +} + +// set partial to true to test if, for example, +// "/a/b" matches the start of "/*/b/*/d" +// Partial means, if you run out of file before you run +// out of pattern, then that's fine, as long as all +// the parts match. +Minimatch.prototype.matchOne = function (file, pattern, partial) { + var options = this.options + + this.debug('matchOne', + { 'this': this, file: file, pattern: pattern }) + + this.debug('matchOne', file.length, pattern.length) + + for (var fi = 0, + pi = 0, + fl = file.length, + pl = pattern.length + ; (fi < fl) && (pi < pl) + ; fi++, pi++) { + this.debug('matchOne loop') + var p = pattern[pi] + var f = file[fi] + + this.debug(pattern, p, f) + + // should be impossible. + // some invalid regexp stuff in the set. + if (p === false) return false + + if (p === GLOBSTAR) { + this.debug('GLOBSTAR', [pattern, p, f]) + + // "**" + // a/**/b/**/c would match the following: + // a/b/x/y/z/c + // a/x/y/z/b/c + // a/b/x/b/x/c + // a/b/c + // To do this, take the rest of the pattern after + // the **, and see if it would match the file remainder. + // If so, return success. + // If not, the ** "swallows" a segment, and try again. + // This is recursively awful. + // + // a/**/b/**/c matching a/b/x/y/z/c + // - a matches a + // - doublestar + // - matchOne(b/x/y/z/c, b/**/c) + // - b matches b + // - doublestar + // - matchOne(x/y/z/c, c) -> no + // - matchOne(y/z/c, c) -> no + // - matchOne(z/c, c) -> no + // - matchOne(c, c) yes, hit + var fr = fi + var pr = pi + 1 + if (pr === pl) { + this.debug('** at the end') + // a ** at the end will just swallow the rest. + // We have found a match. + // however, it will not swallow /.x, unless + // options.dot is set. + // . and .. are *never* matched by **, for explosively + // exponential reasons. + for (; fi < fl; fi++) { + if (file[fi] === '.' || file[fi] === '..' || + (!options.dot && file[fi].charAt(0) === '.')) return false + } + return true + } + + // ok, let's see if we can swallow whatever we can. + while (fr < fl) { + var swallowee = file[fr] + + this.debug('\nglobstar while', file, fr, pattern, pr, swallowee) + + // XXX remove this slice. Just pass the start index. + if (this.matchOne(file.slice(fr), pattern.slice(pr), partial)) { + this.debug('globstar found match!', fr, fl, swallowee) + // found a match. + return true + } else { + // can't swallow "." or ".." ever. + // can only swallow ".foo" when explicitly asked. + if (swallowee === '.' || swallowee === '..' 
|| + (!options.dot && swallowee.charAt(0) === '.')) { + this.debug('dot detected!', file, fr, pattern, pr) + break + } + + // ** swallows a segment, and continue. + this.debug('globstar swallow a segment, and continue') + fr++ + } + } + + // no match was found. + // However, in partial mode, we can't say this is necessarily over. + // If there's more *pattern* left, then + if (partial) { + // ran out of file + this.debug('\n>>> no match, partial?', file, fr, pattern, pr) + if (fr === fl) return true + } + return false + } + + // something other than ** + // non-magic patterns just have to match exactly + // patterns with magic have been turned into regexps. + var hit + if (typeof p === 'string') { + if (options.nocase) { + hit = f.toLowerCase() === p.toLowerCase() + } else { + hit = f === p + } + this.debug('string match', p, f, hit) + } else { + hit = f.match(p) + this.debug('pattern match', p, f, hit) + } + + if (!hit) return false + } + + // Note: ending in / means that we'll get a final "" + // at the end of the pattern. This can only match a + // corresponding "" at the end of the file. + // If the file ends in /, then it can only match a + // a pattern that ends in /, unless the pattern just + // doesn't have any more for it. But, a/b/ should *not* + // match "a/b/*", even though "" matches against the + // [^/]*? pattern, except in partial mode, where it might + // simply not be reached yet. + // However, a/b/ should still satisfy a/* + + // now either we fell off the end of the pattern, or we're done. + if (fi === fl && pi === pl) { + // ran out of pattern and filename at the same time. + // an exact hit! + return true + } else if (fi === fl) { + // ran out of file, but still had pattern left. + // this is ok if we're doing the match as part of + // a glob fs traversal. + return partial + } else if (pi === pl) { + // ran out of pattern, still have file left. + // this is only acceptable if we're on the very last + // empty segment of a file with a trailing slash. + // a/* should match a/b/ + var emptyFileEnd = (fi === fl - 1) && (file[fi] === '') + return emptyFileEnd + } + + // should be unreachable. + throw new Error('wtf?') +} + +// replace stuff like \* with * +function globUnescape (s) { + return s.replace(/\\(.)/g, '$1') +} + +function regExpEscape (s) { + return s.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, '\\$&') +} + +},{"brace-expansion":11,"path":22}],21:[function(require,module,exports){ +var wrappy = require('wrappy') +module.exports = wrappy(once) +module.exports.strict = wrappy(onceStrict) + +once.proto = once(function () { + Object.defineProperty(Function.prototype, 'once', { + value: function () { + return once(this) + }, + configurable: true + }) + + Object.defineProperty(Function.prototype, 'onceStrict', { + value: function () { + return onceStrict(this) + }, + configurable: true + }) +}) + +function once (fn) { + var f = function () { + if (f.called) return f.value + f.called = true + return f.value = fn.apply(this, arguments) + } + f.called = false + return f +} + +function onceStrict (fn) { + var f = function () { + if (f.called) + throw new Error(f.onceError) + f.called = true + return f.value = fn.apply(this, arguments) + } + var name = fn.name || 'Function wrapped with `once`' + f.onceError = name + " shouldn't be called more than once" + f.called = false + return f +} + +},{"wrappy":29}],22:[function(require,module,exports){ +(function (process){ +// Copyright Joyent, Inc. and other Node contributors. 
+// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +// resolves . and .. elements in a path array with directory names there +// must be no slashes, empty elements, or device names (c:\) in the array +// (so also no leading and trailing slashes - it does not distinguish +// relative and absolute paths) +function normalizeArray(parts, allowAboveRoot) { + // if the path tries to go above the root, `up` ends up > 0 + var up = 0; + for (var i = parts.length - 1; i >= 0; i--) { + var last = parts[i]; + if (last === '.') { + parts.splice(i, 1); + } else if (last === '..') { + parts.splice(i, 1); + up++; + } else if (up) { + parts.splice(i, 1); + up--; + } + } + + // if the path is allowed to go above the root, restore leading ..s + if (allowAboveRoot) { + for (; up--; up) { + parts.unshift('..'); + } + } + + return parts; +} + +// Split a filename into [root, dir, basename, ext], unix version +// 'root' is just a slash, or nothing. +var splitPathRe = + /^(\/?|)([\s\S]*?)((?:\.{1,2}|[^\/]+?|)(\.[^.\/]*|))(?:[\/]*)$/; +var splitPath = function(filename) { + return splitPathRe.exec(filename).slice(1); +}; + +// path.resolve([from ...], to) +// posix version +exports.resolve = function() { + var resolvedPath = '', + resolvedAbsolute = false; + + for (var i = arguments.length - 1; i >= -1 && !resolvedAbsolute; i--) { + var path = (i >= 0) ? arguments[i] : process.cwd(); + + // Skip empty and invalid entries + if (typeof path !== 'string') { + throw new TypeError('Arguments to path.resolve must be strings'); + } else if (!path) { + continue; + } + + resolvedPath = path + '/' + resolvedPath; + resolvedAbsolute = path.charAt(0) === '/'; + } + + // At this point the path should be resolved to a full absolute path, but + // handle relative paths to be safe (might happen when process.cwd() fails) + + // Normalize the path + resolvedPath = normalizeArray(filter(resolvedPath.split('/'), function(p) { + return !!p; + }), !resolvedAbsolute).join('/'); + + return ((resolvedAbsolute ? '/' : '') + resolvedPath) || '.'; +}; + +// path.normalize(path) +// posix version +exports.normalize = function(path) { + var isAbsolute = exports.isAbsolute(path), + trailingSlash = substr(path, -1) === '/'; + + // Normalize the path + path = normalizeArray(filter(path.split('/'), function(p) { + return !!p; + }), !isAbsolute).join('/'); + + if (!path && !isAbsolute) { + path = '.'; + } + if (path && trailingSlash) { + path += '/'; + } + + return (isAbsolute ? 
'/' : '') + path; +}; + +// posix version +exports.isAbsolute = function(path) { + return path.charAt(0) === '/'; +}; + +// posix version +exports.join = function() { + var paths = Array.prototype.slice.call(arguments, 0); + return exports.normalize(filter(paths, function(p, index) { + if (typeof p !== 'string') { + throw new TypeError('Arguments to path.join must be strings'); + } + return p; + }).join('/')); +}; + + +// path.relative(from, to) +// posix version +exports.relative = function(from, to) { + from = exports.resolve(from).substr(1); + to = exports.resolve(to).substr(1); + + function trim(arr) { + var start = 0; + for (; start < arr.length; start++) { + if (arr[start] !== '') break; + } + + var end = arr.length - 1; + for (; end >= 0; end--) { + if (arr[end] !== '') break; + } + + if (start > end) return []; + return arr.slice(start, end - start + 1); + } + + var fromParts = trim(from.split('/')); + var toParts = trim(to.split('/')); + + var length = Math.min(fromParts.length, toParts.length); + var samePartsLength = length; + for (var i = 0; i < length; i++) { + if (fromParts[i] !== toParts[i]) { + samePartsLength = i; + break; + } + } + + var outputParts = []; + for (var i = samePartsLength; i < fromParts.length; i++) { + outputParts.push('..'); + } + + outputParts = outputParts.concat(toParts.slice(samePartsLength)); + + return outputParts.join('/'); +}; + +exports.sep = '/'; +exports.delimiter = ':'; + +exports.dirname = function(path) { + var result = splitPath(path), + root = result[0], + dir = result[1]; + + if (!root && !dir) { + // No dirname whatsoever + return '.'; + } + + if (dir) { + // It has a dirname, strip trailing slash + dir = dir.substr(0, dir.length - 1); + } + + return root + dir; +}; + + +exports.basename = function(path, ext) { + var f = splitPath(path)[2]; + // TODO: make this comparison case-insensitive on windows? + if (ext && f.substr(-1 * ext.length) === ext) { + f = f.substr(0, f.length - ext.length); + } + return f; +}; + + +exports.extname = function(path) { + return splitPath(path)[3]; +}; + +function filter (xs, f) { + if (xs.filter) return xs.filter(f); + var res = []; + for (var i = 0; i < xs.length; i++) { + if (f(xs[i], i, xs)) res.push(xs[i]); + } + return res; +} + +// String.prototype.substr - negative index don't work in IE8 +var substr = 'ab'.substr(-1) === 'b' + ? function (str, start, len) { return str.substr(start, len) } + : function (str, start, len) { + if (start < 0) start = str.length + start; + return str.substr(start, len); + } +; + +}).call(this,require('_process')) +},{"_process":24}],23:[function(require,module,exports){ +(function (process){ +'use strict'; + +function posix(path) { + return path.charAt(0) === '/'; +} + +function win32(path) { + // https://github.com/nodejs/node/blob/b3fcc245fb25539909ef1d5eaa01dbf92e168633/lib/path.js#L56 + var splitDeviceRe = /^([a-zA-Z]:|[\\\/]{2}[^\\\/]+[\\\/]+[^\\\/]+)?([\\\/])?([\s\S]*?)$/; + var result = splitDeviceRe.exec(path); + var device = result[1] || ''; + var isUnc = Boolean(device && device.charAt(1) !== ':'); + + // UNC paths are always absolute + return Boolean(result[2] || isUnc); +} + +module.exports = process.platform === 'win32' ? 
win32 : posix; +module.exports.posix = posix; +module.exports.win32 = win32; + +}).call(this,require('_process')) +},{"_process":24}],24:[function(require,module,exports){ +// shim for using process in browser +var process = module.exports = {}; + +// cached from whatever global is present so that test runners that stub it +// don't break things. But we need to wrap it in a try catch in case it is +// wrapped in strict mode code which doesn't define any globals. It's inside a +// function because try/catches deoptimize in certain engines. + +var cachedSetTimeout; +var cachedClearTimeout; + +function defaultSetTimout() { + throw new Error('setTimeout has not been defined'); +} +function defaultClearTimeout () { + throw new Error('clearTimeout has not been defined'); +} +(function () { + try { + if (typeof setTimeout === 'function') { + cachedSetTimeout = setTimeout; + } else { + cachedSetTimeout = defaultSetTimout; + } + } catch (e) { + cachedSetTimeout = defaultSetTimout; + } + try { + if (typeof clearTimeout === 'function') { + cachedClearTimeout = clearTimeout; + } else { + cachedClearTimeout = defaultClearTimeout; + } + } catch (e) { + cachedClearTimeout = defaultClearTimeout; + } +} ()) +function runTimeout(fun) { + if (cachedSetTimeout === setTimeout) { + //normal enviroments in sane situations + return setTimeout(fun, 0); + } + // if setTimeout wasn't available but was latter defined + if ((cachedSetTimeout === defaultSetTimout || !cachedSetTimeout) && setTimeout) { + cachedSetTimeout = setTimeout; + return setTimeout(fun, 0); + } + try { + // when when somebody has screwed with setTimeout but no I.E. maddness + return cachedSetTimeout(fun, 0); + } catch(e){ + try { + // When we are in I.E. but the script has been evaled so I.E. doesn't trust the global object when called normally + return cachedSetTimeout.call(null, fun, 0); + } catch(e){ + // same as above but when it's a version of I.E. that must have the global object for 'this', hopfully our context correct otherwise it will throw a global error + return cachedSetTimeout.call(this, fun, 0); + } + } + + +} +function runClearTimeout(marker) { + if (cachedClearTimeout === clearTimeout) { + //normal enviroments in sane situations + return clearTimeout(marker); + } + // if clearTimeout wasn't available but was latter defined + if ((cachedClearTimeout === defaultClearTimeout || !cachedClearTimeout) && clearTimeout) { + cachedClearTimeout = clearTimeout; + return clearTimeout(marker); + } + try { + // when when somebody has screwed with setTimeout but no I.E. maddness + return cachedClearTimeout(marker); + } catch (e){ + try { + // When we are in I.E. but the script has been evaled so I.E. doesn't trust the global object when called normally + return cachedClearTimeout.call(null, marker); + } catch (e){ + // same as above but when it's a version of I.E. that must have the global object for 'this', hopfully our context correct otherwise it will throw a global error. + // Some versions of I.E. 
have different rules for clearTimeout vs setTimeout + return cachedClearTimeout.call(this, marker); + } + } + + + +} +var queue = []; +var draining = false; +var currentQueue; +var queueIndex = -1; + +function cleanUpNextTick() { + if (!draining || !currentQueue) { + return; + } + draining = false; + if (currentQueue.length) { + queue = currentQueue.concat(queue); + } else { + queueIndex = -1; + } + if (queue.length) { + drainQueue(); + } +} + +function drainQueue() { + if (draining) { + return; + } + var timeout = runTimeout(cleanUpNextTick); + draining = true; + + var len = queue.length; + while(len) { + currentQueue = queue; + queue = []; + while (++queueIndex < len) { + if (currentQueue) { + currentQueue[queueIndex].run(); + } + } + queueIndex = -1; + len = queue.length; + } + currentQueue = null; + draining = false; + runClearTimeout(timeout); +} + +process.nextTick = function (fun) { + var args = new Array(arguments.length - 1); + if (arguments.length > 1) { + for (var i = 1; i < arguments.length; i++) { + args[i - 1] = arguments[i]; + } + } + queue.push(new Item(fun, args)); + if (queue.length === 1 && !draining) { + runTimeout(drainQueue); + } +}; + +// v8 likes predictible objects +function Item(fun, array) { + this.fun = fun; + this.array = array; +} +Item.prototype.run = function () { + this.fun.apply(null, this.array); +}; +process.title = 'browser'; +process.browser = true; +process.env = {}; +process.argv = []; +process.version = ''; // empty string to avoid regexp issues +process.versions = {}; + +function noop() {} + +process.on = noop; +process.addListener = noop; +process.once = noop; +process.off = noop; +process.removeListener = noop; +process.removeAllListeners = noop; +process.emit = noop; +process.prependListener = noop; +process.prependOnceListener = noop; + +process.listeners = function (name) { return [] } + +process.binding = function (name) { + throw new Error('process.binding is not supported'); +}; + +process.cwd = function () { return '/' }; +process.chdir = function (dir) { + throw new Error('process.chdir is not supported'); +}; +process.umask = function() { return 0; }; + +},{}],25:[function(require,module,exports){ +// Underscore.js 1.8.3 +// http://underscorejs.org +// (c) 2009-2015 Jeremy Ashkenas, DocumentCloud and Investigative Reporters & Editors +// Underscore may be freely distributed under the MIT license. + +(function() { + + // Baseline setup + // -------------- + + // Establish the root object, `window` in the browser, or `exports` on the server. + var root = this; + + // Save the previous value of the `_` variable. + var previousUnderscore = root._; + + // Save bytes in the minified (but not gzipped) version: + var ArrayProto = Array.prototype, ObjProto = Object.prototype, FuncProto = Function.prototype; + + // Create quick reference variables for speed access to core prototypes. + var + push = ArrayProto.push, + slice = ArrayProto.slice, + toString = ObjProto.toString, + hasOwnProperty = ObjProto.hasOwnProperty; + + // All **ECMAScript 5** native function implementations that we hope to use + // are declared here. + var + nativeIsArray = Array.isArray, + nativeKeys = Object.keys, + nativeBind = FuncProto.bind, + nativeCreate = Object.create; + + // Naked function reference for surrogate-prototype-swapping. + var Ctor = function(){}; + + // Create a safe reference to the Underscore object for use below. 
+ var _ = function(obj) { + if (obj instanceof _) return obj; + if (!(this instanceof _)) return new _(obj); + this._wrapped = obj; + }; + + // Export the Underscore object for **Node.js**, with + // backwards-compatibility for the old `require()` API. If we're in + // the browser, add `_` as a global object. + if (typeof exports !== 'undefined') { + if (typeof module !== 'undefined' && module.exports) { + exports = module.exports = _; + } + exports._ = _; + } else { + root._ = _; + } + + // Current version. + _.VERSION = '1.8.3'; + + // Internal function that returns an efficient (for current engines) version + // of the passed-in callback, to be repeatedly applied in other Underscore + // functions. + var optimizeCb = function(func, context, argCount) { + if (context === void 0) return func; + switch (argCount == null ? 3 : argCount) { + case 1: return function(value) { + return func.call(context, value); + }; + case 2: return function(value, other) { + return func.call(context, value, other); + }; + case 3: return function(value, index, collection) { + return func.call(context, value, index, collection); + }; + case 4: return function(accumulator, value, index, collection) { + return func.call(context, accumulator, value, index, collection); + }; + } + return function() { + return func.apply(context, arguments); + }; + }; + + // A mostly-internal function to generate callbacks that can be applied + // to each element in a collection, returning the desired result — either + // identity, an arbitrary callback, a property matcher, or a property accessor. + var cb = function(value, context, argCount) { + if (value == null) return _.identity; + if (_.isFunction(value)) return optimizeCb(value, context, argCount); + if (_.isObject(value)) return _.matcher(value); + return _.property(value); + }; + _.iteratee = function(value, context) { + return cb(value, context, Infinity); + }; + + // An internal function for creating assigner functions. + var createAssigner = function(keysFunc, undefinedOnly) { + return function(obj) { + var length = arguments.length; + if (length < 2 || obj == null) return obj; + for (var index = 1; index < length; index++) { + var source = arguments[index], + keys = keysFunc(source), + l = keys.length; + for (var i = 0; i < l; i++) { + var key = keys[i]; + if (!undefinedOnly || obj[key] === void 0) obj[key] = source[key]; + } + } + return obj; + }; + }; + + // An internal function for creating a new object that inherits from another. + var baseCreate = function(prototype) { + if (!_.isObject(prototype)) return {}; + if (nativeCreate) return nativeCreate(prototype); + Ctor.prototype = prototype; + var result = new Ctor; + Ctor.prototype = null; + return result; + }; + + var property = function(key) { + return function(obj) { + return obj == null ? void 0 : obj[key]; + }; + }; + + // Helper for collection methods to determine whether a collection + // should be iterated as an array or as an object + // Related: http://people.mozilla.org/~jorendorff/es6-draft.html#sec-tolength + // Avoids a very nasty iOS 8 JIT bug on ARM-64. #2094 + var MAX_ARRAY_INDEX = Math.pow(2, 53) - 1; + var getLength = property('length'); + var isArrayLike = function(collection) { + var length = getLength(collection); + return typeof length == 'number' && length >= 0 && length <= MAX_ARRAY_INDEX; + }; + + // Collection Functions + // -------------------- + + // The cornerstone, an `each` implementation, aka `forEach`. + // Handles raw objects in addition to array-likes. 
Treats all + // sparse array-likes as if they were dense. + _.each = _.forEach = function(obj, iteratee, context) { + iteratee = optimizeCb(iteratee, context); + var i, length; + if (isArrayLike(obj)) { + for (i = 0, length = obj.length; i < length; i++) { + iteratee(obj[i], i, obj); + } + } else { + var keys = _.keys(obj); + for (i = 0, length = keys.length; i < length; i++) { + iteratee(obj[keys[i]], keys[i], obj); + } + } + return obj; + }; + + // Return the results of applying the iteratee to each element. + _.map = _.collect = function(obj, iteratee, context) { + iteratee = cb(iteratee, context); + var keys = !isArrayLike(obj) && _.keys(obj), + length = (keys || obj).length, + results = Array(length); + for (var index = 0; index < length; index++) { + var currentKey = keys ? keys[index] : index; + results[index] = iteratee(obj[currentKey], currentKey, obj); + } + return results; + }; + + // Create a reducing function iterating left or right. + function createReduce(dir) { + // Optimized iterator function as using arguments.length + // in the main function will deoptimize the, see #1991. + function iterator(obj, iteratee, memo, keys, index, length) { + for (; index >= 0 && index < length; index += dir) { + var currentKey = keys ? keys[index] : index; + memo = iteratee(memo, obj[currentKey], currentKey, obj); + } + return memo; + } + + return function(obj, iteratee, memo, context) { + iteratee = optimizeCb(iteratee, context, 4); + var keys = !isArrayLike(obj) && _.keys(obj), + length = (keys || obj).length, + index = dir > 0 ? 0 : length - 1; + // Determine the initial value if none is provided. + if (arguments.length < 3) { + memo = obj[keys ? keys[index] : index]; + index += dir; + } + return iterator(obj, iteratee, memo, keys, index, length); + }; + } + + // **Reduce** builds up a single result from a list of values, aka `inject`, + // or `foldl`. + _.reduce = _.foldl = _.inject = createReduce(1); + + // The right-associative version of reduce, also known as `foldr`. + _.reduceRight = _.foldr = createReduce(-1); + + // Return the first value which passes a truth test. Aliased as `detect`. + _.find = _.detect = function(obj, predicate, context) { + var key; + if (isArrayLike(obj)) { + key = _.findIndex(obj, predicate, context); + } else { + key = _.findKey(obj, predicate, context); + } + if (key !== void 0 && key !== -1) return obj[key]; + }; + + // Return all the elements that pass a truth test. + // Aliased as `select`. + _.filter = _.select = function(obj, predicate, context) { + var results = []; + predicate = cb(predicate, context); + _.each(obj, function(value, index, list) { + if (predicate(value, index, list)) results.push(value); + }); + return results; + }; + + // Return all the elements for which a truth test fails. + _.reject = function(obj, predicate, context) { + return _.filter(obj, _.negate(cb(predicate)), context); + }; + + // Determine whether all of the elements match a truth test. + // Aliased as `all`. + _.every = _.all = function(obj, predicate, context) { + predicate = cb(predicate, context); + var keys = !isArrayLike(obj) && _.keys(obj), + length = (keys || obj).length; + for (var index = 0; index < length; index++) { + var currentKey = keys ? keys[index] : index; + if (!predicate(obj[currentKey], currentKey, obj)) return false; + } + return true; + }; + + // Determine if at least one element in the object matches a truth test. + // Aliased as `any`. 
+ _.some = _.any = function(obj, predicate, context) { + predicate = cb(predicate, context); + var keys = !isArrayLike(obj) && _.keys(obj), + length = (keys || obj).length; + for (var index = 0; index < length; index++) { + var currentKey = keys ? keys[index] : index; + if (predicate(obj[currentKey], currentKey, obj)) return true; + } + return false; + }; + + // Determine if the array or object contains a given item (using `===`). + // Aliased as `includes` and `include`. + _.contains = _.includes = _.include = function(obj, item, fromIndex, guard) { + if (!isArrayLike(obj)) obj = _.values(obj); + if (typeof fromIndex != 'number' || guard) fromIndex = 0; + return _.indexOf(obj, item, fromIndex) >= 0; + }; + + // Invoke a method (with arguments) on every item in a collection. + _.invoke = function(obj, method) { + var args = slice.call(arguments, 2); + var isFunc = _.isFunction(method); + return _.map(obj, function(value) { + var func = isFunc ? method : value[method]; + return func == null ? func : func.apply(value, args); + }); + }; + + // Convenience version of a common use case of `map`: fetching a property. + _.pluck = function(obj, key) { + return _.map(obj, _.property(key)); + }; + + // Convenience version of a common use case of `filter`: selecting only objects + // containing specific `key:value` pairs. + _.where = function(obj, attrs) { + return _.filter(obj, _.matcher(attrs)); + }; + + // Convenience version of a common use case of `find`: getting the first object + // containing specific `key:value` pairs. + _.findWhere = function(obj, attrs) { + return _.find(obj, _.matcher(attrs)); + }; + + // Return the maximum element (or element-based computation). + _.max = function(obj, iteratee, context) { + var result = -Infinity, lastComputed = -Infinity, + value, computed; + if (iteratee == null && obj != null) { + obj = isArrayLike(obj) ? obj : _.values(obj); + for (var i = 0, length = obj.length; i < length; i++) { + value = obj[i]; + if (value > result) { + result = value; + } + } + } else { + iteratee = cb(iteratee, context); + _.each(obj, function(value, index, list) { + computed = iteratee(value, index, list); + if (computed > lastComputed || computed === -Infinity && result === -Infinity) { + result = value; + lastComputed = computed; + } + }); + } + return result; + }; + + // Return the minimum element (or element-based computation). + _.min = function(obj, iteratee, context) { + var result = Infinity, lastComputed = Infinity, + value, computed; + if (iteratee == null && obj != null) { + obj = isArrayLike(obj) ? obj : _.values(obj); + for (var i = 0, length = obj.length; i < length; i++) { + value = obj[i]; + if (value < result) { + result = value; + } + } + } else { + iteratee = cb(iteratee, context); + _.each(obj, function(value, index, list) { + computed = iteratee(value, index, list); + if (computed < lastComputed || computed === Infinity && result === Infinity) { + result = value; + lastComputed = computed; + } + }); + } + return result; + }; + + // Shuffle a collection, using the modern version of the + // [Fisher-Yates shuffle](http://en.wikipedia.org/wiki/Fisher–Yates_shuffle). + _.shuffle = function(obj) { + var set = isArrayLike(obj) ? obj : _.values(obj); + var length = set.length; + var shuffled = Array(length); + for (var index = 0, rand; index < length; index++) { + rand = _.random(0, index); + if (rand !== index) shuffled[index] = shuffled[rand]; + shuffled[rand] = set[index]; + } + return shuffled; + }; + + // Sample **n** random values from a collection. 
+ // If **n** is not specified, returns a single random element. + // The internal `guard` argument allows it to work with `map`. + _.sample = function(obj, n, guard) { + if (n == null || guard) { + if (!isArrayLike(obj)) obj = _.values(obj); + return obj[_.random(obj.length - 1)]; + } + return _.shuffle(obj).slice(0, Math.max(0, n)); + }; + + // Sort the object's values by a criterion produced by an iteratee. + _.sortBy = function(obj, iteratee, context) { + iteratee = cb(iteratee, context); + return _.pluck(_.map(obj, function(value, index, list) { + return { + value: value, + index: index, + criteria: iteratee(value, index, list) + }; + }).sort(function(left, right) { + var a = left.criteria; + var b = right.criteria; + if (a !== b) { + if (a > b || a === void 0) return 1; + if (a < b || b === void 0) return -1; + } + return left.index - right.index; + }), 'value'); + }; + + // An internal function used for aggregate "group by" operations. + var group = function(behavior) { + return function(obj, iteratee, context) { + var result = {}; + iteratee = cb(iteratee, context); + _.each(obj, function(value, index) { + var key = iteratee(value, index, obj); + behavior(result, value, key); + }); + return result; + }; + }; + + // Groups the object's values by a criterion. Pass either a string attribute + // to group by, or a function that returns the criterion. + _.groupBy = group(function(result, value, key) { + if (_.has(result, key)) result[key].push(value); else result[key] = [value]; + }); + + // Indexes the object's values by a criterion, similar to `groupBy`, but for + // when you know that your index values will be unique. + _.indexBy = group(function(result, value, key) { + result[key] = value; + }); + + // Counts instances of an object that group by a certain criterion. Pass + // either a string attribute to count by, or a function that returns the + // criterion. + _.countBy = group(function(result, value, key) { + if (_.has(result, key)) result[key]++; else result[key] = 1; + }); + + // Safely create a real, live array from anything iterable. + _.toArray = function(obj) { + if (!obj) return []; + if (_.isArray(obj)) return slice.call(obj); + if (isArrayLike(obj)) return _.map(obj, _.identity); + return _.values(obj); + }; + + // Return the number of elements in an object. + _.size = function(obj) { + if (obj == null) return 0; + return isArrayLike(obj) ? obj.length : _.keys(obj).length; + }; + + // Split a collection into two arrays: one whose elements all satisfy the given + // predicate, and one whose elements all do not satisfy the predicate. + _.partition = function(obj, predicate, context) { + predicate = cb(predicate, context); + var pass = [], fail = []; + _.each(obj, function(value, key, obj) { + (predicate(value, key, obj) ? pass : fail).push(value); + }); + return [pass, fail]; + }; + + // Array Functions + // --------------- + + // Get the first element of an array. Passing **n** will return the first N + // values in the array. Aliased as `head` and `take`. The **guard** check + // allows it to work with `_.map`. + _.first = _.head = _.take = function(array, n, guard) { + if (array == null) return void 0; + if (n == null || guard) return array[0]; + return _.initial(array, array.length - n); + }; + + // Returns everything but the last entry of the array. Especially useful on + // the arguments object. Passing **n** will return all the values in + // the array, excluding the last N. 
+ _.initial = function(array, n, guard) { + return slice.call(array, 0, Math.max(0, array.length - (n == null || guard ? 1 : n))); + }; + + // Get the last element of an array. Passing **n** will return the last N + // values in the array. + _.last = function(array, n, guard) { + if (array == null) return void 0; + if (n == null || guard) return array[array.length - 1]; + return _.rest(array, Math.max(0, array.length - n)); + }; + + // Returns everything but the first entry of the array. Aliased as `tail` and `drop`. + // Especially useful on the arguments object. Passing an **n** will return + // the rest N values in the array. + _.rest = _.tail = _.drop = function(array, n, guard) { + return slice.call(array, n == null || guard ? 1 : n); + }; + + // Trim out all falsy values from an array. + _.compact = function(array) { + return _.filter(array, _.identity); + }; + + // Internal implementation of a recursive `flatten` function. + var flatten = function(input, shallow, strict, startIndex) { + var output = [], idx = 0; + for (var i = startIndex || 0, length = getLength(input); i < length; i++) { + var value = input[i]; + if (isArrayLike(value) && (_.isArray(value) || _.isArguments(value))) { + //flatten current level of array or arguments object + if (!shallow) value = flatten(value, shallow, strict); + var j = 0, len = value.length; + output.length += len; + while (j < len) { + output[idx++] = value[j++]; + } + } else if (!strict) { + output[idx++] = value; + } + } + return output; + }; + + // Flatten out an array, either recursively (by default), or just one level. + _.flatten = function(array, shallow) { + return flatten(array, shallow, false); + }; + + // Return a version of the array that does not contain the specified value(s). + _.without = function(array) { + return _.difference(array, slice.call(arguments, 1)); + }; + + // Produce a duplicate-free version of the array. If the array has already + // been sorted, you have the option of using a faster algorithm. + // Aliased as `unique`. + _.uniq = _.unique = function(array, isSorted, iteratee, context) { + if (!_.isBoolean(isSorted)) { + context = iteratee; + iteratee = isSorted; + isSorted = false; + } + if (iteratee != null) iteratee = cb(iteratee, context); + var result = []; + var seen = []; + for (var i = 0, length = getLength(array); i < length; i++) { + var value = array[i], + computed = iteratee ? iteratee(value, i, array) : value; + if (isSorted) { + if (!i || seen !== computed) result.push(value); + seen = computed; + } else if (iteratee) { + if (!_.contains(seen, computed)) { + seen.push(computed); + result.push(value); + } + } else if (!_.contains(result, value)) { + result.push(value); + } + } + return result; + }; + + // Produce an array that contains the union: each distinct element from all of + // the passed-in arrays. + _.union = function() { + return _.uniq(flatten(arguments, true, true)); + }; + + // Produce an array that contains every item shared between all the + // passed-in arrays. + _.intersection = function(array) { + var result = []; + var argsLength = arguments.length; + for (var i = 0, length = getLength(array); i < length; i++) { + var item = array[i]; + if (_.contains(result, item)) continue; + for (var j = 1; j < argsLength; j++) { + if (!_.contains(arguments[j], item)) break; + } + if (j === argsLength) result.push(item); + } + return result; + }; + + // Take the difference between one array and a number of other arrays. + // Only the elements present in just the first array will remain. 
+ _.difference = function(array) { + var rest = flatten(arguments, true, true, 1); + return _.filter(array, function(value){ + return !_.contains(rest, value); + }); + }; + + // Zip together multiple lists into a single array -- elements that share + // an index go together. + _.zip = function() { + return _.unzip(arguments); + }; + + // Complement of _.zip. Unzip accepts an array of arrays and groups + // each array's elements on shared indices + _.unzip = function(array) { + var length = array && _.max(array, getLength).length || 0; + var result = Array(length); + + for (var index = 0; index < length; index++) { + result[index] = _.pluck(array, index); + } + return result; + }; + + // Converts lists into objects. Pass either a single array of `[key, value]` + // pairs, or two parallel arrays of the same length -- one of keys, and one of + // the corresponding values. + _.object = function(list, values) { + var result = {}; + for (var i = 0, length = getLength(list); i < length; i++) { + if (values) { + result[list[i]] = values[i]; + } else { + result[list[i][0]] = list[i][1]; + } + } + return result; + }; + + // Generator function to create the findIndex and findLastIndex functions + function createPredicateIndexFinder(dir) { + return function(array, predicate, context) { + predicate = cb(predicate, context); + var length = getLength(array); + var index = dir > 0 ? 0 : length - 1; + for (; index >= 0 && index < length; index += dir) { + if (predicate(array[index], index, array)) return index; + } + return -1; + }; + } + + // Returns the first index on an array-like that passes a predicate test + _.findIndex = createPredicateIndexFinder(1); + _.findLastIndex = createPredicateIndexFinder(-1); + + // Use a comparator function to figure out the smallest index at which + // an object should be inserted so as to maintain order. Uses binary search. + _.sortedIndex = function(array, obj, iteratee, context) { + iteratee = cb(iteratee, context, 1); + var value = iteratee(obj); + var low = 0, high = getLength(array); + while (low < high) { + var mid = Math.floor((low + high) / 2); + if (iteratee(array[mid]) < value) low = mid + 1; else high = mid; + } + return low; + }; + + // Generator function to create the indexOf and lastIndexOf functions + function createIndexFinder(dir, predicateFind, sortedIndex) { + return function(array, item, idx) { + var i = 0, length = getLength(array); + if (typeof idx == 'number') { + if (dir > 0) { + i = idx >= 0 ? idx : Math.max(idx + length, i); + } else { + length = idx >= 0 ? Math.min(idx + 1, length) : idx + length + 1; + } + } else if (sortedIndex && idx && length) { + idx = sortedIndex(array, item); + return array[idx] === item ? idx : -1; + } + if (item !== item) { + idx = predicateFind(slice.call(array, i, length), _.isNaN); + return idx >= 0 ? idx + i : -1; + } + for (idx = dir > 0 ? i : length - 1; idx >= 0 && idx < length; idx += dir) { + if (array[idx] === item) return idx; + } + return -1; + }; + } + + // Return the position of the first occurrence of an item in an array, + // or -1 if the item is not included in the array. + // If the array is large and already in sort order, pass `true` + // for **isSorted** to use binary search. + _.indexOf = createIndexFinder(1, _.findIndex, _.sortedIndex); + _.lastIndexOf = createIndexFinder(-1, _.findLastIndex); + + // Generate an integer Array containing an arithmetic progression. A port of + // the native Python `range()` function. 
See + // [the Python documentation](http://docs.python.org/library/functions.html#range). + _.range = function(start, stop, step) { + if (stop == null) { + stop = start || 0; + start = 0; + } + step = step || 1; + + var length = Math.max(Math.ceil((stop - start) / step), 0); + var range = Array(length); + + for (var idx = 0; idx < length; idx++, start += step) { + range[idx] = start; + } + + return range; + }; + + // Function (ahem) Functions + // ------------------ + + // Determines whether to execute a function as a constructor + // or a normal function with the provided arguments + var executeBound = function(sourceFunc, boundFunc, context, callingContext, args) { + if (!(callingContext instanceof boundFunc)) return sourceFunc.apply(context, args); + var self = baseCreate(sourceFunc.prototype); + var result = sourceFunc.apply(self, args); + if (_.isObject(result)) return result; + return self; + }; + + // Create a function bound to a given object (assigning `this`, and arguments, + // optionally). Delegates to **ECMAScript 5**'s native `Function.bind` if + // available. + _.bind = function(func, context) { + if (nativeBind && func.bind === nativeBind) return nativeBind.apply(func, slice.call(arguments, 1)); + if (!_.isFunction(func)) throw new TypeError('Bind must be called on a function'); + var args = slice.call(arguments, 2); + var bound = function() { + return executeBound(func, bound, context, this, args.concat(slice.call(arguments))); + }; + return bound; + }; + + // Partially apply a function by creating a version that has had some of its + // arguments pre-filled, without changing its dynamic `this` context. _ acts + // as a placeholder, allowing any combination of arguments to be pre-filled. + _.partial = function(func) { + var boundArgs = slice.call(arguments, 1); + var bound = function() { + var position = 0, length = boundArgs.length; + var args = Array(length); + for (var i = 0; i < length; i++) { + args[i] = boundArgs[i] === _ ? arguments[position++] : boundArgs[i]; + } + while (position < arguments.length) args.push(arguments[position++]); + return executeBound(func, bound, this, this, args); + }; + return bound; + }; + + // Bind a number of an object's methods to that object. Remaining arguments + // are the method names to be bound. Useful for ensuring that all callbacks + // defined on an object belong to it. + _.bindAll = function(obj) { + var i, length = arguments.length, key; + if (length <= 1) throw new Error('bindAll must be passed function names'); + for (i = 1; i < length; i++) { + key = arguments[i]; + obj[key] = _.bind(obj[key], obj); + } + return obj; + }; + + // Memoize an expensive function by storing its results. + _.memoize = function(func, hasher) { + var memoize = function(key) { + var cache = memoize.cache; + var address = '' + (hasher ? hasher.apply(this, arguments) : key); + if (!_.has(cache, address)) cache[address] = func.apply(this, arguments); + return cache[address]; + }; + memoize.cache = {}; + return memoize; + }; + + // Delays a function for the given number of milliseconds, and then calls + // it with the arguments supplied. + _.delay = function(func, wait) { + var args = slice.call(arguments, 2); + return setTimeout(function(){ + return func.apply(null, args); + }, wait); + }; + + // Defers a function, scheduling it to run after the current call stack has + // cleared. + _.defer = _.partial(_.delay, _, 1); + + // Returns a function, that, when invoked, will only be triggered at most once + // during a given window of time. 
Normally, the throttled function will run + // as much as it can, without ever going more than once per `wait` duration; + // but if you'd like to disable the execution on the leading edge, pass + // `{leading: false}`. To disable execution on the trailing edge, ditto. + _.throttle = function(func, wait, options) { + var context, args, result; + var timeout = null; + var previous = 0; + if (!options) options = {}; + var later = function() { + previous = options.leading === false ? 0 : _.now(); + timeout = null; + result = func.apply(context, args); + if (!timeout) context = args = null; + }; + return function() { + var now = _.now(); + if (!previous && options.leading === false) previous = now; + var remaining = wait - (now - previous); + context = this; + args = arguments; + if (remaining <= 0 || remaining > wait) { + if (timeout) { + clearTimeout(timeout); + timeout = null; + } + previous = now; + result = func.apply(context, args); + if (!timeout) context = args = null; + } else if (!timeout && options.trailing !== false) { + timeout = setTimeout(later, remaining); + } + return result; + }; + }; + + // Returns a function, that, as long as it continues to be invoked, will not + // be triggered. The function will be called after it stops being called for + // N milliseconds. If `immediate` is passed, trigger the function on the + // leading edge, instead of the trailing. + _.debounce = function(func, wait, immediate) { + var timeout, args, context, timestamp, result; + + var later = function() { + var last = _.now() - timestamp; + + if (last < wait && last >= 0) { + timeout = setTimeout(later, wait - last); + } else { + timeout = null; + if (!immediate) { + result = func.apply(context, args); + if (!timeout) context = args = null; + } + } + }; + + return function() { + context = this; + args = arguments; + timestamp = _.now(); + var callNow = immediate && !timeout; + if (!timeout) timeout = setTimeout(later, wait); + if (callNow) { + result = func.apply(context, args); + context = args = null; + } + + return result; + }; + }; + + // Returns the first function passed as an argument to the second, + // allowing you to adjust arguments, run code before and after, and + // conditionally execute the original function. + _.wrap = function(func, wrapper) { + return _.partial(wrapper, func); + }; + + // Returns a negated version of the passed-in predicate. + _.negate = function(predicate) { + return function() { + return !predicate.apply(this, arguments); + }; + }; + + // Returns a function that is the composition of a list of functions, each + // consuming the return value of the function that follows. + _.compose = function() { + var args = arguments; + var start = args.length - 1; + return function() { + var i = start; + var result = args[start].apply(this, arguments); + while (i--) result = args[i].call(this, result); + return result; + }; + }; + + // Returns a function that will only be executed on and after the Nth call. + _.after = function(times, func) { + return function() { + if (--times < 1) { + return func.apply(this, arguments); + } + }; + }; + + // Returns a function that will only be executed up to (but not including) the Nth call. + _.before = function(times, func) { + var memo; + return function() { + if (--times > 0) { + memo = func.apply(this, arguments); + } + if (times <= 1) func = null; + return memo; + }; + }; + + // Returns a function that will be executed at most one time, no matter how + // often you call it. Useful for lazy initialization. 
+ _.once = _.partial(_.before, 2); + + // Object Functions + // ---------------- + + // Keys in IE < 9 that won't be iterated by `for key in ...` and thus missed. + var hasEnumBug = !{toString: null}.propertyIsEnumerable('toString'); + var nonEnumerableProps = ['valueOf', 'isPrototypeOf', 'toString', + 'propertyIsEnumerable', 'hasOwnProperty', 'toLocaleString']; + + function collectNonEnumProps(obj, keys) { + var nonEnumIdx = nonEnumerableProps.length; + var constructor = obj.constructor; + var proto = (_.isFunction(constructor) && constructor.prototype) || ObjProto; + + // Constructor is a special case. + var prop = 'constructor'; + if (_.has(obj, prop) && !_.contains(keys, prop)) keys.push(prop); + + while (nonEnumIdx--) { + prop = nonEnumerableProps[nonEnumIdx]; + if (prop in obj && obj[prop] !== proto[prop] && !_.contains(keys, prop)) { + keys.push(prop); + } + } + } + + // Retrieve the names of an object's own properties. + // Delegates to **ECMAScript 5**'s native `Object.keys` + _.keys = function(obj) { + if (!_.isObject(obj)) return []; + if (nativeKeys) return nativeKeys(obj); + var keys = []; + for (var key in obj) if (_.has(obj, key)) keys.push(key); + // Ahem, IE < 9. + if (hasEnumBug) collectNonEnumProps(obj, keys); + return keys; + }; + + // Retrieve all the property names of an object. + _.allKeys = function(obj) { + if (!_.isObject(obj)) return []; + var keys = []; + for (var key in obj) keys.push(key); + // Ahem, IE < 9. + if (hasEnumBug) collectNonEnumProps(obj, keys); + return keys; + }; + + // Retrieve the values of an object's properties. + _.values = function(obj) { + var keys = _.keys(obj); + var length = keys.length; + var values = Array(length); + for (var i = 0; i < length; i++) { + values[i] = obj[keys[i]]; + } + return values; + }; + + // Returns the results of applying the iteratee to each element of the object + // In contrast to _.map it returns an object + _.mapObject = function(obj, iteratee, context) { + iteratee = cb(iteratee, context); + var keys = _.keys(obj), + length = keys.length, + results = {}, + currentKey; + for (var index = 0; index < length; index++) { + currentKey = keys[index]; + results[currentKey] = iteratee(obj[currentKey], currentKey, obj); + } + return results; + }; + + // Convert an object into a list of `[key, value]` pairs. + _.pairs = function(obj) { + var keys = _.keys(obj); + var length = keys.length; + var pairs = Array(length); + for (var i = 0; i < length; i++) { + pairs[i] = [keys[i], obj[keys[i]]]; + } + return pairs; + }; + + // Invert the keys and values of an object. The values must be serializable. + _.invert = function(obj) { + var result = {}; + var keys = _.keys(obj); + for (var i = 0, length = keys.length; i < length; i++) { + result[obj[keys[i]]] = keys[i]; + } + return result; + }; + + // Return a sorted list of the function names available on the object. + // Aliased as `methods` + _.functions = _.methods = function(obj) { + var names = []; + for (var key in obj) { + if (_.isFunction(obj[key])) names.push(key); + } + return names.sort(); + }; + + // Extend a given object with all the properties in passed-in object(s). 
+ _.extend = createAssigner(_.allKeys); + + // Assigns a given object with all the own properties in the passed-in object(s) + // (https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) + _.extendOwn = _.assign = createAssigner(_.keys); + + // Returns the first key on an object that passes a predicate test + _.findKey = function(obj, predicate, context) { + predicate = cb(predicate, context); + var keys = _.keys(obj), key; + for (var i = 0, length = keys.length; i < length; i++) { + key = keys[i]; + if (predicate(obj[key], key, obj)) return key; + } + }; + + // Return a copy of the object only containing the whitelisted properties. + _.pick = function(object, oiteratee, context) { + var result = {}, obj = object, iteratee, keys; + if (obj == null) return result; + if (_.isFunction(oiteratee)) { + keys = _.allKeys(obj); + iteratee = optimizeCb(oiteratee, context); + } else { + keys = flatten(arguments, false, false, 1); + iteratee = function(value, key, obj) { return key in obj; }; + obj = Object(obj); + } + for (var i = 0, length = keys.length; i < length; i++) { + var key = keys[i]; + var value = obj[key]; + if (iteratee(value, key, obj)) result[key] = value; + } + return result; + }; + + // Return a copy of the object without the blacklisted properties. + _.omit = function(obj, iteratee, context) { + if (_.isFunction(iteratee)) { + iteratee = _.negate(iteratee); + } else { + var keys = _.map(flatten(arguments, false, false, 1), String); + iteratee = function(value, key) { + return !_.contains(keys, key); + }; + } + return _.pick(obj, iteratee, context); + }; + + // Fill in a given object with default properties. + _.defaults = createAssigner(_.allKeys, true); + + // Creates an object that inherits from the given prototype object. + // If additional properties are provided then they will be added to the + // created object. + _.create = function(prototype, props) { + var result = baseCreate(prototype); + if (props) _.extendOwn(result, props); + return result; + }; + + // Create a (shallow-cloned) duplicate of an object. + _.clone = function(obj) { + if (!_.isObject(obj)) return obj; + return _.isArray(obj) ? obj.slice() : _.extend({}, obj); + }; + + // Invokes interceptor with the obj, and then returns obj. + // The primary purpose of this method is to "tap into" a method chain, in + // order to perform operations on intermediate results within the chain. + _.tap = function(obj, interceptor) { + interceptor(obj); + return obj; + }; + + // Returns whether an object has a given set of `key:value` pairs. + _.isMatch = function(object, attrs) { + var keys = _.keys(attrs), length = keys.length; + if (object == null) return !length; + var obj = Object(object); + for (var i = 0; i < length; i++) { + var key = keys[i]; + if (attrs[key] !== obj[key] || !(key in obj)) return false; + } + return true; + }; + + + // Internal recursive comparison function for `isEqual`. + var eq = function(a, b, aStack, bStack) { + // Identical objects are equal. `0 === -0`, but they aren't identical. + // See the [Harmony `egal` proposal](http://wiki.ecmascript.org/doku.php?id=harmony:egal). + if (a === b) return a !== 0 || 1 / a === 1 / b; + // A strict comparison is necessary because `null == undefined`. + if (a == null || b == null) return a === b; + // Unwrap any wrapped objects. + if (a instanceof _) a = a._wrapped; + if (b instanceof _) b = b._wrapped; + // Compare `[[Class]]` names. 
+ var className = toString.call(a); + if (className !== toString.call(b)) return false; + switch (className) { + // Strings, numbers, regular expressions, dates, and booleans are compared by value. + case '[object RegExp]': + // RegExps are coerced to strings for comparison (Note: '' + /a/i === '/a/i') + case '[object String]': + // Primitives and their corresponding object wrappers are equivalent; thus, `"5"` is + // equivalent to `new String("5")`. + return '' + a === '' + b; + case '[object Number]': + // `NaN`s are equivalent, but non-reflexive. + // Object(NaN) is equivalent to NaN + if (+a !== +a) return +b !== +b; + // An `egal` comparison is performed for other numeric values. + return +a === 0 ? 1 / +a === 1 / b : +a === +b; + case '[object Date]': + case '[object Boolean]': + // Coerce dates and booleans to numeric primitive values. Dates are compared by their + // millisecond representations. Note that invalid dates with millisecond representations + // of `NaN` are not equivalent. + return +a === +b; + } + + var areArrays = className === '[object Array]'; + if (!areArrays) { + if (typeof a != 'object' || typeof b != 'object') return false; + + // Objects with different constructors are not equivalent, but `Object`s or `Array`s + // from different frames are. + var aCtor = a.constructor, bCtor = b.constructor; + if (aCtor !== bCtor && !(_.isFunction(aCtor) && aCtor instanceof aCtor && + _.isFunction(bCtor) && bCtor instanceof bCtor) + && ('constructor' in a && 'constructor' in b)) { + return false; + } + } + // Assume equality for cyclic structures. The algorithm for detecting cyclic + // structures is adapted from ES 5.1 section 15.12.3, abstract operation `JO`. + + // Initializing stack of traversed objects. + // It's done here since we only need them for objects and arrays comparison. + aStack = aStack || []; + bStack = bStack || []; + var length = aStack.length; + while (length--) { + // Linear search. Performance is inversely proportional to the number of + // unique nested structures. + if (aStack[length] === a) return bStack[length] === b; + } + + // Add the first object to the stack of traversed objects. + aStack.push(a); + bStack.push(b); + + // Recursively compare objects and arrays. + if (areArrays) { + // Compare array lengths to determine if a deep comparison is necessary. + length = a.length; + if (length !== b.length) return false; + // Deep compare the contents, ignoring non-numeric properties. + while (length--) { + if (!eq(a[length], b[length], aStack, bStack)) return false; + } + } else { + // Deep compare objects. + var keys = _.keys(a), key; + length = keys.length; + // Ensure that both objects contain the same number of properties before comparing deep equality. + if (_.keys(b).length !== length) return false; + while (length--) { + // Deep compare each member + key = keys[length]; + if (!(_.has(b, key) && eq(a[key], b[key], aStack, bStack))) return false; + } + } + // Remove the first object from the stack of traversed objects. + aStack.pop(); + bStack.pop(); + return true; + }; + + // Perform a deep comparison to check if two objects are equal. + _.isEqual = function(a, b) { + return eq(a, b); + }; + + // Is a given array, string, or object empty? + // An "empty" object has no enumerable own-properties. + _.isEmpty = function(obj) { + if (obj == null) return true; + if (isArrayLike(obj) && (_.isArray(obj) || _.isString(obj) || _.isArguments(obj))) return obj.length === 0; + return _.keys(obj).length === 0; + }; + + // Is a given value a DOM element? 
+ _.isElement = function(obj) { + return !!(obj && obj.nodeType === 1); + }; + + // Is a given value an array? + // Delegates to ECMA5's native Array.isArray + _.isArray = nativeIsArray || function(obj) { + return toString.call(obj) === '[object Array]'; + }; + + // Is a given variable an object? + _.isObject = function(obj) { + var type = typeof obj; + return type === 'function' || type === 'object' && !!obj; + }; + + // Add some isType methods: isArguments, isFunction, isString, isNumber, isDate, isRegExp, isError. + _.each(['Arguments', 'Function', 'String', 'Number', 'Date', 'RegExp', 'Error'], function(name) { + _['is' + name] = function(obj) { + return toString.call(obj) === '[object ' + name + ']'; + }; + }); + + // Define a fallback version of the method in browsers (ahem, IE < 9), where + // there isn't any inspectable "Arguments" type. + if (!_.isArguments(arguments)) { + _.isArguments = function(obj) { + return _.has(obj, 'callee'); + }; + } + + // Optimize `isFunction` if appropriate. Work around some typeof bugs in old v8, + // IE 11 (#1621), and in Safari 8 (#1929). + if (typeof /./ != 'function' && typeof Int8Array != 'object') { + _.isFunction = function(obj) { + return typeof obj == 'function' || false; + }; + } + + // Is a given object a finite number? + _.isFinite = function(obj) { + return isFinite(obj) && !isNaN(parseFloat(obj)); + }; + + // Is the given value `NaN`? (NaN is the only number which does not equal itself). + _.isNaN = function(obj) { + return _.isNumber(obj) && obj !== +obj; + }; + + // Is a given value a boolean? + _.isBoolean = function(obj) { + return obj === true || obj === false || toString.call(obj) === '[object Boolean]'; + }; + + // Is a given value equal to null? + _.isNull = function(obj) { + return obj === null; + }; + + // Is a given variable undefined? + _.isUndefined = function(obj) { + return obj === void 0; + }; + + // Shortcut function for checking if an object has a given property directly + // on itself (in other words, not on a prototype). + _.has = function(obj, key) { + return obj != null && hasOwnProperty.call(obj, key); + }; + + // Utility Functions + // ----------------- + + // Run Underscore.js in *noConflict* mode, returning the `_` variable to its + // previous owner. Returns a reference to the Underscore object. + _.noConflict = function() { + root._ = previousUnderscore; + return this; + }; + + // Keep the identity function around for default iteratees. + _.identity = function(value) { + return value; + }; + + // Predicate-generating functions. Often useful outside of Underscore. + _.constant = function(value) { + return function() { + return value; + }; + }; + + _.noop = function(){}; + + _.property = property; + + // Generates a function for a given object that returns a given property. + _.propertyOf = function(obj) { + return obj == null ? function(){} : function(key) { + return obj[key]; + }; + }; + + // Returns a predicate for checking whether an object has a given set of + // `key:value` pairs. + _.matcher = _.matches = function(attrs) { + attrs = _.extendOwn({}, attrs); + return function(obj) { + return _.isMatch(obj, attrs); + }; + }; + + // Run a function **n** times. + _.times = function(n, iteratee, context) { + var accum = Array(Math.max(0, n)); + iteratee = optimizeCb(iteratee, context, 1); + for (var i = 0; i < n; i++) accum[i] = iteratee(i); + return accum; + }; + + // Return a random integer between min and max (inclusive). 
+ _.random = function(min, max) {
+ if (max == null) {
+ max = min;
+ min = 0;
+ }
+ return min + Math.floor(Math.random() * (max - min + 1));
+ };
+
+ // A (possibly faster) way to get the current timestamp as an integer.
+ _.now = Date.now || function() {
+ return new Date().getTime();
+ };
+
+ // List of HTML entities for escaping.
+ var escapeMap = {
+ '&': '&amp;',
+ '<': '&lt;',
+ '>': '&gt;',
+ '"': '&quot;',
+ "'": '&#x27;',
+ '`': '&#x60;'
+ };
+ var unescapeMap = _.invert(escapeMap);
+
+ // Functions for escaping and unescaping strings to/from HTML interpolation.
+ var createEscaper = function(map) {
+ var escaper = function(match) {
+ return map[match];
+ };
+ // Regexes for identifying a key that needs to be escaped
+ var source = '(?:' + _.keys(map).join('|') + ')';
+ var testRegexp = RegExp(source);
+ var replaceRegexp = RegExp(source, 'g');
+ return function(string) {
+ string = string == null ? '' : '' + string;
+ return testRegexp.test(string) ? string.replace(replaceRegexp, escaper) : string;
+ };
+ };
+ _.escape = createEscaper(escapeMap);
+ _.unescape = createEscaper(unescapeMap);
+
+ // If the value of the named `property` is a function then invoke it with the
+ // `object` as context; otherwise, return it.
+ _.result = function(object, property, fallback) {
+ var value = object == null ? void 0 : object[property];
+ if (value === void 0) {
+ value = fallback;
+ }
+ return _.isFunction(value) ? value.call(object) : value;
+ };
+
+ // Generate a unique integer id (unique within the entire client session).
+ // Useful for temporary DOM ids.
+ var idCounter = 0;
+ _.uniqueId = function(prefix) {
+ var id = ++idCounter + '';
+ return prefix ? prefix + id : id;
+ };
+
+ // By default, Underscore uses ERB-style template delimiters, change the
+ // following template settings to use alternative delimiters.
+ _.templateSettings = {
+ evaluate : /<%([\s\S]+?)%>/g,
+ interpolate : /<%=([\s\S]+?)%>/g,
+ escape : /<%-([\s\S]+?)%>/g
+ };
+
+ // When customizing `templateSettings`, if you don't want to define an
+ // interpolation, evaluation or escaping regex, we need one that is
+ // guaranteed not to match.
+ var noMatch = /(.)^/;
+
+ // Certain characters need to be escaped so that they can be put into a
+ // string literal.
+ var escapes = {
+ "'": "'",
+ '\\': '\\',
+ '\r': 'r',
+ '\n': 'n',
+ '\u2028': 'u2028',
+ '\u2029': 'u2029'
+ };
+
+ var escaper = /\\|'|\r|\n|\u2028|\u2029/g;
+
+ var escapeChar = function(match) {
+ return '\\' + escapes[match];
+ };
+
+ // JavaScript micro-templating, similar to John Resig's implementation.
+ // Underscore templating handles arbitrary delimiters, preserves whitespace,
+ // and correctly escapes quotes within interpolated code.
+ // NB: `oldSettings` only exists for backwards compatibility.
+ _.template = function(text, settings, oldSettings) {
+ if (!settings && oldSettings) settings = oldSettings;
+ settings = _.defaults({}, settings, _.templateSettings);
+
+ // Combine delimiters into one regular expression via alternation.
+ var matcher = RegExp([
+ (settings.escape || noMatch).source,
+ (settings.interpolate || noMatch).source,
+ (settings.evaluate || noMatch).source
+ ].join('|') + '|$', 'g');
+
+ // Compile the template source, escaping string literals appropriately.
+ var index = 0; + var source = "__p+='"; + text.replace(matcher, function(match, escape, interpolate, evaluate, offset) { + source += text.slice(index, offset).replace(escaper, escapeChar); + index = offset + match.length; + + if (escape) { + source += "'+\n((__t=(" + escape + "))==null?'':_.escape(__t))+\n'"; + } else if (interpolate) { + source += "'+\n((__t=(" + interpolate + "))==null?'':__t)+\n'"; + } else if (evaluate) { + source += "';\n" + evaluate + "\n__p+='"; + } + + // Adobe VMs need the match returned to produce the correct offest. + return match; + }); + source += "';\n"; + + // If a variable is not specified, place data values in local scope. + if (!settings.variable) source = 'with(obj||{}){\n' + source + '}\n'; + + source = "var __t,__p='',__j=Array.prototype.join," + + "print=function(){__p+=__j.call(arguments,'');};\n" + + source + 'return __p;\n'; + + try { + var render = new Function(settings.variable || 'obj', '_', source); + } catch (e) { + e.source = source; + throw e; + } + + var template = function(data) { + return render.call(this, data, _); + }; + + // Provide the compiled source as a convenience for precompilation. + var argument = settings.variable || 'obj'; + template.source = 'function(' + argument + '){\n' + source + '}'; + + return template; + }; + + // Add a "chain" function. Start chaining a wrapped Underscore object. + _.chain = function(obj) { + var instance = _(obj); + instance._chain = true; + return instance; + }; + + // OOP + // --------------- + // If Underscore is called as a function, it returns a wrapped object that + // can be used OO-style. This wrapper holds altered versions of all the + // underscore functions. Wrapped objects may be chained. + + // Helper function to continue chaining intermediate results. + var result = function(instance, obj) { + return instance._chain ? _(obj).chain() : obj; + }; + + // Add your own custom functions to the Underscore object. + _.mixin = function(obj) { + _.each(_.functions(obj), function(name) { + var func = _[name] = obj[name]; + _.prototype[name] = function() { + var args = [this._wrapped]; + push.apply(args, arguments); + return result(this, func.apply(_, args)); + }; + }); + }; + + // Add all of the Underscore functions to the wrapper object. + _.mixin(_); + + // Add all mutator Array functions to the wrapper. + _.each(['pop', 'push', 'reverse', 'shift', 'sort', 'splice', 'unshift'], function(name) { + var method = ArrayProto[name]; + _.prototype[name] = function() { + var obj = this._wrapped; + method.apply(obj, arguments); + if ((name === 'shift' || name === 'splice') && obj.length === 0) delete obj[0]; + return result(this, obj); + }; + }); + + // Add all accessor Array functions to the wrapper. + _.each(['concat', 'join', 'slice'], function(name) { + var method = ArrayProto[name]; + _.prototype[name] = function() { + return result(this, method.apply(this._wrapped, arguments)); + }; + }); + + // Extracts the result from a wrapped and chained object. + _.prototype.value = function() { + return this._wrapped; + }; + + // Provide unwrapping proxy for some methods used in engine operations + // such as arithmetic and JSON stringification. + _.prototype.valueOf = _.prototype.toJSON = _.prototype.value; + + _.prototype.toString = function() { + return '' + this._wrapped; + }; + + // AMD registration happens at the end for compatibility with AMD loaders + // that may not enforce next-turn semantics on modules. 
Even though general + // practice for AMD registration is to be anonymous, underscore registers + // as a named module because, like jQuery, it is a base library that is + // popular enough to be bundled in a third party lib, but not be part of + // an AMD load request. Those cases could generate an error when an + // anonymous define() is called outside of a loader request. + if (typeof define === 'function' && define.amd) { + define('underscore', [], function() { + return _; + }); + } +}.call(this)); + +},{}],26:[function(require,module,exports){ +arguments[4][19][0].apply(exports,arguments) +},{"dup":19}],27:[function(require,module,exports){ +module.exports = function isBuffer(arg) { + return arg && typeof arg === 'object' + && typeof arg.copy === 'function' + && typeof arg.fill === 'function' + && typeof arg.readUInt8 === 'function'; +} +},{}],28:[function(require,module,exports){ +(function (process,global){ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +var formatRegExp = /%[sdj%]/g; +exports.format = function(f) { + if (!isString(f)) { + var objects = []; + for (var i = 0; i < arguments.length; i++) { + objects.push(inspect(arguments[i])); + } + return objects.join(' '); + } + + var i = 1; + var args = arguments; + var len = args.length; + var str = String(f).replace(formatRegExp, function(x) { + if (x === '%%') return '%'; + if (i >= len) return x; + switch (x) { + case '%s': return String(args[i++]); + case '%d': return Number(args[i++]); + case '%j': + try { + return JSON.stringify(args[i++]); + } catch (_) { + return '[Circular]'; + } + default: + return x; + } + }); + for (var x = args[i]; i < len; x = args[++i]) { + if (isNull(x) || !isObject(x)) { + str += ' ' + x; + } else { + str += ' ' + inspect(x); + } + } + return str; +}; + + +// Mark that a method should not be used. +// Returns a modified function which warns once by default. +// If --no-deprecation is set, then it is a no-op. +exports.deprecate = function(fn, msg) { + // Allow for deprecating things in the process of starting up. 
+ if (isUndefined(global.process)) { + return function() { + return exports.deprecate(fn, msg).apply(this, arguments); + }; + } + + if (process.noDeprecation === true) { + return fn; + } + + var warned = false; + function deprecated() { + if (!warned) { + if (process.throwDeprecation) { + throw new Error(msg); + } else if (process.traceDeprecation) { + console.trace(msg); + } else { + console.error(msg); + } + warned = true; + } + return fn.apply(this, arguments); + } + + return deprecated; +}; + + +var debugs = {}; +var debugEnviron; +exports.debuglog = function(set) { + if (isUndefined(debugEnviron)) + debugEnviron = process.env.NODE_DEBUG || ''; + set = set.toUpperCase(); + if (!debugs[set]) { + if (new RegExp('\\b' + set + '\\b', 'i').test(debugEnviron)) { + var pid = process.pid; + debugs[set] = function() { + var msg = exports.format.apply(exports, arguments); + console.error('%s %d: %s', set, pid, msg); + }; + } else { + debugs[set] = function() {}; + } + } + return debugs[set]; +}; + + +/** + * Echos the value of a value. Trys to print the value out + * in the best way possible given the different types. + * + * @param {Object} obj The object to print out. + * @param {Object} opts Optional options object that alters the output. + */ +/* legacy: obj, showHidden, depth, colors*/ +function inspect(obj, opts) { + // default options + var ctx = { + seen: [], + stylize: stylizeNoColor + }; + // legacy... + if (arguments.length >= 3) ctx.depth = arguments[2]; + if (arguments.length >= 4) ctx.colors = arguments[3]; + if (isBoolean(opts)) { + // legacy... + ctx.showHidden = opts; + } else if (opts) { + // got an "options" object + exports._extend(ctx, opts); + } + // set default options + if (isUndefined(ctx.showHidden)) ctx.showHidden = false; + if (isUndefined(ctx.depth)) ctx.depth = 2; + if (isUndefined(ctx.colors)) ctx.colors = false; + if (isUndefined(ctx.customInspect)) ctx.customInspect = true; + if (ctx.colors) ctx.stylize = stylizeWithColor; + return formatValue(ctx, obj, ctx.depth); +} +exports.inspect = inspect; + + +// http://en.wikipedia.org/wiki/ANSI_escape_code#graphics +inspect.colors = { + 'bold' : [1, 22], + 'italic' : [3, 23], + 'underline' : [4, 24], + 'inverse' : [7, 27], + 'white' : [37, 39], + 'grey' : [90, 39], + 'black' : [30, 39], + 'blue' : [34, 39], + 'cyan' : [36, 39], + 'green' : [32, 39], + 'magenta' : [35, 39], + 'red' : [31, 39], + 'yellow' : [33, 39] +}; + +// Don't use 'blue' not visible on cmd.exe +inspect.styles = { + 'special': 'cyan', + 'number': 'yellow', + 'boolean': 'yellow', + 'undefined': 'grey', + 'null': 'bold', + 'string': 'green', + 'date': 'magenta', + // "name": intentionally not styling + 'regexp': 'red' +}; + + +function stylizeWithColor(str, styleType) { + var style = inspect.styles[styleType]; + + if (style) { + return '\u001b[' + inspect.colors[style][0] + 'm' + str + + '\u001b[' + inspect.colors[style][1] + 'm'; + } else { + return str; + } +} + + +function stylizeNoColor(str, styleType) { + return str; +} + + +function arrayToHash(array) { + var hash = {}; + + array.forEach(function(val, idx) { + hash[val] = true; + }); + + return hash; +} + + +function formatValue(ctx, value, recurseTimes) { + // Provide a hook for user-specified inspect functions. 
+ // Check that value is an object with an inspect function on it + if (ctx.customInspect && + value && + isFunction(value.inspect) && + // Filter out the util module, it's inspect function is special + value.inspect !== exports.inspect && + // Also filter out any prototype objects using the circular check. + !(value.constructor && value.constructor.prototype === value)) { + var ret = value.inspect(recurseTimes, ctx); + if (!isString(ret)) { + ret = formatValue(ctx, ret, recurseTimes); + } + return ret; + } + + // Primitive types cannot have properties + var primitive = formatPrimitive(ctx, value); + if (primitive) { + return primitive; + } + + // Look up the keys of the object. + var keys = Object.keys(value); + var visibleKeys = arrayToHash(keys); + + if (ctx.showHidden) { + keys = Object.getOwnPropertyNames(value); + } + + // IE doesn't make error fields non-enumerable + // http://msdn.microsoft.com/en-us/library/ie/dww52sbt(v=vs.94).aspx + if (isError(value) + && (keys.indexOf('message') >= 0 || keys.indexOf('description') >= 0)) { + return formatError(value); + } + + // Some type of object without properties can be shortcutted. + if (keys.length === 0) { + if (isFunction(value)) { + var name = value.name ? ': ' + value.name : ''; + return ctx.stylize('[Function' + name + ']', 'special'); + } + if (isRegExp(value)) { + return ctx.stylize(RegExp.prototype.toString.call(value), 'regexp'); + } + if (isDate(value)) { + return ctx.stylize(Date.prototype.toString.call(value), 'date'); + } + if (isError(value)) { + return formatError(value); + } + } + + var base = '', array = false, braces = ['{', '}']; + + // Make Array say that they are Array + if (isArray(value)) { + array = true; + braces = ['[', ']']; + } + + // Make functions say that they are functions + if (isFunction(value)) { + var n = value.name ? ': ' + value.name : ''; + base = ' [Function' + n + ']'; + } + + // Make RegExps say that they are RegExps + if (isRegExp(value)) { + base = ' ' + RegExp.prototype.toString.call(value); + } + + // Make dates with properties first say the date + if (isDate(value)) { + base = ' ' + Date.prototype.toUTCString.call(value); + } + + // Make error with message first say the error + if (isError(value)) { + base = ' ' + formatError(value); + } + + if (keys.length === 0 && (!array || value.length == 0)) { + return braces[0] + base + braces[1]; + } + + if (recurseTimes < 0) { + if (isRegExp(value)) { + return ctx.stylize(RegExp.prototype.toString.call(value), 'regexp'); + } else { + return ctx.stylize('[Object]', 'special'); + } + } + + ctx.seen.push(value); + + var output; + if (array) { + output = formatArray(ctx, value, recurseTimes, visibleKeys, keys); + } else { + output = keys.map(function(key) { + return formatProperty(ctx, value, recurseTimes, visibleKeys, key, array); + }); + } + + ctx.seen.pop(); + + return reduceToSingleString(output, base, braces); +} + + +function formatPrimitive(ctx, value) { + if (isUndefined(value)) + return ctx.stylize('undefined', 'undefined'); + if (isString(value)) { + var simple = '\'' + JSON.stringify(value).replace(/^"|"$/g, '') + .replace(/'/g, "\\'") + .replace(/\\"/g, '"') + '\''; + return ctx.stylize(simple, 'string'); + } + if (isNumber(value)) + return ctx.stylize('' + value, 'number'); + if (isBoolean(value)) + return ctx.stylize('' + value, 'boolean'); + // For some reason typeof null is "object", so special case here. 
+ if (isNull(value)) + return ctx.stylize('null', 'null'); +} + + +function formatError(value) { + return '[' + Error.prototype.toString.call(value) + ']'; +} + + +function formatArray(ctx, value, recurseTimes, visibleKeys, keys) { + var output = []; + for (var i = 0, l = value.length; i < l; ++i) { + if (hasOwnProperty(value, String(i))) { + output.push(formatProperty(ctx, value, recurseTimes, visibleKeys, + String(i), true)); + } else { + output.push(''); + } + } + keys.forEach(function(key) { + if (!key.match(/^\d+$/)) { + output.push(formatProperty(ctx, value, recurseTimes, visibleKeys, + key, true)); + } + }); + return output; +} + + +function formatProperty(ctx, value, recurseTimes, visibleKeys, key, array) { + var name, str, desc; + desc = Object.getOwnPropertyDescriptor(value, key) || { value: value[key] }; + if (desc.get) { + if (desc.set) { + str = ctx.stylize('[Getter/Setter]', 'special'); + } else { + str = ctx.stylize('[Getter]', 'special'); + } + } else { + if (desc.set) { + str = ctx.stylize('[Setter]', 'special'); + } + } + if (!hasOwnProperty(visibleKeys, key)) { + name = '[' + key + ']'; + } + if (!str) { + if (ctx.seen.indexOf(desc.value) < 0) { + if (isNull(recurseTimes)) { + str = formatValue(ctx, desc.value, null); + } else { + str = formatValue(ctx, desc.value, recurseTimes - 1); + } + if (str.indexOf('\n') > -1) { + if (array) { + str = str.split('\n').map(function(line) { + return ' ' + line; + }).join('\n').substr(2); + } else { + str = '\n' + str.split('\n').map(function(line) { + return ' ' + line; + }).join('\n'); + } + } + } else { + str = ctx.stylize('[Circular]', 'special'); + } + } + if (isUndefined(name)) { + if (array && key.match(/^\d+$/)) { + return str; + } + name = JSON.stringify('' + key); + if (name.match(/^"([a-zA-Z_][a-zA-Z_0-9]*)"$/)) { + name = name.substr(1, name.length - 2); + name = ctx.stylize(name, 'name'); + } else { + name = name.replace(/'/g, "\\'") + .replace(/\\"/g, '"') + .replace(/(^"|"$)/g, "'"); + name = ctx.stylize(name, 'string'); + } + } + + return name + ': ' + str; +} + + +function reduceToSingleString(output, base, braces) { + var numLinesEst = 0; + var length = output.reduce(function(prev, cur) { + numLinesEst++; + if (cur.indexOf('\n') >= 0) numLinesEst++; + return prev + cur.replace(/\u001b\[\d\d?m/g, '').length + 1; + }, 0); + + if (length > 60) { + return braces[0] + + (base === '' ? '' : base + '\n ') + + ' ' + + output.join(',\n ') + + ' ' + + braces[1]; + } + + return braces[0] + base + ' ' + output.join(', ') + ' ' + braces[1]; +} + + +// NOTE: These type checking functions intentionally don't use `instanceof` +// because it is fragile and can be easily faked with `Object.create()`. 
+function isArray(ar) { + return Array.isArray(ar); +} +exports.isArray = isArray; + +function isBoolean(arg) { + return typeof arg === 'boolean'; +} +exports.isBoolean = isBoolean; + +function isNull(arg) { + return arg === null; +} +exports.isNull = isNull; + +function isNullOrUndefined(arg) { + return arg == null; +} +exports.isNullOrUndefined = isNullOrUndefined; + +function isNumber(arg) { + return typeof arg === 'number'; +} +exports.isNumber = isNumber; + +function isString(arg) { + return typeof arg === 'string'; +} +exports.isString = isString; + +function isSymbol(arg) { + return typeof arg === 'symbol'; +} +exports.isSymbol = isSymbol; + +function isUndefined(arg) { + return arg === void 0; +} +exports.isUndefined = isUndefined; + +function isRegExp(re) { + return isObject(re) && objectToString(re) === '[object RegExp]'; +} +exports.isRegExp = isRegExp; + +function isObject(arg) { + return typeof arg === 'object' && arg !== null; +} +exports.isObject = isObject; + +function isDate(d) { + return isObject(d) && objectToString(d) === '[object Date]'; +} +exports.isDate = isDate; + +function isError(e) { + return isObject(e) && + (objectToString(e) === '[object Error]' || e instanceof Error); +} +exports.isError = isError; + +function isFunction(arg) { + return typeof arg === 'function'; +} +exports.isFunction = isFunction; + +function isPrimitive(arg) { + return arg === null || + typeof arg === 'boolean' || + typeof arg === 'number' || + typeof arg === 'string' || + typeof arg === 'symbol' || // ES6 symbol + typeof arg === 'undefined'; +} +exports.isPrimitive = isPrimitive; + +exports.isBuffer = require('./support/isBuffer'); + +function objectToString(o) { + return Object.prototype.toString.call(o); +} + + +function pad(n) { + return n < 10 ? '0' + n.toString(10) : n.toString(10); +} + + +var months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', + 'Oct', 'Nov', 'Dec']; + +// 26 Feb 16:19:34 +function timestamp() { + var d = new Date(); + var time = [pad(d.getHours()), + pad(d.getMinutes()), + pad(d.getSeconds())].join(':'); + return [d.getDate(), months[d.getMonth()], time].join(' '); +} + + +// log is just a thin wrapper to console.log that prepends a timestamp +exports.log = function() { + console.log('%s - %s', timestamp(), exports.format.apply(exports, arguments)); +}; + + +/** + * Inherit the prototype methods from one constructor into another. + * + * The Function.prototype.inherits from lang.js rewritten as a standalone + * function (not on Function.prototype). NOTE: If this file is to be loaded + * during bootstrapping this function needs to be rewritten using some native + * functions as prototype setup using normal JavaScript does not work as + * expected during bootstrapping (see mirror.js in r114903). + * + * @param {function} ctor Constructor function which needs to inherit the + * prototype. + * @param {function} superCtor Constructor function to inherit prototype from. + */ +exports.inherits = require('inherits'); + +exports._extend = function(origin, add) { + // Don't do anything if add isn't an object + if (!add || !isObject(add)) return origin; + + var keys = Object.keys(add); + var i = keys.length; + while (i--) { + origin[keys[i]] = add[keys[i]]; + } + return origin; +}; + +function hasOwnProperty(obj, prop) { + return Object.prototype.hasOwnProperty.call(obj, prop); +} + +}).call(this,require('_process'),typeof global !== "undefined" ? global : typeof self !== "undefined" ? self : typeof window !== "undefined" ? 
window : {}) +},{"./support/isBuffer":27,"_process":24,"inherits":26}],29:[function(require,module,exports){ +// Returns a wrapper function that returns a wrapped callback +// The wrapper function should do some stuff, and return a +// presumably different callback function. +// This makes sure that own properties are retained, so that +// decorations and such are not lost along the way. +module.exports = wrappy +function wrappy (fn, cb) { + if (fn && cb) return wrappy(fn)(cb) + + if (typeof fn !== 'function') + throw new TypeError('need wrapper function') + + Object.keys(fn).forEach(function (k) { + wrapper[k] = fn[k] + }) + + return wrapper + + function wrapper() { + var args = new Array(arguments.length) + for (var i = 0; i < args.length; i++) { + args[i] = arguments[i] + } + var ret = fn.apply(this, args) + var cb = args[args.length-1] + if (typeof ret === 'function' && ret !== cb) { + Object.keys(cb).forEach(function (k) { + ret[k] = cb[k] + }) + } + return ret + } +} + +},{}]},{},[7])(7) +}); \ No newline at end of file diff --git a/assets/javascripts/workers/search.16e2a7d4.min.js b/assets/javascripts/workers/search.16e2a7d4.min.js new file mode 100644 index 00000000..e0dc159e --- /dev/null +++ b/assets/javascripts/workers/search.16e2a7d4.min.js @@ -0,0 +1,48 @@ +"use strict";(()=>{var ge=Object.create;var W=Object.defineProperty,ye=Object.defineProperties,me=Object.getOwnPropertyDescriptor,ve=Object.getOwnPropertyDescriptors,xe=Object.getOwnPropertyNames,G=Object.getOwnPropertySymbols,Se=Object.getPrototypeOf,X=Object.prototype.hasOwnProperty,Qe=Object.prototype.propertyIsEnumerable;var J=(t,e,r)=>e in t?W(t,e,{enumerable:!0,configurable:!0,writable:!0,value:r}):t[e]=r,M=(t,e)=>{for(var r in e||(e={}))X.call(e,r)&&J(t,r,e[r]);if(G)for(var r of G(e))Qe.call(e,r)&&J(t,r,e[r]);return t},Z=(t,e)=>ye(t,ve(e));var K=(t,e)=>()=>(e||t((e={exports:{}}).exports,e),e.exports);var be=(t,e,r,n)=>{if(e&&typeof e=="object"||typeof e=="function")for(let i of xe(e))!X.call(t,i)&&i!==r&&W(t,i,{get:()=>e[i],enumerable:!(n=me(e,i))||n.enumerable});return t};var H=(t,e,r)=>(r=t!=null?ge(Se(t)):{},be(e||!t||!t.__esModule?W(r,"default",{value:t,enumerable:!0}):r,t));var z=(t,e,r)=>new Promise((n,i)=>{var s=u=>{try{a(r.next(u))}catch(c){i(c)}},o=u=>{try{a(r.throw(u))}catch(c){i(c)}},a=u=>u.done?n(u.value):Promise.resolve(u.value).then(s,o);a((r=r.apply(t,e)).next())});var re=K((ee,te)=>{/** + * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.9 + * Copyright (C) 2020 Oliver Nightingale + * @license MIT + */(function(){var t=function(e){var r=new t.Builder;return r.pipeline.add(t.trimmer,t.stopWordFilter,t.stemmer),r.searchPipeline.add(t.stemmer),e.call(r,r),r.build()};t.version="2.3.9";/*! + * lunr.utils + * Copyright (C) 2020 Oliver Nightingale + */t.utils={},t.utils.warn=function(e){return function(r){e.console&&console.warn&&console.warn(r)}}(this),t.utils.asString=function(e){return e==null?"":e.toString()},t.utils.clone=function(e){if(e==null)return e;for(var r=Object.create(null),n=Object.keys(e),i=0;i0){var h=t.utils.clone(r)||{};h.position=[a,c],h.index=s.length,s.push(new t.Token(n.slice(a,o),h))}a=o+1}}return s},t.tokenizer.separator=/[\s\-]+/;/*! 
+ * lunr.Pipeline + * Copyright (C) 2020 Oliver Nightingale + */t.Pipeline=function(){this._stack=[]},t.Pipeline.registeredFunctions=Object.create(null),t.Pipeline.registerFunction=function(e,r){r in this.registeredFunctions&&t.utils.warn("Overwriting existing registered function: "+r),e.label=r,t.Pipeline.registeredFunctions[e.label]=e},t.Pipeline.warnIfFunctionNotRegistered=function(e){var r=e.label&&e.label in this.registeredFunctions;r||t.utils.warn(`Function is not registered with pipeline. This may cause problems when serialising the index. +`,e)},t.Pipeline.load=function(e){var r=new t.Pipeline;return e.forEach(function(n){var i=t.Pipeline.registeredFunctions[n];if(i)r.add(i);else throw new Error("Cannot load unregistered function: "+n)}),r},t.Pipeline.prototype.add=function(){var e=Array.prototype.slice.call(arguments);e.forEach(function(r){t.Pipeline.warnIfFunctionNotRegistered(r),this._stack.push(r)},this)},t.Pipeline.prototype.after=function(e,r){t.Pipeline.warnIfFunctionNotRegistered(r);var n=this._stack.indexOf(e);if(n==-1)throw new Error("Cannot find existingFn");n=n+1,this._stack.splice(n,0,r)},t.Pipeline.prototype.before=function(e,r){t.Pipeline.warnIfFunctionNotRegistered(r);var n=this._stack.indexOf(e);if(n==-1)throw new Error("Cannot find existingFn");this._stack.splice(n,0,r)},t.Pipeline.prototype.remove=function(e){var r=this._stack.indexOf(e);r!=-1&&this._stack.splice(r,1)},t.Pipeline.prototype.run=function(e){for(var r=this._stack.length,n=0;n1&&(oe&&(n=s),o!=e);)i=n-r,s=r+Math.floor(i/2),o=this.elements[s*2];if(o==e||o>e)return s*2;if(ou?h+=2:a==u&&(r+=n[c+1]*i[h+1],c+=2,h+=2);return r},t.Vector.prototype.similarity=function(e){return this.dot(e)/this.magnitude()||0},t.Vector.prototype.toArray=function(){for(var e=new Array(this.elements.length/2),r=1,n=0;r0){var o=s.str.charAt(0),a;o in s.node.edges?a=s.node.edges[o]:(a=new t.TokenSet,s.node.edges[o]=a),s.str.length==1&&(a.final=!0),i.push({node:a,editsRemaining:s.editsRemaining,str:s.str.slice(1)})}if(s.editsRemaining!=0){if("*"in s.node.edges)var u=s.node.edges["*"];else{var u=new t.TokenSet;s.node.edges["*"]=u}if(s.str.length==0&&(u.final=!0),i.push({node:u,editsRemaining:s.editsRemaining-1,str:s.str}),s.str.length>1&&i.push({node:s.node,editsRemaining:s.editsRemaining-1,str:s.str.slice(1)}),s.str.length==1&&(s.node.final=!0),s.str.length>=1){if("*"in s.node.edges)var c=s.node.edges["*"];else{var c=new t.TokenSet;s.node.edges["*"]=c}s.str.length==1&&(c.final=!0),i.push({node:c,editsRemaining:s.editsRemaining-1,str:s.str.slice(1)})}if(s.str.length>1){var h=s.str.charAt(0),y=s.str.charAt(1),g;y in s.node.edges?g=s.node.edges[y]:(g=new t.TokenSet,s.node.edges[y]=g),s.str.length==1&&(g.final=!0),i.push({node:g,editsRemaining:s.editsRemaining-1,str:h+s.str.slice(2)})}}}return n},t.TokenSet.fromString=function(e){for(var r=new t.TokenSet,n=r,i=0,s=e.length;i=e;r--){var n=this.uncheckedNodes[r],i=n.child.toString();i in this.minimizedNodes?n.parent.edges[n.char]=this.minimizedNodes[i]:(n.child._str=i,this.minimizedNodes[i]=n.child),this.uncheckedNodes.pop()}};/*! 
+ * lunr.Index + * Copyright (C) 2020 Oliver Nightingale + */t.Index=function(e){this.invertedIndex=e.invertedIndex,this.fieldVectors=e.fieldVectors,this.tokenSet=e.tokenSet,this.fields=e.fields,this.pipeline=e.pipeline},t.Index.prototype.search=function(e){return this.query(function(r){var n=new t.QueryParser(e,r);n.parse()})},t.Index.prototype.query=function(e){for(var r=new t.Query(this.fields),n=Object.create(null),i=Object.create(null),s=Object.create(null),o=Object.create(null),a=Object.create(null),u=0;u1?this._b=1:this._b=e},t.Builder.prototype.k1=function(e){this._k1=e},t.Builder.prototype.add=function(e,r){var n=e[this._ref],i=Object.keys(this._fields);this._documents[n]=r||{},this.documentCount+=1;for(var s=0;s=this.length)return t.QueryLexer.EOS;var e=this.str.charAt(this.pos);return this.pos+=1,e},t.QueryLexer.prototype.width=function(){return this.pos-this.start},t.QueryLexer.prototype.ignore=function(){this.start==this.pos&&(this.pos+=1),this.start=this.pos},t.QueryLexer.prototype.backup=function(){this.pos-=1},t.QueryLexer.prototype.acceptDigitRun=function(){var e,r;do e=this.next(),r=e.charCodeAt(0);while(r>47&&r<58);e!=t.QueryLexer.EOS&&this.backup()},t.QueryLexer.prototype.more=function(){return this.pos1&&(e.backup(),e.emit(t.QueryLexer.TERM)),e.ignore(),e.more())return t.QueryLexer.lexText},t.QueryLexer.lexEditDistance=function(e){return e.ignore(),e.acceptDigitRun(),e.emit(t.QueryLexer.EDIT_DISTANCE),t.QueryLexer.lexText},t.QueryLexer.lexBoost=function(e){return e.ignore(),e.acceptDigitRun(),e.emit(t.QueryLexer.BOOST),t.QueryLexer.lexText},t.QueryLexer.lexEOS=function(e){e.width()>0&&e.emit(t.QueryLexer.TERM)},t.QueryLexer.termSeparator=t.tokenizer.separator,t.QueryLexer.lexText=function(e){for(;;){var r=e.next();if(r==t.QueryLexer.EOS)return t.QueryLexer.lexEOS;if(r.charCodeAt(0)==92){e.escapeCharacter();continue}if(r==":")return t.QueryLexer.lexField;if(r=="~")return e.backup(),e.width()>0&&e.emit(t.QueryLexer.TERM),t.QueryLexer.lexEditDistance;if(r=="^")return e.backup(),e.width()>0&&e.emit(t.QueryLexer.TERM),t.QueryLexer.lexBoost;if(r=="+"&&e.width()===1||r=="-"&&e.width()===1)return e.emit(t.QueryLexer.PRESENCE),t.QueryLexer.lexText;if(r.match(t.QueryLexer.termSeparator))return t.QueryLexer.lexTerm}},t.QueryParser=function(e,r){this.lexer=new t.QueryLexer(e),this.query=r,this.currentClause={},this.lexemeIdx=0},t.QueryParser.prototype.parse=function(){this.lexer.run(),this.lexemes=this.lexer.lexemes;for(var e=t.QueryParser.parseClause;e;)e=e(this);return this.query},t.QueryParser.prototype.peekLexeme=function(){return this.lexemes[this.lexemeIdx]},t.QueryParser.prototype.consumeLexeme=function(){var e=this.peekLexeme();return this.lexemeIdx+=1,e},t.QueryParser.prototype.nextClause=function(){var e=this.currentClause;this.query.clause(e),this.currentClause={}},t.QueryParser.parseClause=function(e){var r=e.peekLexeme();if(r!=null)switch(r.type){case t.QueryLexer.PRESENCE:return t.QueryParser.parsePresence;case t.QueryLexer.FIELD:return t.QueryParser.parseField;case t.QueryLexer.TERM:return t.QueryParser.parseTerm;default:var n="expected either a field or a term, found "+r.type;throw r.str.length>=1&&(n+=" with value '"+r.str+"'"),new t.QueryParseError(n,r.start,r.end)}},t.QueryParser.parsePresence=function(e){var r=e.consumeLexeme();if(r!=null){switch(r.str){case"-":e.currentClause.presence=t.Query.presence.PROHIBITED;break;case"+":e.currentClause.presence=t.Query.presence.REQUIRED;break;default:var n="unrecognised presence operator'"+r.str+"'";throw new 
t.QueryParseError(n,r.start,r.end)}var i=e.peekLexeme();if(i==null){var n="expecting term or field, found nothing";throw new t.QueryParseError(n,r.start,r.end)}switch(i.type){case t.QueryLexer.FIELD:return t.QueryParser.parseField;case t.QueryLexer.TERM:return t.QueryParser.parseTerm;default:var n="expecting term or field, found '"+i.type+"'";throw new t.QueryParseError(n,i.start,i.end)}}},t.QueryParser.parseField=function(e){var r=e.consumeLexeme();if(r!=null){if(e.query.allFields.indexOf(r.str)==-1){var n=e.query.allFields.map(function(o){return"'"+o+"'"}).join(", "),i="unrecognised field '"+r.str+"', possible fields: "+n;throw new t.QueryParseError(i,r.start,r.end)}e.currentClause.fields=[r.str];var s=e.peekLexeme();if(s==null){var i="expecting term, found nothing";throw new t.QueryParseError(i,r.start,r.end)}switch(s.type){case t.QueryLexer.TERM:return t.QueryParser.parseTerm;default:var i="expecting term, found '"+s.type+"'";throw new t.QueryParseError(i,s.start,s.end)}}},t.QueryParser.parseTerm=function(e){var r=e.consumeLexeme();if(r!=null){e.currentClause.term=r.str.toLowerCase(),r.str.indexOf("*")!=-1&&(e.currentClause.usePipeline=!1);var n=e.peekLexeme();if(n==null){e.nextClause();return}switch(n.type){case t.QueryLexer.TERM:return e.nextClause(),t.QueryParser.parseTerm;case t.QueryLexer.FIELD:return e.nextClause(),t.QueryParser.parseField;case t.QueryLexer.EDIT_DISTANCE:return t.QueryParser.parseEditDistance;case t.QueryLexer.BOOST:return t.QueryParser.parseBoost;case t.QueryLexer.PRESENCE:return e.nextClause(),t.QueryParser.parsePresence;default:var i="Unexpected lexeme type '"+n.type+"'";throw new t.QueryParseError(i,n.start,n.end)}}},t.QueryParser.parseEditDistance=function(e){var r=e.consumeLexeme();if(r!=null){var n=parseInt(r.str,10);if(isNaN(n)){var i="edit distance must be numeric";throw new t.QueryParseError(i,r.start,r.end)}e.currentClause.editDistance=n;var s=e.peekLexeme();if(s==null){e.nextClause();return}switch(s.type){case t.QueryLexer.TERM:return e.nextClause(),t.QueryParser.parseTerm;case t.QueryLexer.FIELD:return e.nextClause(),t.QueryParser.parseField;case t.QueryLexer.EDIT_DISTANCE:return t.QueryParser.parseEditDistance;case t.QueryLexer.BOOST:return t.QueryParser.parseBoost;case t.QueryLexer.PRESENCE:return e.nextClause(),t.QueryParser.parsePresence;default:var i="Unexpected lexeme type '"+s.type+"'";throw new t.QueryParseError(i,s.start,s.end)}}},t.QueryParser.parseBoost=function(e){var r=e.consumeLexeme();if(r!=null){var n=parseInt(r.str,10);if(isNaN(n)){var i="boost must be numeric";throw new t.QueryParseError(i,r.start,r.end)}e.currentClause.boost=n;var s=e.peekLexeme();if(s==null){e.nextClause();return}switch(s.type){case t.QueryLexer.TERM:return e.nextClause(),t.QueryParser.parseTerm;case t.QueryLexer.FIELD:return e.nextClause(),t.QueryParser.parseField;case t.QueryLexer.EDIT_DISTANCE:return t.QueryParser.parseEditDistance;case t.QueryLexer.BOOST:return t.QueryParser.parseBoost;case t.QueryLexer.PRESENCE:return e.nextClause(),t.QueryParser.parsePresence;default:var i="Unexpected lexeme type '"+s.type+"'";throw new t.QueryParseError(i,s.start,s.end)}}},function(e,r){typeof define=="function"&&define.amd?define(r):typeof ee=="object"?te.exports=r():e.lunr=r()}(this,function(){return t})})()});var q=K((Re,ne)=>{"use strict";/*! 
+ * escape-html + * Copyright(c) 2012-2013 TJ Holowaychuk + * Copyright(c) 2015 Andreas Lubbe + * Copyright(c) 2015 Tiancheng "Timothy" Gu + * MIT Licensed + */var Le=/["'&<>]/;ne.exports=we;function we(t){var e=""+t,r=Le.exec(e);if(!r)return e;var n,i="",s=0,o=0;for(s=r.index;s=0;r--){let n=t[r];typeof n=="string"?n=document.createTextNode(n):n.parentNode&&n.parentNode.removeChild(n),r?e.insertBefore(this.previousSibling,n):e.replaceChild(n,this)}}}));var ie=H(q());function se(t){let e=new Map,r=new Set;for(let n of t){let[i,s]=n.location.split("#"),o=n.location,a=n.title,u=n.tags,c=(0,ie.default)(n.text).replace(/\s+(?=[,.:;!?])/g,"").replace(/\s+/g," ");if(s){let h=e.get(i);r.has(h)?e.set(o,{location:o,title:a,text:c,parent:h}):(h.title=n.title,h.text=c,r.add(h))}else e.set(o,M({location:o,title:a,text:c},u&&{tags:u}))}return e}var oe=H(q());function ae(t,e){let r=new RegExp(t.separator,"img"),n=(i,s,o)=>`${s}${o}`;return i=>{i=i.replace(/[\s*+\-:~^]+/g," ").trim();let s=new RegExp(`(^|${t.separator})(${i.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return o=>(e?(0,oe.default)(o):o).replace(s,n).replace(/<\/mark>(\s+)]*>/img,"$1")}}function ue(t){let e=new lunr.Query(["title","text"]);return new lunr.QueryParser(t,e).parse(),e.clauses}function ce(t,e){var i;let r=new Set(t),n={};for(let s=0;s!n.has(i)))]}var U=class{constructor({config:e,docs:r,options:n}){this.options=n,this.documents=se(r),this.highlight=ae(e,!1),lunr.tokenizer.separator=new RegExp(e.separator),this.index=lunr(function(){e.lang.length===1&&e.lang[0]!=="en"?this.use(lunr[e.lang[0]]):e.lang.length>1&&this.use(lunr.multiLanguage(...e.lang));let i=Ee(["trimmer","stopWordFilter","stemmer"],n.pipeline);for(let s of e.lang.map(o=>o==="en"?lunr:lunr[o]))for(let o of i)this.pipeline.remove(s[o]),this.searchPipeline.remove(s[o]);this.ref("location"),this.field("title",{boost:1e3}),this.field("text"),this.field("tags",{boost:1e6,extractor:s=>{let{tags:o=[]}=s;return o.reduce((a,u)=>[...a,...lunr.tokenizer(u)],[])}});for(let s of r)this.add(s,{boost:s.boost})})}search(e){if(e)try{let r=this.highlight(e),n=ue(e).filter(o=>o.presence!==lunr.Query.presence.PROHIBITED),i=this.index.search(`${e}*`).reduce((o,{ref:a,score:u,matchData:c})=>{let h=this.documents.get(a);if(typeof h!="undefined"){let{location:y,title:g,text:b,tags:m,parent:Q}=h,p=ce(n,Object.keys(c.metadata)),d=+!Q+ +Object.values(p).every(w=>w);o.push(Z(M({location:y,title:r(g),text:r(b)},m&&{tags:m.map(r)}),{score:u*(1+d),terms:p}))}return o},[]).sort((o,a)=>a.score-o.score).reduce((o,a)=>{let u=this.documents.get(a.location);if(typeof u!="undefined"){let c="parent"in u?u.parent.location:u.location;o.set(c,[...o.get(c)||[],a])}return o},new Map),s;if(this.options.suggestions){let o=this.index.query(a=>{for(let u of n)a.term(u.term,{fields:["title"],presence:lunr.Query.presence.REQUIRED,wildcard:lunr.Query.wildcard.TRAILING})});s=o.length?Object.keys(o[0].matchData.metadata):[]}return M({items:[...i.values()]},typeof s!="undefined"&&{suggestions:s})}catch(r){console.warn(`Invalid query: ${e} \u2013 see https://bit.ly/2s3ChXG`)}return{items:[]}}};var Y;function ke(t){return z(this,null,function*(){let e="../lunr";if(typeof parent!="undefined"&&"IFrameWorker"in parent){let n=document.querySelector("script[src]"),[i]=n.src.split("/worker");e=e.replace("..",i)}let r=[];for(let n of 
t.lang){switch(n){case"ja":r.push(`${e}/tinyseg.js`);break;case"hi":case"th":r.push(`${e}/wordcut.js`);break}n!=="en"&&r.push(`${e}/min/lunr.${n}.min.js`)}t.lang.length>1&&r.push(`${e}/min/lunr.multi.min.js`),r.length&&(yield importScripts(`${e}/min/lunr.stemmer.support.min.js`,...r))})}function Te(t){return z(this,null,function*(){switch(t.type){case 0:return yield ke(t.data.config),Y=new U(t.data),{type:1};case 2:return{type:3,data:Y?Y.search(t.data):{items:[]}};default:throw new TypeError("Invalid message type")}})}self.lunr=le.default;addEventListener("message",t=>z(void 0,null,function*(){postMessage(yield Te(t.data))}));})(); +//# sourceMappingURL=search.16e2a7d4.min.js.map + diff --git a/assets/javascripts/workers/search.16e2a7d4.min.js.map b/assets/javascripts/workers/search.16e2a7d4.min.js.map new file mode 100644 index 00000000..fa01f374 --- /dev/null +++ b/assets/javascripts/workers/search.16e2a7d4.min.js.map @@ -0,0 +1,8 @@ +{ + "version": 3, + "sources": ["node_modules/lunr/lunr.js", "node_modules/escape-html/index.js", "src/assets/javascripts/integrations/search/worker/main/index.ts", "src/assets/javascripts/polyfills/index.ts", "src/assets/javascripts/integrations/search/document/index.ts", "src/assets/javascripts/integrations/search/highlighter/index.ts", "src/assets/javascripts/integrations/search/query/_/index.ts", "src/assets/javascripts/integrations/search/_/index.ts"], + "sourceRoot": "../../../..", + "sourcesContent": ["/**\n * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.9\n * Copyright (C) 2020 Oliver Nightingale\n * @license MIT\n */\n\n;(function(){\n\n/**\n * A convenience function for configuring and constructing\n * a new lunr Index.\n *\n * A lunr.Builder instance is created and the pipeline setup\n * with a trimmer, stop word filter and stemmer.\n *\n * This builder object is yielded to the configuration function\n * that is passed as a parameter, allowing the list of fields\n * and other builder parameters to be customised.\n *\n * All documents _must_ be added within the passed config function.\n *\n * @example\n * var idx = lunr(function () {\n * this.field('title')\n * this.field('body')\n * this.ref('id')\n *\n * documents.forEach(function (doc) {\n * this.add(doc)\n * }, this)\n * })\n *\n * @see {@link lunr.Builder}\n * @see {@link lunr.Pipeline}\n * @see {@link lunr.trimmer}\n * @see {@link lunr.stopWordFilter}\n * @see {@link lunr.stemmer}\n * @namespace {function} lunr\n */\nvar lunr = function (config) {\n var builder = new lunr.Builder\n\n builder.pipeline.add(\n lunr.trimmer,\n lunr.stopWordFilter,\n lunr.stemmer\n )\n\n builder.searchPipeline.add(\n lunr.stemmer\n )\n\n config.call(builder, builder)\n return builder.build()\n}\n\nlunr.version = \"2.3.9\"\n/*!\n * lunr.utils\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A namespace containing utils for the rest of the lunr library\n * @namespace lunr.utils\n */\nlunr.utils = {}\n\n/**\n * Print a warning message to the console.\n *\n * @param {String} message The message to be printed.\n * @memberOf lunr.utils\n * @function\n */\nlunr.utils.warn = (function (global) {\n /* eslint-disable no-console */\n return function (message) {\n if (global.console && console.warn) {\n console.warn(message)\n }\n }\n /* eslint-enable no-console */\n})(this)\n\n/**\n * Convert an object to a string.\n *\n * In the case of `null` and `undefined` the function returns\n * the empty string, in all other cases the result of calling\n * `toString` on the passed 
object is returned.\n *\n * @param {Any} obj The object to convert to a string.\n * @return {String} string representation of the passed object.\n * @memberOf lunr.utils\n */\nlunr.utils.asString = function (obj) {\n if (obj === void 0 || obj === null) {\n return \"\"\n } else {\n return obj.toString()\n }\n}\n\n/**\n * Clones an object.\n *\n * Will create a copy of an existing object such that any mutations\n * on the copy cannot affect the original.\n *\n * Only shallow objects are supported, passing a nested object to this\n * function will cause a TypeError.\n *\n * Objects with primitives, and arrays of primitives are supported.\n *\n * @param {Object} obj The object to clone.\n * @return {Object} a clone of the passed object.\n * @throws {TypeError} when a nested object is passed.\n * @memberOf Utils\n */\nlunr.utils.clone = function (obj) {\n if (obj === null || obj === undefined) {\n return obj\n }\n\n var clone = Object.create(null),\n keys = Object.keys(obj)\n\n for (var i = 0; i < keys.length; i++) {\n var key = keys[i],\n val = obj[key]\n\n if (Array.isArray(val)) {\n clone[key] = val.slice()\n continue\n }\n\n if (typeof val === 'string' ||\n typeof val === 'number' ||\n typeof val === 'boolean') {\n clone[key] = val\n continue\n }\n\n throw new TypeError(\"clone is not deep and does not support nested objects\")\n }\n\n return clone\n}\nlunr.FieldRef = function (docRef, fieldName, stringValue) {\n this.docRef = docRef\n this.fieldName = fieldName\n this._stringValue = stringValue\n}\n\nlunr.FieldRef.joiner = \"/\"\n\nlunr.FieldRef.fromString = function (s) {\n var n = s.indexOf(lunr.FieldRef.joiner)\n\n if (n === -1) {\n throw \"malformed field ref string\"\n }\n\n var fieldRef = s.slice(0, n),\n docRef = s.slice(n + 1)\n\n return new lunr.FieldRef (docRef, fieldRef, s)\n}\n\nlunr.FieldRef.prototype.toString = function () {\n if (this._stringValue == undefined) {\n this._stringValue = this.fieldName + lunr.FieldRef.joiner + this.docRef\n }\n\n return this._stringValue\n}\n/*!\n * lunr.Set\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A lunr set.\n *\n * @constructor\n */\nlunr.Set = function (elements) {\n this.elements = Object.create(null)\n\n if (elements) {\n this.length = elements.length\n\n for (var i = 0; i < this.length; i++) {\n this.elements[elements[i]] = true\n }\n } else {\n this.length = 0\n }\n}\n\n/**\n * A complete set that contains all elements.\n *\n * @static\n * @readonly\n * @type {lunr.Set}\n */\nlunr.Set.complete = {\n intersect: function (other) {\n return other\n },\n\n union: function () {\n return this\n },\n\n contains: function () {\n return true\n }\n}\n\n/**\n * An empty set that contains no elements.\n *\n * @static\n * @readonly\n * @type {lunr.Set}\n */\nlunr.Set.empty = {\n intersect: function () {\n return this\n },\n\n union: function (other) {\n return other\n },\n\n contains: function () {\n return false\n }\n}\n\n/**\n * Returns true if this set contains the specified object.\n *\n * @param {object} object - Object whose presence in this set is to be tested.\n * @returns {boolean} - True if this set contains the specified object.\n */\nlunr.Set.prototype.contains = function (object) {\n return !!this.elements[object]\n}\n\n/**\n * Returns a new set containing only the elements that are present in both\n * this set and the specified set.\n *\n * @param {lunr.Set} other - set to intersect with this set.\n * @returns {lunr.Set} a new set that is the intersection of this and the specified set.\n 
*/\n\nlunr.Set.prototype.intersect = function (other) {\n var a, b, elements, intersection = []\n\n if (other === lunr.Set.complete) {\n return this\n }\n\n if (other === lunr.Set.empty) {\n return other\n }\n\n if (this.length < other.length) {\n a = this\n b = other\n } else {\n a = other\n b = this\n }\n\n elements = Object.keys(a.elements)\n\n for (var i = 0; i < elements.length; i++) {\n var element = elements[i]\n if (element in b.elements) {\n intersection.push(element)\n }\n }\n\n return new lunr.Set (intersection)\n}\n\n/**\n * Returns a new set combining the elements of this and the specified set.\n *\n * @param {lunr.Set} other - set to union with this set.\n * @return {lunr.Set} a new set that is the union of this and the specified set.\n */\n\nlunr.Set.prototype.union = function (other) {\n if (other === lunr.Set.complete) {\n return lunr.Set.complete\n }\n\n if (other === lunr.Set.empty) {\n return this\n }\n\n return new lunr.Set(Object.keys(this.elements).concat(Object.keys(other.elements)))\n}\n/**\n * A function to calculate the inverse document frequency for\n * a posting. This is shared between the builder and the index\n *\n * @private\n * @param {object} posting - The posting for a given term\n * @param {number} documentCount - The total number of documents.\n */\nlunr.idf = function (posting, documentCount) {\n var documentsWithTerm = 0\n\n for (var fieldName in posting) {\n if (fieldName == '_index') continue // Ignore the term index, its not a field\n documentsWithTerm += Object.keys(posting[fieldName]).length\n }\n\n var x = (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5)\n\n return Math.log(1 + Math.abs(x))\n}\n\n/**\n * A token wraps a string representation of a token\n * as it is passed through the text processing pipeline.\n *\n * @constructor\n * @param {string} [str=''] - The string token being wrapped.\n * @param {object} [metadata={}] - Metadata associated with this token.\n */\nlunr.Token = function (str, metadata) {\n this.str = str || \"\"\n this.metadata = metadata || {}\n}\n\n/**\n * Returns the token string that is being wrapped by this object.\n *\n * @returns {string}\n */\nlunr.Token.prototype.toString = function () {\n return this.str\n}\n\n/**\n * A token update function is used when updating or optionally\n * when cloning a token.\n *\n * @callback lunr.Token~updateFunction\n * @param {string} str - The string representation of the token.\n * @param {Object} metadata - All metadata associated with this token.\n */\n\n/**\n * Applies the given function to the wrapped string token.\n *\n * @example\n * token.update(function (str, metadata) {\n * return str.toUpperCase()\n * })\n *\n * @param {lunr.Token~updateFunction} fn - A function to apply to the token string.\n * @returns {lunr.Token}\n */\nlunr.Token.prototype.update = function (fn) {\n this.str = fn(this.str, this.metadata)\n return this\n}\n\n/**\n * Creates a clone of this token. Optionally a function can be\n * applied to the cloned token.\n *\n * @param {lunr.Token~updateFunction} [fn] - An optional function to apply to the cloned token.\n * @returns {lunr.Token}\n */\nlunr.Token.prototype.clone = function (fn) {\n fn = fn || function (s) { return s }\n return new lunr.Token (fn(this.str, this.metadata), this.metadata)\n}\n/*!\n * lunr.tokenizer\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A function for splitting a string into tokens ready to be inserted into\n * the search index. 
Uses `lunr.tokenizer.separator` to split strings, change\n * the value of this property to change how strings are split into tokens.\n *\n * This tokenizer will convert its parameter to a string by calling `toString` and\n * then will split this string on the character in `lunr.tokenizer.separator`.\n * Arrays will have their elements converted to strings and wrapped in a lunr.Token.\n *\n * Optional metadata can be passed to the tokenizer, this metadata will be cloned and\n * added as metadata to every token that is created from the object to be tokenized.\n *\n * @static\n * @param {?(string|object|object[])} obj - The object to convert into tokens\n * @param {?object} metadata - Optional metadata to associate with every token\n * @returns {lunr.Token[]}\n * @see {@link lunr.Pipeline}\n */\nlunr.tokenizer = function (obj, metadata) {\n if (obj == null || obj == undefined) {\n return []\n }\n\n if (Array.isArray(obj)) {\n return obj.map(function (t) {\n return new lunr.Token(\n lunr.utils.asString(t).toLowerCase(),\n lunr.utils.clone(metadata)\n )\n })\n }\n\n var str = obj.toString().toLowerCase(),\n len = str.length,\n tokens = []\n\n for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= len; sliceEnd++) {\n var char = str.charAt(sliceEnd),\n sliceLength = sliceEnd - sliceStart\n\n if ((char.match(lunr.tokenizer.separator) || sliceEnd == len)) {\n\n if (sliceLength > 0) {\n var tokenMetadata = lunr.utils.clone(metadata) || {}\n tokenMetadata[\"position\"] = [sliceStart, sliceLength]\n tokenMetadata[\"index\"] = tokens.length\n\n tokens.push(\n new lunr.Token (\n str.slice(sliceStart, sliceEnd),\n tokenMetadata\n )\n )\n }\n\n sliceStart = sliceEnd + 1\n }\n\n }\n\n return tokens\n}\n\n/**\n * The separator used to split a string into tokens. Override this property to change the behaviour of\n * `lunr.tokenizer` behaviour when tokenizing strings. By default this splits on whitespace and hyphens.\n *\n * @static\n * @see lunr.tokenizer\n */\nlunr.tokenizer.separator = /[\\s\\-]+/\n/*!\n * lunr.Pipeline\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * lunr.Pipelines maintain an ordered list of functions to be applied to all\n * tokens in documents entering the search index and queries being ran against\n * the index.\n *\n * An instance of lunr.Index created with the lunr shortcut will contain a\n * pipeline with a stop word filter and an English language stemmer. Extra\n * functions can be added before or after either of these functions or these\n * default functions can be removed.\n *\n * When run the pipeline will call each function in turn, passing a token, the\n * index of that token in the original list of all tokens and finally a list of\n * all the original tokens.\n *\n * The output of functions in the pipeline will be passed to the next function\n * in the pipeline. To exclude a token from entering the index the function\n * should return undefined, the rest of the pipeline will not be called with\n * this token.\n *\n * For serialisation of pipelines to work, all functions used in an instance of\n * a pipeline should be registered with lunr.Pipeline. Registered functions can\n * then be loaded. 
If trying to load a serialised pipeline that uses functions\n * that are not registered an error will be thrown.\n *\n * If not planning on serialising the pipeline then registering pipeline functions\n * is not necessary.\n *\n * @constructor\n */\nlunr.Pipeline = function () {\n this._stack = []\n}\n\nlunr.Pipeline.registeredFunctions = Object.create(null)\n\n/**\n * A pipeline function maps lunr.Token to lunr.Token. A lunr.Token contains the token\n * string as well as all known metadata. A pipeline function can mutate the token string\n * or mutate (or add) metadata for a given token.\n *\n * A pipeline function can indicate that the passed token should be discarded by returning\n * null, undefined or an empty string. This token will not be passed to any downstream pipeline\n * functions and will not be added to the index.\n *\n * Multiple tokens can be returned by returning an array of tokens. Each token will be passed\n * to any downstream pipeline functions and all will returned tokens will be added to the index.\n *\n * Any number of pipeline functions may be chained together using a lunr.Pipeline.\n *\n * @interface lunr.PipelineFunction\n * @param {lunr.Token} token - A token from the document being processed.\n * @param {number} i - The index of this token in the complete list of tokens for this document/field.\n * @param {lunr.Token[]} tokens - All tokens for this document/field.\n * @returns {(?lunr.Token|lunr.Token[])}\n */\n\n/**\n * Register a function with the pipeline.\n *\n * Functions that are used in the pipeline should be registered if the pipeline\n * needs to be serialised, or a serialised pipeline needs to be loaded.\n *\n * Registering a function does not add it to a pipeline, functions must still be\n * added to instances of the pipeline for them to be used when running a pipeline.\n *\n * @param {lunr.PipelineFunction} fn - The function to check for.\n * @param {String} label - The label to register this function with\n */\nlunr.Pipeline.registerFunction = function (fn, label) {\n if (label in this.registeredFunctions) {\n lunr.utils.warn('Overwriting existing registered function: ' + label)\n }\n\n fn.label = label\n lunr.Pipeline.registeredFunctions[fn.label] = fn\n}\n\n/**\n * Warns if the function is not registered as a Pipeline function.\n *\n * @param {lunr.PipelineFunction} fn - The function to check for.\n * @private\n */\nlunr.Pipeline.warnIfFunctionNotRegistered = function (fn) {\n var isRegistered = fn.label && (fn.label in this.registeredFunctions)\n\n if (!isRegistered) {\n lunr.utils.warn('Function is not registered with pipeline. 
This may cause problems when serialising the index.\\n', fn)\n }\n}\n\n/**\n * Loads a previously serialised pipeline.\n *\n * All functions to be loaded must already be registered with lunr.Pipeline.\n * If any function from the serialised data has not been registered then an\n * error will be thrown.\n *\n * @param {Object} serialised - The serialised pipeline to load.\n * @returns {lunr.Pipeline}\n */\nlunr.Pipeline.load = function (serialised) {\n var pipeline = new lunr.Pipeline\n\n serialised.forEach(function (fnName) {\n var fn = lunr.Pipeline.registeredFunctions[fnName]\n\n if (fn) {\n pipeline.add(fn)\n } else {\n throw new Error('Cannot load unregistered function: ' + fnName)\n }\n })\n\n return pipeline\n}\n\n/**\n * Adds new functions to the end of the pipeline.\n *\n * Logs a warning if the function has not been registered.\n *\n * @param {lunr.PipelineFunction[]} functions - Any number of functions to add to the pipeline.\n */\nlunr.Pipeline.prototype.add = function () {\n var fns = Array.prototype.slice.call(arguments)\n\n fns.forEach(function (fn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(fn)\n this._stack.push(fn)\n }, this)\n}\n\n/**\n * Adds a single function after a function that already exists in the\n * pipeline.\n *\n * Logs a warning if the function has not been registered.\n *\n * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline.\n * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline.\n */\nlunr.Pipeline.prototype.after = function (existingFn, newFn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(newFn)\n\n var pos = this._stack.indexOf(existingFn)\n if (pos == -1) {\n throw new Error('Cannot find existingFn')\n }\n\n pos = pos + 1\n this._stack.splice(pos, 0, newFn)\n}\n\n/**\n * Adds a single function before a function that already exists in the\n * pipeline.\n *\n * Logs a warning if the function has not been registered.\n *\n * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline.\n * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline.\n */\nlunr.Pipeline.prototype.before = function (existingFn, newFn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(newFn)\n\n var pos = this._stack.indexOf(existingFn)\n if (pos == -1) {\n throw new Error('Cannot find existingFn')\n }\n\n this._stack.splice(pos, 0, newFn)\n}\n\n/**\n * Removes a function from the pipeline.\n *\n * @param {lunr.PipelineFunction} fn The function to remove from the pipeline.\n */\nlunr.Pipeline.prototype.remove = function (fn) {\n var pos = this._stack.indexOf(fn)\n if (pos == -1) {\n return\n }\n\n this._stack.splice(pos, 1)\n}\n\n/**\n * Runs the current list of functions that make up the pipeline against the\n * passed tokens.\n *\n * @param {Array} tokens The tokens to run through the pipeline.\n * @returns {Array}\n */\nlunr.Pipeline.prototype.run = function (tokens) {\n var stackLength = this._stack.length\n\n for (var i = 0; i < stackLength; i++) {\n var fn = this._stack[i]\n var memo = []\n\n for (var j = 0; j < tokens.length; j++) {\n var result = fn(tokens[j], j, tokens)\n\n if (result === null || result === void 0 || result === '') continue\n\n if (Array.isArray(result)) {\n for (var k = 0; k < result.length; k++) {\n memo.push(result[k])\n }\n } else {\n memo.push(result)\n }\n }\n\n tokens = memo\n }\n\n return tokens\n}\n\n/**\n * Convenience method for passing a string through a pipeline and getting\n * strings out. 
This method takes care of wrapping the passed string in a\n * token and mapping the resulting tokens back to strings.\n *\n * @param {string} str - The string to pass through the pipeline.\n * @param {?object} metadata - Optional metadata to associate with the token\n * passed to the pipeline.\n * @returns {string[]}\n */\nlunr.Pipeline.prototype.runString = function (str, metadata) {\n var token = new lunr.Token (str, metadata)\n\n return this.run([token]).map(function (t) {\n return t.toString()\n })\n}\n\n/**\n * Resets the pipeline by removing any existing processors.\n *\n */\nlunr.Pipeline.prototype.reset = function () {\n this._stack = []\n}\n\n/**\n * Returns a representation of the pipeline ready for serialisation.\n *\n * Logs a warning if the function has not been registered.\n *\n * @returns {Array}\n */\nlunr.Pipeline.prototype.toJSON = function () {\n return this._stack.map(function (fn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(fn)\n\n return fn.label\n })\n}\n/*!\n * lunr.Vector\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A vector is used to construct the vector space of documents and queries. These\n * vectors support operations to determine the similarity between two documents or\n * a document and a query.\n *\n * Normally no parameters are required for initializing a vector, but in the case of\n * loading a previously dumped vector the raw elements can be provided to the constructor.\n *\n * For performance reasons vectors are implemented with a flat array, where an elements\n * index is immediately followed by its value. E.g. [index, value, index, value]. This\n * allows the underlying array to be as sparse as possible and still offer decent\n * performance when being used for vector calculations.\n *\n * @constructor\n * @param {Number[]} [elements] - The flat list of element index and element value pairs.\n */\nlunr.Vector = function (elements) {\n this._magnitude = 0\n this.elements = elements || []\n}\n\n\n/**\n * Calculates the position within the vector to insert a given index.\n *\n * This is used internally by insert and upsert. 
If there are duplicate indexes then\n * the position is returned as if the value for that index were to be updated, but it\n * is the callers responsibility to check whether there is a duplicate at that index\n *\n * @param {Number} insertIdx - The index at which the element should be inserted.\n * @returns {Number}\n */\nlunr.Vector.prototype.positionForIndex = function (index) {\n // For an empty vector the tuple can be inserted at the beginning\n if (this.elements.length == 0) {\n return 0\n }\n\n var start = 0,\n end = this.elements.length / 2,\n sliceLength = end - start,\n pivotPoint = Math.floor(sliceLength / 2),\n pivotIndex = this.elements[pivotPoint * 2]\n\n while (sliceLength > 1) {\n if (pivotIndex < index) {\n start = pivotPoint\n }\n\n if (pivotIndex > index) {\n end = pivotPoint\n }\n\n if (pivotIndex == index) {\n break\n }\n\n sliceLength = end - start\n pivotPoint = start + Math.floor(sliceLength / 2)\n pivotIndex = this.elements[pivotPoint * 2]\n }\n\n if (pivotIndex == index) {\n return pivotPoint * 2\n }\n\n if (pivotIndex > index) {\n return pivotPoint * 2\n }\n\n if (pivotIndex < index) {\n return (pivotPoint + 1) * 2\n }\n}\n\n/**\n * Inserts an element at an index within the vector.\n *\n * Does not allow duplicates, will throw an error if there is already an entry\n * for this index.\n *\n * @param {Number} insertIdx - The index at which the element should be inserted.\n * @param {Number} val - The value to be inserted into the vector.\n */\nlunr.Vector.prototype.insert = function (insertIdx, val) {\n this.upsert(insertIdx, val, function () {\n throw \"duplicate index\"\n })\n}\n\n/**\n * Inserts or updates an existing index within the vector.\n *\n * @param {Number} insertIdx - The index at which the element should be inserted.\n * @param {Number} val - The value to be inserted into the vector.\n * @param {function} fn - A function that is called for updates, the existing value and the\n * requested value are passed as arguments\n */\nlunr.Vector.prototype.upsert = function (insertIdx, val, fn) {\n this._magnitude = 0\n var position = this.positionForIndex(insertIdx)\n\n if (this.elements[position] == insertIdx) {\n this.elements[position + 1] = fn(this.elements[position + 1], val)\n } else {\n this.elements.splice(position, 0, insertIdx, val)\n }\n}\n\n/**\n * Calculates the magnitude of this vector.\n *\n * @returns {Number}\n */\nlunr.Vector.prototype.magnitude = function () {\n if (this._magnitude) return this._magnitude\n\n var sumOfSquares = 0,\n elementsLength = this.elements.length\n\n for (var i = 1; i < elementsLength; i += 2) {\n var val = this.elements[i]\n sumOfSquares += val * val\n }\n\n return this._magnitude = Math.sqrt(sumOfSquares)\n}\n\n/**\n * Calculates the dot product of this vector and another vector.\n *\n * @param {lunr.Vector} otherVector - The vector to compute the dot product with.\n * @returns {Number}\n */\nlunr.Vector.prototype.dot = function (otherVector) {\n var dotProduct = 0,\n a = this.elements, b = otherVector.elements,\n aLen = a.length, bLen = b.length,\n aVal = 0, bVal = 0,\n i = 0, j = 0\n\n while (i < aLen && j < bLen) {\n aVal = a[i], bVal = b[j]\n if (aVal < bVal) {\n i += 2\n } else if (aVal > bVal) {\n j += 2\n } else if (aVal == bVal) {\n dotProduct += a[i + 1] * b[j + 1]\n i += 2\n j += 2\n }\n }\n\n return dotProduct\n}\n\n/**\n * Calculates the similarity between this vector and another vector.\n *\n * @param {lunr.Vector} otherVector - The other vector to calculate the\n * similarity with.\n * @returns 
{Number}\n */\nlunr.Vector.prototype.similarity = function (otherVector) {\n return this.dot(otherVector) / this.magnitude() || 0\n}\n\n/**\n * Converts the vector to an array of the elements within the vector.\n *\n * @returns {Number[]}\n */\nlunr.Vector.prototype.toArray = function () {\n var output = new Array (this.elements.length / 2)\n\n for (var i = 1, j = 0; i < this.elements.length; i += 2, j++) {\n output[j] = this.elements[i]\n }\n\n return output\n}\n\n/**\n * A JSON serializable representation of the vector.\n *\n * @returns {Number[]}\n */\nlunr.Vector.prototype.toJSON = function () {\n return this.elements\n}\n/* eslint-disable */\n/*!\n * lunr.stemmer\n * Copyright (C) 2020 Oliver Nightingale\n * Includes code from - http://tartarus.org/~martin/PorterStemmer/js.txt\n */\n\n/**\n * lunr.stemmer is an english language stemmer, this is a JavaScript\n * implementation of the PorterStemmer taken from http://tartarus.org/~martin\n *\n * @static\n * @implements {lunr.PipelineFunction}\n * @param {lunr.Token} token - The string to stem\n * @returns {lunr.Token}\n * @see {@link lunr.Pipeline}\n * @function\n */\nlunr.stemmer = (function(){\n var step2list = {\n \"ational\" : \"ate\",\n \"tional\" : \"tion\",\n \"enci\" : \"ence\",\n \"anci\" : \"ance\",\n \"izer\" : \"ize\",\n \"bli\" : \"ble\",\n \"alli\" : \"al\",\n \"entli\" : \"ent\",\n \"eli\" : \"e\",\n \"ousli\" : \"ous\",\n \"ization\" : \"ize\",\n \"ation\" : \"ate\",\n \"ator\" : \"ate\",\n \"alism\" : \"al\",\n \"iveness\" : \"ive\",\n \"fulness\" : \"ful\",\n \"ousness\" : \"ous\",\n \"aliti\" : \"al\",\n \"iviti\" : \"ive\",\n \"biliti\" : \"ble\",\n \"logi\" : \"log\"\n },\n\n step3list = {\n \"icate\" : \"ic\",\n \"ative\" : \"\",\n \"alize\" : \"al\",\n \"iciti\" : \"ic\",\n \"ical\" : \"ic\",\n \"ful\" : \"\",\n \"ness\" : \"\"\n },\n\n c = \"[^aeiou]\", // consonant\n v = \"[aeiouy]\", // vowel\n C = c + \"[^aeiouy]*\", // consonant sequence\n V = v + \"[aeiou]*\", // vowel sequence\n\n mgr0 = \"^(\" + C + \")?\" + V + C, // [C]VC... is m>0\n meq1 = \"^(\" + C + \")?\" + V + C + \"(\" + V + \")?$\", // [C]VC[V] is m=1\n mgr1 = \"^(\" + C + \")?\" + V + C + V + C, // [C]VCVC... 
is m>1\n s_v = \"^(\" + C + \")?\" + v; // vowel in stem\n\n var re_mgr0 = new RegExp(mgr0);\n var re_mgr1 = new RegExp(mgr1);\n var re_meq1 = new RegExp(meq1);\n var re_s_v = new RegExp(s_v);\n\n var re_1a = /^(.+?)(ss|i)es$/;\n var re2_1a = /^(.+?)([^s])s$/;\n var re_1b = /^(.+?)eed$/;\n var re2_1b = /^(.+?)(ed|ing)$/;\n var re_1b_2 = /.$/;\n var re2_1b_2 = /(at|bl|iz)$/;\n var re3_1b_2 = new RegExp(\"([^aeiouylsz])\\\\1$\");\n var re4_1b_2 = new RegExp(\"^\" + C + v + \"[^aeiouwxy]$\");\n\n var re_1c = /^(.+?[^aeiou])y$/;\n var re_2 = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;\n\n var re_3 = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;\n\n var re_4 = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;\n var re2_4 = /^(.+?)(s|t)(ion)$/;\n\n var re_5 = /^(.+?)e$/;\n var re_5_1 = /ll$/;\n var re3_5 = new RegExp(\"^\" + C + v + \"[^aeiouwxy]$\");\n\n var porterStemmer = function porterStemmer(w) {\n var stem,\n suffix,\n firstch,\n re,\n re2,\n re3,\n re4;\n\n if (w.length < 3) { return w; }\n\n firstch = w.substr(0,1);\n if (firstch == \"y\") {\n w = firstch.toUpperCase() + w.substr(1);\n }\n\n // Step 1a\n re = re_1a\n re2 = re2_1a;\n\n if (re.test(w)) { w = w.replace(re,\"$1$2\"); }\n else if (re2.test(w)) { w = w.replace(re2,\"$1$2\"); }\n\n // Step 1b\n re = re_1b;\n re2 = re2_1b;\n if (re.test(w)) {\n var fp = re.exec(w);\n re = re_mgr0;\n if (re.test(fp[1])) {\n re = re_1b_2;\n w = w.replace(re,\"\");\n }\n } else if (re2.test(w)) {\n var fp = re2.exec(w);\n stem = fp[1];\n re2 = re_s_v;\n if (re2.test(stem)) {\n w = stem;\n re2 = re2_1b_2;\n re3 = re3_1b_2;\n re4 = re4_1b_2;\n if (re2.test(w)) { w = w + \"e\"; }\n else if (re3.test(w)) { re = re_1b_2; w = w.replace(re,\"\"); }\n else if (re4.test(w)) { w = w + \"e\"; }\n }\n }\n\n // Step 1c - replace suffix y or Y by i if preceded by a non-vowel which is not the first letter of the word (so cry -> cri, by -> by, say -> say)\n re = re_1c;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n w = stem + \"i\";\n }\n\n // Step 2\n re = re_2;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n suffix = fp[2];\n re = re_mgr0;\n if (re.test(stem)) {\n w = stem + step2list[suffix];\n }\n }\n\n // Step 3\n re = re_3;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n suffix = fp[2];\n re = re_mgr0;\n if (re.test(stem)) {\n w = stem + step3list[suffix];\n }\n }\n\n // Step 4\n re = re_4;\n re2 = re2_4;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n re = re_mgr1;\n if (re.test(stem)) {\n w = stem;\n }\n } else if (re2.test(w)) {\n var fp = re2.exec(w);\n stem = fp[1] + fp[2];\n re2 = re_mgr1;\n if (re2.test(stem)) {\n w = stem;\n }\n }\n\n // Step 5\n re = re_5;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n re = re_mgr1;\n re2 = re_meq1;\n re3 = re3_5;\n if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) {\n w = stem;\n }\n }\n\n re = re_5_1;\n re2 = re_mgr1;\n if (re.test(w) && re2.test(w)) {\n re = re_1b_2;\n w = w.replace(re,\"\");\n }\n\n // and turn initial Y back to y\n\n if (firstch == \"y\") {\n w = firstch.toLowerCase() + w.substr(1);\n }\n\n return w;\n };\n\n return function (token) {\n return token.update(porterStemmer);\n }\n})();\n\nlunr.Pipeline.registerFunction(lunr.stemmer, 'stemmer')\n/*!\n * lunr.stopWordFilter\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * lunr.generateStopWordFilter builds a stopWordFilter 
function from the provided\n * list of stop words.\n *\n * The built in lunr.stopWordFilter is built using this generator and can be used\n * to generate custom stopWordFilters for applications or non English languages.\n *\n * @function\n * @param {Array} token The token to pass through the filter\n * @returns {lunr.PipelineFunction}\n * @see lunr.Pipeline\n * @see lunr.stopWordFilter\n */\nlunr.generateStopWordFilter = function (stopWords) {\n var words = stopWords.reduce(function (memo, stopWord) {\n memo[stopWord] = stopWord\n return memo\n }, {})\n\n return function (token) {\n if (token && words[token.toString()] !== token.toString()) return token\n }\n}\n\n/**\n * lunr.stopWordFilter is an English language stop word list filter, any words\n * contained in the list will not be passed through the filter.\n *\n * This is intended to be used in the Pipeline. If the token does not pass the\n * filter then undefined will be returned.\n *\n * @function\n * @implements {lunr.PipelineFunction}\n * @params {lunr.Token} token - A token to check for being a stop word.\n * @returns {lunr.Token}\n * @see {@link lunr.Pipeline}\n */\nlunr.stopWordFilter = lunr.generateStopWordFilter([\n 'a',\n 'able',\n 'about',\n 'across',\n 'after',\n 'all',\n 'almost',\n 'also',\n 'am',\n 'among',\n 'an',\n 'and',\n 'any',\n 'are',\n 'as',\n 'at',\n 'be',\n 'because',\n 'been',\n 'but',\n 'by',\n 'can',\n 'cannot',\n 'could',\n 'dear',\n 'did',\n 'do',\n 'does',\n 'either',\n 'else',\n 'ever',\n 'every',\n 'for',\n 'from',\n 'get',\n 'got',\n 'had',\n 'has',\n 'have',\n 'he',\n 'her',\n 'hers',\n 'him',\n 'his',\n 'how',\n 'however',\n 'i',\n 'if',\n 'in',\n 'into',\n 'is',\n 'it',\n 'its',\n 'just',\n 'least',\n 'let',\n 'like',\n 'likely',\n 'may',\n 'me',\n 'might',\n 'most',\n 'must',\n 'my',\n 'neither',\n 'no',\n 'nor',\n 'not',\n 'of',\n 'off',\n 'often',\n 'on',\n 'only',\n 'or',\n 'other',\n 'our',\n 'own',\n 'rather',\n 'said',\n 'say',\n 'says',\n 'she',\n 'should',\n 'since',\n 'so',\n 'some',\n 'than',\n 'that',\n 'the',\n 'their',\n 'them',\n 'then',\n 'there',\n 'these',\n 'they',\n 'this',\n 'tis',\n 'to',\n 'too',\n 'twas',\n 'us',\n 'wants',\n 'was',\n 'we',\n 'were',\n 'what',\n 'when',\n 'where',\n 'which',\n 'while',\n 'who',\n 'whom',\n 'why',\n 'will',\n 'with',\n 'would',\n 'yet',\n 'you',\n 'your'\n])\n\nlunr.Pipeline.registerFunction(lunr.stopWordFilter, 'stopWordFilter')\n/*!\n * lunr.trimmer\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * lunr.trimmer is a pipeline function for trimming non word\n * characters from the beginning and end of tokens before they\n * enter the index.\n *\n * This implementation may not work correctly for non latin\n * characters and should either be removed or adapted for use\n * with languages with non-latin characters.\n *\n * @static\n * @implements {lunr.PipelineFunction}\n * @param {lunr.Token} token The token to pass through the filter\n * @returns {lunr.Token}\n * @see lunr.Pipeline\n */\nlunr.trimmer = function (token) {\n return token.update(function (s) {\n return s.replace(/^\\W+/, '').replace(/\\W+$/, '')\n })\n}\n\nlunr.Pipeline.registerFunction(lunr.trimmer, 'trimmer')\n/*!\n * lunr.TokenSet\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A token set is used to store the unique list of all tokens\n * within an index. 
Token sets are also used to represent an\n * incoming query to the index, this query token set and index\n * token set are then intersected to find which tokens to look\n * up in the inverted index.\n *\n * A token set can hold multiple tokens, as in the case of the\n * index token set, or it can hold a single token as in the\n * case of a simple query token set.\n *\n * Additionally token sets are used to perform wildcard matching.\n * Leading, contained and trailing wildcards are supported, and\n * from this edit distance matching can also be provided.\n *\n * Token sets are implemented as a minimal finite state automata,\n * where both common prefixes and suffixes are shared between tokens.\n * This helps to reduce the space used for storing the token set.\n *\n * @constructor\n */\nlunr.TokenSet = function () {\n this.final = false\n this.edges = {}\n this.id = lunr.TokenSet._nextId\n lunr.TokenSet._nextId += 1\n}\n\n/**\n * Keeps track of the next, auto increment, identifier to assign\n * to a new tokenSet.\n *\n * TokenSets require a unique identifier to be correctly minimised.\n *\n * @private\n */\nlunr.TokenSet._nextId = 1\n\n/**\n * Creates a TokenSet instance from the given sorted array of words.\n *\n * @param {String[]} arr - A sorted array of strings to create the set from.\n * @returns {lunr.TokenSet}\n * @throws Will throw an error if the input array is not sorted.\n */\nlunr.TokenSet.fromArray = function (arr) {\n var builder = new lunr.TokenSet.Builder\n\n for (var i = 0, len = arr.length; i < len; i++) {\n builder.insert(arr[i])\n }\n\n builder.finish()\n return builder.root\n}\n\n/**\n * Creates a token set from a query clause.\n *\n * @private\n * @param {Object} clause - A single clause from lunr.Query.\n * @param {string} clause.term - The query clause term.\n * @param {number} [clause.editDistance] - The optional edit distance for the term.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.fromClause = function (clause) {\n if ('editDistance' in clause) {\n return lunr.TokenSet.fromFuzzyString(clause.term, clause.editDistance)\n } else {\n return lunr.TokenSet.fromString(clause.term)\n }\n}\n\n/**\n * Creates a token set representing a single string with a specified\n * edit distance.\n *\n * Insertions, deletions, substitutions and transpositions are each\n * treated as an edit distance of 1.\n *\n * Increasing the allowed edit distance will have a dramatic impact\n * on the performance of both creating and intersecting these TokenSets.\n * It is advised to keep the edit distance less than 3.\n *\n * @param {string} str - The string to create the token set from.\n * @param {number} editDistance - The allowed edit distance to match.\n * @returns {lunr.Vector}\n */\nlunr.TokenSet.fromFuzzyString = function (str, editDistance) {\n var root = new lunr.TokenSet\n\n var stack = [{\n node: root,\n editsRemaining: editDistance,\n str: str\n }]\n\n while (stack.length) {\n var frame = stack.pop()\n\n // no edit\n if (frame.str.length > 0) {\n var char = frame.str.charAt(0),\n noEditNode\n\n if (char in frame.node.edges) {\n noEditNode = frame.node.edges[char]\n } else {\n noEditNode = new lunr.TokenSet\n frame.node.edges[char] = noEditNode\n }\n\n if (frame.str.length == 1) {\n noEditNode.final = true\n }\n\n stack.push({\n node: noEditNode,\n editsRemaining: frame.editsRemaining,\n str: frame.str.slice(1)\n })\n }\n\n if (frame.editsRemaining == 0) {\n continue\n }\n\n // insertion\n if (\"*\" in frame.node.edges) {\n var insertionNode = frame.node.edges[\"*\"]\n } else {\n 
var insertionNode = new lunr.TokenSet\n frame.node.edges[\"*\"] = insertionNode\n }\n\n if (frame.str.length == 0) {\n insertionNode.final = true\n }\n\n stack.push({\n node: insertionNode,\n editsRemaining: frame.editsRemaining - 1,\n str: frame.str\n })\n\n // deletion\n // can only do a deletion if we have enough edits remaining\n // and if there are characters left to delete in the string\n if (frame.str.length > 1) {\n stack.push({\n node: frame.node,\n editsRemaining: frame.editsRemaining - 1,\n str: frame.str.slice(1)\n })\n }\n\n // deletion\n // just removing the last character from the str\n if (frame.str.length == 1) {\n frame.node.final = true\n }\n\n // substitution\n // can only do a substitution if we have enough edits remaining\n // and if there are characters left to substitute\n if (frame.str.length >= 1) {\n if (\"*\" in frame.node.edges) {\n var substitutionNode = frame.node.edges[\"*\"]\n } else {\n var substitutionNode = new lunr.TokenSet\n frame.node.edges[\"*\"] = substitutionNode\n }\n\n if (frame.str.length == 1) {\n substitutionNode.final = true\n }\n\n stack.push({\n node: substitutionNode,\n editsRemaining: frame.editsRemaining - 1,\n str: frame.str.slice(1)\n })\n }\n\n // transposition\n // can only do a transposition if there are edits remaining\n // and there are enough characters to transpose\n if (frame.str.length > 1) {\n var charA = frame.str.charAt(0),\n charB = frame.str.charAt(1),\n transposeNode\n\n if (charB in frame.node.edges) {\n transposeNode = frame.node.edges[charB]\n } else {\n transposeNode = new lunr.TokenSet\n frame.node.edges[charB] = transposeNode\n }\n\n if (frame.str.length == 1) {\n transposeNode.final = true\n }\n\n stack.push({\n node: transposeNode,\n editsRemaining: frame.editsRemaining - 1,\n str: charA + frame.str.slice(2)\n })\n }\n }\n\n return root\n}\n\n/**\n * Creates a TokenSet from a string.\n *\n * The string may contain one or more wildcard characters (*)\n * that will allow wildcard matching when intersecting with\n * another TokenSet.\n *\n * @param {string} str - The string to create a TokenSet from.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.fromString = function (str) {\n var node = new lunr.TokenSet,\n root = node\n\n /*\n * Iterates through all characters within the passed string\n * appending a node for each character.\n *\n * When a wildcard character is found then a self\n * referencing edge is introduced to continually match\n * any number of any characters.\n */\n for (var i = 0, len = str.length; i < len; i++) {\n var char = str[i],\n final = (i == len - 1)\n\n if (char == \"*\") {\n node.edges[char] = node\n node.final = final\n\n } else {\n var next = new lunr.TokenSet\n next.final = final\n\n node.edges[char] = next\n node = next\n }\n }\n\n return root\n}\n\n/**\n * Converts this TokenSet into an array of strings\n * contained within the TokenSet.\n *\n * This is not intended to be used on a TokenSet that\n * contains wildcards, in these cases the results are\n * undefined and are likely to cause an infinite loop.\n *\n * @returns {string[]}\n */\nlunr.TokenSet.prototype.toArray = function () {\n var words = []\n\n var stack = [{\n prefix: \"\",\n node: this\n }]\n\n while (stack.length) {\n var frame = stack.pop(),\n edges = Object.keys(frame.node.edges),\n len = edges.length\n\n if (frame.node.final) {\n /* In Safari, at this point the prefix is sometimes corrupted, see:\n * https://github.com/olivernn/lunr.js/issues/279 Calling any\n * String.prototype method forces Safari to \"cast\" this 
string to what\n * it's supposed to be, fixing the bug. */\n frame.prefix.charAt(0)\n words.push(frame.prefix)\n }\n\n for (var i = 0; i < len; i++) {\n var edge = edges[i]\n\n stack.push({\n prefix: frame.prefix.concat(edge),\n node: frame.node.edges[edge]\n })\n }\n }\n\n return words\n}\n\n/**\n * Generates a string representation of a TokenSet.\n *\n * This is intended to allow TokenSets to be used as keys\n * in objects, largely to aid the construction and minimisation\n * of a TokenSet. As such it is not designed to be a human\n * friendly representation of the TokenSet.\n *\n * @returns {string}\n */\nlunr.TokenSet.prototype.toString = function () {\n // NOTE: Using Object.keys here as this.edges is very likely\n // to enter 'hash-mode' with many keys being added\n //\n // avoiding a for-in loop here as it leads to the function\n // being de-optimised (at least in V8). From some simple\n // benchmarks the performance is comparable, but allowing\n // V8 to optimize may mean easy performance wins in the future.\n\n if (this._str) {\n return this._str\n }\n\n var str = this.final ? '1' : '0',\n labels = Object.keys(this.edges).sort(),\n len = labels.length\n\n for (var i = 0; i < len; i++) {\n var label = labels[i],\n node = this.edges[label]\n\n str = str + label + node.id\n }\n\n return str\n}\n\n/**\n * Returns a new TokenSet that is the intersection of\n * this TokenSet and the passed TokenSet.\n *\n * This intersection will take into account any wildcards\n * contained within the TokenSet.\n *\n * @param {lunr.TokenSet} b - An other TokenSet to intersect with.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.prototype.intersect = function (b) {\n var output = new lunr.TokenSet,\n frame = undefined\n\n var stack = [{\n qNode: b,\n output: output,\n node: this\n }]\n\n while (stack.length) {\n frame = stack.pop()\n\n // NOTE: As with the #toString method, we are using\n // Object.keys and a for loop instead of a for-in loop\n // as both of these objects enter 'hash' mode, causing\n // the function to be de-optimised in V8\n var qEdges = Object.keys(frame.qNode.edges),\n qLen = qEdges.length,\n nEdges = Object.keys(frame.node.edges),\n nLen = nEdges.length\n\n for (var q = 0; q < qLen; q++) {\n var qEdge = qEdges[q]\n\n for (var n = 0; n < nLen; n++) {\n var nEdge = nEdges[n]\n\n if (nEdge == qEdge || qEdge == '*') {\n var node = frame.node.edges[nEdge],\n qNode = frame.qNode.edges[qEdge],\n final = node.final && qNode.final,\n next = undefined\n\n if (nEdge in frame.output.edges) {\n // an edge already exists for this character\n // no need to create a new node, just set the finality\n // bit unless this node is already final\n next = frame.output.edges[nEdge]\n next.final = next.final || final\n\n } else {\n // no edge exists yet, must create one\n // set the finality bit and insert it\n // into the output\n next = new lunr.TokenSet\n next.final = final\n frame.output.edges[nEdge] = next\n }\n\n stack.push({\n qNode: qNode,\n output: next,\n node: node\n })\n }\n }\n }\n }\n\n return output\n}\nlunr.TokenSet.Builder = function () {\n this.previousWord = \"\"\n this.root = new lunr.TokenSet\n this.uncheckedNodes = []\n this.minimizedNodes = {}\n}\n\nlunr.TokenSet.Builder.prototype.insert = function (word) {\n var node,\n commonPrefix = 0\n\n if (word < this.previousWord) {\n throw new Error (\"Out of order word insertion\")\n }\n\n for (var i = 0; i < word.length && i < this.previousWord.length; i++) {\n if (word[i] != this.previousWord[i]) break\n commonPrefix++\n }\n\n 
this.minimize(commonPrefix)\n\n if (this.uncheckedNodes.length == 0) {\n node = this.root\n } else {\n node = this.uncheckedNodes[this.uncheckedNodes.length - 1].child\n }\n\n for (var i = commonPrefix; i < word.length; i++) {\n var nextNode = new lunr.TokenSet,\n char = word[i]\n\n node.edges[char] = nextNode\n\n this.uncheckedNodes.push({\n parent: node,\n char: char,\n child: nextNode\n })\n\n node = nextNode\n }\n\n node.final = true\n this.previousWord = word\n}\n\nlunr.TokenSet.Builder.prototype.finish = function () {\n this.minimize(0)\n}\n\nlunr.TokenSet.Builder.prototype.minimize = function (downTo) {\n for (var i = this.uncheckedNodes.length - 1; i >= downTo; i--) {\n var node = this.uncheckedNodes[i],\n childKey = node.child.toString()\n\n if (childKey in this.minimizedNodes) {\n node.parent.edges[node.char] = this.minimizedNodes[childKey]\n } else {\n // Cache the key for this node since\n // we know it can't change anymore\n node.child._str = childKey\n\n this.minimizedNodes[childKey] = node.child\n }\n\n this.uncheckedNodes.pop()\n }\n}\n/*!\n * lunr.Index\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * An index contains the built index of all documents and provides a query interface\n * to the index.\n *\n * Usually instances of lunr.Index will not be created using this constructor, instead\n * lunr.Builder should be used to construct new indexes, or lunr.Index.load should be\n * used to load previously built and serialized indexes.\n *\n * @constructor\n * @param {Object} attrs - The attributes of the built search index.\n * @param {Object} attrs.invertedIndex - An index of term/field to document reference.\n * @param {Object} attrs.fieldVectors - Field vectors\n * @param {lunr.TokenSet} attrs.tokenSet - An set of all corpus tokens.\n * @param {string[]} attrs.fields - The names of indexed document fields.\n * @param {lunr.Pipeline} attrs.pipeline - The pipeline to use for search terms.\n */\nlunr.Index = function (attrs) {\n this.invertedIndex = attrs.invertedIndex\n this.fieldVectors = attrs.fieldVectors\n this.tokenSet = attrs.tokenSet\n this.fields = attrs.fields\n this.pipeline = attrs.pipeline\n}\n\n/**\n * A result contains details of a document matching a search query.\n * @typedef {Object} lunr.Index~Result\n * @property {string} ref - The reference of the document this result represents.\n * @property {number} score - A number between 0 and 1 representing how similar this document is to the query.\n * @property {lunr.MatchData} matchData - Contains metadata about this match including which term(s) caused the match.\n */\n\n/**\n * Although lunr provides the ability to create queries using lunr.Query, it also provides a simple\n * query language which itself is parsed into an instance of lunr.Query.\n *\n * For programmatically building queries it is advised to directly use lunr.Query, the query language\n * is best used for human entered text rather than program generated text.\n *\n * At its simplest queries can just be a single term, e.g. `hello`, multiple terms are also supported\n * and will be combined with OR, e.g `hello world` will match documents that contain either 'hello'\n * or 'world', though those that contain both will rank higher in the results.\n *\n * Wildcards can be included in terms to match one or more unspecified characters, these wildcards can\n * be inserted anywhere within the term, and more than one wildcard can exist in a single term. 
Adding\n * wildcards will increase the number of documents that will be found but can also have a negative\n * impact on query performance, especially with wildcards at the beginning of a term.\n *\n * Terms can be restricted to specific fields, e.g. `title:hello`, only documents with the term\n * hello in the title field will match this query. Using a field not present in the index will lead\n * to an error being thrown.\n *\n * Modifiers can also be added to terms, lunr supports edit distance and boost modifiers on terms. A term\n * boost will make documents matching that term score higher, e.g. `foo^5`. Edit distance is also supported\n * to provide fuzzy matching, e.g. 'hello~2' will match documents with hello with an edit distance of 2.\n * Avoid large values for edit distance to improve query performance.\n *\n * Each term also supports a presence modifier. By default a term's presence in document is optional, however\n * this can be changed to either required or prohibited. For a term's presence to be required in a document the\n * term should be prefixed with a '+', e.g. `+foo bar` is a search for documents that must contain 'foo' and\n * optionally contain 'bar'. Conversely a leading '-' sets the terms presence to prohibited, i.e. it must not\n * appear in a document, e.g. `-foo bar` is a search for documents that do not contain 'foo' but may contain 'bar'.\n *\n * To escape special characters the backslash character '\\' can be used, this allows searches to include\n * characters that would normally be considered modifiers, e.g. `foo\\~2` will search for a term \"foo~2\" instead\n * of attempting to apply a boost of 2 to the search term \"foo\".\n *\n * @typedef {string} lunr.Index~QueryString\n * @example Simple single term query\n * hello\n * @example Multiple term query\n * hello world\n * @example term scoped to a field\n * title:hello\n * @example term with a boost of 10\n * hello^10\n * @example term with an edit distance of 2\n * hello~2\n * @example terms with presence modifiers\n * -foo +bar baz\n */\n\n/**\n * Performs a search against the index using lunr query syntax.\n *\n * Results will be returned sorted by their score, the most relevant results\n * will be returned first. 
For details on how the score is calculated, please see\n * the {@link https://lunrjs.com/guides/searching.html#scoring|guide}.\n *\n * For more programmatic querying use lunr.Index#query.\n *\n * @param {lunr.Index~QueryString} queryString - A string containing a lunr query.\n * @throws {lunr.QueryParseError} If the passed query string cannot be parsed.\n * @returns {lunr.Index~Result[]}\n */\nlunr.Index.prototype.search = function (queryString) {\n return this.query(function (query) {\n var parser = new lunr.QueryParser(queryString, query)\n parser.parse()\n })\n}\n\n/**\n * A query builder callback provides a query object to be used to express\n * the query to perform on the index.\n *\n * @callback lunr.Index~queryBuilder\n * @param {lunr.Query} query - The query object to build up.\n * @this lunr.Query\n */\n\n/**\n * Performs a query against the index using the yielded lunr.Query object.\n *\n * If performing programmatic queries against the index, this method is preferred\n * over lunr.Index#search so as to avoid the additional query parsing overhead.\n *\n * A query object is yielded to the supplied function which should be used to\n * express the query to be run against the index.\n *\n * Note that although this function takes a callback parameter it is _not_ an\n * asynchronous operation, the callback is just yielded a query object to be\n * customized.\n *\n * @param {lunr.Index~queryBuilder} fn - A function that is used to build the query.\n * @returns {lunr.Index~Result[]}\n */\nlunr.Index.prototype.query = function (fn) {\n // for each query clause\n // * process terms\n // * expand terms from token set\n // * find matching documents and metadata\n // * get document vectors\n // * score documents\n\n var query = new lunr.Query(this.fields),\n matchingFields = Object.create(null),\n queryVectors = Object.create(null),\n termFieldCache = Object.create(null),\n requiredMatches = Object.create(null),\n prohibitedMatches = Object.create(null)\n\n /*\n * To support field level boosts a query vector is created per\n * field. An empty vector is eagerly created to support negated\n * queries.\n */\n for (var i = 0; i < this.fields.length; i++) {\n queryVectors[this.fields[i]] = new lunr.Vector\n }\n\n fn.call(query, query)\n\n for (var i = 0; i < query.clauses.length; i++) {\n /*\n * Unless the pipeline has been disabled for this term, which is\n * the case for terms with wildcards, we need to pass the clause\n * term through the search pipeline. A pipeline returns an array\n * of processed terms. Pipeline functions may expand the passed\n * term, which means we may end up performing multiple index lookups\n * for a single query term.\n */\n var clause = query.clauses[i],\n terms = null,\n clauseMatches = lunr.Set.empty\n\n if (clause.usePipeline) {\n terms = this.pipeline.runString(clause.term, {\n fields: clause.fields\n })\n } else {\n terms = [clause.term]\n }\n\n for (var m = 0; m < terms.length; m++) {\n var term = terms[m]\n\n /*\n * Each term returned from the pipeline needs to use the same query\n * clause object, e.g. the same boost and or edit distance. 
The\n * simplest way to do this is to re-use the clause object but mutate\n * its term property.\n */\n clause.term = term\n\n /*\n * From the term in the clause we create a token set which will then\n * be used to intersect the indexes token set to get a list of terms\n * to lookup in the inverted index\n */\n var termTokenSet = lunr.TokenSet.fromClause(clause),\n expandedTerms = this.tokenSet.intersect(termTokenSet).toArray()\n\n /*\n * If a term marked as required does not exist in the tokenSet it is\n * impossible for the search to return any matches. We set all the field\n * scoped required matches set to empty and stop examining any further\n * clauses.\n */\n if (expandedTerms.length === 0 && clause.presence === lunr.Query.presence.REQUIRED) {\n for (var k = 0; k < clause.fields.length; k++) {\n var field = clause.fields[k]\n requiredMatches[field] = lunr.Set.empty\n }\n\n break\n }\n\n for (var j = 0; j < expandedTerms.length; j++) {\n /*\n * For each term get the posting and termIndex, this is required for\n * building the query vector.\n */\n var expandedTerm = expandedTerms[j],\n posting = this.invertedIndex[expandedTerm],\n termIndex = posting._index\n\n for (var k = 0; k < clause.fields.length; k++) {\n /*\n * For each field that this query term is scoped by (by default\n * all fields are in scope) we need to get all the document refs\n * that have this term in that field.\n *\n * The posting is the entry in the invertedIndex for the matching\n * term from above.\n */\n var field = clause.fields[k],\n fieldPosting = posting[field],\n matchingDocumentRefs = Object.keys(fieldPosting),\n termField = expandedTerm + \"/\" + field,\n matchingDocumentsSet = new lunr.Set(matchingDocumentRefs)\n\n /*\n * if the presence of this term is required ensure that the matching\n * documents are added to the set of required matches for this clause.\n *\n */\n if (clause.presence == lunr.Query.presence.REQUIRED) {\n clauseMatches = clauseMatches.union(matchingDocumentsSet)\n\n if (requiredMatches[field] === undefined) {\n requiredMatches[field] = lunr.Set.complete\n }\n }\n\n /*\n * if the presence of this term is prohibited ensure that the matching\n * documents are added to the set of prohibited matches for this field,\n * creating that set if it does not yet exist.\n */\n if (clause.presence == lunr.Query.presence.PROHIBITED) {\n if (prohibitedMatches[field] === undefined) {\n prohibitedMatches[field] = lunr.Set.empty\n }\n\n prohibitedMatches[field] = prohibitedMatches[field].union(matchingDocumentsSet)\n\n /*\n * Prohibited matches should not be part of the query vector used for\n * similarity scoring and no metadata should be extracted so we continue\n * to the next field\n */\n continue\n }\n\n /*\n * The query field vector is populated using the termIndex found for\n * the term and a unit value with the appropriate boost applied.\n * Using upsert because there could already be an entry in the vector\n * for the term we are working with. 
In that case we just add the scores\n * together.\n */\n queryVectors[field].upsert(termIndex, clause.boost, function (a, b) { return a + b })\n\n /**\n * If we've already seen this term, field combo then we've already collected\n * the matching documents and metadata, no need to go through all that again\n */\n if (termFieldCache[termField]) {\n continue\n }\n\n for (var l = 0; l < matchingDocumentRefs.length; l++) {\n /*\n * All metadata for this term/field/document triple\n * are then extracted and collected into an instance\n * of lunr.MatchData ready to be returned in the query\n * results\n */\n var matchingDocumentRef = matchingDocumentRefs[l],\n matchingFieldRef = new lunr.FieldRef (matchingDocumentRef, field),\n metadata = fieldPosting[matchingDocumentRef],\n fieldMatch\n\n if ((fieldMatch = matchingFields[matchingFieldRef]) === undefined) {\n matchingFields[matchingFieldRef] = new lunr.MatchData (expandedTerm, field, metadata)\n } else {\n fieldMatch.add(expandedTerm, field, metadata)\n }\n\n }\n\n termFieldCache[termField] = true\n }\n }\n }\n\n /**\n * If the presence was required we need to update the requiredMatches field sets.\n * We do this after all fields for the term have collected their matches because\n * the clause terms presence is required in _any_ of the fields not _all_ of the\n * fields.\n */\n if (clause.presence === lunr.Query.presence.REQUIRED) {\n for (var k = 0; k < clause.fields.length; k++) {\n var field = clause.fields[k]\n requiredMatches[field] = requiredMatches[field].intersect(clauseMatches)\n }\n }\n }\n\n /**\n * Need to combine the field scoped required and prohibited\n * matching documents into a global set of required and prohibited\n * matches\n */\n var allRequiredMatches = lunr.Set.complete,\n allProhibitedMatches = lunr.Set.empty\n\n for (var i = 0; i < this.fields.length; i++) {\n var field = this.fields[i]\n\n if (requiredMatches[field]) {\n allRequiredMatches = allRequiredMatches.intersect(requiredMatches[field])\n }\n\n if (prohibitedMatches[field]) {\n allProhibitedMatches = allProhibitedMatches.union(prohibitedMatches[field])\n }\n }\n\n var matchingFieldRefs = Object.keys(matchingFields),\n results = [],\n matches = Object.create(null)\n\n /*\n * If the query is negated (contains only prohibited terms)\n * we need to get _all_ fieldRefs currently existing in the\n * index. This is only done when we know that the query is\n * entirely prohibited terms to avoid any cost of getting all\n * fieldRefs unnecessarily.\n *\n * Additionally, blank MatchData must be created to correctly\n * populate the results.\n */\n if (query.isNegated()) {\n matchingFieldRefs = Object.keys(this.fieldVectors)\n\n for (var i = 0; i < matchingFieldRefs.length; i++) {\n var matchingFieldRef = matchingFieldRefs[i]\n var fieldRef = lunr.FieldRef.fromString(matchingFieldRef)\n matchingFields[matchingFieldRef] = new lunr.MatchData\n }\n }\n\n for (var i = 0; i < matchingFieldRefs.length; i++) {\n /*\n * Currently we have document fields that match the query, but we\n * need to return documents. 
The matchData and scores are combined\n * from multiple fields belonging to the same document.\n *\n * Scores are calculated by field, using the query vectors created\n * above, and combined into a final document score using addition.\n */\n var fieldRef = lunr.FieldRef.fromString(matchingFieldRefs[i]),\n docRef = fieldRef.docRef\n\n if (!allRequiredMatches.contains(docRef)) {\n continue\n }\n\n if (allProhibitedMatches.contains(docRef)) {\n continue\n }\n\n var fieldVector = this.fieldVectors[fieldRef],\n score = queryVectors[fieldRef.fieldName].similarity(fieldVector),\n docMatch\n\n if ((docMatch = matches[docRef]) !== undefined) {\n docMatch.score += score\n docMatch.matchData.combine(matchingFields[fieldRef])\n } else {\n var match = {\n ref: docRef,\n score: score,\n matchData: matchingFields[fieldRef]\n }\n matches[docRef] = match\n results.push(match)\n }\n }\n\n /*\n * Sort the results objects by score, highest first.\n */\n return results.sort(function (a, b) {\n return b.score - a.score\n })\n}\n\n/**\n * Prepares the index for JSON serialization.\n *\n * The schema for this JSON blob will be described in a\n * separate JSON schema file.\n *\n * @returns {Object}\n */\nlunr.Index.prototype.toJSON = function () {\n var invertedIndex = Object.keys(this.invertedIndex)\n .sort()\n .map(function (term) {\n return [term, this.invertedIndex[term]]\n }, this)\n\n var fieldVectors = Object.keys(this.fieldVectors)\n .map(function (ref) {\n return [ref, this.fieldVectors[ref].toJSON()]\n }, this)\n\n return {\n version: lunr.version,\n fields: this.fields,\n fieldVectors: fieldVectors,\n invertedIndex: invertedIndex,\n pipeline: this.pipeline.toJSON()\n }\n}\n\n/**\n * Loads a previously serialized lunr.Index\n *\n * @param {Object} serializedIndex - A previously serialized lunr.Index\n * @returns {lunr.Index}\n */\nlunr.Index.load = function (serializedIndex) {\n var attrs = {},\n fieldVectors = {},\n serializedVectors = serializedIndex.fieldVectors,\n invertedIndex = Object.create(null),\n serializedInvertedIndex = serializedIndex.invertedIndex,\n tokenSetBuilder = new lunr.TokenSet.Builder,\n pipeline = lunr.Pipeline.load(serializedIndex.pipeline)\n\n if (serializedIndex.version != lunr.version) {\n lunr.utils.warn(\"Version mismatch when loading serialised index. 
Current version of lunr '\" + lunr.version + \"' does not match serialized index '\" + serializedIndex.version + \"'\")\n }\n\n for (var i = 0; i < serializedVectors.length; i++) {\n var tuple = serializedVectors[i],\n ref = tuple[0],\n elements = tuple[1]\n\n fieldVectors[ref] = new lunr.Vector(elements)\n }\n\n for (var i = 0; i < serializedInvertedIndex.length; i++) {\n var tuple = serializedInvertedIndex[i],\n term = tuple[0],\n posting = tuple[1]\n\n tokenSetBuilder.insert(term)\n invertedIndex[term] = posting\n }\n\n tokenSetBuilder.finish()\n\n attrs.fields = serializedIndex.fields\n\n attrs.fieldVectors = fieldVectors\n attrs.invertedIndex = invertedIndex\n attrs.tokenSet = tokenSetBuilder.root\n attrs.pipeline = pipeline\n\n return new lunr.Index(attrs)\n}\n/*!\n * lunr.Builder\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * lunr.Builder performs indexing on a set of documents and\n * returns instances of lunr.Index ready for querying.\n *\n * All configuration of the index is done via the builder, the\n * fields to index, the document reference, the text processing\n * pipeline and document scoring parameters are all set on the\n * builder before indexing.\n *\n * @constructor\n * @property {string} _ref - Internal reference to the document reference field.\n * @property {string[]} _fields - Internal reference to the document fields to index.\n * @property {object} invertedIndex - The inverted index maps terms to document fields.\n * @property {object} documentTermFrequencies - Keeps track of document term frequencies.\n * @property {object} documentLengths - Keeps track of the length of documents added to the index.\n * @property {lunr.tokenizer} tokenizer - Function for splitting strings into tokens for indexing.\n * @property {lunr.Pipeline} pipeline - The pipeline performs text processing on tokens before indexing.\n * @property {lunr.Pipeline} searchPipeline - A pipeline for processing search terms before querying the index.\n * @property {number} documentCount - Keeps track of the total number of documents indexed.\n * @property {number} _b - A parameter to control field length normalization, setting this to 0 disabled normalization, 1 fully normalizes field lengths, the default value is 0.75.\n * @property {number} _k1 - A parameter to control how quickly an increase in term frequency results in term frequency saturation, the default value is 1.2.\n * @property {number} termIndex - A counter incremented for each unique term, used to identify a terms position in the vector space.\n * @property {array} metadataWhitelist - A list of metadata keys that have been whitelisted for entry in the index.\n */\nlunr.Builder = function () {\n this._ref = \"id\"\n this._fields = Object.create(null)\n this._documents = Object.create(null)\n this.invertedIndex = Object.create(null)\n this.fieldTermFrequencies = {}\n this.fieldLengths = {}\n this.tokenizer = lunr.tokenizer\n this.pipeline = new lunr.Pipeline\n this.searchPipeline = new lunr.Pipeline\n this.documentCount = 0\n this._b = 0.75\n this._k1 = 1.2\n this.termIndex = 0\n this.metadataWhitelist = []\n}\n\n/**\n * Sets the document field used as the document reference. Every document must have this field.\n * The type of this field in the document should be a string, if it is not a string it will be\n * coerced into a string by calling toString.\n *\n * The default ref is 'id'.\n *\n * The ref should _not_ be changed during indexing, it should be set before any documents are\n * added to the index. 
Changing it during indexing can lead to inconsistent results.\n *\n * @param {string} ref - The name of the reference field in the document.\n */\nlunr.Builder.prototype.ref = function (ref) {\n this._ref = ref\n}\n\n/**\n * A function that is used to extract a field from a document.\n *\n * Lunr expects a field to be at the top level of a document, if however the field\n * is deeply nested within a document an extractor function can be used to extract\n * the right field for indexing.\n *\n * @callback fieldExtractor\n * @param {object} doc - The document being added to the index.\n * @returns {?(string|object|object[])} obj - The object that will be indexed for this field.\n * @example Extracting a nested field\n * function (doc) { return doc.nested.field }\n */\n\n/**\n * Adds a field to the list of document fields that will be indexed. Every document being\n * indexed should have this field. Null values for this field in indexed documents will\n * not cause errors but will limit the chance of that document being retrieved by searches.\n *\n * All fields should be added before adding documents to the index. Adding fields after\n * a document has been indexed will have no effect on already indexed documents.\n *\n * Fields can be boosted at build time. This allows terms within that field to have more\n * importance when ranking search results. Use a field boost to specify that matches within\n * one field are more important than other fields.\n *\n * @param {string} fieldName - The name of a field to index in all documents.\n * @param {object} attributes - Optional attributes associated with this field.\n * @param {number} [attributes.boost=1] - Boost applied to all terms within this field.\n * @param {fieldExtractor} [attributes.extractor] - Function to extract a field from a document.\n * @throws {RangeError} fieldName cannot contain unsupported characters '/'\n */\nlunr.Builder.prototype.field = function (fieldName, attributes) {\n if (/\\//.test(fieldName)) {\n throw new RangeError (\"Field '\" + fieldName + \"' contains illegal character '/'\")\n }\n\n this._fields[fieldName] = attributes || {}\n}\n\n/**\n * A parameter to tune the amount of field length normalisation that is applied when\n * calculating relevance scores. A value of 0 will completely disable any normalisation\n * and a value of 1 will fully normalise field lengths. The default is 0.75. Values of b\n * will be clamped to the range 0 - 1.\n *\n * @param {number} number - The value to set for this tuning parameter.\n */\nlunr.Builder.prototype.b = function (number) {\n if (number < 0) {\n this._b = 0\n } else if (number > 1) {\n this._b = 1\n } else {\n this._b = number\n }\n}\n\n/**\n * A parameter that controls the speed at which a rise in term frequency results in term\n * frequency saturation. The default value is 1.2. 
Setting this to a higher value will give\n * slower saturation levels, a lower value will result in quicker saturation.\n *\n * @param {number} number - The value to set for this tuning parameter.\n */\nlunr.Builder.prototype.k1 = function (number) {\n this._k1 = number\n}\n\n/**\n * Adds a document to the index.\n *\n * Before adding fields to the index the index should have been fully setup, with the document\n * ref and all fields to index already having been specified.\n *\n * The document must have a field name as specified by the ref (by default this is 'id') and\n * it should have all fields defined for indexing, though null or undefined values will not\n * cause errors.\n *\n * Entire documents can be boosted at build time. Applying a boost to a document indicates that\n * this document should rank higher in search results than other documents.\n *\n * @param {object} doc - The document to add to the index.\n * @param {object} attributes - Optional attributes associated with this document.\n * @param {number} [attributes.boost=1] - Boost applied to all terms within this document.\n */\nlunr.Builder.prototype.add = function (doc, attributes) {\n var docRef = doc[this._ref],\n fields = Object.keys(this._fields)\n\n this._documents[docRef] = attributes || {}\n this.documentCount += 1\n\n for (var i = 0; i < fields.length; i++) {\n var fieldName = fields[i],\n extractor = this._fields[fieldName].extractor,\n field = extractor ? extractor(doc) : doc[fieldName],\n tokens = this.tokenizer(field, {\n fields: [fieldName]\n }),\n terms = this.pipeline.run(tokens),\n fieldRef = new lunr.FieldRef (docRef, fieldName),\n fieldTerms = Object.create(null)\n\n this.fieldTermFrequencies[fieldRef] = fieldTerms\n this.fieldLengths[fieldRef] = 0\n\n // store the length of this field for this document\n this.fieldLengths[fieldRef] += terms.length\n\n // calculate term frequencies for this field\n for (var j = 0; j < terms.length; j++) {\n var term = terms[j]\n\n if (fieldTerms[term] == undefined) {\n fieldTerms[term] = 0\n }\n\n fieldTerms[term] += 1\n\n // add to inverted index\n // create an initial posting if one doesn't exist\n if (this.invertedIndex[term] == undefined) {\n var posting = Object.create(null)\n posting[\"_index\"] = this.termIndex\n this.termIndex += 1\n\n for (var k = 0; k < fields.length; k++) {\n posting[fields[k]] = Object.create(null)\n }\n\n this.invertedIndex[term] = posting\n }\n\n // add an entry for this term/fieldName/docRef to the invertedIndex\n if (this.invertedIndex[term][fieldName][docRef] == undefined) {\n this.invertedIndex[term][fieldName][docRef] = Object.create(null)\n }\n\n // store all whitelisted metadata about this token in the\n // inverted index\n for (var l = 0; l < this.metadataWhitelist.length; l++) {\n var metadataKey = this.metadataWhitelist[l],\n metadata = term.metadata[metadataKey]\n\n if (this.invertedIndex[term][fieldName][docRef][metadataKey] == undefined) {\n this.invertedIndex[term][fieldName][docRef][metadataKey] = []\n }\n\n this.invertedIndex[term][fieldName][docRef][metadataKey].push(metadata)\n }\n }\n\n }\n}\n\n/**\n * Calculates the average document length for this index\n *\n * @private\n */\nlunr.Builder.prototype.calculateAverageFieldLengths = function () {\n\n var fieldRefs = Object.keys(this.fieldLengths),\n numberOfFields = fieldRefs.length,\n accumulator = {},\n documentsWithField = {}\n\n for (var i = 0; i < numberOfFields; i++) {\n var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]),\n field = fieldRef.fieldName\n\n 
documentsWithField[field] || (documentsWithField[field] = 0)\n documentsWithField[field] += 1\n\n accumulator[field] || (accumulator[field] = 0)\n accumulator[field] += this.fieldLengths[fieldRef]\n }\n\n var fields = Object.keys(this._fields)\n\n for (var i = 0; i < fields.length; i++) {\n var fieldName = fields[i]\n accumulator[fieldName] = accumulator[fieldName] / documentsWithField[fieldName]\n }\n\n this.averageFieldLength = accumulator\n}\n\n/**\n * Builds a vector space model of every document using lunr.Vector\n *\n * @private\n */\nlunr.Builder.prototype.createFieldVectors = function () {\n var fieldVectors = {},\n fieldRefs = Object.keys(this.fieldTermFrequencies),\n fieldRefsLength = fieldRefs.length,\n termIdfCache = Object.create(null)\n\n for (var i = 0; i < fieldRefsLength; i++) {\n var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]),\n fieldName = fieldRef.fieldName,\n fieldLength = this.fieldLengths[fieldRef],\n fieldVector = new lunr.Vector,\n termFrequencies = this.fieldTermFrequencies[fieldRef],\n terms = Object.keys(termFrequencies),\n termsLength = terms.length\n\n\n var fieldBoost = this._fields[fieldName].boost || 1,\n docBoost = this._documents[fieldRef.docRef].boost || 1\n\n for (var j = 0; j < termsLength; j++) {\n var term = terms[j],\n tf = termFrequencies[term],\n termIndex = this.invertedIndex[term]._index,\n idf, score, scoreWithPrecision\n\n if (termIdfCache[term] === undefined) {\n idf = lunr.idf(this.invertedIndex[term], this.documentCount)\n termIdfCache[term] = idf\n } else {\n idf = termIdfCache[term]\n }\n\n score = idf * ((this._k1 + 1) * tf) / (this._k1 * (1 - this._b + this._b * (fieldLength / this.averageFieldLength[fieldName])) + tf)\n score *= fieldBoost\n score *= docBoost\n scoreWithPrecision = Math.round(score * 1000) / 1000\n // Converts 1.23456789 to 1.234.\n // Reducing the precision so that the vectors take up less\n // space when serialised. Doing it now so that they behave\n // the same before and after serialisation. Also, this is\n // the fastest approach to reducing a number's precision in\n // JavaScript.\n\n fieldVector.insert(termIndex, scoreWithPrecision)\n }\n\n fieldVectors[fieldRef] = fieldVector\n }\n\n this.fieldVectors = fieldVectors\n}\n\n/**\n * Creates a token set of all tokens in the index using lunr.TokenSet\n *\n * @private\n */\nlunr.Builder.prototype.createTokenSet = function () {\n this.tokenSet = lunr.TokenSet.fromArray(\n Object.keys(this.invertedIndex).sort()\n )\n}\n\n/**\n * Builds the index, creating an instance of lunr.Index.\n *\n * This completes the indexing process and should only be called\n * once all documents have been added to the index.\n *\n * @returns {lunr.Index}\n */\nlunr.Builder.prototype.build = function () {\n this.calculateAverageFieldLengths()\n this.createFieldVectors()\n this.createTokenSet()\n\n return new lunr.Index({\n invertedIndex: this.invertedIndex,\n fieldVectors: this.fieldVectors,\n tokenSet: this.tokenSet,\n fields: Object.keys(this._fields),\n pipeline: this.searchPipeline\n })\n}\n\n/**\n * Applies a plugin to the index builder.\n *\n * A plugin is a function that is called with the index builder as its context.\n * Plugins can be used to customise or extend the behaviour of the index\n * in some way. A plugin is just a function, that encapsulated the custom\n * behaviour that should be applied when building the index.\n *\n * The plugin function will be called with the index builder as its argument, additional\n * arguments can also be passed when calling use. 
The function will be called\n * with the index builder as its context.\n *\n * @param {Function} plugin The plugin to apply.\n */\nlunr.Builder.prototype.use = function (fn) {\n var args = Array.prototype.slice.call(arguments, 1)\n args.unshift(this)\n fn.apply(this, args)\n}\n/**\n * Contains and collects metadata about a matching document.\n * A single instance of lunr.MatchData is returned as part of every\n * lunr.Index~Result.\n *\n * @constructor\n * @param {string} term - The term this match data is associated with\n * @param {string} field - The field in which the term was found\n * @param {object} metadata - The metadata recorded about this term in this field\n * @property {object} metadata - A cloned collection of metadata associated with this document.\n * @see {@link lunr.Index~Result}\n */\nlunr.MatchData = function (term, field, metadata) {\n var clonedMetadata = Object.create(null),\n metadataKeys = Object.keys(metadata || {})\n\n // Cloning the metadata to prevent the original\n // being mutated during match data combination.\n // Metadata is kept in an array within the inverted\n // index so cloning the data can be done with\n // Array#slice\n for (var i = 0; i < metadataKeys.length; i++) {\n var key = metadataKeys[i]\n clonedMetadata[key] = metadata[key].slice()\n }\n\n this.metadata = Object.create(null)\n\n if (term !== undefined) {\n this.metadata[term] = Object.create(null)\n this.metadata[term][field] = clonedMetadata\n }\n}\n\n/**\n * An instance of lunr.MatchData will be created for every term that matches a\n * document. However only one instance is required in a lunr.Index~Result. This\n * method combines metadata from another instance of lunr.MatchData with this\n * objects metadata.\n *\n * @param {lunr.MatchData} otherMatchData - Another instance of match data to merge with this one.\n * @see {@link lunr.Index~Result}\n */\nlunr.MatchData.prototype.combine = function (otherMatchData) {\n var terms = Object.keys(otherMatchData.metadata)\n\n for (var i = 0; i < terms.length; i++) {\n var term = terms[i],\n fields = Object.keys(otherMatchData.metadata[term])\n\n if (this.metadata[term] == undefined) {\n this.metadata[term] = Object.create(null)\n }\n\n for (var j = 0; j < fields.length; j++) {\n var field = fields[j],\n keys = Object.keys(otherMatchData.metadata[term][field])\n\n if (this.metadata[term][field] == undefined) {\n this.metadata[term][field] = Object.create(null)\n }\n\n for (var k = 0; k < keys.length; k++) {\n var key = keys[k]\n\n if (this.metadata[term][field][key] == undefined) {\n this.metadata[term][field][key] = otherMatchData.metadata[term][field][key]\n } else {\n this.metadata[term][field][key] = this.metadata[term][field][key].concat(otherMatchData.metadata[term][field][key])\n }\n\n }\n }\n }\n}\n\n/**\n * Add metadata for a term/field pair to this instance of match data.\n *\n * @param {string} term - The term this match data is associated with\n * @param {string} field - The field in which the term was found\n * @param {object} metadata - The metadata recorded about this term in this field\n */\nlunr.MatchData.prototype.add = function (term, field, metadata) {\n if (!(term in this.metadata)) {\n this.metadata[term] = Object.create(null)\n this.metadata[term][field] = metadata\n return\n }\n\n if (!(field in this.metadata[term])) {\n this.metadata[term][field] = metadata\n return\n }\n\n var metadataKeys = Object.keys(metadata)\n\n for (var i = 0; i < metadataKeys.length; i++) {\n var key = metadataKeys[i]\n\n if (key in 
this.metadata[term][field]) {\n this.metadata[term][field][key] = this.metadata[term][field][key].concat(metadata[key])\n } else {\n this.metadata[term][field][key] = metadata[key]\n }\n }\n}\n/**\n * A lunr.Query provides a programmatic way of defining queries to be performed\n * against a {@link lunr.Index}.\n *\n * Prefer constructing a lunr.Query using the {@link lunr.Index#query} method\n * so the query object is pre-initialized with the right index fields.\n *\n * @constructor\n * @property {lunr.Query~Clause[]} clauses - An array of query clauses.\n * @property {string[]} allFields - An array of all available fields in a lunr.Index.\n */\nlunr.Query = function (allFields) {\n this.clauses = []\n this.allFields = allFields\n}\n\n/**\n * Constants for indicating what kind of automatic wildcard insertion will be used when constructing a query clause.\n *\n * This allows wildcards to be added to the beginning and end of a term without having to manually do any string\n * concatenation.\n *\n * The wildcard constants can be bitwise combined to select both leading and trailing wildcards.\n *\n * @constant\n * @default\n * @property {number} wildcard.NONE - The term will have no wildcards inserted, this is the default behaviour\n * @property {number} wildcard.LEADING - Prepend the term with a wildcard, unless a leading wildcard already exists\n * @property {number} wildcard.TRAILING - Append a wildcard to the term, unless a trailing wildcard already exists\n * @see lunr.Query~Clause\n * @see lunr.Query#clause\n * @see lunr.Query#term\n * @example query term with trailing wildcard\n * query.term('foo', { wildcard: lunr.Query.wildcard.TRAILING })\n * @example query term with leading and trailing wildcard\n * query.term('foo', {\n * wildcard: lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING\n * })\n */\n\nlunr.Query.wildcard = new String (\"*\")\nlunr.Query.wildcard.NONE = 0\nlunr.Query.wildcard.LEADING = 1\nlunr.Query.wildcard.TRAILING = 2\n\n/**\n * Constants for indicating what kind of presence a term must have in matching documents.\n *\n * @constant\n * @enum {number}\n * @see lunr.Query~Clause\n * @see lunr.Query#clause\n * @see lunr.Query#term\n * @example query term with required presence\n * query.term('foo', { presence: lunr.Query.presence.REQUIRED })\n */\nlunr.Query.presence = {\n /**\n * Term's presence in a document is optional, this is the default value.\n */\n OPTIONAL: 1,\n\n /**\n * Term's presence in a document is required, documents that do not contain\n * this term will not be returned.\n */\n REQUIRED: 2,\n\n /**\n * Term's presence in a document is prohibited, documents that do contain\n * this term will not be returned.\n */\n PROHIBITED: 3\n}\n\n/**\n * A single clause in a {@link lunr.Query} contains a term and details on how to\n * match that term against a {@link lunr.Index}.\n *\n * @typedef {Object} lunr.Query~Clause\n * @property {string[]} fields - The fields in an index this clause should be matched against.\n * @property {number} [boost=1] - Any boost that should be applied when matching this clause.\n * @property {number} [editDistance] - Whether the term should have fuzzy matching applied, and how fuzzy the match should be.\n * @property {boolean} [usePipeline] - Whether the term should be passed through the search pipeline.\n * @property {number} [wildcard=lunr.Query.wildcard.NONE] - Whether the term should have wildcards appended or prepended.\n * @property {number} [presence=lunr.Query.presence.OPTIONAL] - The terms presence in any matching 
documents.\n */\n\n/**\n * Adds a {@link lunr.Query~Clause} to this query.\n *\n * Unless the clause contains the fields to be matched all fields will be matched. In addition\n * a default boost of 1 is applied to the clause.\n *\n * @param {lunr.Query~Clause} clause - The clause to add to this query.\n * @see lunr.Query~Clause\n * @returns {lunr.Query}\n */\nlunr.Query.prototype.clause = function (clause) {\n if (!('fields' in clause)) {\n clause.fields = this.allFields\n }\n\n if (!('boost' in clause)) {\n clause.boost = 1\n }\n\n if (!('usePipeline' in clause)) {\n clause.usePipeline = true\n }\n\n if (!('wildcard' in clause)) {\n clause.wildcard = lunr.Query.wildcard.NONE\n }\n\n if ((clause.wildcard & lunr.Query.wildcard.LEADING) && (clause.term.charAt(0) != lunr.Query.wildcard)) {\n clause.term = \"*\" + clause.term\n }\n\n if ((clause.wildcard & lunr.Query.wildcard.TRAILING) && (clause.term.slice(-1) != lunr.Query.wildcard)) {\n clause.term = \"\" + clause.term + \"*\"\n }\n\n if (!('presence' in clause)) {\n clause.presence = lunr.Query.presence.OPTIONAL\n }\n\n this.clauses.push(clause)\n\n return this\n}\n\n/**\n * A negated query is one in which every clause has a presence of\n * prohibited. These queries require some special processing to return\n * the expected results.\n *\n * @returns boolean\n */\nlunr.Query.prototype.isNegated = function () {\n for (var i = 0; i < this.clauses.length; i++) {\n if (this.clauses[i].presence != lunr.Query.presence.PROHIBITED) {\n return false\n }\n }\n\n return true\n}\n\n/**\n * Adds a term to the current query, under the covers this will create a {@link lunr.Query~Clause}\n * to the list of clauses that make up this query.\n *\n * The term is used as is, i.e. no tokenization will be performed by this method. Instead conversion\n * to a token or token-like string should be done before calling this method.\n *\n * The term will be converted to a string by calling `toString`. 
Multiple terms can be passed as an\n * array, each term in the array will share the same options.\n *\n * @param {object|object[]} term - The term(s) to add to the query.\n * @param {object} [options] - Any additional properties to add to the query clause.\n * @returns {lunr.Query}\n * @see lunr.Query#clause\n * @see lunr.Query~Clause\n * @example adding a single term to a query\n * query.term(\"foo\")\n * @example adding a single term to a query and specifying search fields, term boost and automatic trailing wildcard\n * query.term(\"foo\", {\n * fields: [\"title\"],\n * boost: 10,\n * wildcard: lunr.Query.wildcard.TRAILING\n * })\n * @example using lunr.tokenizer to convert a string to tokens before using them as terms\n * query.term(lunr.tokenizer(\"foo bar\"))\n */\nlunr.Query.prototype.term = function (term, options) {\n if (Array.isArray(term)) {\n term.forEach(function (t) { this.term(t, lunr.utils.clone(options)) }, this)\n return this\n }\n\n var clause = options || {}\n clause.term = term.toString()\n\n this.clause(clause)\n\n return this\n}\nlunr.QueryParseError = function (message, start, end) {\n this.name = \"QueryParseError\"\n this.message = message\n this.start = start\n this.end = end\n}\n\nlunr.QueryParseError.prototype = new Error\nlunr.QueryLexer = function (str) {\n this.lexemes = []\n this.str = str\n this.length = str.length\n this.pos = 0\n this.start = 0\n this.escapeCharPositions = []\n}\n\nlunr.QueryLexer.prototype.run = function () {\n var state = lunr.QueryLexer.lexText\n\n while (state) {\n state = state(this)\n }\n}\n\nlunr.QueryLexer.prototype.sliceString = function () {\n var subSlices = [],\n sliceStart = this.start,\n sliceEnd = this.pos\n\n for (var i = 0; i < this.escapeCharPositions.length; i++) {\n sliceEnd = this.escapeCharPositions[i]\n subSlices.push(this.str.slice(sliceStart, sliceEnd))\n sliceStart = sliceEnd + 1\n }\n\n subSlices.push(this.str.slice(sliceStart, this.pos))\n this.escapeCharPositions.length = 0\n\n return subSlices.join('')\n}\n\nlunr.QueryLexer.prototype.emit = function (type) {\n this.lexemes.push({\n type: type,\n str: this.sliceString(),\n start: this.start,\n end: this.pos\n })\n\n this.start = this.pos\n}\n\nlunr.QueryLexer.prototype.escapeCharacter = function () {\n this.escapeCharPositions.push(this.pos - 1)\n this.pos += 1\n}\n\nlunr.QueryLexer.prototype.next = function () {\n if (this.pos >= this.length) {\n return lunr.QueryLexer.EOS\n }\n\n var char = this.str.charAt(this.pos)\n this.pos += 1\n return char\n}\n\nlunr.QueryLexer.prototype.width = function () {\n return this.pos - this.start\n}\n\nlunr.QueryLexer.prototype.ignore = function () {\n if (this.start == this.pos) {\n this.pos += 1\n }\n\n this.start = this.pos\n}\n\nlunr.QueryLexer.prototype.backup = function () {\n this.pos -= 1\n}\n\nlunr.QueryLexer.prototype.acceptDigitRun = function () {\n var char, charCode\n\n do {\n char = this.next()\n charCode = char.charCodeAt(0)\n } while (charCode > 47 && charCode < 58)\n\n if (char != lunr.QueryLexer.EOS) {\n this.backup()\n }\n}\n\nlunr.QueryLexer.prototype.more = function () {\n return this.pos < this.length\n}\n\nlunr.QueryLexer.EOS = 'EOS'\nlunr.QueryLexer.FIELD = 'FIELD'\nlunr.QueryLexer.TERM = 'TERM'\nlunr.QueryLexer.EDIT_DISTANCE = 'EDIT_DISTANCE'\nlunr.QueryLexer.BOOST = 'BOOST'\nlunr.QueryLexer.PRESENCE = 'PRESENCE'\n\nlunr.QueryLexer.lexField = function (lexer) {\n lexer.backup()\n lexer.emit(lunr.QueryLexer.FIELD)\n lexer.ignore()\n return lunr.QueryLexer.lexText\n}\n\nlunr.QueryLexer.lexTerm = 
function (lexer) {\n if (lexer.width() > 1) {\n lexer.backup()\n lexer.emit(lunr.QueryLexer.TERM)\n }\n\n lexer.ignore()\n\n if (lexer.more()) {\n return lunr.QueryLexer.lexText\n }\n}\n\nlunr.QueryLexer.lexEditDistance = function (lexer) {\n lexer.ignore()\n lexer.acceptDigitRun()\n lexer.emit(lunr.QueryLexer.EDIT_DISTANCE)\n return lunr.QueryLexer.lexText\n}\n\nlunr.QueryLexer.lexBoost = function (lexer) {\n lexer.ignore()\n lexer.acceptDigitRun()\n lexer.emit(lunr.QueryLexer.BOOST)\n return lunr.QueryLexer.lexText\n}\n\nlunr.QueryLexer.lexEOS = function (lexer) {\n if (lexer.width() > 0) {\n lexer.emit(lunr.QueryLexer.TERM)\n }\n}\n\n// This matches the separator used when tokenising fields\n// within a document. These should match otherwise it is\n// not possible to search for some tokens within a document.\n//\n// It is possible for the user to change the separator on the\n// tokenizer so it _might_ clash with any other of the special\n// characters already used within the search string, e.g. :.\n//\n// This means that it is possible to change the separator in\n// such a way that makes some words unsearchable using a search\n// string.\nlunr.QueryLexer.termSeparator = lunr.tokenizer.separator\n\nlunr.QueryLexer.lexText = function (lexer) {\n while (true) {\n var char = lexer.next()\n\n if (char == lunr.QueryLexer.EOS) {\n return lunr.QueryLexer.lexEOS\n }\n\n // Escape character is '\\'\n if (char.charCodeAt(0) == 92) {\n lexer.escapeCharacter()\n continue\n }\n\n if (char == \":\") {\n return lunr.QueryLexer.lexField\n }\n\n if (char == \"~\") {\n lexer.backup()\n if (lexer.width() > 0) {\n lexer.emit(lunr.QueryLexer.TERM)\n }\n return lunr.QueryLexer.lexEditDistance\n }\n\n if (char == \"^\") {\n lexer.backup()\n if (lexer.width() > 0) {\n lexer.emit(lunr.QueryLexer.TERM)\n }\n return lunr.QueryLexer.lexBoost\n }\n\n // \"+\" indicates term presence is required\n // checking for length to ensure that only\n // leading \"+\" are considered\n if (char == \"+\" && lexer.width() === 1) {\n lexer.emit(lunr.QueryLexer.PRESENCE)\n return lunr.QueryLexer.lexText\n }\n\n // \"-\" indicates term presence is prohibited\n // checking for length to ensure that only\n // leading \"-\" are considered\n if (char == \"-\" && lexer.width() === 1) {\n lexer.emit(lunr.QueryLexer.PRESENCE)\n return lunr.QueryLexer.lexText\n }\n\n if (char.match(lunr.QueryLexer.termSeparator)) {\n return lunr.QueryLexer.lexTerm\n }\n }\n}\n\nlunr.QueryParser = function (str, query) {\n this.lexer = new lunr.QueryLexer (str)\n this.query = query\n this.currentClause = {}\n this.lexemeIdx = 0\n}\n\nlunr.QueryParser.prototype.parse = function () {\n this.lexer.run()\n this.lexemes = this.lexer.lexemes\n\n var state = lunr.QueryParser.parseClause\n\n while (state) {\n state = state(this)\n }\n\n return this.query\n}\n\nlunr.QueryParser.prototype.peekLexeme = function () {\n return this.lexemes[this.lexemeIdx]\n}\n\nlunr.QueryParser.prototype.consumeLexeme = function () {\n var lexeme = this.peekLexeme()\n this.lexemeIdx += 1\n return lexeme\n}\n\nlunr.QueryParser.prototype.nextClause = function () {\n var completedClause = this.currentClause\n this.query.clause(completedClause)\n this.currentClause = {}\n}\n\nlunr.QueryParser.parseClause = function (parser) {\n var lexeme = parser.peekLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n switch (lexeme.type) {\n case lunr.QueryLexer.PRESENCE:\n return lunr.QueryParser.parsePresence\n case lunr.QueryLexer.FIELD:\n return lunr.QueryParser.parseField\n case 
lunr.QueryLexer.TERM:\n return lunr.QueryParser.parseTerm\n default:\n var errorMessage = \"expected either a field or a term, found \" + lexeme.type\n\n if (lexeme.str.length >= 1) {\n errorMessage += \" with value '\" + lexeme.str + \"'\"\n }\n\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n}\n\nlunr.QueryParser.parsePresence = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n switch (lexeme.str) {\n case \"-\":\n parser.currentClause.presence = lunr.Query.presence.PROHIBITED\n break\n case \"+\":\n parser.currentClause.presence = lunr.Query.presence.REQUIRED\n break\n default:\n var errorMessage = \"unrecognised presence operator'\" + lexeme.str + \"'\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n var errorMessage = \"expecting term or field, found nothing\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.FIELD:\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.TERM:\n return lunr.QueryParser.parseTerm\n default:\n var errorMessage = \"expecting term or field, found '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseField = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n if (parser.query.allFields.indexOf(lexeme.str) == -1) {\n var possibleFields = parser.query.allFields.map(function (f) { return \"'\" + f + \"'\" }).join(', '),\n errorMessage = \"unrecognised field '\" + lexeme.str + \"', possible fields: \" + possibleFields\n\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n parser.currentClause.fields = [lexeme.str]\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n var errorMessage = \"expecting term, found nothing\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n return lunr.QueryParser.parseTerm\n default:\n var errorMessage = \"expecting term, found '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseTerm = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n parser.currentClause.term = lexeme.str.toLowerCase()\n\n if (lexeme.str.indexOf(\"*\") != -1) {\n parser.currentClause.usePipeline = false\n }\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n parser.nextClause()\n return\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n parser.nextClause()\n return lunr.QueryParser.parseTerm\n case lunr.QueryLexer.FIELD:\n parser.nextClause()\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.EDIT_DISTANCE:\n return lunr.QueryParser.parseEditDistance\n case lunr.QueryLexer.BOOST:\n return lunr.QueryParser.parseBoost\n case lunr.QueryLexer.PRESENCE:\n parser.nextClause()\n return lunr.QueryParser.parsePresence\n default:\n var errorMessage = \"Unexpected lexeme type '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseEditDistance = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n 
return\n }\n\n var editDistance = parseInt(lexeme.str, 10)\n\n if (isNaN(editDistance)) {\n var errorMessage = \"edit distance must be numeric\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n parser.currentClause.editDistance = editDistance\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n parser.nextClause()\n return\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n parser.nextClause()\n return lunr.QueryParser.parseTerm\n case lunr.QueryLexer.FIELD:\n parser.nextClause()\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.EDIT_DISTANCE:\n return lunr.QueryParser.parseEditDistance\n case lunr.QueryLexer.BOOST:\n return lunr.QueryParser.parseBoost\n case lunr.QueryLexer.PRESENCE:\n parser.nextClause()\n return lunr.QueryParser.parsePresence\n default:\n var errorMessage = \"Unexpected lexeme type '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseBoost = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n var boost = parseInt(lexeme.str, 10)\n\n if (isNaN(boost)) {\n var errorMessage = \"boost must be numeric\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n parser.currentClause.boost = boost\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n parser.nextClause()\n return\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n parser.nextClause()\n return lunr.QueryParser.parseTerm\n case lunr.QueryLexer.FIELD:\n parser.nextClause()\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.EDIT_DISTANCE:\n return lunr.QueryParser.parseEditDistance\n case lunr.QueryLexer.BOOST:\n return lunr.QueryParser.parseBoost\n case lunr.QueryLexer.PRESENCE:\n parser.nextClause()\n return lunr.QueryParser.parsePresence\n default:\n var errorMessage = \"Unexpected lexeme type '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\n /**\n * export the module via AMD, CommonJS or as a browser global\n * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js\n */\n ;(function (root, factory) {\n if (typeof define === 'function' && define.amd) {\n // AMD. Register as an anonymous module.\n define(factory)\n } else if (typeof exports === 'object') {\n /**\n * Node. 
Does not work with strict CommonJS, but\n * only CommonJS-like enviroments that support module.exports,\n * like Node.\n */\n module.exports = factory()\n } else {\n // Browser globals (root is window)\n root.lunr = factory()\n }\n }(this, function () {\n /**\n * Just return a value to define the module export.\n * This example returns an object, but the module\n * can return a function as the exported value.\n */\n return lunr\n }))\n})();\n", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A RTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport lunr from \"lunr\"\n\nimport \"~/polyfills\"\n\nimport { Search, SearchIndexConfig } from \"../../_\"\nimport {\n SearchMessage,\n SearchMessageType\n} from \"../message\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Add support for usage with `iframe-worker` polyfill\n *\n * While `importScripts` is synchronous when executed inside of a web worker,\n * it's not possible to provide a synchronous polyfilled implementation. 
The\n * cool thing is that awaiting a non-Promise is a noop, so extending the type\n * definition to return a `Promise` shouldn't break anything.\n *\n * @see https://bit.ly/2PjDnXi - GitHub comment\n */\ndeclare global {\n function importScripts(...urls: string[]): Promise | void\n}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Search index\n */\nlet index: Search\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch (= import) multi-language support through `lunr-languages`\n *\n * This function automatically imports the stemmers necessary to process the\n * languages, which are defined through the search index configuration.\n *\n * If the worker runs inside of an `iframe` (when using `iframe-worker` as\n * a shim), the base URL for the stemmers to be loaded must be determined by\n * searching for the first `script` element with a `src` attribute, which will\n * contain the contents of this script.\n *\n * @param config - Search index configuration\n *\n * @returns Promise resolving with no result\n */\nasync function setupSearchLanguages(\n config: SearchIndexConfig\n): Promise {\n let base = \"../lunr\"\n\n /* Detect `iframe-worker` and fix base URL */\n if (typeof parent !== \"undefined\" && \"IFrameWorker\" in parent) {\n const worker = document.querySelector(\"script[src]\")!\n const [path] = worker.src.split(\"/worker\")\n\n /* Prefix base with path */\n base = base.replace(\"..\", path)\n }\n\n /* Add scripts for languages */\n const scripts = []\n for (const lang of config.lang) {\n switch (lang) {\n\n /* Add segmenter for Japanese */\n case \"ja\":\n scripts.push(`${base}/tinyseg.js`)\n break\n\n /* Add segmenter for Hindi and Thai */\n case \"hi\":\n case \"th\":\n scripts.push(`${base}/wordcut.js`)\n break\n }\n\n /* Add language support */\n if (lang !== \"en\")\n scripts.push(`${base}/min/lunr.${lang}.min.js`)\n }\n\n /* Add multi-language support */\n if (config.lang.length > 1)\n scripts.push(`${base}/min/lunr.multi.min.js`)\n\n /* Load scripts synchronously */\n if (scripts.length)\n await importScripts(\n `${base}/min/lunr.stemmer.support.min.js`,\n ...scripts\n )\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Message handler\n *\n * @param message - Source message\n *\n * @returns Target message\n */\nexport async function handler(\n message: SearchMessage\n): Promise {\n switch (message.type) {\n\n /* Search setup message */\n case SearchMessageType.SETUP:\n await setupSearchLanguages(message.data.config)\n index = new Search(message.data)\n return {\n type: SearchMessageType.READY\n }\n\n /* Search query message */\n case SearchMessageType.QUERY:\n return {\n type: SearchMessageType.RESULT,\n data: index ? 
index.search(message.data) : { items: [] }\n }\n\n /* All other messages */\n default:\n throw new TypeError(\"Invalid message type\")\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Worker\n * ------------------------------------------------------------------------- */\n\n/* @ts-expect-error - expose Lunr.js in global scope, or stemmers won't work */\nself.lunr = lunr\n\n/* Handle messages */\naddEventListener(\"message\", async ev => {\n postMessage(await handler(ev.data))\n})\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Polyfills\n * ------------------------------------------------------------------------- */\n\n/* Polyfill `Object.entries` */\nif (!Object.entries)\n Object.entries = function (obj: object) {\n const data: [string, string][] = []\n for (const key of Object.keys(obj))\n // @ts-expect-error - ignore property access warning\n data.push([key, obj[key]])\n\n /* Return entries */\n return data\n }\n\n/* Polyfill `Object.values` */\nif (!Object.values)\n Object.values = function (obj: object) {\n const data: string[] = []\n for (const key of Object.keys(obj))\n // @ts-expect-error - ignore property access warning\n data.push(obj[key])\n\n /* Return values */\n return data\n }\n\n/* ------------------------------------------------------------------------- */\n\n/* Polyfills for `Element` */\nif (typeof Element !== \"undefined\") {\n\n /* Polyfill `Element.scrollTo` */\n if (!Element.prototype.scrollTo)\n Element.prototype.scrollTo = function (\n x?: ScrollToOptions | number, y?: number\n ): void {\n if (typeof x === \"object\") {\n this.scrollLeft = x.left!\n this.scrollTop = x.top!\n } else {\n this.scrollLeft = x!\n this.scrollTop = y!\n }\n }\n\n /* Polyfill `Element.replaceWith` */\n if (!Element.prototype.replaceWith)\n Element.prototype.replaceWith = function (\n ...nodes: Array\n ): void {\n const parent = this.parentNode\n if (parent) {\n if (nodes.length === 0)\n parent.removeChild(this)\n\n /* Replace children and create text nodes */\n for (let i = nodes.length - 1; i >= 0; i--) {\n let node = nodes[i]\n if (typeof node === \"string\")\n node = document.createTextNode(node)\n else if (node.parentNode)\n node.parentNode.removeChild(node)\n\n /* Replace child or insert before previous sibling */\n if (!i)\n parent.replaceChild(node, this)\n else\n 
parent.insertBefore(this.previousSibling!, node)\n }\n }\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport escapeHTML from \"escape-html\"\n\nimport { SearchIndexDocument } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search document\n */\nexport interface SearchDocument extends SearchIndexDocument {\n parent?: SearchIndexDocument /* Parent article */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search document mapping\n */\nexport type SearchDocumentMap = Map\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search document mapping\n *\n * @param docs - Search index documents\n *\n * @returns Search document map\n */\nexport function setupSearchDocumentMap(\n docs: SearchIndexDocument[]\n): SearchDocumentMap {\n const documents = new Map()\n const parents = new Set()\n for (const doc of docs) {\n const [path, hash] = doc.location.split(\"#\")\n\n /* Extract location, title and tags */\n const location = doc.location\n const title = doc.title\n const tags = doc.tags\n\n /* Escape and cleanup text */\n const text = escapeHTML(doc.text)\n .replace(/\\s+(?=[,.:;!?])/g, \"\")\n .replace(/\\s+/g, \" \")\n\n /* Handle section */\n if (hash) {\n const parent = documents.get(path)!\n\n /* Ignore first section, override article */\n if (!parents.has(parent)) {\n parent.title = doc.title\n parent.text = text\n\n /* Remember that we processed the article */\n parents.add(parent)\n\n /* Add subsequent section */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n parent\n })\n }\n\n /* Add article */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n ...tags && { tags }\n })\n }\n }\n return documents\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit 
persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport escapeHTML from \"escape-html\"\n\nimport { SearchIndexConfig } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search highlight function\n *\n * @param value - Value\n *\n * @returns Highlighted value\n */\nexport type SearchHighlightFn = (value: string) => string\n\n/**\n * Search highlight factory function\n *\n * @param query - Query value\n *\n * @returns Search highlight function\n */\nexport type SearchHighlightFactoryFn = (query: string) => SearchHighlightFn\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search highlighter\n *\n * @param config - Search index configuration\n * @param escape - Whether to escape HTML\n *\n * @returns Search highlight factory function\n */\nexport function setupSearchHighlighter(\n config: SearchIndexConfig, escape: boolean\n): SearchHighlightFactoryFn {\n const separator = new RegExp(config.separator, \"img\")\n const highlight = (_: unknown, data: string, term: string) => {\n return `${data}${term}`\n }\n\n /* Return factory function */\n return (query: string) => {\n query = query\n .replace(/[\\s*+\\-:~^]+/g, \" \")\n .trim()\n\n /* Create search term match expression */\n const match = new RegExp(`(^|${config.separator})(${\n query\n .replace(/[|\\\\{}()[\\]^$+*?.-]/g, \"\\\\$&\")\n .replace(separator, \"|\")\n })`, \"img\")\n\n /* Highlight string value */\n return value => (\n escape\n ? escapeHTML(value)\n : value\n )\n .replace(match, highlight)\n .replace(/<\\/mark>(\\s+)]*>/img, \"$1\")\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search query clause\n */\nexport interface SearchQueryClause {\n presence: lunr.Query.presence /* Clause presence */\n term: string /* Clause term */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search query terms\n */\nexport type SearchQueryTerms = Record\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Parse a search query for analysis\n *\n * @param value - Query value\n *\n * @returns Search query clauses\n */\nexport function parseSearchQuery(\n value: string\n): SearchQueryClause[] {\n const query = new (lunr as any).Query([\"title\", \"text\"])\n const parser = new (lunr as any).QueryParser(value, query)\n\n /* Parse and return query clauses */\n parser.parse()\n return query.clauses\n}\n\n/**\n * Analyze the search query clauses in regard to the search terms found\n *\n * @param query - Search query clauses\n * @param terms - Search terms\n *\n * @returns Search query terms\n */\nexport function getSearchQueryTerms(\n query: SearchQueryClause[], terms: string[]\n): SearchQueryTerms {\n const clauses = new Set(query)\n\n /* Match query clauses against terms */\n const result: SearchQueryTerms = {}\n for (let t = 0; t < terms.length; t++)\n for (const clause of clauses)\n if (terms[t].startsWith(clause.term)) {\n result[clause.term] = true\n clauses.delete(clause)\n }\n\n /* Annotate unmatched non-stopword query clauses */\n for (const clause of clauses)\n if (lunr.stopWordFilter?.(clause.term as any))\n result[clause.term] = false\n\n /* Return query terms */\n return result\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n SearchDocument,\n SearchDocumentMap,\n setupSearchDocumentMap\n} from \"../document\"\nimport {\n SearchHighlightFactoryFn,\n setupSearchHighlighter\n} from \"../highlighter\"\nimport { SearchOptions } from \"../options\"\nimport {\n SearchQueryTerms,\n getSearchQueryTerms,\n parseSearchQuery\n} from \"../query\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search index configuration\n */\nexport interface SearchIndexConfig {\n lang: string[] /* Search languages */\n separator: string /* Search separator */\n}\n\n/**\n * Search index document\n */\nexport interface SearchIndexDocument {\n location: string /* Document location */\n title: string /* Document title */\n text: string /* Document text */\n tags?: string[] /* Document tags */\n boost?: number /* Document boost */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search index\n *\n * This interfaces describes the format of the `search_index.json` file which\n * is automatically built by the MkDocs search plugin.\n */\nexport interface SearchIndex {\n config: SearchIndexConfig /* Search index configuration */\n docs: SearchIndexDocument[] /* Search index documents */\n options: SearchOptions /* Search options */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search metadata\n */\nexport interface SearchMetadata {\n score: number /* Score (relevance) */\n terms: SearchQueryTerms /* Search query terms */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search result document\n */\nexport type SearchResultDocument = SearchDocument & SearchMetadata\n\n/**\n * Search result item\n */\nexport type SearchResultItem = SearchResultDocument[]\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search result\n */\nexport interface SearchResult {\n items: SearchResultItem[] /* Search result items */\n suggestions?: string[] /* Search suggestions */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Compute the difference of two lists of strings\n *\n * @param a - 1st list of strings\n * @param b - 2nd list of strings\n *\n * @returns Difference\n */\nfunction difference(a: string[], b: string[]): string[] {\n const [x, y] = [new Set(a), new Set(b)]\n return [\n ...new Set([...x].filter(value => !y.has(value)))\n ]\n}\n\n/* ----------------------------------------------------------------------------\n * Class\n * ------------------------------------------------------------------------- */\n\n/**\n * Search index\n */\nexport class Search {\n\n /**\n * Search document mapping\n *\n * A mapping of URLs (including hash fragments) to the actual articles and\n * sections of the documentation. 
The search document mapping must be created\n * regardless of whether the index was prebuilt or not, as Lunr.js itself\n * only stores the actual index.\n */\n protected documents: SearchDocumentMap\n\n /**\n * Search highlight factory function\n */\n protected highlight: SearchHighlightFactoryFn\n\n /**\n * The underlying Lunr.js search index\n */\n protected index: lunr.Index\n\n /**\n * Search options\n */\n protected options: SearchOptions\n\n /**\n * Create the search integration\n *\n * @param data - Search index\n */\n public constructor({ config, docs, options }: SearchIndex) {\n this.options = options\n\n /* Set up document map and highlighter factory */\n this.documents = setupSearchDocumentMap(docs)\n this.highlight = setupSearchHighlighter(config, false)\n\n /* Set separator for tokenizer */\n lunr.tokenizer.separator = new RegExp(config.separator)\n\n /* Create search index */\n this.index = lunr(function () {\n\n /* Set up multi-language support */\n if (config.lang.length === 1 && config.lang[0] !== \"en\") {\n this.use((lunr as any)[config.lang[0]])\n } else if (config.lang.length > 1) {\n this.use((lunr as any).multiLanguage(...config.lang))\n }\n\n /* Compute functions to be removed from the pipeline */\n const fns = difference([\n \"trimmer\", \"stopWordFilter\", \"stemmer\"\n ], options.pipeline)\n\n /* Remove functions from the pipeline for registered languages */\n for (const lang of config.lang.map(language => (\n language === \"en\" ? lunr : (lunr as any)[language]\n ))) {\n for (const fn of fns) {\n this.pipeline.remove(lang[fn])\n this.searchPipeline.remove(lang[fn])\n }\n }\n\n /* Set up reference */\n this.ref(\"location\")\n\n /* Set up fields */\n this.field(\"title\", { boost: 1e3 })\n this.field(\"text\")\n this.field(\"tags\", { boost: 1e6, extractor: doc => {\n const { tags = [] } = doc as SearchDocument\n return tags.reduce((list, tag) => [\n ...list,\n ...lunr.tokenizer(tag)\n ], [] as lunr.Token[])\n } })\n\n /* Index documents */\n for (const doc of docs)\n this.add(doc, { boost: doc.boost })\n })\n }\n\n /**\n * Search for matching documents\n *\n * The search index which MkDocs provides is divided up into articles, which\n * contain the whole content of the individual pages, and sections, which only\n * contain the contents of the subsections obtained by breaking the individual\n * pages up at `h1` ... `h6`. As there may be many sections on different pages\n * with identical titles (for example within this very project, e.g. \"Usage\"\n * or \"Installation\"), they need to be put into the context of the containing\n * page. 
For this reason, section results are grouped within their respective\n * articles which are the top-level results that are returned.\n *\n * @param query - Query value\n *\n * @returns Search results\n */\n public search(query: string): SearchResult {\n if (query) {\n try {\n const highlight = this.highlight(query)\n\n /* Parse query to extract clauses for analysis */\n const clauses = parseSearchQuery(query)\n .filter(clause => (\n clause.presence !== lunr.Query.presence.PROHIBITED\n ))\n\n /* Perform search and post-process results */\n const groups = this.index.search(`${query}*`)\n\n /* Apply post-query boosts based on title and search query terms */\n .reduce((item, { ref, score, matchData }) => {\n const document = this.documents.get(ref)\n if (typeof document !== \"undefined\") {\n const { location, title, text, tags, parent } = document\n\n /* Compute and analyze search query terms */\n const terms = getSearchQueryTerms(\n clauses,\n Object.keys(matchData.metadata)\n )\n\n /* Highlight title and text and apply post-query boosts */\n const boost = +!parent + +Object.values(terms).every(t => t)\n item.push({\n location,\n title: highlight(title),\n text: highlight(text),\n ...tags && { tags: tags.map(highlight) },\n score: score * (1 + boost),\n terms\n })\n }\n return item\n }, [])\n\n /* Sort search results again after applying boosts */\n .sort((a, b) => b.score - a.score)\n\n /* Group search results by page */\n .reduce((items, result) => {\n const document = this.documents.get(result.location)\n if (typeof document !== \"undefined\") {\n const ref = \"parent\" in document\n ? document.parent!.location\n : document.location\n items.set(ref, [...items.get(ref) || [], result])\n }\n return items\n }, new Map())\n\n /* Generate search suggestions, if desired */\n let suggestions: string[] | undefined\n if (this.options.suggestions) {\n const titles = this.index.query(builder => {\n for (const clause of clauses)\n builder.term(clause.term, {\n fields: [\"title\"],\n presence: lunr.Query.presence.REQUIRED,\n wildcard: lunr.Query.wildcard.TRAILING\n })\n })\n\n /* Retrieve suggestions for best match */\n suggestions = titles.length\n ? 
Object.keys(titles[0].matchData.metadata)\n : []\n }\n\n /* Return items and suggestions */\n return {\n items: [...groups.values()],\n ...typeof suggestions !== \"undefined\" && { suggestions }\n }\n\n /* Log errors to console (for now) */\n } catch {\n console.warn(`Invalid query: ${query} \u2013 see https://bit.ly/2s3ChXG`)\n }\n }\n\n /* Return nothing in case of error or empty query */\n return { items: [] }\n }\n}\n"], + "mappings": "glCAAA,IAAAA,GAAAC,EAAA,CAAAC,GAAAC,KAAA;AAAA;AAAA;AAAA;AAAA,IAME,UAAU,CAiCZ,IAAIC,EAAO,SAAUC,EAAQ,CAC3B,IAAIC,EAAU,IAAIF,EAAK,QAEvB,OAAAE,EAAQ,SAAS,IACfF,EAAK,QACLA,EAAK,eACLA,EAAK,OACP,EAEAE,EAAQ,eAAe,IACrBF,EAAK,OACP,EAEAC,EAAO,KAAKC,EAASA,CAAO,EACrBA,EAAQ,MAAM,CACvB,EAEAF,EAAK,QAAU,QACf;AAAA;AAAA;AAAA,GASAA,EAAK,MAAQ,CAAC,EASdA,EAAK,MAAM,KAAQ,SAAUG,EAAQ,CAEnC,OAAO,SAAUC,EAAS,CACpBD,EAAO,SAAW,QAAQ,MAC5B,QAAQ,KAAKC,CAAO,CAExB,CAEF,EAAG,IAAI,EAaPJ,EAAK,MAAM,SAAW,SAAUK,EAAK,CACnC,OAAsBA,GAAQ,KACrB,GAEAA,EAAI,SAAS,CAExB,EAkBAL,EAAK,MAAM,MAAQ,SAAUK,EAAK,CAChC,GAAIA,GAAQ,KACV,OAAOA,EAMT,QAHIC,EAAQ,OAAO,OAAO,IAAI,EAC1BC,EAAO,OAAO,KAAKF,CAAG,EAEjB,EAAI,EAAG,EAAIE,EAAK,OAAQ,IAAK,CACpC,IAAIC,EAAMD,EAAK,GACXE,EAAMJ,EAAIG,GAEd,GAAI,MAAM,QAAQC,CAAG,EAAG,CACtBH,EAAME,GAAOC,EAAI,MAAM,EACvB,QACF,CAEA,GAAI,OAAOA,GAAQ,UACf,OAAOA,GAAQ,UACf,OAAOA,GAAQ,UAAW,CAC5BH,EAAME,GAAOC,EACb,QACF,CAEA,MAAM,IAAI,UAAU,uDAAuD,CAC7E,CAEA,OAAOH,CACT,EACAN,EAAK,SAAW,SAAUU,EAAQC,EAAWC,EAAa,CACxD,KAAK,OAASF,EACd,KAAK,UAAYC,EACjB,KAAK,aAAeC,CACtB,EAEAZ,EAAK,SAAS,OAAS,IAEvBA,EAAK,SAAS,WAAa,SAAUa,EAAG,CACtC,IAAIC,EAAID,EAAE,QAAQb,EAAK,SAAS,MAAM,EAEtC,GAAIc,IAAM,GACR,KAAM,6BAGR,IAAIC,EAAWF,EAAE,MAAM,EAAGC,CAAC,EACvBJ,EAASG,EAAE,MAAMC,EAAI,CAAC,EAE1B,OAAO,IAAId,EAAK,SAAUU,EAAQK,EAAUF,CAAC,CAC/C,EAEAb,EAAK,SAAS,UAAU,SAAW,UAAY,CAC7C,OAAI,KAAK,cAAgB,OACvB,KAAK,aAAe,KAAK,UAAYA,EAAK,SAAS,OAAS,KAAK,QAG5D,KAAK,YACd,EACA;AAAA;AAAA;AAAA,GAUAA,EAAK,IAAM,SAAUgB,EAAU,CAG7B,GAFA,KAAK,SAAW,OAAO,OAAO,IAAI,EAE9BA,EAAU,CACZ,KAAK,OAASA,EAAS,OAEvB,QAASC,EAAI,EAAGA,EAAI,KAAK,OAAQA,IAC/B,KAAK,SAASD,EAASC,IAAM,EAEjC,MACE,KAAK,OAAS,CAElB,EASAjB,EAAK,IAAI,SAAW,CAClB,UAAW,SAAUkB,EAAO,CAC1B,OAAOA,CACT,EAEA,MAAO,UAAY,CACjB,OAAO,IACT,EAEA,SAAU,UAAY,CACpB,MAAO,EACT,CACF,EASAlB,EAAK,IAAI,MAAQ,CACf,UAAW,UAAY,CACrB,OAAO,IACT,EAEA,MAAO,SAAUkB,EAAO,CACtB,OAAOA,CACT,EAEA,SAAU,UAAY,CACpB,MAAO,EACT,CACF,EAQAlB,EAAK,IAAI,UAAU,SAAW,SAAUmB,EAAQ,CAC9C,MAAO,CAAC,CAAC,KAAK,SAASA,EACzB,EAUAnB,EAAK,IAAI,UAAU,UAAY,SAAUkB,EAAO,CAC9C,IAAIE,EAAGC,EAAGL,EAAUM,EAAe,CAAC,EAEpC,GAAIJ,IAAUlB,EAAK,IAAI,SACrB,OAAO,KAGT,GAAIkB,IAAUlB,EAAK,IAAI,MACrB,OAAOkB,EAGL,KAAK,OAASA,EAAM,QACtBE,EAAI,KACJC,EAAIH,IAEJE,EAAIF,EACJG,EAAI,MAGNL,EAAW,OAAO,KAAKI,EAAE,QAAQ,EAEjC,QAASH,EAAI,EAAGA,EAAID,EAAS,OAAQC,IAAK,CACxC,IAAIM,EAAUP,EAASC,GACnBM,KAAWF,EAAE,UACfC,EAAa,KAAKC,CAAO,CAE7B,CAEA,OAAO,IAAIvB,EAAK,IAAKsB,CAAY,CACnC,EASAtB,EAAK,IAAI,UAAU,MAAQ,SAAUkB,EAAO,CAC1C,OAAIA,IAAUlB,EAAK,IAAI,SACdA,EAAK,IAAI,SAGdkB,IAAUlB,EAAK,IAAI,MACd,KAGF,IAAIA,EAAK,IAAI,OAAO,KAAK,KAAK,QAAQ,EAAE,OAAO,OAAO,KAAKkB,EAAM,QAAQ,CAAC,CAAC,CACpF,EASAlB,EAAK,IAAM,SAAUwB,EAASC,EAAe,CAC3C,IAAIC,EAAoB,EAExB,QAASf,KAAaa,EAChBb,GAAa,WACjBe,GAAqB,OAAO,KAAKF,EAAQb,EAAU,EAAE,QAGvD,IAAIgB,GAAKF,EAAgBC,EAAoB,KAAQA,EAAoB,IAEzE,OAAO,KAAK,IAAI,EAAI,KAAK,IAAIC,CAAC,CAAC,CACjC,EAUA3B,EAAK,MAAQ,SAAU4B,EAAKC,EAAU,CACpC,KAAK,IAAMD,GAAO,GAClB,KAAK,SAAWC,GAAY,CAAC,CAC/B,EAOA7B,EAAK,MAAM,UAAU,SAAW,UAAY,CAC1C,OAAO,KAAK,GACd,EAsBAA,EAAK,MAAM,UAAU,OAAS,SAAU8B,EAAI,CAC1C,YAAK,IAAMA,EAAG,KAAK,IAAK,KAAK,QAAQ,EAC9B,IACT,EASA9B,EAAK,MAAM,UAAU,MAAQ,SAAU8B,EAAI,CACzC,OAAAA,EAAKA,GAAM,SAAUjB,EAAG,CAAE,OAAOA,CAAE,EAC5B,IAAIb,E
AAK,MAAO8B,EAAG,KAAK,IAAK,KAAK,QAAQ,EAAG,KAAK,QAAQ,CACnE,EACA;AAAA;AAAA;AAAA,GAuBA9B,EAAK,UAAY,SAAUK,EAAKwB,EAAU,CACxC,GAAIxB,GAAO,MAAQA,GAAO,KACxB,MAAO,CAAC,EAGV,GAAI,MAAM,QAAQA,CAAG,EACnB,OAAOA,EAAI,IAAI,SAAU0B,EAAG,CAC1B,OAAO,IAAI/B,EAAK,MACdA,EAAK,MAAM,SAAS+B,CAAC,EAAE,YAAY,EACnC/B,EAAK,MAAM,MAAM6B,CAAQ,CAC3B,CACF,CAAC,EAOH,QAJID,EAAMvB,EAAI,SAAS,EAAE,YAAY,EACjC2B,EAAMJ,EAAI,OACVK,EAAS,CAAC,EAELC,EAAW,EAAGC,EAAa,EAAGD,GAAYF,EAAKE,IAAY,CAClE,IAAIE,EAAOR,EAAI,OAAOM,CAAQ,EAC1BG,EAAcH,EAAWC,EAE7B,GAAKC,EAAK,MAAMpC,EAAK,UAAU,SAAS,GAAKkC,GAAYF,EAAM,CAE7D,GAAIK,EAAc,EAAG,CACnB,IAAIC,EAAgBtC,EAAK,MAAM,MAAM6B,CAAQ,GAAK,CAAC,EACnDS,EAAc,SAAc,CAACH,EAAYE,CAAW,EACpDC,EAAc,MAAWL,EAAO,OAEhCA,EAAO,KACL,IAAIjC,EAAK,MACP4B,EAAI,MAAMO,EAAYD,CAAQ,EAC9BI,CACF,CACF,CACF,CAEAH,EAAaD,EAAW,CAC1B,CAEF,CAEA,OAAOD,CACT,EASAjC,EAAK,UAAU,UAAY,UAC3B;AAAA;AAAA;AAAA,GAkCAA,EAAK,SAAW,UAAY,CAC1B,KAAK,OAAS,CAAC,CACjB,EAEAA,EAAK,SAAS,oBAAsB,OAAO,OAAO,IAAI,EAmCtDA,EAAK,SAAS,iBAAmB,SAAU8B,EAAIS,EAAO,CAChDA,KAAS,KAAK,qBAChBvC,EAAK,MAAM,KAAK,6CAA+CuC,CAAK,EAGtET,EAAG,MAAQS,EACXvC,EAAK,SAAS,oBAAoB8B,EAAG,OAASA,CAChD,EAQA9B,EAAK,SAAS,4BAA8B,SAAU8B,EAAI,CACxD,IAAIU,EAAeV,EAAG,OAAUA,EAAG,SAAS,KAAK,oBAE5CU,GACHxC,EAAK,MAAM,KAAK;AAAA,EAAmG8B,CAAE,CAEzH,EAYA9B,EAAK,SAAS,KAAO,SAAUyC,EAAY,CACzC,IAAIC,EAAW,IAAI1C,EAAK,SAExB,OAAAyC,EAAW,QAAQ,SAAUE,EAAQ,CACnC,IAAIb,EAAK9B,EAAK,SAAS,oBAAoB2C,GAE3C,GAAIb,EACFY,EAAS,IAAIZ,CAAE,MAEf,OAAM,IAAI,MAAM,sCAAwCa,CAAM,CAElE,CAAC,EAEMD,CACT,EASA1C,EAAK,SAAS,UAAU,IAAM,UAAY,CACxC,IAAI4C,EAAM,MAAM,UAAU,MAAM,KAAK,SAAS,EAE9CA,EAAI,QAAQ,SAAUd,EAAI,CACxB9B,EAAK,SAAS,4BAA4B8B,CAAE,EAC5C,KAAK,OAAO,KAAKA,CAAE,CACrB,EAAG,IAAI,CACT,EAWA9B,EAAK,SAAS,UAAU,MAAQ,SAAU6C,EAAYC,EAAO,CAC3D9C,EAAK,SAAS,4BAA4B8C,CAAK,EAE/C,IAAIC,EAAM,KAAK,OAAO,QAAQF,CAAU,EACxC,GAAIE,GAAO,GACT,MAAM,IAAI,MAAM,wBAAwB,EAG1CA,EAAMA,EAAM,EACZ,KAAK,OAAO,OAAOA,EAAK,EAAGD,CAAK,CAClC,EAWA9C,EAAK,SAAS,UAAU,OAAS,SAAU6C,EAAYC,EAAO,CAC5D9C,EAAK,SAAS,4BAA4B8C,CAAK,EAE/C,IAAIC,EAAM,KAAK,OAAO,QAAQF,CAAU,EACxC,GAAIE,GAAO,GACT,MAAM,IAAI,MAAM,wBAAwB,EAG1C,KAAK,OAAO,OAAOA,EAAK,EAAGD,CAAK,CAClC,EAOA9C,EAAK,SAAS,UAAU,OAAS,SAAU8B,EAAI,CAC7C,IAAIiB,EAAM,KAAK,OAAO,QAAQjB,CAAE,EAC5BiB,GAAO,IAIX,KAAK,OAAO,OAAOA,EAAK,CAAC,CAC3B,EASA/C,EAAK,SAAS,UAAU,IAAM,SAAUiC,EAAQ,CAG9C,QAFIe,EAAc,KAAK,OAAO,OAErB/B,EAAI,EAAGA,EAAI+B,EAAa/B,IAAK,CAIpC,QAHIa,EAAK,KAAK,OAAOb,GACjBgC,EAAO,CAAC,EAEHC,EAAI,EAAGA,EAAIjB,EAAO,OAAQiB,IAAK,CACtC,IAAIC,EAASrB,EAAGG,EAAOiB,GAAIA,EAAGjB,CAAM,EAEpC,GAAI,EAAAkB,GAAW,MAA6BA,IAAW,IAEvD,GAAI,MAAM,QAAQA,CAAM,EACtB,QAASC,EAAI,EAAGA,EAAID,EAAO,OAAQC,IACjCH,EAAK,KAAKE,EAAOC,EAAE,OAGrBH,EAAK,KAAKE,CAAM,CAEpB,CAEAlB,EAASgB,CACX,CAEA,OAAOhB,CACT,EAYAjC,EAAK,SAAS,UAAU,UAAY,SAAU4B,EAAKC,EAAU,CAC3D,IAAIwB,EAAQ,IAAIrD,EAAK,MAAO4B,EAAKC,CAAQ,EAEzC,OAAO,KAAK,IAAI,CAACwB,CAAK,CAAC,EAAE,IAAI,SAAUtB,EAAG,CACxC,OAAOA,EAAE,SAAS,CACpB,CAAC,CACH,EAMA/B,EAAK,SAAS,UAAU,MAAQ,UAAY,CAC1C,KAAK,OAAS,CAAC,CACjB,EASAA,EAAK,SAAS,UAAU,OAAS,UAAY,CAC3C,OAAO,KAAK,OAAO,IAAI,SAAU8B,EAAI,CACnC,OAAA9B,EAAK,SAAS,4BAA4B8B,CAAE,EAErCA,EAAG,KACZ,CAAC,CACH,EACA;AAAA;AAAA;AAAA,GAqBA9B,EAAK,OAAS,SAAUgB,EAAU,CAChC,KAAK,WAAa,EAClB,KAAK,SAAWA,GAAY,CAAC,CAC/B,EAaAhB,EAAK,OAAO,UAAU,iBAAmB,SAAUsD,EAAO,CAExD,GAAI,KAAK,SAAS,QAAU,EAC1B,MAAO,GAST,QANIC,EAAQ,EACRC,EAAM,KAAK,SAAS,OAAS,EAC7BnB,EAAcmB,EAAMD,EACpBE,EAAa,KAAK,MAAMpB,EAAc,CAAC,EACvCqB,EAAa,KAAK,SAASD,EAAa,GAErCpB,EAAc,IACfqB,EAAaJ,IACfC,EAAQE,GAGNC,EAAaJ,IACfE,EAAMC,GAGJC,GAAcJ,IAIlBjB,EAAcmB,EAAMD,EACpBE,EAAaF,EAAQ,KAAK,MAAMlB,EAAc,CAAC,EAC/CqB,EAAa,KAAK,SAASD,EAAa,GAO1C,GAJIC,GAAcJ,GAIdI,EAAaJ,EACf,OAAOG,EAAa,EAGtB,GAAIC,EAAaJ,EACf,OAAQG,EAA
a,GAAK,CAE9B,EAWAzD,EAAK,OAAO,UAAU,OAAS,SAAU2D,EAAWlD,EAAK,CACvD,KAAK,OAAOkD,EAAWlD,EAAK,UAAY,CACtC,KAAM,iBACR,CAAC,CACH,EAUAT,EAAK,OAAO,UAAU,OAAS,SAAU2D,EAAWlD,EAAKqB,EAAI,CAC3D,KAAK,WAAa,EAClB,IAAI8B,EAAW,KAAK,iBAAiBD,CAAS,EAE1C,KAAK,SAASC,IAAaD,EAC7B,KAAK,SAASC,EAAW,GAAK9B,EAAG,KAAK,SAAS8B,EAAW,GAAInD,CAAG,EAEjE,KAAK,SAAS,OAAOmD,EAAU,EAAGD,EAAWlD,CAAG,CAEpD,EAOAT,EAAK,OAAO,UAAU,UAAY,UAAY,CAC5C,GAAI,KAAK,WAAY,OAAO,KAAK,WAKjC,QAHI6D,EAAe,EACfC,EAAiB,KAAK,SAAS,OAE1B7C,EAAI,EAAGA,EAAI6C,EAAgB7C,GAAK,EAAG,CAC1C,IAAIR,EAAM,KAAK,SAASQ,GACxB4C,GAAgBpD,EAAMA,CACxB,CAEA,OAAO,KAAK,WAAa,KAAK,KAAKoD,CAAY,CACjD,EAQA7D,EAAK,OAAO,UAAU,IAAM,SAAU+D,EAAa,CAOjD,QANIC,EAAa,EACb5C,EAAI,KAAK,SAAUC,EAAI0C,EAAY,SACnCE,EAAO7C,EAAE,OAAQ8C,EAAO7C,EAAE,OAC1B8C,EAAO,EAAGC,EAAO,EACjBnD,EAAI,EAAGiC,EAAI,EAERjC,EAAIgD,GAAQf,EAAIgB,GACrBC,EAAO/C,EAAEH,GAAImD,EAAO/C,EAAE6B,GAClBiB,EAAOC,EACTnD,GAAK,EACIkD,EAAOC,EAChBlB,GAAK,EACIiB,GAAQC,IACjBJ,GAAc5C,EAAEH,EAAI,GAAKI,EAAE6B,EAAI,GAC/BjC,GAAK,EACLiC,GAAK,GAIT,OAAOc,CACT,EASAhE,EAAK,OAAO,UAAU,WAAa,SAAU+D,EAAa,CACxD,OAAO,KAAK,IAAIA,CAAW,EAAI,KAAK,UAAU,GAAK,CACrD,EAOA/D,EAAK,OAAO,UAAU,QAAU,UAAY,CAG1C,QAFIqE,EAAS,IAAI,MAAO,KAAK,SAAS,OAAS,CAAC,EAEvCpD,EAAI,EAAGiC,EAAI,EAAGjC,EAAI,KAAK,SAAS,OAAQA,GAAK,EAAGiC,IACvDmB,EAAOnB,GAAK,KAAK,SAASjC,GAG5B,OAAOoD,CACT,EAOArE,EAAK,OAAO,UAAU,OAAS,UAAY,CACzC,OAAO,KAAK,QACd,EAEA;AAAA;AAAA;AAAA;AAAA,GAiBAA,EAAK,QAAW,UAAU,CACxB,IAAIsE,EAAY,CACZ,QAAY,MACZ,OAAW,OACX,KAAS,OACT,KAAS,OACT,KAAS,MACT,IAAQ,MACR,KAAS,KACT,MAAU,MACV,IAAQ,IACR,MAAU,MACV,QAAY,MACZ,MAAU,MACV,KAAS,MACT,MAAU,KACV,QAAY,MACZ,QAAY,MACZ,QAAY,MACZ,MAAU,KACV,MAAU,MACV,OAAW,MACX,KAAS,KACX,EAEAC,EAAY,CACV,MAAU,KACV,MAAU,GACV,MAAU,KACV,MAAU,KACV,KAAS,KACT,IAAQ,GACR,KAAS,EACX,EAEAC,EAAI,WACJC,EAAI,WACJC,EAAIF,EAAI,aACRG,EAAIF,EAAI,WAERG,EAAO,KAAOF,EAAI,KAAOC,EAAID,EAC7BG,EAAO,KAAOH,EAAI,KAAOC,EAAID,EAAI,IAAMC,EAAI,MAC3CG,EAAO,KAAOJ,EAAI,KAAOC,EAAID,EAAIC,EAAID,EACrCK,EAAM,KAAOL,EAAI,KAAOD,EAEtBO,EAAU,IAAI,OAAOJ,CAAI,EACzBK,EAAU,IAAI,OAAOH,CAAI,EACzBI,EAAU,IAAI,OAAOL,CAAI,EACzBM,EAAS,IAAI,OAAOJ,CAAG,EAEvBK,EAAQ,kBACRC,EAAS,iBACTC,EAAQ,aACRC,EAAS,kBACTC,EAAU,KACVC,EAAW,cACXC,EAAW,IAAI,OAAO,oBAAoB,EAC1CC,EAAW,IAAI,OAAO,IAAMjB,EAAID,EAAI,cAAc,EAElDmB,EAAQ,mBACRC,EAAO,2IAEPC,EAAO,iDAEPC,EAAO,sFACPC,EAAQ,oBAERC,EAAO,WACPC,EAAS,MACTC,EAAQ,IAAI,OAAO,IAAMzB,EAAID,EAAI,cAAc,EAE/C2B,EAAgB,SAAuBC,EAAG,CAC5C,IAAIC,EACFC,EACAC,EACAC,EACAC,EACAC,EACAC,EAEF,GAAIP,EAAE,OAAS,EAAK,OAAOA,EAiB3B,GAfAG,EAAUH,EAAE,OAAO,EAAE,CAAC,EAClBG,GAAW,MACbH,EAAIG,EAAQ,YAAY,EAAIH,EAAE,OAAO,CAAC,GAIxCI,EAAKrB,EACLsB,EAAMrB,EAEFoB,EAAG,KAAKJ,CAAC,EAAKA,EAAIA,EAAE,QAAQI,EAAG,MAAM,EAChCC,EAAI,KAAKL,CAAC,IAAKA,EAAIA,EAAE,QAAQK,EAAI,MAAM,GAGhDD,EAAKnB,EACLoB,EAAMnB,EACFkB,EAAG,KAAKJ,CAAC,EAAG,CACd,IAAIQ,EAAKJ,EAAG,KAAKJ,CAAC,EAClBI,EAAKzB,EACDyB,EAAG,KAAKI,EAAG,EAAE,IACfJ,EAAKjB,EACLa,EAAIA,EAAE,QAAQI,EAAG,EAAE,EAEvB,SAAWC,EAAI,KAAKL,CAAC,EAAG,CACtB,IAAIQ,EAAKH,EAAI,KAAKL,CAAC,EACnBC,EAAOO,EAAG,GACVH,EAAMvB,EACFuB,EAAI,KAAKJ,CAAI,IACfD,EAAIC,EACJI,EAAMjB,EACNkB,EAAMjB,EACNkB,EAAMjB,EACFe,EAAI,KAAKL,CAAC,EAAKA,EAAIA,EAAI,IAClBM,EAAI,KAAKN,CAAC,GAAKI,EAAKjB,EAASa,EAAIA,EAAE,QAAQI,EAAG,EAAE,GAChDG,EAAI,KAAKP,CAAC,IAAKA,EAAIA,EAAI,KAEpC,CAIA,GADAI,EAAKb,EACDa,EAAG,KAAKJ,CAAC,EAAG,CACd,IAAIQ,EAAKJ,EAAG,KAAKJ,CAAC,EAClBC,EAAOO,EAAG,GACVR,EAAIC,EAAO,GACb,CAIA,GADAG,EAAKZ,EACDY,EAAG,KAAKJ,CAAC,EAAG,CACd,IAAIQ,EAAKJ,EAAG,KAAKJ,CAAC,EAClBC,EAAOO,EAAG,GACVN,EAASM,EAAG,GACZJ,EAAKzB,EACDyB,EAAG,KAAKH,CAAI,IACdD,EAAIC,EAAOhC,EAAUiC,GAEzB,CAIA,GADAE,EAAKX,EACDW,EAAG,KAAKJ,CAAC,EAAG,CACd,IAAIQ,EAAKJ,EAAG,KAAKJ,CAAC,EAClBC,EAAOO,EAAG,GACVN,EAA
SM,EAAG,GACZJ,EAAKzB,EACDyB,EAAG,KAAKH,CAAI,IACdD,EAAIC,EAAO/B,EAAUgC,GAEzB,CAKA,GAFAE,EAAKV,EACLW,EAAMV,EACFS,EAAG,KAAKJ,CAAC,EAAG,CACd,IAAIQ,EAAKJ,EAAG,KAAKJ,CAAC,EAClBC,EAAOO,EAAG,GACVJ,EAAKxB,EACDwB,EAAG,KAAKH,CAAI,IACdD,EAAIC,EAER,SAAWI,EAAI,KAAKL,CAAC,EAAG,CACtB,IAAIQ,EAAKH,EAAI,KAAKL,CAAC,EACnBC,EAAOO,EAAG,GAAKA,EAAG,GAClBH,EAAMzB,EACFyB,EAAI,KAAKJ,CAAI,IACfD,EAAIC,EAER,CAIA,GADAG,EAAKR,EACDQ,EAAG,KAAKJ,CAAC,EAAG,CACd,IAAIQ,EAAKJ,EAAG,KAAKJ,CAAC,EAClBC,EAAOO,EAAG,GACVJ,EAAKxB,EACLyB,EAAMxB,EACNyB,EAAMR,GACFM,EAAG,KAAKH,CAAI,GAAMI,EAAI,KAAKJ,CAAI,GAAK,CAAEK,EAAI,KAAKL,CAAI,KACrDD,EAAIC,EAER,CAEA,OAAAG,EAAKP,EACLQ,EAAMzB,EACFwB,EAAG,KAAKJ,CAAC,GAAKK,EAAI,KAAKL,CAAC,IAC1BI,EAAKjB,EACLa,EAAIA,EAAE,QAAQI,EAAG,EAAE,GAKjBD,GAAW,MACbH,EAAIG,EAAQ,YAAY,EAAIH,EAAE,OAAO,CAAC,GAGjCA,CACT,EAEA,OAAO,SAAUhD,EAAO,CACtB,OAAOA,EAAM,OAAO+C,CAAa,CACnC,CACF,EAAG,EAEHpG,EAAK,SAAS,iBAAiBA,EAAK,QAAS,SAAS,EACtD;AAAA;AAAA;AAAA,GAkBAA,EAAK,uBAAyB,SAAU8G,EAAW,CACjD,IAAIC,EAAQD,EAAU,OAAO,SAAU7D,EAAM+D,EAAU,CACrD,OAAA/D,EAAK+D,GAAYA,EACV/D,CACT,EAAG,CAAC,CAAC,EAEL,OAAO,SAAUI,EAAO,CACtB,GAAIA,GAAS0D,EAAM1D,EAAM,SAAS,KAAOA,EAAM,SAAS,EAAG,OAAOA,CACpE,CACF,EAeArD,EAAK,eAAiBA,EAAK,uBAAuB,CAChD,IACA,OACA,QACA,SACA,QACA,MACA,SACA,OACA,KACA,QACA,KACA,MACA,MACA,MACA,KACA,KACA,KACA,UACA,OACA,MACA,KACA,MACA,SACA,QACA,OACA,MACA,KACA,OACA,SACA,OACA,OACA,QACA,MACA,OACA,MACA,MACA,MACA,MACA,OACA,KACA,MACA,OACA,MACA,MACA,MACA,UACA,IACA,KACA,KACA,OACA,KACA,KACA,MACA,OACA,QACA,MACA,OACA,SACA,MACA,KACA,QACA,OACA,OACA,KACA,UACA,KACA,MACA,MACA,KACA,MACA,QACA,KACA,OACA,KACA,QACA,MACA,MACA,SACA,OACA,MACA,OACA,MACA,SACA,QACA,KACA,OACA,OACA,OACA,MACA,QACA,OACA,OACA,QACA,QACA,OACA,OACA,MACA,KACA,MACA,OACA,KACA,QACA,MACA,KACA,OACA,OACA,OACA,QACA,QACA,QACA,MACA,OACA,MACA,OACA,OACA,QACA,MACA,MACA,MACF,CAAC,EAEDA,EAAK,SAAS,iBAAiBA,EAAK,eAAgB,gBAAgB,EACpE;AAAA;AAAA;AAAA,GAoBAA,EAAK,QAAU,SAAUqD,EAAO,CAC9B,OAAOA,EAAM,OAAO,SAAUxC,EAAG,CAC/B,OAAOA,EAAE,QAAQ,OAAQ,EAAE,EAAE,QAAQ,OAAQ,EAAE,CACjD,CAAC,CACH,EAEAb,EAAK,SAAS,iBAAiBA,EAAK,QAAS,SAAS,EACtD;AAAA;AAAA;AAAA,GA0BAA,EAAK,SAAW,UAAY,CAC1B,KAAK,MAAQ,GACb,KAAK,MAAQ,CAAC,EACd,KAAK,GAAKA,EAAK,SAAS,QACxBA,EAAK,SAAS,SAAW,CAC3B,EAUAA,EAAK,SAAS,QAAU,EASxBA,EAAK,SAAS,UAAY,SAAUiH,EAAK,CAGvC,QAFI/G,EAAU,IAAIF,EAAK,SAAS,QAEvBiB,EAAI,EAAGe,EAAMiF,EAAI,OAAQhG,EAAIe,EAAKf,IACzCf,EAAQ,OAAO+G,EAAIhG,EAAE,EAGvB,OAAAf,EAAQ,OAAO,EACRA,EAAQ,IACjB,EAWAF,EAAK,SAAS,WAAa,SAAUkH,EAAQ,CAC3C,MAAI,iBAAkBA,EACblH,EAAK,SAAS,gBAAgBkH,EAAO,KAAMA,EAAO,YAAY,EAE9DlH,EAAK,SAAS,WAAWkH,EAAO,IAAI,CAE/C,EAiBAlH,EAAK,SAAS,gBAAkB,SAAU4B,EAAKuF,EAAc,CAS3D,QARIC,EAAO,IAAIpH,EAAK,SAEhBqH,EAAQ,CAAC,CACX,KAAMD,EACN,eAAgBD,EAChB,IAAKvF,CACP,CAAC,EAEMyF,EAAM,QAAQ,CACnB,IAAIC,EAAQD,EAAM,IAAI,EAGtB,GAAIC,EAAM,IAAI,OAAS,EAAG,CACxB,IAAIlF,EAAOkF,EAAM,IAAI,OAAO,CAAC,EACzBC,EAEAnF,KAAQkF,EAAM,KAAK,MACrBC,EAAaD,EAAM,KAAK,MAAMlF,IAE9BmF,EAAa,IAAIvH,EAAK,SACtBsH,EAAM,KAAK,MAAMlF,GAAQmF,GAGvBD,EAAM,IAAI,QAAU,IACtBC,EAAW,MAAQ,IAGrBF,EAAM,KAAK,CACT,KAAME,EACN,eAAgBD,EAAM,eACtB,IAAKA,EAAM,IAAI,MAAM,CAAC,CACxB,CAAC,CACH,CAEA,GAAIA,EAAM,gBAAkB,EAK5B,IAAI,MAAOA,EAAM,KAAK,MACpB,IAAIE,EAAgBF,EAAM,KAAK,MAAM,SAChC,CACL,IAAIE,EAAgB,IAAIxH,EAAK,SAC7BsH,EAAM,KAAK,MAAM,KAAOE,CAC1B,CAgCA,GA9BIF,EAAM,IAAI,QAAU,IACtBE,EAAc,MAAQ,IAGxBH,EAAM,KAAK,CACT,KAAMG,EACN,eAAgBF,EAAM,eAAiB,EACvC,IAAKA,EAAM,GACb,CAAC,EAKGA,EAAM,IAAI,OAAS,GACrBD,EAAM,KAAK,CACT,KAAMC,EAAM,KACZ,eAAgBA,EAAM,eAAiB,EACvC,IAAKA,EAAM,IAAI,MAAM,CAAC,CACxB,CAAC,EAKCA,EAAM,IAAI,QAAU,IACtBA,EAAM,KAAK,MAAQ,IAMjBA,EAAM,IAAI,QAAU,EAAG,CACzB,GAAI,MAAOA,EAAM,KAAK,MACpB,IAAIG,EAAmBH,EAAM,KAAK,MAAM,SACnC,CACL,IAAIG,EAAmB,I
AAIzH,EAAK,SAChCsH,EAAM,KAAK,MAAM,KAAOG,CAC1B,CAEIH,EAAM,IAAI,QAAU,IACtBG,EAAiB,MAAQ,IAG3BJ,EAAM,KAAK,CACT,KAAMI,EACN,eAAgBH,EAAM,eAAiB,EACvC,IAAKA,EAAM,IAAI,MAAM,CAAC,CACxB,CAAC,CACH,CAKA,GAAIA,EAAM,IAAI,OAAS,EAAG,CACxB,IAAII,EAAQJ,EAAM,IAAI,OAAO,CAAC,EAC1BK,EAAQL,EAAM,IAAI,OAAO,CAAC,EAC1BM,EAEAD,KAASL,EAAM,KAAK,MACtBM,EAAgBN,EAAM,KAAK,MAAMK,IAEjCC,EAAgB,IAAI5H,EAAK,SACzBsH,EAAM,KAAK,MAAMK,GAASC,GAGxBN,EAAM,IAAI,QAAU,IACtBM,EAAc,MAAQ,IAGxBP,EAAM,KAAK,CACT,KAAMO,EACN,eAAgBN,EAAM,eAAiB,EACvC,IAAKI,EAAQJ,EAAM,IAAI,MAAM,CAAC,CAChC,CAAC,CACH,EACF,CAEA,OAAOF,CACT,EAYApH,EAAK,SAAS,WAAa,SAAU4B,EAAK,CAYxC,QAXIiG,EAAO,IAAI7H,EAAK,SAChBoH,EAAOS,EAUF,EAAI,EAAG7F,EAAMJ,EAAI,OAAQ,EAAII,EAAK,IAAK,CAC9C,IAAII,EAAOR,EAAI,GACXkG,EAAS,GAAK9F,EAAM,EAExB,GAAII,GAAQ,IACVyF,EAAK,MAAMzF,GAAQyF,EACnBA,EAAK,MAAQC,MAER,CACL,IAAIC,EAAO,IAAI/H,EAAK,SACpB+H,EAAK,MAAQD,EAEbD,EAAK,MAAMzF,GAAQ2F,EACnBF,EAAOE,CACT,CACF,CAEA,OAAOX,CACT,EAYApH,EAAK,SAAS,UAAU,QAAU,UAAY,CAQ5C,QAPI+G,EAAQ,CAAC,EAETM,EAAQ,CAAC,CACX,OAAQ,GACR,KAAM,IACR,CAAC,EAEMA,EAAM,QAAQ,CACnB,IAAIC,EAAQD,EAAM,IAAI,EAClBW,EAAQ,OAAO,KAAKV,EAAM,KAAK,KAAK,EACpCtF,EAAMgG,EAAM,OAEZV,EAAM,KAAK,QAKbA,EAAM,OAAO,OAAO,CAAC,EACrBP,EAAM,KAAKO,EAAM,MAAM,GAGzB,QAASrG,EAAI,EAAGA,EAAIe,EAAKf,IAAK,CAC5B,IAAIgH,EAAOD,EAAM/G,GAEjBoG,EAAM,KAAK,CACT,OAAQC,EAAM,OAAO,OAAOW,CAAI,EAChC,KAAMX,EAAM,KAAK,MAAMW,EACzB,CAAC,CACH,CACF,CAEA,OAAOlB,CACT,EAYA/G,EAAK,SAAS,UAAU,SAAW,UAAY,CAS7C,GAAI,KAAK,KACP,OAAO,KAAK,KAOd,QAJI4B,EAAM,KAAK,MAAQ,IAAM,IACzBsG,EAAS,OAAO,KAAK,KAAK,KAAK,EAAE,KAAK,EACtClG,EAAMkG,EAAO,OAER,EAAI,EAAG,EAAIlG,EAAK,IAAK,CAC5B,IAAIO,EAAQ2F,EAAO,GACfL,EAAO,KAAK,MAAMtF,GAEtBX,EAAMA,EAAMW,EAAQsF,EAAK,EAC3B,CAEA,OAAOjG,CACT,EAYA5B,EAAK,SAAS,UAAU,UAAY,SAAUqB,EAAG,CAU/C,QATIgD,EAAS,IAAIrE,EAAK,SAClBsH,EAAQ,OAERD,EAAQ,CAAC,CACX,MAAOhG,EACP,OAAQgD,EACR,KAAM,IACR,CAAC,EAEMgD,EAAM,QAAQ,CACnBC,EAAQD,EAAM,IAAI,EAWlB,QALIc,EAAS,OAAO,KAAKb,EAAM,MAAM,KAAK,EACtCc,EAAOD,EAAO,OACdE,EAAS,OAAO,KAAKf,EAAM,KAAK,KAAK,EACrCgB,EAAOD,EAAO,OAETE,EAAI,EAAGA,EAAIH,EAAMG,IAGxB,QAFIC,EAAQL,EAAOI,GAEVzH,EAAI,EAAGA,EAAIwH,EAAMxH,IAAK,CAC7B,IAAI2H,EAAQJ,EAAOvH,GAEnB,GAAI2H,GAASD,GAASA,GAAS,IAAK,CAClC,IAAIX,EAAOP,EAAM,KAAK,MAAMmB,GACxBC,EAAQpB,EAAM,MAAM,MAAMkB,GAC1BV,EAAQD,EAAK,OAASa,EAAM,MAC5BX,EAAO,OAEPU,KAASnB,EAAM,OAAO,OAIxBS,EAAOT,EAAM,OAAO,MAAMmB,GAC1BV,EAAK,MAAQA,EAAK,OAASD,IAM3BC,EAAO,IAAI/H,EAAK,SAChB+H,EAAK,MAAQD,EACbR,EAAM,OAAO,MAAMmB,GAASV,GAG9BV,EAAM,KAAK,CACT,MAAOqB,EACP,OAAQX,EACR,KAAMF,CACR,CAAC,CACH,CACF,CAEJ,CAEA,OAAOxD,CACT,EACArE,EAAK,SAAS,QAAU,UAAY,CAClC,KAAK,aAAe,GACpB,KAAK,KAAO,IAAIA,EAAK,SACrB,KAAK,eAAiB,CAAC,EACvB,KAAK,eAAiB,CAAC,CACzB,EAEAA,EAAK,SAAS,QAAQ,UAAU,OAAS,SAAU2I,EAAM,CACvD,IAAId,EACAe,EAAe,EAEnB,GAAID,EAAO,KAAK,aACd,MAAM,IAAI,MAAO,6BAA6B,EAGhD,QAAS,EAAI,EAAG,EAAIA,EAAK,QAAU,EAAI,KAAK,aAAa,QACnDA,EAAK,IAAM,KAAK,aAAa,GAD8B,IAE/DC,IAGF,KAAK,SAASA,CAAY,EAEtB,KAAK,eAAe,QAAU,EAChCf,EAAO,KAAK,KAEZA,EAAO,KAAK,eAAe,KAAK,eAAe,OAAS,GAAG,MAG7D,QAAS,EAAIe,EAAc,EAAID,EAAK,OAAQ,IAAK,CAC/C,IAAIE,EAAW,IAAI7I,EAAK,SACpBoC,EAAOuG,EAAK,GAEhBd,EAAK,MAAMzF,GAAQyG,EAEnB,KAAK,eAAe,KAAK,CACvB,OAAQhB,EACR,KAAMzF,EACN,MAAOyG,CACT,CAAC,EAEDhB,EAAOgB,CACT,CAEAhB,EAAK,MAAQ,GACb,KAAK,aAAec,CACtB,EAEA3I,EAAK,SAAS,QAAQ,UAAU,OAAS,UAAY,CACnD,KAAK,SAAS,CAAC,CACjB,EAEAA,EAAK,SAAS,QAAQ,UAAU,SAAW,SAAU8I,EAAQ,CAC3D,QAAS7H,EAAI,KAAK,eAAe,OAAS,EAAGA,GAAK6H,EAAQ7H,IAAK,CAC7D,IAAI4G,EAAO,KAAK,eAAe5G,GAC3B8H,EAAWlB,EAAK,MAAM,SAAS,EAE/BkB,KAAY,KAAK,eACnBlB,EAAK,OAAO,MAAMA,EAAK,MAAQ,KAAK,eAAekB,IAInDlB,EAAK,MAAM,KAAOkB,EAElB,KAAK,eAAeA,GAAYlB,EAAK,OAGvC,KAAK,eAAe,IAAI,CAC1B,CACF,EACA;AAAA;AAAA;AAAA,GAqBA7H,EAAK,MAAQ,SAAU
gJ,EAAO,CAC5B,KAAK,cAAgBA,EAAM,cAC3B,KAAK,aAAeA,EAAM,aAC1B,KAAK,SAAWA,EAAM,SACtB,KAAK,OAASA,EAAM,OACpB,KAAK,SAAWA,EAAM,QACxB,EAyEAhJ,EAAK,MAAM,UAAU,OAAS,SAAUiJ,EAAa,CACnD,OAAO,KAAK,MAAM,SAAUC,EAAO,CACjC,IAAIC,EAAS,IAAInJ,EAAK,YAAYiJ,EAAaC,CAAK,EACpDC,EAAO,MAAM,CACf,CAAC,CACH,EA2BAnJ,EAAK,MAAM,UAAU,MAAQ,SAAU8B,EAAI,CAoBzC,QAZIoH,EAAQ,IAAIlJ,EAAK,MAAM,KAAK,MAAM,EAClCoJ,EAAiB,OAAO,OAAO,IAAI,EACnCC,EAAe,OAAO,OAAO,IAAI,EACjCC,EAAiB,OAAO,OAAO,IAAI,EACnCC,EAAkB,OAAO,OAAO,IAAI,EACpCC,EAAoB,OAAO,OAAO,IAAI,EAOjCvI,EAAI,EAAGA,EAAI,KAAK,OAAO,OAAQA,IACtCoI,EAAa,KAAK,OAAOpI,IAAM,IAAIjB,EAAK,OAG1C8B,EAAG,KAAKoH,EAAOA,CAAK,EAEpB,QAASjI,EAAI,EAAGA,EAAIiI,EAAM,QAAQ,OAAQjI,IAAK,CAS7C,IAAIiG,EAASgC,EAAM,QAAQjI,GACvBwI,EAAQ,KACRC,EAAgB1J,EAAK,IAAI,MAEzBkH,EAAO,YACTuC,EAAQ,KAAK,SAAS,UAAUvC,EAAO,KAAM,CAC3C,OAAQA,EAAO,MACjB,CAAC,EAEDuC,EAAQ,CAACvC,EAAO,IAAI,EAGtB,QAASyC,EAAI,EAAGA,EAAIF,EAAM,OAAQE,IAAK,CACrC,IAAIC,EAAOH,EAAME,GAQjBzC,EAAO,KAAO0C,EAOd,IAAIC,EAAe7J,EAAK,SAAS,WAAWkH,CAAM,EAC9C4C,EAAgB,KAAK,SAAS,UAAUD,CAAY,EAAE,QAAQ,EAQlE,GAAIC,EAAc,SAAW,GAAK5C,EAAO,WAAalH,EAAK,MAAM,SAAS,SAAU,CAClF,QAASoD,EAAI,EAAGA,EAAI8D,EAAO,OAAO,OAAQ9D,IAAK,CAC7C,IAAI2G,EAAQ7C,EAAO,OAAO9D,GAC1BmG,EAAgBQ,GAAS/J,EAAK,IAAI,KACpC,CAEA,KACF,CAEA,QAASkD,EAAI,EAAGA,EAAI4G,EAAc,OAAQ5G,IASxC,QAJI8G,EAAeF,EAAc5G,GAC7B1B,EAAU,KAAK,cAAcwI,GAC7BC,EAAYzI,EAAQ,OAEf4B,EAAI,EAAGA,EAAI8D,EAAO,OAAO,OAAQ9D,IAAK,CAS7C,IAAI2G,EAAQ7C,EAAO,OAAO9D,GACtB8G,EAAe1I,EAAQuI,GACvBI,EAAuB,OAAO,KAAKD,CAAY,EAC/CE,EAAYJ,EAAe,IAAMD,EACjCM,EAAuB,IAAIrK,EAAK,IAAImK,CAAoB,EAoB5D,GAbIjD,EAAO,UAAYlH,EAAK,MAAM,SAAS,WACzC0J,EAAgBA,EAAc,MAAMW,CAAoB,EAEpDd,EAAgBQ,KAAW,SAC7BR,EAAgBQ,GAAS/J,EAAK,IAAI,WASlCkH,EAAO,UAAYlH,EAAK,MAAM,SAAS,WAAY,CACjDwJ,EAAkBO,KAAW,SAC/BP,EAAkBO,GAAS/J,EAAK,IAAI,OAGtCwJ,EAAkBO,GAASP,EAAkBO,GAAO,MAAMM,CAAoB,EAO9E,QACF,CAeA,GANAhB,EAAaU,GAAO,OAAOE,EAAW/C,EAAO,MAAO,SAAU9F,GAAGC,GAAG,CAAE,OAAOD,GAAIC,EAAE,CAAC,EAMhF,CAAAiI,EAAec,GAInB,SAASE,EAAI,EAAGA,EAAIH,EAAqB,OAAQG,IAAK,CAOpD,IAAIC,EAAsBJ,EAAqBG,GAC3CE,EAAmB,IAAIxK,EAAK,SAAUuK,EAAqBR,CAAK,EAChElI,EAAWqI,EAAaK,GACxBE,GAECA,EAAarB,EAAeoB,MAAuB,OACtDpB,EAAeoB,GAAoB,IAAIxK,EAAK,UAAWgK,EAAcD,EAAOlI,CAAQ,EAEpF4I,EAAW,IAAIT,EAAcD,EAAOlI,CAAQ,CAGhD,CAEAyH,EAAec,GAAa,GAC9B,CAEJ,CAQA,GAAIlD,EAAO,WAAalH,EAAK,MAAM,SAAS,SAC1C,QAASoD,EAAI,EAAGA,EAAI8D,EAAO,OAAO,OAAQ9D,IAAK,CAC7C,IAAI2G,EAAQ7C,EAAO,OAAO9D,GAC1BmG,EAAgBQ,GAASR,EAAgBQ,GAAO,UAAUL,CAAa,CACzE,CAEJ,CAUA,QAHIgB,EAAqB1K,EAAK,IAAI,SAC9B2K,EAAuB3K,EAAK,IAAI,MAE3BiB,EAAI,EAAGA,EAAI,KAAK,OAAO,OAAQA,IAAK,CAC3C,IAAI8I,EAAQ,KAAK,OAAO9I,GAEpBsI,EAAgBQ,KAClBW,EAAqBA,EAAmB,UAAUnB,EAAgBQ,EAAM,GAGtEP,EAAkBO,KACpBY,EAAuBA,EAAqB,MAAMnB,EAAkBO,EAAM,EAE9E,CAEA,IAAIa,EAAoB,OAAO,KAAKxB,CAAc,EAC9CyB,EAAU,CAAC,EACXC,EAAU,OAAO,OAAO,IAAI,EAYhC,GAAI5B,EAAM,UAAU,EAAG,CACrB0B,EAAoB,OAAO,KAAK,KAAK,YAAY,EAEjD,QAAS3J,EAAI,EAAGA,EAAI2J,EAAkB,OAAQ3J,IAAK,CACjD,IAAIuJ,EAAmBI,EAAkB3J,GACrCF,EAAWf,EAAK,SAAS,WAAWwK,CAAgB,EACxDpB,EAAeoB,GAAoB,IAAIxK,EAAK,SAC9C,CACF,CAEA,QAASiB,EAAI,EAAGA,EAAI2J,EAAkB,OAAQ3J,IAAK,CASjD,IAAIF,EAAWf,EAAK,SAAS,WAAW4K,EAAkB3J,EAAE,EACxDP,EAASK,EAAS,OAEtB,GAAI,EAAC2J,EAAmB,SAAShK,CAAM,GAInC,CAAAiK,EAAqB,SAASjK,CAAM,EAIxC,KAAIqK,EAAc,KAAK,aAAahK,GAChCiK,EAAQ3B,EAAatI,EAAS,WAAW,WAAWgK,CAAW,EAC/DE,EAEJ,IAAKA,EAAWH,EAAQpK,MAAa,OACnCuK,EAAS,OAASD,EAClBC,EAAS,UAAU,QAAQ7B,EAAerI,EAAS,MAC9C,CACL,IAAImK,EAAQ,CACV,IAAKxK,EACL,MAAOsK,EACP,UAAW5B,EAAerI,EAC5B,EACA+J,EAAQpK,GAAUwK,EAClBL,EAAQ,KAAKK,CAAK,CACpB,EACF,CAKA,OAAOL,EAAQ,KAAK,SAAUzJ,GAAGC,GAAG,CAClC,OAAOA,GAAE,MAAQD,GAAE,KACrB,CAAC,CACH,EAUApB,EAAK,MAAM,UAAU,OAAS,UAAY,CACxC,IAAImL,EAAgB,OAAO,KAAK,KAAK,aAAa,EAC/C,KAA
K,EACL,IAAI,SAAUvB,EAAM,CACnB,MAAO,CAACA,EAAM,KAAK,cAAcA,EAAK,CACxC,EAAG,IAAI,EAELwB,EAAe,OAAO,KAAK,KAAK,YAAY,EAC7C,IAAI,SAAUC,EAAK,CAClB,MAAO,CAACA,EAAK,KAAK,aAAaA,GAAK,OAAO,CAAC,CAC9C,EAAG,IAAI,EAET,MAAO,CACL,QAASrL,EAAK,QACd,OAAQ,KAAK,OACb,aAAcoL,EACd,cAAeD,EACf,SAAU,KAAK,SAAS,OAAO,CACjC,CACF,EAQAnL,EAAK,MAAM,KAAO,SAAUsL,EAAiB,CAC3C,IAAItC,EAAQ,CAAC,EACToC,EAAe,CAAC,EAChBG,EAAoBD,EAAgB,aACpCH,EAAgB,OAAO,OAAO,IAAI,EAClCK,EAA0BF,EAAgB,cAC1CG,EAAkB,IAAIzL,EAAK,SAAS,QACpC0C,EAAW1C,EAAK,SAAS,KAAKsL,EAAgB,QAAQ,EAEtDA,EAAgB,SAAWtL,EAAK,SAClCA,EAAK,MAAM,KAAK,4EAA8EA,EAAK,QAAU,sCAAwCsL,EAAgB,QAAU,GAAG,EAGpL,QAASrK,EAAI,EAAGA,EAAIsK,EAAkB,OAAQtK,IAAK,CACjD,IAAIyK,EAAQH,EAAkBtK,GAC1BoK,EAAMK,EAAM,GACZ1K,EAAW0K,EAAM,GAErBN,EAAaC,GAAO,IAAIrL,EAAK,OAAOgB,CAAQ,CAC9C,CAEA,QAASC,EAAI,EAAGA,EAAIuK,EAAwB,OAAQvK,IAAK,CACvD,IAAIyK,EAAQF,EAAwBvK,GAChC2I,EAAO8B,EAAM,GACblK,EAAUkK,EAAM,GAEpBD,EAAgB,OAAO7B,CAAI,EAC3BuB,EAAcvB,GAAQpI,CACxB,CAEA,OAAAiK,EAAgB,OAAO,EAEvBzC,EAAM,OAASsC,EAAgB,OAE/BtC,EAAM,aAAeoC,EACrBpC,EAAM,cAAgBmC,EACtBnC,EAAM,SAAWyC,EAAgB,KACjCzC,EAAM,SAAWtG,EAEV,IAAI1C,EAAK,MAAMgJ,CAAK,CAC7B,EACA;AAAA;AAAA;AAAA,GA6BAhJ,EAAK,QAAU,UAAY,CACzB,KAAK,KAAO,KACZ,KAAK,QAAU,OAAO,OAAO,IAAI,EACjC,KAAK,WAAa,OAAO,OAAO,IAAI,EACpC,KAAK,cAAgB,OAAO,OAAO,IAAI,EACvC,KAAK,qBAAuB,CAAC,EAC7B,KAAK,aAAe,CAAC,EACrB,KAAK,UAAYA,EAAK,UACtB,KAAK,SAAW,IAAIA,EAAK,SACzB,KAAK,eAAiB,IAAIA,EAAK,SAC/B,KAAK,cAAgB,EACrB,KAAK,GAAK,IACV,KAAK,IAAM,IACX,KAAK,UAAY,EACjB,KAAK,kBAAoB,CAAC,CAC5B,EAcAA,EAAK,QAAQ,UAAU,IAAM,SAAUqL,EAAK,CAC1C,KAAK,KAAOA,CACd,EAkCArL,EAAK,QAAQ,UAAU,MAAQ,SAAUW,EAAWgL,EAAY,CAC9D,GAAI,KAAK,KAAKhL,CAAS,EACrB,MAAM,IAAI,WAAY,UAAYA,EAAY,kCAAkC,EAGlF,KAAK,QAAQA,GAAagL,GAAc,CAAC,CAC3C,EAUA3L,EAAK,QAAQ,UAAU,EAAI,SAAU4L,EAAQ,CACvCA,EAAS,EACX,KAAK,GAAK,EACDA,EAAS,EAClB,KAAK,GAAK,EAEV,KAAK,GAAKA,CAEd,EASA5L,EAAK,QAAQ,UAAU,GAAK,SAAU4L,EAAQ,CAC5C,KAAK,IAAMA,CACb,EAmBA5L,EAAK,QAAQ,UAAU,IAAM,SAAU6L,EAAKF,EAAY,CACtD,IAAIjL,EAASmL,EAAI,KAAK,MAClBC,EAAS,OAAO,KAAK,KAAK,OAAO,EAErC,KAAK,WAAWpL,GAAUiL,GAAc,CAAC,EACzC,KAAK,eAAiB,EAEtB,QAAS1K,EAAI,EAAGA,EAAI6K,EAAO,OAAQ7K,IAAK,CACtC,IAAIN,EAAYmL,EAAO7K,GACnB8K,EAAY,KAAK,QAAQpL,GAAW,UACpCoJ,EAAQgC,EAAYA,EAAUF,CAAG,EAAIA,EAAIlL,GACzCsB,EAAS,KAAK,UAAU8H,EAAO,CAC7B,OAAQ,CAACpJ,CAAS,CACpB,CAAC,EACD8I,EAAQ,KAAK,SAAS,IAAIxH,CAAM,EAChClB,EAAW,IAAIf,EAAK,SAAUU,EAAQC,CAAS,EAC/CqL,EAAa,OAAO,OAAO,IAAI,EAEnC,KAAK,qBAAqBjL,GAAYiL,EACtC,KAAK,aAAajL,GAAY,EAG9B,KAAK,aAAaA,IAAa0I,EAAM,OAGrC,QAASvG,EAAI,EAAGA,EAAIuG,EAAM,OAAQvG,IAAK,CACrC,IAAI0G,EAAOH,EAAMvG,GAUjB,GARI8I,EAAWpC,IAAS,OACtBoC,EAAWpC,GAAQ,GAGrBoC,EAAWpC,IAAS,EAIhB,KAAK,cAAcA,IAAS,KAAW,CACzC,IAAIpI,EAAU,OAAO,OAAO,IAAI,EAChCA,EAAQ,OAAY,KAAK,UACzB,KAAK,WAAa,EAElB,QAAS4B,EAAI,EAAGA,EAAI0I,EAAO,OAAQ1I,IACjC5B,EAAQsK,EAAO1I,IAAM,OAAO,OAAO,IAAI,EAGzC,KAAK,cAAcwG,GAAQpI,CAC7B,CAGI,KAAK,cAAcoI,GAAMjJ,GAAWD,IAAW,OACjD,KAAK,cAAckJ,GAAMjJ,GAAWD,GAAU,OAAO,OAAO,IAAI,GAKlE,QAAS4J,EAAI,EAAGA,EAAI,KAAK,kBAAkB,OAAQA,IAAK,CACtD,IAAI2B,EAAc,KAAK,kBAAkB3B,GACrCzI,EAAW+H,EAAK,SAASqC,GAEzB,KAAK,cAAcrC,GAAMjJ,GAAWD,GAAQuL,IAAgB,OAC9D,KAAK,cAAcrC,GAAMjJ,GAAWD,GAAQuL,GAAe,CAAC,GAG9D,KAAK,cAAcrC,GAAMjJ,GAAWD,GAAQuL,GAAa,KAAKpK,CAAQ,CACxE,CACF,CAEF,CACF,EAOA7B,EAAK,QAAQ,UAAU,6BAA+B,UAAY,CAOhE,QALIkM,EAAY,OAAO,KAAK,KAAK,YAAY,EACzCC,EAAiBD,EAAU,OAC3BE,EAAc,CAAC,EACfC,EAAqB,CAAC,EAEjBpL,EAAI,EAAGA,EAAIkL,EAAgBlL,IAAK,CACvC,IAAIF,EAAWf,EAAK,SAAS,WAAWkM,EAAUjL,EAAE,EAChD8I,EAAQhJ,EAAS,UAErBsL,EAAmBtC,KAAWsC,EAAmBtC,GAAS,GAC1DsC,EAAmBtC,IAAU,EAE7BqC,EAAYrC,KAAWqC,EAAYrC,GAAS,GAC5CqC,EAAYrC,IAAU,KAAK,aAAahJ,EAC1C,CAIA,QAFI+K,EAAS,OAAO,KAAK,KAAK,OAAO,EAE5B7K,EAAI,EAAGA,EAAI6K,EAAO
,OAAQ7K,IAAK,CACtC,IAAIN,EAAYmL,EAAO7K,GACvBmL,EAAYzL,GAAayL,EAAYzL,GAAa0L,EAAmB1L,EACvE,CAEA,KAAK,mBAAqByL,CAC5B,EAOApM,EAAK,QAAQ,UAAU,mBAAqB,UAAY,CAMtD,QALIoL,EAAe,CAAC,EAChBc,EAAY,OAAO,KAAK,KAAK,oBAAoB,EACjDI,EAAkBJ,EAAU,OAC5BK,EAAe,OAAO,OAAO,IAAI,EAE5BtL,EAAI,EAAGA,EAAIqL,EAAiBrL,IAAK,CAaxC,QAZIF,EAAWf,EAAK,SAAS,WAAWkM,EAAUjL,EAAE,EAChDN,EAAYI,EAAS,UACrByL,EAAc,KAAK,aAAazL,GAChCgK,EAAc,IAAI/K,EAAK,OACvByM,EAAkB,KAAK,qBAAqB1L,GAC5C0I,EAAQ,OAAO,KAAKgD,CAAe,EACnCC,EAAcjD,EAAM,OAGpBkD,EAAa,KAAK,QAAQhM,GAAW,OAAS,EAC9CiM,EAAW,KAAK,WAAW7L,EAAS,QAAQ,OAAS,EAEhDmC,EAAI,EAAGA,EAAIwJ,EAAaxJ,IAAK,CACpC,IAAI0G,EAAOH,EAAMvG,GACb2J,EAAKJ,EAAgB7C,GACrBK,EAAY,KAAK,cAAcL,GAAM,OACrCkD,EAAK9B,EAAO+B,EAEZR,EAAa3C,KAAU,QACzBkD,EAAM9M,EAAK,IAAI,KAAK,cAAc4J,GAAO,KAAK,aAAa,EAC3D2C,EAAa3C,GAAQkD,GAErBA,EAAMP,EAAa3C,GAGrBoB,EAAQ8B,IAAQ,KAAK,IAAM,GAAKD,IAAO,KAAK,KAAO,EAAI,KAAK,GAAK,KAAK,IAAML,EAAc,KAAK,mBAAmB7L,KAAekM,GACjI7B,GAAS2B,EACT3B,GAAS4B,EACTG,EAAqB,KAAK,MAAM/B,EAAQ,GAAI,EAAI,IAQhDD,EAAY,OAAOd,EAAW8C,CAAkB,CAClD,CAEA3B,EAAarK,GAAYgK,CAC3B,CAEA,KAAK,aAAeK,CACtB,EAOApL,EAAK,QAAQ,UAAU,eAAiB,UAAY,CAClD,KAAK,SAAWA,EAAK,SAAS,UAC5B,OAAO,KAAK,KAAK,aAAa,EAAE,KAAK,CACvC,CACF,EAUAA,EAAK,QAAQ,UAAU,MAAQ,UAAY,CACzC,YAAK,6BAA6B,EAClC,KAAK,mBAAmB,EACxB,KAAK,eAAe,EAEb,IAAIA,EAAK,MAAM,CACpB,cAAe,KAAK,cACpB,aAAc,KAAK,aACnB,SAAU,KAAK,SACf,OAAQ,OAAO,KAAK,KAAK,OAAO,EAChC,SAAU,KAAK,cACjB,CAAC,CACH,EAgBAA,EAAK,QAAQ,UAAU,IAAM,SAAU8B,EAAI,CACzC,IAAIkL,EAAO,MAAM,UAAU,MAAM,KAAK,UAAW,CAAC,EAClDA,EAAK,QAAQ,IAAI,EACjBlL,EAAG,MAAM,KAAMkL,CAAI,CACrB,EAaAhN,EAAK,UAAY,SAAU4J,EAAMG,EAAOlI,EAAU,CAShD,QARIoL,EAAiB,OAAO,OAAO,IAAI,EACnCC,EAAe,OAAO,KAAKrL,GAAY,CAAC,CAAC,EAOpCZ,EAAI,EAAGA,EAAIiM,EAAa,OAAQjM,IAAK,CAC5C,IAAIT,EAAM0M,EAAajM,GACvBgM,EAAezM,GAAOqB,EAASrB,GAAK,MAAM,CAC5C,CAEA,KAAK,SAAW,OAAO,OAAO,IAAI,EAE9BoJ,IAAS,SACX,KAAK,SAASA,GAAQ,OAAO,OAAO,IAAI,EACxC,KAAK,SAASA,GAAMG,GAASkD,EAEjC,EAWAjN,EAAK,UAAU,UAAU,QAAU,SAAUmN,EAAgB,CAG3D,QAFI1D,EAAQ,OAAO,KAAK0D,EAAe,QAAQ,EAEtClM,EAAI,EAAGA,EAAIwI,EAAM,OAAQxI,IAAK,CACrC,IAAI2I,EAAOH,EAAMxI,GACb6K,EAAS,OAAO,KAAKqB,EAAe,SAASvD,EAAK,EAElD,KAAK,SAASA,IAAS,OACzB,KAAK,SAASA,GAAQ,OAAO,OAAO,IAAI,GAG1C,QAAS1G,EAAI,EAAGA,EAAI4I,EAAO,OAAQ5I,IAAK,CACtC,IAAI6G,EAAQ+B,EAAO5I,GACf3C,EAAO,OAAO,KAAK4M,EAAe,SAASvD,GAAMG,EAAM,EAEvD,KAAK,SAASH,GAAMG,IAAU,OAChC,KAAK,SAASH,GAAMG,GAAS,OAAO,OAAO,IAAI,GAGjD,QAAS3G,EAAI,EAAGA,EAAI7C,EAAK,OAAQ6C,IAAK,CACpC,IAAI5C,EAAMD,EAAK6C,GAEX,KAAK,SAASwG,GAAMG,GAAOvJ,IAAQ,KACrC,KAAK,SAASoJ,GAAMG,GAAOvJ,GAAO2M,EAAe,SAASvD,GAAMG,GAAOvJ,GAEvE,KAAK,SAASoJ,GAAMG,GAAOvJ,GAAO,KAAK,SAASoJ,GAAMG,GAAOvJ,GAAK,OAAO2M,EAAe,SAASvD,GAAMG,GAAOvJ,EAAI,CAGtH,CACF,CACF,CACF,EASAR,EAAK,UAAU,UAAU,IAAM,SAAU4J,EAAMG,EAAOlI,EAAU,CAC9D,GAAI,EAAE+H,KAAQ,KAAK,UAAW,CAC5B,KAAK,SAASA,GAAQ,OAAO,OAAO,IAAI,EACxC,KAAK,SAASA,GAAMG,GAASlI,EAC7B,MACF,CAEA,GAAI,EAAEkI,KAAS,KAAK,SAASH,IAAQ,CACnC,KAAK,SAASA,GAAMG,GAASlI,EAC7B,MACF,CAIA,QAFIqL,EAAe,OAAO,KAAKrL,CAAQ,EAE9BZ,EAAI,EAAGA,EAAIiM,EAAa,OAAQjM,IAAK,CAC5C,IAAIT,EAAM0M,EAAajM,GAEnBT,KAAO,KAAK,SAASoJ,GAAMG,GAC7B,KAAK,SAASH,GAAMG,GAAOvJ,GAAO,KAAK,SAASoJ,GAAMG,GAAOvJ,GAAK,OAAOqB,EAASrB,EAAI,EAEtF,KAAK,SAASoJ,GAAMG,GAAOvJ,GAAOqB,EAASrB,EAE/C,CACF,EAYAR,EAAK,MAAQ,SAAUoN,EAAW,CAChC,KAAK,QAAU,CAAC,EAChB,KAAK,UAAYA,CACnB,EA0BApN,EAAK,MAAM,SAAW,IAAI,OAAQ,GAAG,EACrCA,EAAK,MAAM,SAAS,KAAO,EAC3BA,EAAK,MAAM,SAAS,QAAU,EAC9BA,EAAK,MAAM,SAAS,SAAW,EAa/BA,EAAK,MAAM,SAAW,CAIpB,SAAU,EAMV,SAAU,EAMV,WAAY,CACd,EAyBAA,EAAK,MAAM,UAAU,OAAS,SAAUkH,EAAQ,CAC9C,MAAM,WAAYA,IAChBA,EAAO,OAAS,KAAK,WAGjB,UAAWA,IACfA,EAAO,MAAQ,GAGX,gBAAiBA,IACrBA,EAAO,YAAc,IAGjB,aAAcA,IAClBA,EAAO,SAAWlH,EAAK,MAAM,SAAS,MAG
nCkH,EAAO,SAAWlH,EAAK,MAAM,SAAS,SAAakH,EAAO,KAAK,OAAO,CAAC,GAAKlH,EAAK,MAAM,WAC1FkH,EAAO,KAAO,IAAMA,EAAO,MAGxBA,EAAO,SAAWlH,EAAK,MAAM,SAAS,UAAckH,EAAO,KAAK,MAAM,EAAE,GAAKlH,EAAK,MAAM,WAC3FkH,EAAO,KAAO,GAAKA,EAAO,KAAO,KAG7B,aAAcA,IAClBA,EAAO,SAAWlH,EAAK,MAAM,SAAS,UAGxC,KAAK,QAAQ,KAAKkH,CAAM,EAEjB,IACT,EASAlH,EAAK,MAAM,UAAU,UAAY,UAAY,CAC3C,QAASiB,EAAI,EAAGA,EAAI,KAAK,QAAQ,OAAQA,IACvC,GAAI,KAAK,QAAQA,GAAG,UAAYjB,EAAK,MAAM,SAAS,WAClD,MAAO,GAIX,MAAO,EACT,EA4BAA,EAAK,MAAM,UAAU,KAAO,SAAU4J,EAAMyD,EAAS,CACnD,GAAI,MAAM,QAAQzD,CAAI,EACpB,OAAAA,EAAK,QAAQ,SAAU7H,EAAG,CAAE,KAAK,KAAKA,EAAG/B,EAAK,MAAM,MAAMqN,CAAO,CAAC,CAAE,EAAG,IAAI,EACpE,KAGT,IAAInG,EAASmG,GAAW,CAAC,EACzB,OAAAnG,EAAO,KAAO0C,EAAK,SAAS,EAE5B,KAAK,OAAO1C,CAAM,EAEX,IACT,EACAlH,EAAK,gBAAkB,SAAUI,EAASmD,EAAOC,EAAK,CACpD,KAAK,KAAO,kBACZ,KAAK,QAAUpD,EACf,KAAK,MAAQmD,EACb,KAAK,IAAMC,CACb,EAEAxD,EAAK,gBAAgB,UAAY,IAAI,MACrCA,EAAK,WAAa,SAAU4B,EAAK,CAC/B,KAAK,QAAU,CAAC,EAChB,KAAK,IAAMA,EACX,KAAK,OAASA,EAAI,OAClB,KAAK,IAAM,EACX,KAAK,MAAQ,EACb,KAAK,oBAAsB,CAAC,CAC9B,EAEA5B,EAAK,WAAW,UAAU,IAAM,UAAY,CAG1C,QAFIsN,EAAQtN,EAAK,WAAW,QAErBsN,GACLA,EAAQA,EAAM,IAAI,CAEtB,EAEAtN,EAAK,WAAW,UAAU,YAAc,UAAY,CAKlD,QAJIuN,EAAY,CAAC,EACbpL,EAAa,KAAK,MAClBD,EAAW,KAAK,IAEX,EAAI,EAAG,EAAI,KAAK,oBAAoB,OAAQ,IACnDA,EAAW,KAAK,oBAAoB,GACpCqL,EAAU,KAAK,KAAK,IAAI,MAAMpL,EAAYD,CAAQ,CAAC,EACnDC,EAAaD,EAAW,EAG1B,OAAAqL,EAAU,KAAK,KAAK,IAAI,MAAMpL,EAAY,KAAK,GAAG,CAAC,EACnD,KAAK,oBAAoB,OAAS,EAE3BoL,EAAU,KAAK,EAAE,CAC1B,EAEAvN,EAAK,WAAW,UAAU,KAAO,SAAUwN,EAAM,CAC/C,KAAK,QAAQ,KAAK,CAChB,KAAMA,EACN,IAAK,KAAK,YAAY,EACtB,MAAO,KAAK,MACZ,IAAK,KAAK,GACZ,CAAC,EAED,KAAK,MAAQ,KAAK,GACpB,EAEAxN,EAAK,WAAW,UAAU,gBAAkB,UAAY,CACtD,KAAK,oBAAoB,KAAK,KAAK,IAAM,CAAC,EAC1C,KAAK,KAAO,CACd,EAEAA,EAAK,WAAW,UAAU,KAAO,UAAY,CAC3C,GAAI,KAAK,KAAO,KAAK,OACnB,OAAOA,EAAK,WAAW,IAGzB,IAAIoC,EAAO,KAAK,IAAI,OAAO,KAAK,GAAG,EACnC,YAAK,KAAO,EACLA,CACT,EAEApC,EAAK,WAAW,UAAU,MAAQ,UAAY,CAC5C,OAAO,KAAK,IAAM,KAAK,KACzB,EAEAA,EAAK,WAAW,UAAU,OAAS,UAAY,CACzC,KAAK,OAAS,KAAK,MACrB,KAAK,KAAO,GAGd,KAAK,MAAQ,KAAK,GACpB,EAEAA,EAAK,WAAW,UAAU,OAAS,UAAY,CAC7C,KAAK,KAAO,CACd,EAEAA,EAAK,WAAW,UAAU,eAAiB,UAAY,CACrD,IAAIoC,EAAMqL,EAEV,GACErL,EAAO,KAAK,KAAK,EACjBqL,EAAWrL,EAAK,WAAW,CAAC,QACrBqL,EAAW,IAAMA,EAAW,IAEjCrL,GAAQpC,EAAK,WAAW,KAC1B,KAAK,OAAO,CAEhB,EAEAA,EAAK,WAAW,UAAU,KAAO,UAAY,CAC3C,OAAO,KAAK,IAAM,KAAK,MACzB,EAEAA,EAAK,WAAW,IAAM,MACtBA,EAAK,WAAW,MAAQ,QACxBA,EAAK,WAAW,KAAO,OACvBA,EAAK,WAAW,cAAgB,gBAChCA,EAAK,WAAW,MAAQ,QACxBA,EAAK,WAAW,SAAW,WAE3BA,EAAK,WAAW,SAAW,SAAU0N,EAAO,CAC1C,OAAAA,EAAM,OAAO,EACbA,EAAM,KAAK1N,EAAK,WAAW,KAAK,EAChC0N,EAAM,OAAO,EACN1N,EAAK,WAAW,OACzB,EAEAA,EAAK,WAAW,QAAU,SAAU0N,EAAO,CAQzC,GAPIA,EAAM,MAAM,EAAI,IAClBA,EAAM,OAAO,EACbA,EAAM,KAAK1N,EAAK,WAAW,IAAI,GAGjC0N,EAAM,OAAO,EAETA,EAAM,KAAK,EACb,OAAO1N,EAAK,WAAW,OAE3B,EAEAA,EAAK,WAAW,gBAAkB,SAAU0N,EAAO,CACjD,OAAAA,EAAM,OAAO,EACbA,EAAM,eAAe,EACrBA,EAAM,KAAK1N,EAAK,WAAW,aAAa,EACjCA,EAAK,WAAW,OACzB,EAEAA,EAAK,WAAW,SAAW,SAAU0N,EAAO,CAC1C,OAAAA,EAAM,OAAO,EACbA,EAAM,eAAe,EACrBA,EAAM,KAAK1N,EAAK,WAAW,KAAK,EACzBA,EAAK,WAAW,OACzB,EAEAA,EAAK,WAAW,OAAS,SAAU0N,EAAO,CACpCA,EAAM,MAAM,EAAI,GAClBA,EAAM,KAAK1N,EAAK,WAAW,IAAI,CAEnC,EAaAA,EAAK,WAAW,cAAgBA,EAAK,UAAU,UAE/CA,EAAK,WAAW,QAAU,SAAU0N,EAAO,CACzC,OAAa,CACX,IAAItL,EAAOsL,EAAM,KAAK,EAEtB,GAAItL,GAAQpC,EAAK,WAAW,IAC1B,OAAOA,EAAK,WAAW,OAIzB,GAAIoC,EAAK,WAAW,CAAC,GAAK,GAAI,CAC5BsL,EAAM,gBAAgB,EACtB,QACF,CAEA,GAAItL,GAAQ,IACV,OAAOpC,EAAK,WAAW,SAGzB,GAAIoC,GAAQ,IACV,OAAAsL,EAAM,OAAO,EACTA,EAAM,MAAM,EAAI,GAClBA,EAAM,KAAK1N,EAAK,WAAW,IAAI,EAE1BA,EAAK,WAAW,gBAGzB,GAAIoC,GAAQ,IACV,OAAAsL,EAAM,OAAO,EACTA,EAAM,MAAM,EAAI,GAClBA,EAAM,KAAK1N,EAA
K,WAAW,IAAI,EAE1BA,EAAK,WAAW,SAczB,GARIoC,GAAQ,KAAOsL,EAAM,MAAM,IAAM,GAQjCtL,GAAQ,KAAOsL,EAAM,MAAM,IAAM,EACnC,OAAAA,EAAM,KAAK1N,EAAK,WAAW,QAAQ,EAC5BA,EAAK,WAAW,QAGzB,GAAIoC,EAAK,MAAMpC,EAAK,WAAW,aAAa,EAC1C,OAAOA,EAAK,WAAW,OAE3B,CACF,EAEAA,EAAK,YAAc,SAAU4B,EAAKsH,EAAO,CACvC,KAAK,MAAQ,IAAIlJ,EAAK,WAAY4B,CAAG,EACrC,KAAK,MAAQsH,EACb,KAAK,cAAgB,CAAC,EACtB,KAAK,UAAY,CACnB,EAEAlJ,EAAK,YAAY,UAAU,MAAQ,UAAY,CAC7C,KAAK,MAAM,IAAI,EACf,KAAK,QAAU,KAAK,MAAM,QAI1B,QAFIsN,EAAQtN,EAAK,YAAY,YAEtBsN,GACLA,EAAQA,EAAM,IAAI,EAGpB,OAAO,KAAK,KACd,EAEAtN,EAAK,YAAY,UAAU,WAAa,UAAY,CAClD,OAAO,KAAK,QAAQ,KAAK,UAC3B,EAEAA,EAAK,YAAY,UAAU,cAAgB,UAAY,CACrD,IAAI2N,EAAS,KAAK,WAAW,EAC7B,YAAK,WAAa,EACXA,CACT,EAEA3N,EAAK,YAAY,UAAU,WAAa,UAAY,CAClD,IAAI4N,EAAkB,KAAK,cAC3B,KAAK,MAAM,OAAOA,CAAe,EACjC,KAAK,cAAgB,CAAC,CACxB,EAEA5N,EAAK,YAAY,YAAc,SAAUmJ,EAAQ,CAC/C,IAAIwE,EAASxE,EAAO,WAAW,EAE/B,GAAIwE,GAAU,KAId,OAAQA,EAAO,KAAM,CACnB,KAAK3N,EAAK,WAAW,SACnB,OAAOA,EAAK,YAAY,cAC1B,KAAKA,EAAK,WAAW,MACnB,OAAOA,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,KACnB,OAAOA,EAAK,YAAY,UAC1B,QACE,IAAI6N,EAAe,4CAA8CF,EAAO,KAExE,MAAIA,EAAO,IAAI,QAAU,IACvBE,GAAgB,gBAAkBF,EAAO,IAAM,KAG3C,IAAI3N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CAC1E,CACF,EAEA3N,EAAK,YAAY,cAAgB,SAAUmJ,EAAQ,CACjD,IAAIwE,EAASxE,EAAO,cAAc,EAElC,GAAIwE,GAAU,KAId,QAAQA,EAAO,IAAK,CAClB,IAAK,IACHxE,EAAO,cAAc,SAAWnJ,EAAK,MAAM,SAAS,WACpD,MACF,IAAK,IACHmJ,EAAO,cAAc,SAAWnJ,EAAK,MAAM,SAAS,SACpD,MACF,QACE,IAAI6N,EAAe,kCAAoCF,EAAO,IAAM,IACpE,MAAM,IAAI3N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CAC1E,CAEA,IAAIG,EAAa3E,EAAO,WAAW,EAEnC,GAAI2E,GAAc,KAAW,CAC3B,IAAID,EAAe,yCACnB,MAAM,IAAI7N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CACxE,CAEA,OAAQG,EAAW,KAAM,CACvB,KAAK9N,EAAK,WAAW,MACnB,OAAOA,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,KACnB,OAAOA,EAAK,YAAY,UAC1B,QACE,IAAI6N,EAAe,mCAAqCC,EAAW,KAAO,IAC1E,MAAM,IAAI9N,EAAK,gBAAiB6N,EAAcC,EAAW,MAAOA,EAAW,GAAG,CAClF,EACF,EAEA9N,EAAK,YAAY,WAAa,SAAUmJ,EAAQ,CAC9C,IAAIwE,EAASxE,EAAO,cAAc,EAElC,GAAIwE,GAAU,KAId,IAAIxE,EAAO,MAAM,UAAU,QAAQwE,EAAO,GAAG,GAAK,GAAI,CACpD,IAAII,EAAiB5E,EAAO,MAAM,UAAU,IAAI,SAAU6E,EAAG,CAAE,MAAO,IAAMA,EAAI,GAAI,CAAC,EAAE,KAAK,IAAI,EAC5FH,EAAe,uBAAyBF,EAAO,IAAM,uBAAyBI,EAElF,MAAM,IAAI/N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CACxE,CAEAxE,EAAO,cAAc,OAAS,CAACwE,EAAO,GAAG,EAEzC,IAAIG,EAAa3E,EAAO,WAAW,EAEnC,GAAI2E,GAAc,KAAW,CAC3B,IAAID,EAAe,gCACnB,MAAM,IAAI7N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CACxE,CAEA,OAAQG,EAAW,KAAM,CACvB,KAAK9N,EAAK,WAAW,KACnB,OAAOA,EAAK,YAAY,UAC1B,QACE,IAAI6N,EAAe,0BAA4BC,EAAW,KAAO,IACjE,MAAM,IAAI9N,EAAK,gBAAiB6N,EAAcC,EAAW,MAAOA,EAAW,GAAG,CAClF,EACF,EAEA9N,EAAK,YAAY,UAAY,SAAUmJ,EAAQ,CAC7C,IAAIwE,EAASxE,EAAO,cAAc,EAElC,GAAIwE,GAAU,KAId,CAAAxE,EAAO,cAAc,KAAOwE,EAAO,IAAI,YAAY,EAE/CA,EAAO,IAAI,QAAQ,GAAG,GAAK,KAC7BxE,EAAO,cAAc,YAAc,IAGrC,IAAI2E,EAAa3E,EAAO,WAAW,EAEnC,GAAI2E,GAAc,KAAW,CAC3B3E,EAAO,WAAW,EAClB,MACF,CAEA,OAAQ2E,EAAW,KAAM,CACvB,KAAK9N,EAAK,WAAW,KACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,UAC1B,KAAKA,EAAK,WAAW,MACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,cACnB,OAAOA,EAAK,YAAY,kBAC1B,KAAKA,EAAK,WAAW,MACnB,OAAOA,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,SACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,cAC1B,QACE,IAAI6N,EAAe,2BAA6BC,EAAW,KAAO,IAClE,MAAM,IAAI9N,EAAK,gBAAiB6N,EAAcC,EAAW,MAAOA,EAAW,GAAG,CAClF,EACF,EAEA9N,EAAK,YAAY,kBAAoB,SAAUmJ,EAAQ,CACrD,IAAIwE,EAASxE,EAAO,cAAc,EAElC,GAAIwE,GAAU,KAId,KAAIxG,EAAe,SAASwG,EAAO,IAAK,EAAE,EAE1C,GAAI,MAAMxG,CAAY,EAAG,CACvB,IAAI0G,EAAe,gCACnB,MAAM,IAAI7N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CACxE,CAEAxE,EAAO,cAAc,aAAehC,EAEpC,IAAI2G,EAAa3E,EAAO,WAAW,EAEnC,GAAI2E,GAAc,KAAW,CAC3B3E,EAAO,WAAW,EAClB
,MACF,CAEA,OAAQ2E,EAAW,KAAM,CACvB,KAAK9N,EAAK,WAAW,KACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,UAC1B,KAAKA,EAAK,WAAW,MACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,cACnB,OAAOA,EAAK,YAAY,kBAC1B,KAAKA,EAAK,WAAW,MACnB,OAAOA,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,SACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,cAC1B,QACE,IAAI6N,EAAe,2BAA6BC,EAAW,KAAO,IAClE,MAAM,IAAI9N,EAAK,gBAAiB6N,EAAcC,EAAW,MAAOA,EAAW,GAAG,CAClF,EACF,EAEA9N,EAAK,YAAY,WAAa,SAAUmJ,EAAQ,CAC9C,IAAIwE,EAASxE,EAAO,cAAc,EAElC,GAAIwE,GAAU,KAId,KAAIM,EAAQ,SAASN,EAAO,IAAK,EAAE,EAEnC,GAAI,MAAMM,CAAK,EAAG,CAChB,IAAIJ,EAAe,wBACnB,MAAM,IAAI7N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CACxE,CAEAxE,EAAO,cAAc,MAAQ8E,EAE7B,IAAIH,EAAa3E,EAAO,WAAW,EAEnC,GAAI2E,GAAc,KAAW,CAC3B3E,EAAO,WAAW,EAClB,MACF,CAEA,OAAQ2E,EAAW,KAAM,CACvB,KAAK9N,EAAK,WAAW,KACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,UAC1B,KAAKA,EAAK,WAAW,MACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,cACnB,OAAOA,EAAK,YAAY,kBAC1B,KAAKA,EAAK,WAAW,MACnB,OAAOA,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,SACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,cAC1B,QACE,IAAI6N,EAAe,2BAA6BC,EAAW,KAAO,IAClE,MAAM,IAAI9N,EAAK,gBAAiB6N,EAAcC,EAAW,MAAOA,EAAW,GAAG,CAClF,EACF,EAMI,SAAU1G,EAAM8G,EAAS,CACrB,OAAO,QAAW,YAAc,OAAO,IAEzC,OAAOA,CAAO,EACL,OAAOpO,IAAY,SAM5BC,GAAO,QAAUmO,EAAQ,EAGzB9G,EAAK,KAAO8G,EAAQ,CAExB,EAAE,KAAM,UAAY,CAMlB,OAAOlO,CACT,CAAC,CACH,GAAG,ICl5GH,IAAAmO,EAAAC,EAAA,CAAAC,GAAAC,KAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,GAeA,IAAIC,GAAkB,UAOtBD,GAAO,QAAUE,GAUjB,SAASA,GAAWC,EAAQ,CAC1B,IAAIC,EAAM,GAAKD,EACXE,EAAQJ,GAAgB,KAAKG,CAAG,EAEpC,GAAI,CAACC,EACH,OAAOD,EAGT,IAAIE,EACAC,EAAO,GACPC,EAAQ,EACRC,EAAY,EAEhB,IAAKD,EAAQH,EAAM,MAAOG,EAAQJ,EAAI,OAAQI,IAAS,CACrD,OAAQJ,EAAI,WAAWI,CAAK,EAAG,CAC7B,IAAK,IACHF,EAAS,SACT,MACF,IAAK,IACHA,EAAS,QACT,MACF,IAAK,IACHA,EAAS,QACT,MACF,IAAK,IACHA,EAAS,OACT,MACF,IAAK,IACHA,EAAS,OACT,MACF,QACE,QACJ,CAEIG,IAAcD,IAChBD,GAAQH,EAAI,UAAUK,EAAWD,CAAK,GAGxCC,EAAYD,EAAQ,EACpBD,GAAQD,CACV,CAEA,OAAOG,IAAcD,EACjBD,EAAOH,EAAI,UAAUK,EAAWD,CAAK,EACrCD,CACN,ICvDA,IAAAG,GAAiB,QCKZ,OAAO,UACV,OAAO,QAAU,SAAUC,EAAa,CACtC,IAAMC,EAA2B,CAAC,EAClC,QAAWC,KAAO,OAAO,KAAKF,CAAG,EAE/BC,EAAK,KAAK,CAACC,EAAKF,EAAIE,EAAI,CAAC,EAG3B,OAAOD,CACT,GAGG,OAAO,SACV,OAAO,OAAS,SAAUD,EAAa,CACrC,IAAMC,EAAiB,CAAC,EACxB,QAAWC,KAAO,OAAO,KAAKF,CAAG,EAE/BC,EAAK,KAAKD,EAAIE,EAAI,EAGpB,OAAOD,CACT,GAKE,OAAO,SAAY,cAGhB,QAAQ,UAAU,WACrB,QAAQ,UAAU,SAAW,SAC3BE,EAA8BC,EACxB,CACF,OAAOD,GAAM,UACf,KAAK,WAAaA,EAAE,KACpB,KAAK,UAAYA,EAAE,MAEnB,KAAK,WAAaA,EAClB,KAAK,UAAYC,EAErB,GAGG,QAAQ,UAAU,cACrB,QAAQ,UAAU,YAAc,YAC3BC,EACG,CACN,IAAMC,EAAS,KAAK,WACpB,GAAIA,EAAQ,CACND,EAAM,SAAW,GACnBC,EAAO,YAAY,IAAI,EAGzB,QAASC,EAAIF,EAAM,OAAS,EAAGE,GAAK,EAAGA,IAAK,CAC1C,IAAIC,EAAOH,EAAME,GACb,OAAOC,GAAS,SAClBA,EAAO,SAAS,eAAeA,CAAI,EAC5BA,EAAK,YACZA,EAAK,WAAW,YAAYA,CAAI,EAG7BD,EAGHD,EAAO,aAAa,KAAK,gBAAkBE,CAAI,EAF/CF,EAAO,aAAaE,EAAM,IAAI,CAGlC,CACF,CACF,ICxEJ,IAAAC,GAAuB,OAiChB,SAASC,GACdC,EACmB,CACnB,IAAMC,EAAY,IAAI,IAChBC,EAAY,IAAI,IACtB,QAAWC,KAAOH,EAAM,CACtB,GAAM,CAACI,EAAMC,CAAI,EAAIF,EAAI,SAAS,MAAM,GAAG,EAGrCG,EAAWH,EAAI,SACfI,EAAWJ,EAAI,MACfK,EAAWL,EAAI,KAGfM,KAAO,GAAAC,SAAWP,EAAI,IAAI,EAC7B,QAAQ,mBAAoB,EAAE,EAC9B,QAAQ,OAAQ,GAAG,EAGtB,GAAIE,EAAM,CACR,IAAMM,EAASV,EAAU,IAAIG,CAAI,EAG5BF,EAAQ,IAAIS,CAAM,EASrBV,EAAU,IAAIK,EAAU,CACtB,SAAAA,EACA,MAAAC,EACA,KAAAE,EACA,OAAAE,CACF,CAAC,GAbDA,EAAO,MAAQR,EAAI,MACnBQ,EAAO,KAAQF,EAGfP,EAAQ,IAAIS,CAAM,EAatB,MACEV,EAAU,IAAIK,EAAUM,EAAA,CACtB,SAAAN,EACA,MAAAC,EACA,KAAAE,GACGD,GAAQ,CAAE,KAAAA,CAAK,EACnB,CAEL,CACA,OAAOP,CACT,CCpFA,IAAAY,GAAuB,OAsChB,SAASC,GACdC,EAA2BC,EACD,CAC1B,IAAMC,EAAY,IAAI,OAAOF,EAAO,UAAW,KAAK,EAC9CG,EAAY,CAACC
,EAAYC,EAAcC,IACpC,GAAGD,4BAA+BC,WAI3C,OAAQC,GAAkB,CACxBA,EAAQA,EACL,QAAQ,gBAAiB,GAAG,EAC5B,KAAK,EAGR,IAAMC,EAAQ,IAAI,OAAO,MAAMR,EAAO,cACpCO,EACG,QAAQ,uBAAwB,MAAM,EACtC,QAAQL,EAAW,GAAG,KACtB,KAAK,EAGV,OAAOO,IACLR,KACI,GAAAS,SAAWD,CAAK,EAChBA,GAED,QAAQD,EAAOL,CAAS,EACxB,QAAQ,8BAA+B,IAAI,CAClD,CACF,CCtCO,SAASQ,GACdC,EACqB,CACrB,IAAMC,EAAS,IAAK,KAAa,MAAM,CAAC,QAAS,MAAM,CAAC,EAIxD,OAHe,IAAK,KAAa,YAAYD,EAAOC,CAAK,EAGlD,MAAM,EACNA,EAAM,OACf,CAUO,SAASC,GACdD,EAA4BE,EACV,CAzEpB,IAAAC,EA0EE,IAAMC,EAAU,IAAI,IAAuBJ,CAAK,EAG1CK,EAA2B,CAAC,EAClC,QAASC,EAAI,EAAGA,EAAIJ,EAAM,OAAQI,IAChC,QAAWC,KAAUH,EACfF,EAAMI,GAAG,WAAWC,EAAO,IAAI,IACjCF,EAAOE,EAAO,MAAQ,GACtBH,EAAQ,OAAOG,CAAM,GAI3B,QAAWA,KAAUH,GACfD,EAAA,KAAK,iBAAL,MAAAA,EAAA,UAAsBI,EAAO,QAC/BF,EAAOE,EAAO,MAAQ,IAG1B,OAAOF,CACT,CC2BA,SAASG,GAAWC,EAAaC,EAAuB,CACtD,GAAM,CAACC,EAAGC,CAAC,EAAI,CAAC,IAAI,IAAIH,CAAC,EAAG,IAAI,IAAIC,CAAC,CAAC,EACtC,MAAO,CACL,GAAG,IAAI,IAAI,CAAC,GAAGC,CAAC,EAAE,OAAOE,GAAS,CAACD,EAAE,IAAIC,CAAK,CAAC,CAAC,CAClD,CACF,CASO,IAAMC,EAAN,KAAa,CAgCX,YAAY,CAAE,OAAAC,EAAQ,KAAAC,EAAM,QAAAC,CAAQ,EAAgB,CACzD,KAAK,QAAUA,EAGf,KAAK,UAAYC,GAAuBF,CAAI,EAC5C,KAAK,UAAYG,GAAuBJ,EAAQ,EAAK,EAGrD,KAAK,UAAU,UAAY,IAAI,OAAOA,EAAO,SAAS,EAGtD,KAAK,MAAQ,KAAK,UAAY,CAGxBA,EAAO,KAAK,SAAW,GAAKA,EAAO,KAAK,KAAO,KACjD,KAAK,IAAK,KAAaA,EAAO,KAAK,GAAG,EAC7BA,EAAO,KAAK,OAAS,GAC9B,KAAK,IAAK,KAAa,cAAc,GAAGA,EAAO,IAAI,CAAC,EAItD,IAAMK,EAAMZ,GAAW,CACrB,UAAW,iBAAkB,SAC/B,EAAGS,EAAQ,QAAQ,EAGnB,QAAWI,KAAQN,EAAO,KAAK,IAAIO,GACjCA,IAAa,KAAO,KAAQ,KAAaA,EAC1C,EACC,QAAWC,KAAMH,EACf,KAAK,SAAS,OAAOC,EAAKE,EAAG,EAC7B,KAAK,eAAe,OAAOF,EAAKE,EAAG,EAKvC,KAAK,IAAI,UAAU,EAGnB,KAAK,MAAM,QAAS,CAAE,MAAO,GAAI,CAAC,EAClC,KAAK,MAAM,MAAM,EACjB,KAAK,MAAM,OAAQ,CAAE,MAAO,IAAK,UAAWC,GAAO,CACjD,GAAM,CAAE,KAAAC,EAAO,CAAC,CAAE,EAAID,EACtB,OAAOC,EAAK,OAAO,CAACC,EAAMC,IAAQ,CAChC,GAAGD,EACH,GAAG,KAAK,UAAUC,CAAG,CACvB,EAAG,CAAC,CAAiB,CACvB,CAAE,CAAC,EAGH,QAAWH,KAAOR,EAChB,KAAK,IAAIQ,EAAK,CAAE,MAAOA,EAAI,KAAM,CAAC,CACtC,CAAC,CACH,CAkBO,OAAOI,EAA6B,CACzC,GAAIA,EACF,GAAI,CACF,IAAMC,EAAY,KAAK,UAAUD,CAAK,EAGhCE,EAAUC,GAAiBH,CAAK,EACnC,OAAOI,GACNA,EAAO,WAAa,KAAK,MAAM,SAAS,UACzC,EAGGC,EAAS,KAAK,MAAM,OAAO,GAAGL,IAAQ,EAGzC,OAAyB,CAACM,EAAM,CAAE,IAAAC,EAAK,MAAAC,EAAO,UAAAC,CAAU,IAAM,CAC7D,IAAMC,EAAW,KAAK,UAAU,IAAIH,CAAG,EACvC,GAAI,OAAOG,GAAa,YAAa,CACnC,GAAM,CAAE,SAAAC,EAAU,MAAAC,EAAO,KAAAC,EAAM,KAAAhB,EAAM,OAAAiB,CAAO,EAAIJ,EAG1CK,EAAQC,GACZd,EACA,OAAO,KAAKO,EAAU,QAAQ,CAChC,EAGMQ,EAAQ,CAAC,CAACH,GAAS,CAAC,OAAO,OAAOC,CAAK,EAAE,MAAMG,GAAKA,CAAC,EAC3DZ,EAAK,KAAKa,EAAAC,EAAA,CACR,SAAAT,EACA,MAAOV,EAAUW,CAAK,EACtB,KAAOX,EAAUY,CAAI,GAClBhB,GAAQ,CAAE,KAAMA,EAAK,IAAII,CAAS,CAAE,GAJ/B,CAKR,MAAOO,GAAS,EAAIS,GACpB,MAAAF,CACF,EAAC,CACH,CACA,OAAOT,CACT,EAAG,CAAC,CAAC,EAGJ,KAAK,CAACzB,EAAGC,IAAMA,EAAE,MAAQD,EAAE,KAAK,EAGhC,OAAO,CAACwC,EAAOC,IAAW,CACzB,IAAMZ,EAAW,KAAK,UAAU,IAAIY,EAAO,QAAQ,EACnD,GAAI,OAAOZ,GAAa,YAAa,CACnC,IAAMH,EAAM,WAAYG,EACpBA,EAAS,OAAQ,SACjBA,EAAS,SACbW,EAAM,IAAId,EAAK,CAAC,GAAGc,EAAM,IAAId,CAAG,GAAK,CAAC,EAAGe,CAAM,CAAC,CAClD,CACA,OAAOD,CACT,EAAG,IAAI,GAA+B,EAGpCE,EACJ,GAAI,KAAK,QAAQ,YAAa,CAC5B,IAAMC,EAAS,KAAK,MAAM,MAAMC,GAAW,CACzC,QAAWrB,KAAUF,EACnBuB,EAAQ,KAAKrB,EAAO,KAAM,CACxB,OAAQ,CAAC,OAAO,EAChB,SAAU,KAAK,MAAM,SAAS,SAC9B,SAAU,KAAK,MAAM,SAAS,QAChC,CAAC,CACL,CAAC,EAGDmB,EAAcC,EAAO,OACjB,OAAO,KAAKA,EAAO,GAAG,UAAU,QAAQ,EACxC,CAAC,CACP,CAGA,OAAOJ,EAAA,CACL,MAAO,CAAC,GAAGf,EAAO,OAAO,CAAC,GACvB,OAAOkB,GAAgB,aAAe,CAAE,YAAAA,CAAY,EAI3D,OAAQG,EAAN,CACA,QAAQ,KAAK,kBAAkB1B,qCAAoC,CACrE,CAIF,MAAO,CAAE,MAAO,CAAC,CAAE,CACrB,CACF,EL3QA,IAAI2B,EAqBJ,SAAeC,GACbC,EACe,QAAAC,EAAA,sBACf,IAAIC,EAAO,UAGX,GAAI,OAAO,QAAW,aAAe,iBAAkB,O
AAQ,CAC7D,IAAMC,EAAS,SAAS,cAAiC,aAAa,EAChE,CAACC,CAAI,EAAID,EAAO,IAAI,MAAM,SAAS,EAGzCD,EAAOA,EAAK,QAAQ,KAAME,CAAI,CAChC,CAGA,IAAMC,EAAU,CAAC,EACjB,QAAWC,KAAQN,EAAO,KAAM,CAC9B,OAAQM,EAAM,CAGZ,IAAK,KACHD,EAAQ,KAAK,GAAGH,cAAiB,EACjC,MAGF,IAAK,KACL,IAAK,KACHG,EAAQ,KAAK,GAAGH,cAAiB,EACjC,KACJ,CAGII,IAAS,MACXD,EAAQ,KAAK,GAAGH,cAAiBI,UAAa,CAClD,CAGIN,EAAO,KAAK,OAAS,GACvBK,EAAQ,KAAK,GAAGH,yBAA4B,EAG1CG,EAAQ,SACV,MAAM,cACJ,GAAGH,oCACH,GAAGG,CACL,EACJ,GAaA,SAAsBE,GACpBC,EACwB,QAAAP,EAAA,sBACxB,OAAQO,EAAQ,KAAM,CAGpB,OACE,aAAMT,GAAqBS,EAAQ,KAAK,MAAM,EAC9CV,EAAQ,IAAIW,EAAOD,EAAQ,IAAI,EACxB,CACL,MACF,EAGF,OACE,MAAO,CACL,OACA,KAAMV,EAAQA,EAAM,OAAOU,EAAQ,IAAI,EAAI,CAAE,MAAO,CAAC,CAAE,CACzD,EAGF,QACE,MAAM,IAAI,UAAU,sBAAsB,CAC9C,CACF,GAOA,KAAK,KAAO,GAAAE,QAGZ,iBAAiB,UAAiBC,GAAMV,EAAA,wBACtC,YAAY,MAAMM,GAAQI,EAAG,IAAI,CAAC,CACpC,EAAC", + "names": ["require_lunr", "__commonJSMin", "exports", "module", "lunr", "config", "builder", "global", "message", "obj", "clone", "keys", "key", "val", "docRef", "fieldName", "stringValue", "s", "n", "fieldRef", "elements", "i", "other", "object", "a", "b", "intersection", "element", "posting", "documentCount", "documentsWithTerm", "x", "str", "metadata", "fn", "t", "len", "tokens", "sliceEnd", "sliceStart", "char", "sliceLength", "tokenMetadata", "label", "isRegistered", "serialised", "pipeline", "fnName", "fns", "existingFn", "newFn", "pos", "stackLength", "memo", "j", "result", "k", "token", "index", "start", "end", "pivotPoint", "pivotIndex", "insertIdx", "position", "sumOfSquares", "elementsLength", "otherVector", "dotProduct", "aLen", "bLen", "aVal", "bVal", "output", "step2list", "step3list", "c", "v", "C", "V", "mgr0", "meq1", "mgr1", "s_v", "re_mgr0", "re_mgr1", "re_meq1", "re_s_v", "re_1a", "re2_1a", "re_1b", "re2_1b", "re_1b_2", "re2_1b_2", "re3_1b_2", "re4_1b_2", "re_1c", "re_2", "re_3", "re_4", "re2_4", "re_5", "re_5_1", "re3_5", "porterStemmer", "w", "stem", "suffix", "firstch", "re", "re2", "re3", "re4", "fp", "stopWords", "words", "stopWord", "arr", "clause", "editDistance", "root", "stack", "frame", "noEditNode", "insertionNode", "substitutionNode", "charA", "charB", "transposeNode", "node", "final", "next", "edges", "edge", "labels", "qEdges", "qLen", "nEdges", "nLen", "q", "qEdge", "nEdge", "qNode", "word", "commonPrefix", "nextNode", "downTo", "childKey", "attrs", "queryString", "query", "parser", "matchingFields", "queryVectors", "termFieldCache", "requiredMatches", "prohibitedMatches", "terms", "clauseMatches", "m", "term", "termTokenSet", "expandedTerms", "field", "expandedTerm", "termIndex", "fieldPosting", "matchingDocumentRefs", "termField", "matchingDocumentsSet", "l", "matchingDocumentRef", "matchingFieldRef", "fieldMatch", "allRequiredMatches", "allProhibitedMatches", "matchingFieldRefs", "results", "matches", "fieldVector", "score", "docMatch", "match", "invertedIndex", "fieldVectors", "ref", "serializedIndex", "serializedVectors", "serializedInvertedIndex", "tokenSetBuilder", "tuple", "attributes", "number", "doc", "fields", "extractor", "fieldTerms", "metadataKey", "fieldRefs", "numberOfFields", "accumulator", "documentsWithField", "fieldRefsLength", "termIdfCache", "fieldLength", "termFrequencies", "termsLength", "fieldBoost", "docBoost", "tf", "idf", "scoreWithPrecision", "args", "clonedMetadata", "metadataKeys", "otherMatchData", "allFields", "options", "state", "subSlices", "type", "charCode", "lexer", "lexeme", "completedClause", "errorMessage", "nextLexeme", "possibleFields", "f", "boost", "factory", "require_escape_html", 
"__commonJSMin", "exports", "module", "matchHtmlRegExp", "escapeHtml", "string", "str", "match", "escape", "html", "index", "lastIndex", "import_lunr", "obj", "data", "key", "x", "y", "nodes", "parent", "i", "node", "import_escape_html", "setupSearchDocumentMap", "docs", "documents", "parents", "doc", "path", "hash", "location", "title", "tags", "text", "escapeHTML", "parent", "__spreadValues", "import_escape_html", "setupSearchHighlighter", "config", "escape", "separator", "highlight", "_", "data", "term", "query", "match", "value", "escapeHTML", "parseSearchQuery", "value", "query", "getSearchQueryTerms", "terms", "_a", "clauses", "result", "t", "clause", "difference", "a", "b", "x", "y", "value", "Search", "config", "docs", "options", "setupSearchDocumentMap", "setupSearchHighlighter", "fns", "lang", "language", "fn", "doc", "tags", "list", "tag", "query", "highlight", "clauses", "parseSearchQuery", "clause", "groups", "item", "ref", "score", "matchData", "document", "location", "title", "text", "parent", "terms", "getSearchQueryTerms", "boost", "t", "__spreadProps", "__spreadValues", "items", "result", "suggestions", "titles", "builder", "e", "index", "setupSearchLanguages", "config", "__async", "base", "worker", "path", "scripts", "lang", "handler", "message", "Search", "lunr", "ev"] +} diff --git a/assets/stylesheets/extra.0d2c79a8.min.css b/assets/stylesheets/extra.0d2c79a8.min.css new file mode 100644 index 00000000..6e23ef17 --- /dev/null +++ b/assets/stylesheets/extra.0d2c79a8.min.css @@ -0,0 +1 @@ +@charset "UTF-8";@keyframes ᴴₒᴴₒᴴₒ{0%{transform:translate3d(var(--left-start),0,0)}to{transform:translate3d(var(--left-end),110vh,0)}}.ᴴₒᴴₒᴴₒ{--size:1vw;background:#fff;border:1px solid #ddd;border-radius:50%;cursor:pointer;height:var(--size);opacity:1;position:fixed;top:-5vh;transition:opacity 1s;width:var(--size);z-index:10}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):first-child{--size:0.4vw;--left-start:7vw;--left-end:-8vw;animation:ᴴₒᴴₒᴴₒ 12s linear infinite both;animation-delay:-4s;left:24vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(2){--size:0.4vw;--left-start:9vw;--left-end:0vw;animation:ᴴₒᴴₒᴴₒ 18s linear infinite both;animation-delay:-2s;left:68vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(3){--size:0.4vw;--left-start:1vw;--left-end:7vw;animation:ᴴₒᴴₒᴴₒ 11s linear infinite both;animation-delay:-6s;left:10vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(4){--size:0.5vw;--left-start:8vw;--left-end:10vw;animation:ᴴₒᴴₒᴴₒ 18s linear infinite both;animation-delay:-8s;left:63vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(5){--size:0.5vw;--left-start:5vw;--left-end:9vw;animation:ᴴₒᴴₒᴴₒ 19s linear infinite both;animation-delay:-4s;left:58vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(6){--size:0.1vw;--left-start:3vw;--left-end:10vw;animation:ᴴₒᴴₒᴴₒ 14s linear infinite both;animation-delay:-1s;left:55vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(7){--size:0.2vw;--left-start:-2vw;--left-end:6vw;animation:ᴴₒᴴₒᴴₒ 19s linear infinite both;animation-delay:-7s;left:50vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(8){--size:0.3vw;--left-start:7vw;--left-end:7vw;animation:ᴴₒᴴₒᴴₒ 19s linear infinite both;animation-delay:-3s;left:65vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(9){--size:0.2vw;--left-start:4vw;--left-end:5vw;animation:ᴴₒᴴₒᴴₒ 13s linear infinite both;animation-delay:-2s;left:1vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(10){--size:0.3vw;--left-start:2vw;--left-end:-3vw;animation:ᴴₒᴴₒᴴₒ 12s linear infinite 
both;animation-delay:-10s;left:92vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(11){--size:0.2vw;--left-start:1vw;--left-end:8vw;animation:ᴴₒᴴₒᴴₒ 13s linear infinite both;animation-delay:-6s;left:5vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(12){--size:0.4vw;--left-start:9vw;--left-end:1vw;animation:ᴴₒᴴₒᴴₒ 18s linear infinite both;animation-delay:-3s;left:77vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(13){--size:0.1vw;--left-start:-3vw;--left-end:3vw;animation:ᴴₒᴴₒᴴₒ 18s linear infinite both;animation-delay:-7s;left:93vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(14){--size:0.5vw;--left-start:0vw;--left-end:-5vw;animation:ᴴₒᴴₒᴴₒ 12s linear infinite both;animation-delay:-4s;left:35vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(15){--size:0.1vw;--left-start:-9vw;--left-end:4vw;animation:ᴴₒᴴₒᴴₒ 20s linear infinite both;animation-delay:-6s;left:15vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(16){--size:0.1vw;--left-start:1vw;--left-end:9vw;animation:ᴴₒᴴₒᴴₒ 17s linear infinite both;animation-delay:-6s;left:100vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(17){--size:0.1vw;--left-start:1vw;--left-end:0vw;animation:ᴴₒᴴₒᴴₒ 17s linear infinite both;animation-delay:-1s;left:44vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(18){--size:0.4vw;--left-start:-9vw;--left-end:-9vw;animation:ᴴₒᴴₒᴴₒ 16s linear infinite both;animation-delay:-6s;left:69vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(19){--size:0.2vw;--left-start:3vw;--left-end:-8vw;animation:ᴴₒᴴₒᴴₒ 14s linear infinite both;animation-delay:-1s;left:32vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(20){--size:0.1vw;--left-start:-7vw;--left-end:8vw;animation:ᴴₒᴴₒᴴₒ 19s linear infinite both;animation-delay:-8s;left:59vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(21){--size:0.2vw;--left-start:-1vw;--left-end:-8vw;animation:ᴴₒᴴₒᴴₒ 13s linear infinite both;animation-delay:-6s;left:96vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(22){--size:0.2vw;--left-start:9vw;--left-end:1vw;animation:ᴴₒᴴₒᴴₒ 11s linear infinite both;animation-delay:-7s;left:78vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(23){--size:0.4vw;--left-start:5vw;--left-end:-2vw;animation:ᴴₒᴴₒᴴₒ 19s linear infinite both;animation-delay:-10s;left:29vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(24){--size:0.1vw;--left-start:-4vw;--left-end:1vw;animation:ᴴₒᴴₒᴴₒ 20s linear infinite both;animation-delay:-7s;left:83vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(25){--size:0.3vw;--left-start:-1vw;--left-end:2vw;animation:ᴴₒᴴₒᴴₒ 19s linear infinite both;animation-delay:-8s;left:95vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(26){--size:0.5vw;--left-start:-3vw;--left-end:-6vw;animation:ᴴₒᴴₒᴴₒ 18s linear infinite both;animation-delay:-8s;left:74vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(27){--size:0.5vw;--left-start:9vw;--left-end:-9vw;animation:ᴴₒᴴₒᴴₒ 19s linear infinite both;animation-delay:-2s;left:94vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(28){--size:0.1vw;--left-start:0vw;--left-end:-4vw;animation:ᴴₒᴴₒᴴₒ 15s linear infinite both;animation-delay:-4s;left:95vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(29){--size:0.5vw;--left-start:8vw;--left-end:4vw;animation:ᴴₒᴴₒᴴₒ 11s linear infinite both;animation-delay:-3s;left:42vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(30){--size:0.4vw;--left-start:-5vw;--left-end:0vw;animation:ᴴₒᴴₒᴴₒ 19s linear infinite both;animation-delay:-10s;left:8vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(31){--size:0.4vw;--left-start:-7vw;--left-end:3vw;animation:ᴴₒᴴₒᴴₒ 11s linear infinite both;animation-delay:-4s;left:77vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(32){--size:0.4vw;--left-start:8vw;--left-end:-5vw;animation:ᴴₒᴴₒᴴₒ 15s linear infinite 
both;animation-delay:-3s;left:80vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(33){--size:0.2vw;--left-start:-3vw;--left-end:8vw;animation:ᴴₒᴴₒᴴₒ 20s linear infinite both;animation-delay:-6s;left:15vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(34){--size:0.5vw;--left-start:5vw;--left-end:1vw;animation:ᴴₒᴴₒᴴₒ 13s linear infinite both;animation-delay:-1s;left:91vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(35){--size:0.3vw;--left-start:-6vw;--left-end:-5vw;animation:ᴴₒᴴₒᴴₒ 11s linear infinite both;animation-delay:-5s;left:93vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(36){--size:0.1vw;--left-start:10vw;--left-end:10vw;animation:ᴴₒᴴₒᴴₒ 13s linear infinite both;animation-delay:-10s;left:59vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(37){--size:0.3vw;--left-start:4vw;--left-end:6vw;animation:ᴴₒᴴₒᴴₒ 14s linear infinite both;animation-delay:-8s;left:35vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(38){--size:0.5vw;--left-start:8vw;--left-end:-3vw;animation:ᴴₒᴴₒᴴₒ 19s linear infinite both;animation-delay:-6s;left:6vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(39){--size:0.2vw;--left-start:-6vw;--left-end:-2vw;animation:ᴴₒᴴₒᴴₒ 14s linear infinite both;animation-delay:-7s;left:58vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(40){--size:0.4vw;--left-start:3vw;--left-end:-5vw;animation:ᴴₒᴴₒᴴₒ 13s linear infinite both;animation-delay:-4s;left:15vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(41){--size:0.1vw;--left-start:2vw;--left-end:-7vw;animation:ᴴₒᴴₒᴴₒ 17s linear infinite both;animation-delay:-7s;left:24vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(42){--size:0.3vw;--left-start:8vw;--left-end:3vw;animation:ᴴₒᴴₒᴴₒ 19s linear infinite both;animation-delay:-9s;left:36vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(43){--size:0.2vw;--left-start:-9vw;--left-end:-3vw;animation:ᴴₒᴴₒᴴₒ 13s linear infinite both;animation-delay:-10s;left:23vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(44){--size:0.1vw;--left-start:4vw;--left-end:-6vw;animation:ᴴₒᴴₒᴴₒ 16s linear infinite both;animation-delay:-6s;left:9vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(45){--size:0.1vw;--left-start:-3vw;--left-end:-5vw;animation:ᴴₒᴴₒᴴₒ 19s linear infinite both;animation-delay:-5s;left:62vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(46){--size:0.3vw;--left-start:0vw;--left-end:2vw;animation:ᴴₒᴴₒᴴₒ 20s linear infinite both;animation-delay:-4s;left:1vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(47){--size:0.4vw;--left-start:8vw;--left-end:-4vw;animation:ᴴₒᴴₒᴴₒ 14s linear infinite both;animation-delay:-1s;left:76vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(48){--size:0.2vw;--left-start:5vw;--left-end:-3vw;animation:ᴴₒᴴₒᴴₒ 15s linear infinite both;animation-delay:-5s;left:19vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(49){--size:0.4vw;--left-start:1vw;--left-end:-1vw;animation:ᴴₒᴴₒᴴₒ 18s linear infinite both;animation-delay:-4s;left:72vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(50){--size:0.4vw;--left-start:8vw;--left-end:-6vw;animation:ᴴₒᴴₒᴴₒ 16s linear infinite both;animation-delay:-10s;left:25vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(51){--size:0.1vw;--left-start:-5vw;--left-end:-8vw;animation:ᴴₒᴴₒᴴₒ 17s linear infinite both;animation-delay:-9s;left:71vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(52){--size:0.4vw;--left-start:-4vw;--left-end:9vw;animation:ᴴₒᴴₒᴴₒ 15s linear infinite both;animation-delay:-7s;left:30vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(53){--size:0.5vw;--left-start:-1vw;--left-end:-8vw;animation:ᴴₒᴴₒᴴₒ 15s linear infinite both;animation-delay:-4s;left:37vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(54){--size:0.4vw;--left-start:-1vw;--left-end:-1vw;animation:ᴴₒᴴₒᴴₒ 12s linear infinite 
both;animation-delay:-9s;left:48vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(55){--size:0.5vw;--left-start:8vw;--left-end:6vw;animation:ᴴₒᴴₒᴴₒ 20s linear infinite both;animation-delay:-6s;left:65vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(56){--size:0.4vw;--left-start:9vw;--left-end:5vw;animation:ᴴₒᴴₒᴴₒ 18s linear infinite both;animation-delay:-6s;left:53vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(57){--size:0.4vw;--left-start:3vw;--left-end:-9vw;animation:ᴴₒᴴₒᴴₒ 12s linear infinite both;animation-delay:-1s;left:76vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(58){--size:0.2vw;--left-start:-7vw;--left-end:0vw;animation:ᴴₒᴴₒᴴₒ 16s linear infinite both;animation-delay:-9s;left:54vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(59){--size:0.1vw;--left-start:-9vw;--left-end:-2vw;animation:ᴴₒᴴₒᴴₒ 20s linear infinite both;animation-delay:-1s;left:66vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(60){--size:0.3vw;--left-start:-6vw;--left-end:2vw;animation:ᴴₒᴴₒᴴₒ 11s linear infinite both;animation-delay:-7s;left:91vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(61){--size:0.4vw;--left-start:6vw;--left-end:-8vw;animation:ᴴₒᴴₒᴴₒ 14s linear infinite both;animation-delay:-7s;left:35vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(62){--size:0.4vw;--left-start:-6vw;--left-end:2vw;animation:ᴴₒᴴₒᴴₒ 16s linear infinite both;animation-delay:-3s;left:86vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(63){--size:0.5vw;--left-start:-7vw;--left-end:7vw;animation:ᴴₒᴴₒᴴₒ 20s linear infinite both;animation-delay:-5s;left:86vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(64){--size:0.2vw;--left-start:-9vw;--left-end:1vw;animation:ᴴₒᴴₒᴴₒ 13s linear infinite both;animation-delay:-5s;left:53vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(65){--size:0.2vw;--left-start:-2vw;--left-end:3vw;animation:ᴴₒᴴₒᴴₒ 11s linear infinite both;animation-delay:-6s;left:56vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(66){--size:0.5vw;--left-start:1vw;--left-end:8vw;animation:ᴴₒᴴₒᴴₒ 17s linear infinite both;animation-delay:-5s;left:58vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(67){--size:0.5vw;--left-start:2vw;--left-end:9vw;animation:ᴴₒᴴₒᴴₒ 15s linear infinite both;animation-delay:-5s;left:14vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(68){--size:0.3vw;--left-start:-1vw;--left-end:6vw;animation:ᴴₒᴴₒᴴₒ 14s linear infinite both;animation-delay:-1s;left:100vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(69){--size:0.2vw;--left-start:9vw;--left-end:-2vw;animation:ᴴₒᴴₒᴴₒ 15s linear infinite both;animation-delay:-7s;left:8vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(70){--size:0.4vw;--left-start:-5vw;--left-end:8vw;animation:ᴴₒᴴₒᴴₒ 11s linear infinite both;animation-delay:-4s;left:82vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(71){--size:0.4vw;--left-start:3vw;--left-end:-7vw;animation:ᴴₒᴴₒᴴₒ 13s linear infinite both;animation-delay:-6s;left:26vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(72){--size:0.2vw;--left-start:-2vw;--left-end:-3vw;animation:ᴴₒᴴₒᴴₒ 15s linear infinite both;animation-delay:-3s;left:24vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(73){--size:0.3vw;--left-start:-7vw;--left-end:-8vw;animation:ᴴₒᴴₒᴴₒ 16s linear infinite both;animation-delay:-2s;left:2vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(74){--size:0.4vw;--left-start:-9vw;--left-end:-3vw;animation:ᴴₒᴴₒᴴₒ 14s linear infinite both;animation-delay:-10s;left:94vw}.ᴴₒᴴₒᴴₒ:not(.ᴴₒᴴₒᴴₒ--gotcha):nth-child(75){--size:0.3vw;--left-start:7vw;--left-end:2vw;animation:ᴴₒᴴₒᴴₒ 17s linear infinite 
both;animation-delay:-2s;left:26vw}.ᴴₒᴴₒᴴₒ:nth-child(5n){filter:blur(2px)}.ᴴₒᴴₒᴴₒ--ᵍₒᵗ꜀ᴴₐ{opacity:0}.ᴴₒᴴₒᴴₒ__button{display:block}.ᴴₒᴴₒᴴₒ__button:after{background-color:currentcolor;content:"";display:block;height:24px;margin:0 auto;-webkit-mask-image:url('data:image/svg+xml;charset=utf-8,');mask-image:url('data:image/svg+xml;charset=utf-8,');-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:24px}.ᴴₒᴴₒᴴₒ__button[hidden]:after{-webkit-mask-image:url('data:image/svg+xml;charset=utf-8,');mask-image:url('data:image/svg+xml;charset=utf-8,')} \ No newline at end of file diff --git a/assets/stylesheets/extra.0d2c79a8.min.css.map b/assets/stylesheets/extra.0d2c79a8.min.css.map new file mode 100644 index 00000000..cd262c03 --- /dev/null +++ b/assets/stylesheets/extra.0d2c79a8.min.css.map @@ -0,0 +1 @@ +{"version":3,"sources":["src/assets/stylesheets/extra.scss","../../../src/assets/stylesheets/extra.scss"],"names":[],"mappings":"AA6BA,gBCpBA,CDoBA,kBACE,GACE,4CC1BF,CD4BA,GACE,8CC1BF,CACF,CDkCA,QACE,UAAA,CAOA,eAAA,CACA,qBAAA,CACA,iBAAA,CACA,cAAA,CAJA,kBAAA,CAMA,SAAA,CAVA,cAAA,CACA,QAAA,CAQA,qBAAA,CANA,iBAAA,CADA,UCzBF,CDqCI,yCACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SClCN,CD6BI,0CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC1BN,CDqBI,0CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SClBN,CDaI,0CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCVN,CDKI,0CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCFN,CDHI,0CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCMN,CDXI,0CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCcN,CDnBI,0CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsBN,CD3BI,0CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,QC8BN,CDnCI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,oBAAA,CAFA,SCsCN,CD3CI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,QC8CN,CDnDI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsDN,CD3DI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8DN,CDnEI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsEN,CD3EI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8EN,CDnFI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,UCsFN,CD3FI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8FN,CDnGI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsGN,CD3GI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8GN,CDnHI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsHN,CD3HI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8HN,CDnII,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsIN,CD3II,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,oBAAA,CAFA,SC8IN,CDnJI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsJN,CD3JI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8JN,CDnKI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsKN,CD3KI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8KN,CDnLI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsLN,CD3LI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8LN,CDnMI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,oBAAA,CAFA,QCsMN,CD3MI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8MN,CDnNI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsNN,CD3NI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAF
A,SC8NN,CDnOI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsON,CD3OI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8ON,CDnPI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,oBAAA,CAFA,SCsPN,CD3PI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8PN,CDnQI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,QCsQN,CD3QI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8QN,CDnRI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsRN,CD3RI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8RN,CDnSI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsSN,CD3SI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,oBAAA,CAFA,SC8SN,CDnTI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,QCsTN,CD3TI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8TN,CDnUI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,QCsUN,CD3UI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8UN,CDnVI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsVN,CD3VI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8VN,CDnWI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,oBAAA,CAFA,SCsWN,CD3WI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8WN,CDnXI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsXN,CD3XI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8XN,CDnYI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsYN,CD3YI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8YN,CDnZI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsZN,CD3ZI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8ZN,CDnaI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsaN,CD3aI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8aN,CDnbI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsbN,CD3bI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8bN,CDncI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCscN,CD3cI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8cN,CDndI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsdN,CD3dI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8dN,CDneI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCseN,CD3eI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8eN,CDnfI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,UCsfN,CD3fI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,QC8fN,CDngBI,2CACE,YAAA,CACA,iBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCsgBN,CD3gBI,2CACE,YAAA,CACA,gBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8gBN,CDnhBI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SCshBN,CD3hBI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,QC8hBN,CDniBI,2CACE,YAAA,CACA,iBAAA,CACA,eAAA,CAGA,yCAAA,CACA,oBAAA,CAFA,SCsiBN,CD3iBI,2CACE,YAAA,CACA,gBAAA,CACA,cAAA,CAGA,yCAAA,CACA,mBAAA,CAFA,SC8iBN,CDviBE,sBACE,gBCyiBJ,CDriBE,gBACE,SCuiBJ,CDniBE,gBACE,aCqiBJ,CDjiBE,sBAKE,6BAAA,CAKA,UAAA,CATA,aAAA,CAEA,WAAA,CACA,aAAA,CAEA,ooBAAA,CAAA,4nBAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CAPA,UC2iBJ,CD/hBE,8BACE,qqBAAA,CAAA,6pBCiiBJ","file":"extra.css"} \ No newline at end of file diff --git a/assets/stylesheets/main.975780f9.min.css b/assets/stylesheets/main.975780f9.min.css new file mode 100644 index 00000000..dac48ba7 --- /dev/null +++ b/assets/stylesheets/main.975780f9.min.css @@ -0,0 +1 @@ +@charset 
"UTF-8";html{-webkit-text-size-adjust:none;-moz-text-size-adjust:none;text-size-adjust:none;box-sizing:border-box}*,:after,:before{box-sizing:inherit}@media (prefers-reduced-motion){*,:after,:before{transition:none!important}}body{margin:0}a,button,input,label{-webkit-tap-highlight-color:transparent}a{color:inherit;text-decoration:none}hr{border:0;box-sizing:initial;display:block;height:.05rem;overflow:visible;padding:0}small{font-size:80%}sub,sup{line-height:1em}img{border-style:none}table{border-collapse:initial;border-spacing:0}td,th{font-weight:400;vertical-align:top}button{background:#0000;border:0;font-family:inherit;font-size:inherit;margin:0;padding:0}input{border:0;outline:none}:root{--md-primary-fg-color:#4051b5;--md-primary-fg-color--light:#5d6cc0;--md-primary-fg-color--dark:#303fa1;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3;--md-accent-fg-color:#526cfe;--md-accent-fg-color--transparent:#526cfe1a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}:root,[data-md-color-scheme=default]{--md-default-fg-color:#000000de;--md-default-fg-color--light:#0000008a;--md-default-fg-color--lighter:#00000052;--md-default-fg-color--lightest:#00000012;--md-default-bg-color:#fff;--md-default-bg-color--light:#ffffffb3;--md-default-bg-color--lighter:#ffffff4d;--md-default-bg-color--lightest:#ffffff1f;--md-code-fg-color:#36464e;--md-code-bg-color:#f5f5f5;--md-code-hl-color:#ffff0080;--md-code-hl-number-color:#d52a2a;--md-code-hl-special-color:#db1457;--md-code-hl-function-color:#a846b9;--md-code-hl-constant-color:#6e59d9;--md-code-hl-keyword-color:#3f6ec6;--md-code-hl-string-color:#1c7d4d;--md-code-hl-name-color:var(--md-code-fg-color);--md-code-hl-operator-color:var(--md-default-fg-color--light);--md-code-hl-punctuation-color:var(--md-default-fg-color--light);--md-code-hl-comment-color:var(--md-default-fg-color--light);--md-code-hl-generic-color:var(--md-default-fg-color--light);--md-code-hl-variable-color:var(--md-default-fg-color--light);--md-typeset-color:var(--md-default-fg-color);--md-typeset-a-color:var(--md-primary-fg-color);--md-typeset-mark-color:#ffff0080;--md-typeset-del-color:#f5503d26;--md-typeset-ins-color:#0bd57026;--md-typeset-kbd-color:#fafafa;--md-typeset-kbd-accent-color:#fff;--md-typeset-kbd-border-color:#b8b8b8;--md-typeset-table-color:#0000001f;--md-admonition-fg-color:var(--md-default-fg-color);--md-admonition-bg-color:var(--md-default-bg-color);--md-footer-fg-color:#fff;--md-footer-fg-color--light:#ffffffb3;--md-footer-fg-color--lighter:#ffffff4d;--md-footer-bg-color:#000000de;--md-footer-bg-color--dark:#00000052;--md-shadow-z1:0 0.2rem 0.5rem #0000000d,0 0 0.05rem #0000001a;--md-shadow-z2:0 0.2rem 0.5rem #0000001a,0 0 0.05rem #00000040;--md-shadow-z3:0 0.2rem 0.5rem #0003,0 0 0.05rem #00000059}.md-icon 
svg{fill:currentcolor;display:block;height:1.2rem;width:1.2rem}body{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;--md-text-font-family:var(--md-text-font,_),-apple-system,BlinkMacSystemFont,Helvetica,Arial,sans-serif;--md-code-font-family:var(--md-code-font,_),SFMono-Regular,Consolas,Menlo,monospace}body,input{font-feature-settings:"kern","liga";font-family:var(--md-text-font-family)}body,code,input,kbd,pre{color:var(--md-typeset-color)}code,kbd,pre{font-feature-settings:"kern";font-family:var(--md-code-font-family)}:root{--md-typeset-table-sort-icon:url('data:image/svg+xml;charset=utf-8,');--md-typeset-table-sort-icon--asc:url('data:image/svg+xml;charset=utf-8,');--md-typeset-table-sort-icon--desc:url('data:image/svg+xml;charset=utf-8,')}.md-typeset{-webkit-print-color-adjust:exact;color-adjust:exact;font-size:.8rem;line-height:1.6}@media print{.md-typeset{font-size:.68rem}}.md-typeset blockquote,.md-typeset dl,.md-typeset figure,.md-typeset ol,.md-typeset pre,.md-typeset ul{margin-bottom:1em;margin-top:1em}.md-typeset h1{color:var(--md-default-fg-color--light);font-size:2em;line-height:1.3;margin:0 0 1.25em}.md-typeset h1,.md-typeset h2{font-weight:300;letter-spacing:-.01em}.md-typeset h2{font-size:1.5625em;line-height:1.4;margin:1.6em 0 .64em}.md-typeset h3{font-size:1.25em;font-weight:400;letter-spacing:-.01em;line-height:1.5;margin:1.6em 0 .8em}.md-typeset h2+h3{margin-top:.8em}.md-typeset h4{font-weight:700;letter-spacing:-.01em;margin:1em 0}.md-typeset h5,.md-typeset h6{color:var(--md-default-fg-color--light);font-size:.8em;font-weight:700;letter-spacing:-.01em;margin:1.25em 0}.md-typeset h5{text-transform:uppercase}.md-typeset hr{border-bottom:.05rem solid var(--md-default-fg-color--lightest);display:flow-root;margin:1.5em 0}.md-typeset a{color:var(--md-typeset-a-color);word-break:break-word}.md-typeset a,.md-typeset a:before{transition:color 125ms}.md-typeset a:focus,.md-typeset a:hover{color:var(--md-accent-fg-color)}.md-typeset a:focus code,.md-typeset a:hover code{background-color:var(--md-accent-fg-color--transparent)}.md-typeset a code{color:currentcolor;transition:background-color 125ms}.md-typeset a.focus-visible{outline-color:var(--md-accent-fg-color);outline-offset:.2rem}.md-typeset code,.md-typeset kbd,.md-typeset pre{color:var(--md-code-fg-color);direction:ltr;font-variant-ligatures:none}@media print{.md-typeset code,.md-typeset kbd,.md-typeset pre{white-space:pre-wrap}}.md-typeset code{background-color:var(--md-code-bg-color);border-radius:.1rem;-webkit-box-decoration-break:clone;box-decoration-break:clone;font-size:.85em;padding:0 .2941176471em;word-break:break-word}.md-typeset code:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}.md-typeset pre{display:flow-root;line-height:1.4;position:relative}.md-typeset pre>code{-webkit-box-decoration-break:slice;box-decoration-break:slice;box-shadow:none;display:block;margin:0;outline-color:var(--md-accent-fg-color);overflow:auto;padding:.7720588235em 1.1764705882em;scrollbar-color:var(--md-default-fg-color--lighter) #0000;scrollbar-width:thin;touch-action:auto;word-break:normal}.md-typeset pre>code:hover{scrollbar-color:var(--md-accent-fg-color) #0000}.md-typeset pre>code::-webkit-scrollbar{height:.2rem;width:.2rem}.md-typeset pre>code::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-typeset pre>code::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}.md-typeset 
kbd{background-color:var(--md-typeset-kbd-color);border-radius:.1rem;box-shadow:0 .1rem 0 .05rem var(--md-typeset-kbd-border-color),0 .1rem 0 var(--md-typeset-kbd-border-color),0 -.1rem .2rem var(--md-typeset-kbd-accent-color) inset;color:var(--md-default-fg-color);display:inline-block;font-size:.75em;padding:0 .6666666667em;vertical-align:text-top;word-break:break-word}.md-typeset mark{background-color:var(--md-typeset-mark-color);-webkit-box-decoration-break:clone;box-decoration-break:clone;color:inherit;word-break:break-word}.md-typeset abbr{border-bottom:.05rem dotted var(--md-default-fg-color--light);cursor:help;text-decoration:none}@media (hover:none){.md-typeset abbr{position:relative}.md-typeset abbr[title]:-webkit-any(:focus,:hover):after{background-color:var(--md-default-fg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z3);color:var(--md-default-bg-color);content:attr(title);display:inline-block;font-size:.7rem;margin-top:2em;max-width:80%;min-width:-webkit-max-content;min-width:max-content;padding:.2rem .3rem;position:absolute;width:auto}.md-typeset abbr[title]:-moz-any(:focus,:hover):after{background-color:var(--md-default-fg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z3);color:var(--md-default-bg-color);content:attr(title);display:inline-block;font-size:.7rem;margin-top:2em;max-width:80%;min-width:-moz-max-content;min-width:max-content;padding:.2rem .3rem;position:absolute;width:auto}[dir=ltr] .md-typeset abbr[title]:-webkit-any(:focus,:hover):after{left:0}[dir=ltr] .md-typeset abbr[title]:-moz-any(:focus,:hover):after{left:0}[dir=ltr] .md-typeset abbr[title]:is(:focus,:hover):after{left:0}[dir=rtl] .md-typeset abbr[title]:-webkit-any(:focus,:hover):after{right:0}[dir=rtl] .md-typeset abbr[title]:-moz-any(:focus,:hover):after{right:0}[dir=rtl] .md-typeset abbr[title]:is(:focus,:hover):after{right:0}.md-typeset abbr[title]:is(:focus,:hover):after{background-color:var(--md-default-fg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z3);color:var(--md-default-bg-color);content:attr(title);display:inline-block;font-size:.7rem;margin-top:2em;max-width:80%;min-width:-webkit-max-content;min-width:-moz-max-content;min-width:max-content;padding:.2rem .3rem;position:absolute;width:auto}}.md-typeset small{opacity:.75}[dir=ltr] .md-typeset sub,[dir=ltr] .md-typeset sup{margin-left:.078125em}[dir=rtl] .md-typeset sub,[dir=rtl] .md-typeset sup{margin-right:.078125em}[dir=ltr] .md-typeset blockquote{padding-left:.6rem}[dir=rtl] .md-typeset blockquote{padding-right:.6rem}[dir=ltr] .md-typeset blockquote{border-left:.2rem solid var(--md-default-fg-color--lighter)}[dir=rtl] .md-typeset blockquote{border-right:.2rem solid var(--md-default-fg-color--lighter)}.md-typeset blockquote{color:var(--md-default-fg-color--light);margin-left:0;margin-right:0}.md-typeset ul{list-style-type:disc}[dir=ltr] .md-typeset ol,[dir=ltr] .md-typeset ul{margin-left:.625em}[dir=rtl] .md-typeset ol,[dir=rtl] .md-typeset ul{margin-right:.625em}.md-typeset ol,.md-typeset ul{padding:0}.md-typeset ol:not([hidden]),.md-typeset ul:not([hidden]){display:flow-root}.md-typeset ol ol,.md-typeset ul ol{list-style-type:lower-alpha}.md-typeset ol ol ol,.md-typeset ul ol ol{list-style-type:lower-roman}[dir=ltr] .md-typeset ol li,[dir=ltr] .md-typeset ul li{margin-left:1.25em}[dir=rtl] .md-typeset ol li,[dir=rtl] .md-typeset ul li{margin-right:1.25em}.md-typeset ol li,.md-typeset ul li{margin-bottom:.5em}.md-typeset ol li blockquote,.md-typeset ol li p,.md-typeset ul li blockquote,.md-typeset ul li 
p{margin:.5em 0}.md-typeset ol li:last-child,.md-typeset ul li:last-child{margin-bottom:0}.md-typeset ol li :-webkit-any(ul,ol),.md-typeset ul li :-webkit-any(ul,ol){margin-bottom:.5em;margin-top:.5em}.md-typeset ol li :-moz-any(ul,ol),.md-typeset ul li :-moz-any(ul,ol){margin-bottom:.5em;margin-top:.5em}[dir=ltr] .md-typeset ol li :-webkit-any(ul,ol),[dir=ltr] .md-typeset ul li :-webkit-any(ul,ol){margin-left:.625em}[dir=ltr] .md-typeset ol li :-moz-any(ul,ol),[dir=ltr] .md-typeset ul li :-moz-any(ul,ol){margin-left:.625em}[dir=ltr] .md-typeset ol li :is(ul,ol),[dir=ltr] .md-typeset ul li :is(ul,ol){margin-left:.625em}[dir=rtl] .md-typeset ol li :-webkit-any(ul,ol),[dir=rtl] .md-typeset ul li :-webkit-any(ul,ol){margin-right:.625em}[dir=rtl] .md-typeset ol li :-moz-any(ul,ol),[dir=rtl] .md-typeset ul li :-moz-any(ul,ol){margin-right:.625em}[dir=rtl] .md-typeset ol li :is(ul,ol),[dir=rtl] .md-typeset ul li :is(ul,ol){margin-right:.625em}.md-typeset ol li :is(ul,ol),.md-typeset ul li :is(ul,ol){margin-bottom:.5em;margin-top:.5em}[dir=ltr] .md-typeset dd{margin-left:1.875em}[dir=rtl] .md-typeset dd{margin-right:1.875em}.md-typeset dd{margin-bottom:1.5em;margin-top:1em}.md-typeset img,.md-typeset svg,.md-typeset video{height:auto;max-width:100%}.md-typeset img[align=left]{margin:1em 1em 1em 0}.md-typeset img[align=right]{margin:1em 0 1em 1em}.md-typeset img[align]:only-child{margin-top:0}.md-typeset img[src$="#gh-dark-mode-only"],.md-typeset img[src$="#only-dark"]{display:none}.md-typeset figure{display:flow-root;margin:1em auto;max-width:100%;text-align:center;width:-webkit-fit-content;width:-moz-fit-content;width:fit-content}.md-typeset figure img{display:block}.md-typeset figcaption{font-style:italic;margin:1em auto;max-width:24rem}.md-typeset iframe{max-width:100%}.md-typeset table:not([class]){background-color:var(--md-default-bg-color);border:.05rem solid var(--md-typeset-table-color);border-radius:.1rem;display:inline-block;font-size:.64rem;max-width:100%;overflow:auto;touch-action:auto}@media print{.md-typeset table:not([class]){display:table}}.md-typeset table:not([class])+*{margin-top:1.5em}.md-typeset table:not([class]) :-webkit-any(th,td)>:first-child{margin-top:0}.md-typeset table:not([class]) :-moz-any(th,td)>:first-child{margin-top:0}.md-typeset table:not([class]) :is(th,td)>:first-child{margin-top:0}.md-typeset table:not([class]) :-webkit-any(th,td)>:last-child{margin-bottom:0}.md-typeset table:not([class]) :-moz-any(th,td)>:last-child{margin-bottom:0}.md-typeset table:not([class]) :is(th,td)>:last-child{margin-bottom:0}.md-typeset table:not([class]) :-webkit-any(th,td):not([align]){text-align:left}.md-typeset table:not([class]) :-moz-any(th,td):not([align]){text-align:left}.md-typeset table:not([class]) :is(th,td):not([align]){text-align:left}[dir=rtl] .md-typeset table:not([class]) :-webkit-any(th,td):not([align]){text-align:right}[dir=rtl] .md-typeset table:not([class]) :-moz-any(th,td):not([align]){text-align:right}[dir=rtl] .md-typeset table:not([class]) :is(th,td):not([align]){text-align:right}.md-typeset table:not([class]) th{font-weight:700;min-width:5rem;padding:.9375em 1.25em;vertical-align:top}.md-typeset table:not([class]) td{border-top:.05rem solid var(--md-typeset-table-color);padding:.9375em 1.25em;vertical-align:top}.md-typeset table:not([class]) tbody tr{transition:background-color 125ms}.md-typeset table:not([class]) tbody tr:hover{background-color:rgba(0,0,0,.035);box-shadow:0 .05rem 0 var(--md-default-bg-color) inset}.md-typeset table:not([class]) 
a{word-break:normal}.md-typeset table th[role=columnheader]{cursor:pointer}[dir=ltr] .md-typeset table th[role=columnheader]:after{margin-left:.5em}[dir=rtl] .md-typeset table th[role=columnheader]:after{margin-right:.5em}.md-typeset table th[role=columnheader]:after{content:"";display:inline-block;height:1.2em;-webkit-mask-image:var(--md-typeset-table-sort-icon);mask-image:var(--md-typeset-table-sort-icon);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;transition:background-color 125ms;vertical-align:text-bottom;width:1.2em}.md-typeset table th[role=columnheader]:hover:after{background-color:var(--md-default-fg-color--lighter)}.md-typeset table th[role=columnheader][aria-sort=ascending]:after{background-color:var(--md-default-fg-color--light);-webkit-mask-image:var(--md-typeset-table-sort-icon--asc);mask-image:var(--md-typeset-table-sort-icon--asc)}.md-typeset table th[role=columnheader][aria-sort=descending]:after{background-color:var(--md-default-fg-color--light);-webkit-mask-image:var(--md-typeset-table-sort-icon--desc);mask-image:var(--md-typeset-table-sort-icon--desc)}.md-typeset__scrollwrap{margin:1em -.8rem;overflow-x:auto;touch-action:auto}.md-typeset__table{display:inline-block;margin-bottom:.5em;padding:0 .8rem}@media print{.md-typeset__table{display:block}}html .md-typeset__table table{display:table;margin:0;overflow:hidden;width:100%}@media screen and (max-width:44.9375em){.md-content__inner>pre{margin:1em -.8rem}.md-content__inner>pre code{border-radius:0}}.md-banner{background-color:var(--md-footer-bg-color);color:var(--md-footer-fg-color);overflow:auto}@media print{.md-banner{display:none}}.md-banner--warning{background:var(--md-typeset-mark-color);color:var(--md-default-fg-color)}.md-banner__inner{font-size:.7rem;margin:.6rem auto;padding:0 .8rem}[dir=ltr] .md-banner__button{float:right}[dir=rtl] .md-banner__button{float:left}.md-banner__button{color:inherit;cursor:pointer;transition:opacity .25s}.md-banner__button:hover{opacity:.7}html{font-size:125%;height:100%;overflow-x:hidden}@media screen and (min-width:100em){html{font-size:137.5%}}@media screen and (min-width:125em){html{font-size:150%}}body{background-color:var(--md-default-bg-color);display:flex;flex-direction:column;font-size:.5rem;min-height:100%;position:relative;width:100%}@media print{body{display:block}}@media screen and (max-width:59.9375em){body[data-md-scrolllock]{position:fixed}}.md-grid{margin-left:auto;margin-right:auto;max-width:61rem}.md-container{display:flex;flex-direction:column;flex-grow:1}@media print{.md-container{display:block}}.md-main{flex-grow:1}.md-main__inner{display:flex;height:100%;margin-top:1.5rem}.md-ellipsis{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.md-toggle{display:none}.md-option{height:0;opacity:0;position:absolute;width:0}.md-option:checked+label:not([hidden]){display:block}.md-option.focus-visible+label{outline-color:var(--md-accent-fg-color);outline-style:auto}.md-skip{background-color:var(--md-default-fg-color);border-radius:.1rem;color:var(--md-default-bg-color);font-size:.64rem;margin:.5rem;opacity:0;outline-color:var(--md-accent-fg-color);padding:.3rem .5rem;position:fixed;transform:translateY(.4rem);z-index:-1}.md-skip:focus{opacity:1;transform:translateY(0);transition:transform .25s cubic-bezier(.4,0,.2,1),opacity 175ms 
75ms;z-index:10}@page{margin:25mm}:root{--md-clipboard-icon:url('data:image/svg+xml;charset=utf-8,')}.md-clipboard{border-radius:.1rem;color:var(--md-default-fg-color--lightest);cursor:pointer;height:1.5em;outline-color:var(--md-accent-fg-color);outline-offset:.1rem;position:absolute;right:.5em;top:.5em;transition:color .25s;width:1.5em;z-index:1}@media print{.md-clipboard{display:none}}.md-clipboard:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}:hover>.md-clipboard{color:var(--md-default-fg-color--light)}.md-clipboard:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-clipboard:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-clipboard:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-clipboard:after{background-color:currentcolor;content:"";display:block;height:1.125em;margin:0 auto;-webkit-mask-image:var(--md-clipboard-icon);mask-image:var(--md-clipboard-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:1.125em}.md-clipboard--inline{cursor:pointer}.md-clipboard--inline code{transition:color .25s,background-color .25s}.md-clipboard--inline:-webkit-any(:focus,:hover) code{background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-clipboard--inline:-moz-any(:focus,:hover) code{background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-clipboard--inline:is(:focus,:hover) code{background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}@keyframes consent{0%{opacity:0;transform:translateY(100%)}to{opacity:1;transform:translateY(0)}}@keyframes overlay{0%{opacity:0}to{opacity:1}}.md-consent__overlay{animation:overlay .25s both;-webkit-backdrop-filter:blur(.1rem);backdrop-filter:blur(.1rem);background-color:#0000008a;height:100%;opacity:1;position:fixed;top:0;width:100%;z-index:5}.md-consent__inner{animation:consent .5s cubic-bezier(.1,.7,.1,1) both;background-color:var(--md-default-bg-color);border:0;border-radius:.1rem;bottom:0;box-shadow:0 0 .2rem #0000001a,0 .2rem .4rem #0003;max-height:100%;overflow:auto;padding:0;position:fixed;width:100%;z-index:5}.md-consent__form{padding:.8rem}.md-consent__settings{display:none;margin:1em 0}input:checked+.md-consent__settings{display:block}.md-consent__controls{margin-bottom:.8rem}.md-typeset .md-consent__controls .md-button{display:inline}@media screen and (max-width:44.9375em){.md-typeset .md-consent__controls .md-button{display:block;margin-top:.4rem;text-align:center;width:100%}}.md-consent label{cursor:pointer}.md-content{flex-grow:1;min-width:0}.md-content__inner{margin:0 .8rem 1.2rem;padding-top:.6rem}@media screen and (min-width:76.25em){[dir=ltr] .md-sidebar--primary:not([hidden])~.md-content>.md-content__inner{margin-left:1.2rem}[dir=ltr] .md-sidebar--secondary:not([hidden])~.md-content>.md-content__inner,[dir=rtl] .md-sidebar--primary:not([hidden])~.md-content>.md-content__inner{margin-right:1.2rem}[dir=rtl] .md-sidebar--secondary:not([hidden])~.md-content>.md-content__inner{margin-left:1.2rem}}.md-content__inner:before{content:"";display:block;height:.4rem}.md-content__inner>:last-child{margin-bottom:0}[dir=ltr] .md-content__button{float:right}[dir=rtl] .md-content__button{float:left}[dir=ltr] .md-content__button{margin-left:.4rem}[dir=rtl] .md-content__button{margin-right:.4rem}.md-content__button{margin:.4rem 0;padding:0}@media print{.md-content__button{display:none}}.md-typeset 
.md-content__button{color:var(--md-default-fg-color--lighter)}.md-content__button svg{display:inline;vertical-align:top}[dir=rtl] .md-content__button svg{transform:scaleX(-1)}[dir=ltr] .md-dialog{right:.8rem}[dir=rtl] .md-dialog{left:.8rem}.md-dialog{background-color:var(--md-default-fg-color);border-radius:.1rem;bottom:.8rem;box-shadow:var(--md-shadow-z3);min-width:11.1rem;opacity:0;padding:.4rem .6rem;pointer-events:none;position:fixed;transform:translateY(100%);transition:transform 0ms .4s,opacity .4s;z-index:4}@media print{.md-dialog{display:none}}.md-dialog--active{opacity:1;pointer-events:auto;transform:translateY(0);transition:transform .4s cubic-bezier(.075,.85,.175,1),opacity .4s}.md-dialog__inner{color:var(--md-default-bg-color);font-size:.7rem}.md-feedback{margin:2em 0 1em;text-align:center}.md-feedback fieldset{border:none;margin:0;padding:0}.md-feedback__title{font-weight:700;margin:1em auto}.md-feedback__inner{position:relative}.md-feedback__list{align-content:baseline;display:flex;flex-wrap:wrap;justify-content:center;position:relative}.md-feedback__list:hover .md-icon:not(:disabled){color:var(--md-default-fg-color--lighter)}:disabled .md-feedback__list{min-height:1.8rem}.md-feedback__icon{color:var(--md-default-fg-color--light);cursor:pointer;flex-shrink:0;margin:0 .1rem;transition:color 125ms}.md-feedback__icon:not(:disabled).md-icon:hover{color:var(--md-accent-fg-color)}.md-feedback__icon:disabled{color:var(--md-default-fg-color--lightest);pointer-events:none}.md-feedback__note{opacity:0;position:relative;transform:translateY(.4rem);transition:transform .4s cubic-bezier(.1,.7,.1,1),opacity .15s}.md-feedback__note>*{margin:0 auto;max-width:16rem}:disabled .md-feedback__note{opacity:1;transform:translateY(0)}.md-footer{background-color:var(--md-footer-bg-color);color:var(--md-footer-fg-color)}@media print{.md-footer{display:none}}.md-footer__inner{justify-content:space-between;overflow:auto;padding:.2rem}.md-footer__inner:not([hidden]){display:flex}.md-footer__link{display:flex;flex-grow:0.01;outline-color:var(--md-accent-fg-color);overflow:hidden;padding-bottom:.4rem;padding-top:1.4rem;transition:opacity .25s}.md-footer__link:-webkit-any(:focus,:hover){opacity:.7}.md-footer__link:-moz-any(:focus,:hover){opacity:.7}.md-footer__link:is(:focus,:hover){opacity:.7}[dir=rtl] .md-footer__link svg{transform:scaleX(-1)}@media screen and (max-width:44.9375em){.md-footer__link--prev .md-footer__title{display:none}}[dir=ltr] .md-footer__link--next{margin-left:auto}[dir=rtl] .md-footer__link--next{margin-right:auto}.md-footer__link--next{text-align:right}[dir=rtl] .md-footer__link--next{text-align:left}.md-footer__title{flex-grow:1;font-size:.9rem;line-height:2.4rem;max-width:calc(100% - 2.4rem);padding:0 1rem;position:relative;white-space:nowrap}.md-footer__button{margin:.2rem;padding:.4rem}.md-footer__direction{font-size:.64rem;left:0;margin-top:-1rem;opacity:.7;padding:0 1rem;position:absolute;right:0}.md-footer-meta{background-color:var(--md-footer-bg-color--dark)}.md-footer-meta__inner{display:flex;flex-wrap:wrap;justify-content:space-between;padding:.2rem}html .md-footer-meta.md-typeset a{color:var(--md-footer-fg-color--light)}html .md-footer-meta.md-typeset a:-webkit-any(:focus,:hover){color:var(--md-footer-fg-color)}html .md-footer-meta.md-typeset a:-moz-any(:focus,:hover){color:var(--md-footer-fg-color)}html .md-footer-meta.md-typeset a:is(:focus,:hover){color:var(--md-footer-fg-color)}.md-copyright{color:var(--md-footer-fg-color--lighter);font-size:.64rem;margin:auto 
.6rem;padding:.4rem 0;width:100%}@media screen and (min-width:45em){.md-copyright{width:auto}}.md-copyright__highlight{color:var(--md-footer-fg-color--light)}.md-social{margin:0 .4rem;padding:.2rem 0 .6rem}@media screen and (min-width:45em){.md-social{padding:.6rem 0}}.md-social__link{display:inline-block;height:1.6rem;text-align:center;width:1.6rem}.md-social__link:before{line-height:1.9}.md-social__link svg{fill:currentcolor;max-height:.8rem;vertical-align:-25%}.md-typeset .md-button{border:.1rem solid;border-radius:.1rem;color:var(--md-primary-fg-color);cursor:pointer;display:inline-block;font-weight:700;padding:.625em 2em;transition:color 125ms,background-color 125ms,border-color 125ms}.md-typeset .md-button--primary{background-color:var(--md-primary-fg-color);border-color:var(--md-primary-fg-color);color:var(--md-primary-bg-color)}.md-typeset .md-button:-webkit-any(:focus,:hover){background-color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-typeset .md-button:-moz-any(:focus,:hover){background-color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-typeset .md-button:is(:focus,:hover){background-color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}[dir=ltr] .md-typeset .md-input{border-top-left-radius:.1rem}[dir=ltr] .md-typeset .md-input,[dir=rtl] .md-typeset .md-input{border-top-right-radius:.1rem}[dir=rtl] .md-typeset .md-input{border-top-left-radius:.1rem}.md-typeset .md-input{border-bottom:.1rem solid var(--md-default-fg-color--lighter);box-shadow:var(--md-shadow-z1);font-size:.8rem;height:1.8rem;padding:0 .6rem;transition:border .25s,box-shadow .25s}.md-typeset .md-input:-webkit-any(:focus,:hover){border-bottom-color:var(--md-accent-fg-color);box-shadow:var(--md-shadow-z2)}.md-typeset .md-input:-moz-any(:focus,:hover){border-bottom-color:var(--md-accent-fg-color);box-shadow:var(--md-shadow-z2)}.md-typeset .md-input:is(:focus,:hover){border-bottom-color:var(--md-accent-fg-color);box-shadow:var(--md-shadow-z2)}.md-typeset .md-input--stretch{width:100%}.md-header{background-color:var(--md-primary-fg-color);box-shadow:0 0 .2rem #0000,0 .2rem .4rem #0000;color:var(--md-primary-bg-color);display:block;left:0;position:-webkit-sticky;position:sticky;right:0;top:0;z-index:4}@media print{.md-header{display:none}}.md-header[hidden]{transform:translateY(-100%);transition:transform .25s cubic-bezier(.8,0,.6,1),box-shadow .25s}.md-header--shadow{box-shadow:0 0 .2rem #0000001a,0 .2rem .4rem #0003;transition:transform .25s cubic-bezier(.1,.7,.1,1),box-shadow .25s}.md-header__inner{align-items:center;display:flex;padding:0 .2rem}.md-header__button{color:currentcolor;cursor:pointer;margin:.2rem;outline-color:var(--md-accent-fg-color);padding:.4rem;position:relative;transition:opacity .25s;vertical-align:middle;z-index:1}.md-header__button:hover{opacity:.7}.md-header__button:not([hidden]){display:inline-block}.md-header__button:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}.md-header__button.md-logo{margin:.2rem;padding:.4rem}@media screen and (max-width:76.1875em){.md-header__button.md-logo{display:none}}.md-header__button.md-logo :-webkit-any(img,svg){fill:currentcolor;display:block;height:1.2rem;width:auto}.md-header__button.md-logo :-moz-any(img,svg){fill:currentcolor;display:block;height:1.2rem;width:auto}.md-header__button.md-logo :is(img,svg){fill:currentcolor;display:block;height:1.2rem;width:auto}@media 
screen and (min-width:60em){.md-header__button[for=__search]{display:none}}.no-js .md-header__button[for=__search]{display:none}[dir=rtl] .md-header__button[for=__search] svg{transform:scaleX(-1)}@media screen and (min-width:76.25em){.md-header__button[for=__drawer]{display:none}}.md-header__topic{display:flex;max-width:100%;position:absolute;transition:transform .4s cubic-bezier(.1,.7,.1,1),opacity .15s;white-space:nowrap}.md-header__topic+.md-header__topic{opacity:0;pointer-events:none;transform:translateX(1.25rem);transition:transform .4s cubic-bezier(1,.7,.1,.1),opacity .15s;z-index:-1}[dir=rtl] .md-header__topic+.md-header__topic{transform:translateX(-1.25rem)}.md-header__topic:first-child{font-weight:700}[dir=ltr] .md-header__title{margin-right:.4rem}[dir=rtl] .md-header__title{margin-left:.4rem}[dir=ltr] .md-header__title{margin-left:1rem}[dir=rtl] .md-header__title{margin-right:1rem}.md-header__title{flex-grow:1;font-size:.9rem;height:2.4rem;line-height:2.4rem}.md-header__title--active .md-header__topic{opacity:0;pointer-events:none;transform:translateX(-1.25rem);transition:transform .4s cubic-bezier(1,.7,.1,.1),opacity .15s;z-index:-1}[dir=rtl] .md-header__title--active .md-header__topic{transform:translateX(1.25rem)}.md-header__title--active .md-header__topic+.md-header__topic{opacity:1;pointer-events:auto;transform:translateX(0);transition:transform .4s cubic-bezier(.1,.7,.1,1),opacity .15s;z-index:0}.md-header__title>.md-header__ellipsis{height:100%;position:relative;width:100%}.md-header__option{display:flex;flex-shrink:0;max-width:100%;transition:max-width 0ms .25s,opacity .25s .25s;white-space:nowrap}[data-md-toggle=search]:checked~.md-header .md-header__option{max-width:0;opacity:0;transition:max-width 0ms,opacity 0ms}.md-header__source{display:none}@media screen and (min-width:60em){[dir=ltr] .md-header__source{margin-left:1rem}[dir=rtl] .md-header__source{margin-right:1rem}.md-header__source{display:block;max-width:11.7rem;width:11.7rem}}@media screen and (min-width:76.25em){[dir=ltr] .md-header__source{margin-left:1.4rem}[dir=rtl] .md-header__source{margin-right:1.4rem}}:root{--md-nav-icon--prev:url('data:image/svg+xml;charset=utf-8,');--md-nav-icon--next:url('data:image/svg+xml;charset=utf-8,');--md-toc-icon:url('data:image/svg+xml;charset=utf-8,')}.md-nav{font-size:.7rem;line-height:1.3}.md-nav__title{display:block;font-weight:700;overflow:hidden;padding:0 .6rem;text-overflow:ellipsis}.md-nav__title .md-nav__button{display:none}.md-nav__title .md-nav__button img{height:100%;width:auto}.md-nav__title .md-nav__button.md-logo :-webkit-any(img,svg){fill:currentcolor;display:block;height:2.4rem;max-width:100%;object-fit:contain;width:auto}.md-nav__title .md-nav__button.md-logo :-moz-any(img,svg){fill:currentcolor;display:block;height:2.4rem;max-width:100%;object-fit:contain;width:auto}.md-nav__title .md-nav__button.md-logo :is(img,svg){fill:currentcolor;display:block;height:2.4rem;max-width:100%;object-fit:contain;width:auto}.md-nav__list{list-style:none;margin:0;padding:0}.md-nav__item{padding:0 .6rem}[dir=ltr] .md-nav__item .md-nav__item{padding-right:0}[dir=rtl] .md-nav__item .md-nav__item{padding-left:0}.md-nav__link{align-items:center;cursor:pointer;display:flex;justify-content:space-between;margin-top:.625em;overflow:hidden;scroll-snap-align:start;text-overflow:ellipsis;transition:color 125ms}.md-nav__link--passed{color:var(--md-default-fg-color--light)}.md-nav__item .md-nav__link--active{color:var(--md-typeset-a-color)}.md-nav__item .md-nav__link--index 
[href]{width:100%}.md-nav__link:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav__link:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav__link:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav__link.focus-visible{outline-color:var(--md-accent-fg-color);outline-offset:.2rem}.md-nav--primary .md-nav__link[for=__toc]{display:none}.md-nav--primary .md-nav__link[for=__toc] .md-icon:after{background-color:currentcolor;display:block;height:100%;-webkit-mask-image:var(--md-toc-icon);mask-image:var(--md-toc-icon);width:100%}.md-nav--primary .md-nav__link[for=__toc]~.md-nav{display:none}.md-nav__link>*{cursor:pointer;display:flex}.md-nav__icon{flex-shrink:0}.md-nav__source{display:none}@media screen and (max-width:76.1875em){.md-nav--primary,.md-nav--primary .md-nav{background-color:var(--md-default-bg-color);display:flex;flex-direction:column;height:100%;left:0;position:absolute;right:0;top:0;z-index:1}.md-nav--primary :-webkit-any(.md-nav__title,.md-nav__item){font-size:.8rem;line-height:1.5}.md-nav--primary :-moz-any(.md-nav__title,.md-nav__item){font-size:.8rem;line-height:1.5}.md-nav--primary :is(.md-nav__title,.md-nav__item){font-size:.8rem;line-height:1.5}.md-nav--primary .md-nav__title{background-color:var(--md-default-fg-color--lightest);color:var(--md-default-fg-color--light);cursor:pointer;height:5.6rem;line-height:2.4rem;padding:3rem .8rem .2rem;position:relative;white-space:nowrap}[dir=ltr] .md-nav--primary .md-nav__title .md-nav__icon{left:.4rem}[dir=rtl] .md-nav--primary .md-nav__title .md-nav__icon{right:.4rem}.md-nav--primary .md-nav__title .md-nav__icon{display:block;height:1.2rem;margin:.2rem;position:absolute;top:.4rem;width:1.2rem}.md-nav--primary .md-nav__title .md-nav__icon:after{background-color:currentcolor;content:"";display:block;height:100%;-webkit-mask-image:var(--md-nav-icon--prev);mask-image:var(--md-nav-icon--prev);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:100%}.md-nav--primary .md-nav__title~.md-nav__list{background-color:var(--md-default-bg-color);box-shadow:0 .05rem 0 var(--md-default-fg-color--lightest) inset;overflow-y:auto;scroll-snap-type:y mandatory;touch-action:pan-y}.md-nav--primary .md-nav__title~.md-nav__list>:first-child{border-top:0}.md-nav--primary .md-nav__title[for=__drawer]{background-color:var(--md-primary-fg-color);color:var(--md-primary-bg-color);font-weight:700}.md-nav--primary .md-nav__title .md-logo{display:block;left:.2rem;margin:.2rem;padding:.4rem;position:absolute;right:.2rem;top:.2rem}.md-nav--primary .md-nav__list{flex:1}.md-nav--primary .md-nav__item{border-top:.05rem solid var(--md-default-fg-color--lightest);padding:0}.md-nav--primary .md-nav__item--active>.md-nav__link{color:var(--md-typeset-a-color)}.md-nav--primary .md-nav__item--active>.md-nav__link:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav--primary .md-nav__item--active>.md-nav__link:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav--primary .md-nav__item--active>.md-nav__link:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav--primary .md-nav__link{margin-top:0;padding:.6rem .8rem}[dir=ltr] .md-nav--primary .md-nav__link .md-nav__icon{margin-right:-.2rem}[dir=rtl] .md-nav--primary .md-nav__link .md-nav__icon{margin-left:-.2rem}.md-nav--primary .md-nav__link .md-nav__icon{font-size:1.2rem;height:1.2rem;width:1.2rem}.md-nav--primary .md-nav__link 
.md-nav__icon:after{background-color:currentcolor;content:"";display:block;height:100%;-webkit-mask-image:var(--md-nav-icon--next);mask-image:var(--md-nav-icon--next);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:100%}[dir=rtl] .md-nav--primary .md-nav__icon:after{transform:scale(-1)}.md-nav--primary .md-nav--secondary .md-nav{background-color:initial;position:static}[dir=ltr] .md-nav--primary .md-nav--secondary .md-nav .md-nav__link{padding-left:1.4rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav__link{padding-right:1.4rem}[dir=ltr] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav__link{padding-left:2rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav__link{padding-right:2rem}[dir=ltr] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav__link{padding-left:2.6rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav__link{padding-right:2.6rem}[dir=ltr] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav .md-nav__link{padding-left:3.2rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav .md-nav__link{padding-right:3.2rem}.md-nav--secondary{background-color:initial}.md-nav__toggle~.md-nav{display:flex;opacity:0;transform:translateX(100%);transition:transform .25s cubic-bezier(.8,0,.6,1),opacity 125ms 50ms}[dir=rtl] .md-nav__toggle~.md-nav{transform:translateX(-100%)}.md-nav__toggle:checked~.md-nav{opacity:1;transform:translateX(0);transition:transform .25s cubic-bezier(.4,0,.2,1),opacity 125ms 125ms}.md-nav__toggle:checked~.md-nav>.md-nav__list{-webkit-backface-visibility:hidden;backface-visibility:hidden}}@media screen and (max-width:59.9375em){.md-nav--primary .md-nav__link[for=__toc]{display:flex}.md-nav--primary .md-nav__link[for=__toc] .md-icon:after{content:""}.md-nav--primary .md-nav__link[for=__toc]+.md-nav__link{display:none}.md-nav--primary .md-nav__link[for=__toc]~.md-nav{display:flex}.md-nav__source{background-color:var(--md-primary-fg-color--dark);color:var(--md-primary-bg-color);display:block;padding:0 .2rem}}@media screen and (min-width:60em) and (max-width:76.1875em){.md-nav--integrated .md-nav__link[for=__toc]{display:flex}.md-nav--integrated .md-nav__link[for=__toc] .md-icon:after{content:""}.md-nav--integrated .md-nav__link[for=__toc]+.md-nav__link{display:none}.md-nav--integrated .md-nav__link[for=__toc]~.md-nav{display:flex}}@media screen and (min-width:60em){.md-nav--secondary .md-nav__title{background:var(--md-default-bg-color);box-shadow:0 0 .4rem .4rem var(--md-default-bg-color);position:-webkit-sticky;position:sticky;top:0;z-index:1}.md-nav--secondary .md-nav__title[for=__toc]{scroll-snap-align:start}.md-nav--secondary .md-nav__title .md-nav__icon{display:none}}@media screen and (min-width:76.25em){.md-nav{transition:max-height .25s cubic-bezier(.86,0,.07,1)}.md-nav--primary .md-nav__title{background:var(--md-default-bg-color);box-shadow:0 0 .4rem .4rem var(--md-default-bg-color);position:-webkit-sticky;position:sticky;top:0;z-index:1}.md-nav--primary .md-nav__title[for=__drawer]{scroll-snap-align:start}.md-nav--primary .md-nav__title 
.md-nav__icon,.md-nav__toggle~.md-nav{display:none}.md-nav__toggle:-webkit-any(:checked,:indeterminate)~.md-nav{display:block}.md-nav__toggle:-moz-any(:checked,:indeterminate)~.md-nav{display:block}.md-nav__toggle:is(:checked,:indeterminate)~.md-nav{display:block}.md-nav__item--nested>.md-nav>.md-nav__title{display:none}.md-nav__item--section{display:block;margin:1.25em 0}.md-nav__item--section:last-child{margin-bottom:0}.md-nav__item--section>.md-nav__link{font-weight:700;pointer-events:none}.md-nav__item--section>.md-nav__link--index [href]{pointer-events:auto}.md-nav__item--section>.md-nav__link .md-nav__icon{display:none}.md-nav__item--section>.md-nav{display:block}.md-nav__item--section>.md-nav>.md-nav__list>.md-nav__item{padding:0}.md-nav__icon{border-radius:100%;height:.9rem;transition:background-color .25s,transform .25s;width:.9rem}[dir=rtl] .md-nav__icon{transform:rotate(180deg)}.md-nav__icon:hover{background-color:var(--md-accent-fg-color--transparent)}.md-nav__icon:after{background-color:currentcolor;content:"";display:inline-block;height:100%;-webkit-mask-image:var(--md-nav-icon--next);mask-image:var(--md-nav-icon--next);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;vertical-align:-.1rem;width:100%}.md-nav__item--nested .md-nav__toggle:checked~.md-nav__link .md-nav__icon,.md-nav__item--nested .md-nav__toggle:indeterminate~.md-nav__link .md-nav__icon{transform:rotate(90deg)}.md-nav--lifted>.md-nav__list>.md-nav__item,.md-nav--lifted>.md-nav__list>.md-nav__item--nested,.md-nav--lifted>.md-nav__title{display:none}.md-nav--lifted>.md-nav__list>.md-nav__item--active{display:block;padding:0}.md-nav--lifted>.md-nav__list>.md-nav__item--active>.md-nav__link{background:var(--md-default-bg-color);box-shadow:0 0 .4rem .4rem var(--md-default-bg-color);font-weight:700;margin-top:0;padding:0 .6rem;position:-webkit-sticky;position:sticky;top:0;z-index:1}.md-nav--lifted>.md-nav__list>.md-nav__item--active>.md-nav__link:not(.md-nav__link--index){pointer-events:none}.md-nav--lifted>.md-nav__list>.md-nav__item--active>.md-nav__link .md-nav__icon{display:none}.md-nav--lifted .md-nav[data-md-level="1"]{display:block}[dir=ltr] .md-nav--lifted .md-nav[data-md-level="1"]>.md-nav__list>.md-nav__item{padding-right:.6rem}[dir=rtl] .md-nav--lifted .md-nav[data-md-level="1"]>.md-nav__list>.md-nav__item{padding-left:.6rem}.md-nav--integrated>.md-nav__list>.md-nav__item--active:not(.md-nav__item--nested){padding:0 .6rem}.md-nav--integrated>.md-nav__list>.md-nav__item--active:not(.md-nav__item--nested)>.md-nav__link{padding:0}[dir=ltr] .md-nav--integrated>.md-nav__list>.md-nav__item--active .md-nav--secondary{border-left:.05rem solid var(--md-primary-fg-color)}[dir=rtl] .md-nav--integrated>.md-nav__list>.md-nav__item--active .md-nav--secondary{border-right:.05rem solid var(--md-primary-fg-color)}.md-nav--integrated>.md-nav__list>.md-nav__item--active .md-nav--secondary{display:block;margin-bottom:1.25em}.md-nav--integrated>.md-nav__list>.md-nav__item--active .md-nav--secondary>.md-nav__title{display:none}}:root{--md-search-result-icon:url('data:image/svg+xml;charset=utf-8,')}.md-search{position:relative}@media screen and (min-width:60em){.md-search{padding:.2rem 0}}.no-js .md-search{display:none}.md-search__overlay{opacity:0;z-index:1}@media screen and (max-width:59.9375em){[dir=ltr] .md-search__overlay{left:-2.2rem}[dir=rtl] 
.md-search__overlay{right:-2.2rem}.md-search__overlay{background-color:var(--md-default-bg-color);border-radius:1rem;height:2rem;overflow:hidden;pointer-events:none;position:absolute;top:-1rem;transform-origin:center;transition:transform .3s .1s,opacity .2s .2s;width:2rem}[data-md-toggle=search]:checked~.md-header .md-search__overlay{opacity:1;transition:transform .4s,opacity .1s}}@media screen and (min-width:60em){[dir=ltr] .md-search__overlay{left:0}[dir=rtl] .md-search__overlay{right:0}.md-search__overlay{background-color:#0000008a;cursor:pointer;height:0;position:fixed;top:0;transition:width 0ms .25s,height 0ms .25s,opacity .25s;width:0}[data-md-toggle=search]:checked~.md-header .md-search__overlay{height:200vh;opacity:1;transition:width 0ms,height 0ms,opacity .25s;width:100%}}@media screen and (max-width:29.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(45)}}@media screen and (min-width:30em) and (max-width:44.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(60)}}@media screen and (min-width:45em) and (max-width:59.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(75)}}.md-search__inner{-webkit-backface-visibility:hidden;backface-visibility:hidden}@media screen and (max-width:59.9375em){[dir=ltr] .md-search__inner{left:0}[dir=rtl] .md-search__inner{right:0}.md-search__inner{height:0;opacity:0;overflow:hidden;position:fixed;top:0;transform:translateX(5%);transition:width 0ms .3s,height 0ms .3s,transform .15s cubic-bezier(.4,0,.2,1) .15s,opacity .15s .15s;width:0;z-index:2}[dir=rtl] .md-search__inner{transform:translateX(-5%)}[data-md-toggle=search]:checked~.md-header .md-search__inner{height:100%;opacity:1;transform:translateX(0);transition:width 0ms 0ms,height 0ms 0ms,transform .15s cubic-bezier(.1,.7,.1,1) .15s,opacity .15s .15s;width:100%}}@media screen and (min-width:60em){[dir=ltr] .md-search__inner{float:right}[dir=rtl] .md-search__inner{float:left}.md-search__inner{padding:.1rem 0;position:relative;transition:width .25s cubic-bezier(.1,.7,.1,1);width:11.7rem}}@media screen and (min-width:60em) and (max-width:76.1875em){[data-md-toggle=search]:checked~.md-header .md-search__inner{width:23.4rem}}@media screen and (min-width:76.25em){[data-md-toggle=search]:checked~.md-header .md-search__inner{width:34.4rem}}.md-search__form{background-color:var(--md-default-bg-color);box-shadow:0 0 .6rem #0000;height:2.4rem;position:relative;transition:color .25s,background-color .25s;z-index:2}@media screen and (min-width:60em){.md-search__form{background-color:#00000042;border-radius:.1rem;height:1.8rem}.md-search__form:hover{background-color:#ffffff1f}}[data-md-toggle=search]:checked~.md-header .md-search__form{background-color:var(--md-default-bg-color);border-radius:.1rem .1rem 0 0;box-shadow:0 0 .6rem #00000012;color:var(--md-default-fg-color)}[dir=ltr] .md-search__input{padding-left:3.6rem;padding-right:2.2rem}[dir=rtl] .md-search__input{padding-left:2.2rem;padding-right:3.6rem}.md-search__input{background:#0000;font-size:.9rem;height:100%;position:relative;text-overflow:ellipsis;width:100%;z-index:2}.md-search__input::placeholder{transition:color .25s}.md-search__input::placeholder,.md-search__input~.md-search__icon{color:var(--md-default-fg-color--light)}.md-search__input::-ms-clear{display:none}@media screen and (max-width:59.9375em){.md-search__input{font-size:.9rem;height:2.4rem;width:100%}}@media screen and (min-width:60em){[dir=ltr] 
.md-search__input{padding-left:2.2rem}[dir=rtl] .md-search__input{padding-right:2.2rem}.md-search__input{color:inherit;font-size:.8rem}.md-search__input::placeholder{color:var(--md-primary-bg-color--light)}.md-search__input+.md-search__icon{color:var(--md-primary-bg-color)}[data-md-toggle=search]:checked~.md-header .md-search__input{text-overflow:clip}[data-md-toggle=search]:checked~.md-header .md-search__input+.md-search__icon,[data-md-toggle=search]:checked~.md-header .md-search__input::placeholder{color:var(--md-default-fg-color--light)}}.md-search__icon{cursor:pointer;display:inline-block;height:1.2rem;transition:color .25s,opacity .25s;width:1.2rem}.md-search__icon:hover{opacity:.7}[dir=ltr] .md-search__icon[for=__search]{left:.5rem}[dir=rtl] .md-search__icon[for=__search]{right:.5rem}.md-search__icon[for=__search]{position:absolute;top:.3rem;z-index:2}[dir=rtl] .md-search__icon[for=__search] svg{transform:scaleX(-1)}@media screen and (max-width:59.9375em){[dir=ltr] .md-search__icon[for=__search]{left:.8rem}[dir=rtl] .md-search__icon[for=__search]{right:.8rem}.md-search__icon[for=__search]{top:.6rem}.md-search__icon[for=__search] svg:first-child{display:none}}@media screen and (min-width:60em){.md-search__icon[for=__search]{pointer-events:none}.md-search__icon[for=__search] svg:last-child{display:none}}[dir=ltr] .md-search__options{right:.5rem}[dir=rtl] .md-search__options{left:.5rem}.md-search__options{pointer-events:none;position:absolute;top:.3rem;z-index:2}@media screen and (max-width:59.9375em){[dir=ltr] .md-search__options{right:.8rem}[dir=rtl] .md-search__options{left:.8rem}.md-search__options{top:.6rem}}[dir=ltr] .md-search__options>*{margin-left:.2rem}[dir=rtl] .md-search__options>*{margin-right:.2rem}.md-search__options>*{color:var(--md-default-fg-color--light);opacity:0;transform:scale(.75);transition:transform .15s cubic-bezier(.1,.7,.1,1),opacity .15s}.md-search__options>:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}[data-md-toggle=search]:checked~.md-header .md-search__input:valid~.md-search__options>*{opacity:1;pointer-events:auto;transform:scale(1)}[data-md-toggle=search]:checked~.md-header .md-search__input:valid~.md-search__options>:hover{opacity:.7}[dir=ltr] .md-search__suggest{padding-left:3.6rem;padding-right:2.2rem}[dir=rtl] .md-search__suggest{padding-left:2.2rem;padding-right:3.6rem}.md-search__suggest{align-items:center;color:var(--md-default-fg-color--lighter);display:flex;font-size:.9rem;height:100%;opacity:0;position:absolute;top:0;transition:opacity 50ms;white-space:nowrap;width:100%}@media screen and (min-width:60em){[dir=ltr] .md-search__suggest{padding-left:2.2rem}[dir=rtl] .md-search__suggest{padding-right:2.2rem}.md-search__suggest{font-size:.8rem}}[data-md-toggle=search]:checked~.md-header .md-search__suggest{opacity:1;transition:opacity .3s .1s}[dir=ltr] .md-search__output{border-bottom-left-radius:.1rem}[dir=ltr] .md-search__output,[dir=rtl] .md-search__output{border-bottom-right-radius:.1rem}[dir=rtl] .md-search__output{border-bottom-left-radius:.1rem}.md-search__output{overflow:hidden;position:absolute;width:100%;z-index:1}@media screen and (max-width:59.9375em){.md-search__output{bottom:0;top:2.4rem}}@media screen and (min-width:60em){.md-search__output{opacity:0;top:1.9rem;transition:opacity .4s}[data-md-toggle=search]:checked~.md-header 
.md-search__output{box-shadow:var(--md-shadow-z3);opacity:1}}.md-search__scrollwrap{-webkit-backface-visibility:hidden;backface-visibility:hidden;background-color:var(--md-default-bg-color);height:100%;overflow-y:auto;touch-action:pan-y}@media (-webkit-max-device-pixel-ratio:1),(max-resolution:1dppx){.md-search__scrollwrap{transform:translateZ(0)}}@media screen and (min-width:60em) and (max-width:76.1875em){.md-search__scrollwrap{width:23.4rem}}@media screen and (min-width:76.25em){.md-search__scrollwrap{width:34.4rem}}@media screen and (min-width:60em){.md-search__scrollwrap{max-height:0;scrollbar-color:var(--md-default-fg-color--lighter) #0000;scrollbar-width:thin}[data-md-toggle=search]:checked~.md-header .md-search__scrollwrap{max-height:75vh}.md-search__scrollwrap:hover{scrollbar-color:var(--md-accent-fg-color) #0000}.md-search__scrollwrap::-webkit-scrollbar{height:.2rem;width:.2rem}.md-search__scrollwrap::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-search__scrollwrap::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}}.md-search-result{color:var(--md-default-fg-color);word-break:break-word}.md-search-result__meta{background-color:var(--md-default-fg-color--lightest);color:var(--md-default-fg-color--light);font-size:.64rem;line-height:1.8rem;padding:0 .8rem;scroll-snap-align:start}@media screen and (min-width:60em){[dir=ltr] .md-search-result__meta{padding-left:2.2rem}[dir=rtl] .md-search-result__meta{padding-right:2.2rem}}.md-search-result__list{list-style:none;margin:0;padding:0;-webkit-user-select:none;-moz-user-select:none;user-select:none}.md-search-result__item{box-shadow:0 -.05rem var(--md-default-fg-color--lightest)}.md-search-result__item:first-child{box-shadow:none}.md-search-result__link{display:block;outline:none;scroll-snap-align:start;transition:background-color .25s}.md-search-result__link:-webkit-any(:focus,:hover){background-color:var(--md-accent-fg-color--transparent)}.md-search-result__link:-moz-any(:focus,:hover){background-color:var(--md-accent-fg-color--transparent)}.md-search-result__link:is(:focus,:hover){background-color:var(--md-accent-fg-color--transparent)}.md-search-result__link:last-child p:last-child{margin-bottom:.6rem}.md-search-result__more summary{color:var(--md-typeset-a-color);cursor:pointer;display:block;font-size:.64rem;outline:none;padding:.75em .8rem;scroll-snap-align:start;transition:color .25s,background-color .25s}@media screen and (min-width:60em){[dir=ltr] .md-search-result__more summary{padding-left:2.2rem}[dir=rtl] .md-search-result__more summary{padding-right:2.2rem}}.md-search-result__more summary:-webkit-any(:focus,:hover){background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-search-result__more summary:-moz-any(:focus,:hover){background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-search-result__more summary:is(:focus,:hover){background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-search-result__more summary::marker{display:none}.md-search-result__more summary::-webkit-details-marker{display:none}.md-search-result__more summary~*>*{opacity:.65}.md-search-result__article{overflow:hidden;padding:0 .8rem;position:relative}@media screen and (min-width:60em){[dir=ltr] .md-search-result__article{padding-left:2.2rem}[dir=rtl] .md-search-result__article{padding-right:2.2rem}}.md-search-result__article--document 
.md-search-result__title{font-size:.8rem;font-weight:400;line-height:1.4;margin:.55rem 0}[dir=ltr] .md-search-result__icon{left:0}[dir=rtl] .md-search-result__icon{right:0}.md-search-result__icon{color:var(--md-default-fg-color--light);height:1.2rem;margin:.5rem;position:absolute;width:1.2rem}@media screen and (max-width:59.9375em){.md-search-result__icon{display:none}}.md-search-result__icon:after{background-color:currentcolor;content:"";display:inline-block;height:100%;-webkit-mask-image:var(--md-search-result-icon);mask-image:var(--md-search-result-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:100%}[dir=rtl] .md-search-result__icon:after{transform:scaleX(-1)}.md-search-result__title{font-size:.64rem;font-weight:700;line-height:1.6;margin:.5em 0}.md-search-result__teaser{-webkit-box-orient:vertical;-webkit-line-clamp:2;color:var(--md-default-fg-color--light);display:-webkit-box;font-size:.64rem;line-height:1.6;margin:.5em 0;max-height:2rem;overflow:hidden;text-overflow:ellipsis}@media screen and (max-width:44.9375em){.md-search-result__teaser{-webkit-line-clamp:3;max-height:3rem}}@media screen and (min-width:60em) and (max-width:76.1875em){.md-search-result__teaser{-webkit-line-clamp:3;max-height:3rem}}.md-search-result__teaser mark{background-color:initial;text-decoration:underline}.md-search-result__terms{font-size:.64rem;font-style:italic;margin:.5em 0}.md-search-result mark{background-color:initial;color:var(--md-accent-fg-color)}.md-select{position:relative;z-index:1}.md-select__inner{background-color:var(--md-default-bg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z2);color:var(--md-default-fg-color);left:50%;margin-top:.2rem;max-height:0;opacity:0;position:absolute;top:calc(100% - .2rem);transform:translate3d(-50%,.3rem,0);transition:transform .25s 375ms,opacity .25s .25s,max-height 0ms .5s}.md-select:-webkit-any(:focus-within,:hover) .md-select__inner{max-height:10rem;opacity:1;transform:translate3d(-50%,0,0);-webkit-transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms;transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms}.md-select:-moz-any(:focus-within,:hover) .md-select__inner{max-height:10rem;opacity:1;transform:translate3d(-50%,0,0);-moz-transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms;transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms}.md-select:is(:focus-within,:hover) .md-select__inner{max-height:10rem;opacity:1;transform:translate3d(-50%,0,0);transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms}.md-select__inner:after{border-bottom:.2rem solid #0000;border-bottom-color:var(--md-default-bg-color);border-left:.2rem solid #0000;border-right:.2rem solid #0000;border-top:0;content:"";height:0;left:50%;margin-left:-.2rem;margin-top:-.2rem;position:absolute;top:0;width:0}.md-select__list{border-radius:.1rem;font-size:.8rem;list-style-type:none;margin:0;max-height:inherit;overflow:auto;padding:0}.md-select__item{line-height:1.8rem}[dir=ltr] .md-select__link{padding-left:.6rem;padding-right:1.2rem}[dir=rtl] .md-select__link{padding-left:1.2rem;padding-right:.6rem}.md-select__link{cursor:pointer;display:block;outline:none;scroll-snap-align:start;transition:background-color .25s,color 
.25s;width:100%}.md-select__link:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-select__link:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-select__link:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-select__link:focus{background-color:var(--md-default-fg-color--lightest)}.md-sidebar{align-self:flex-start;flex-shrink:0;padding:1.2rem 0;position:-webkit-sticky;position:sticky;top:2.4rem;width:12.1rem}@media print{.md-sidebar{display:none}}@media screen and (max-width:76.1875em){[dir=ltr] .md-sidebar--primary{left:-12.1rem}[dir=rtl] .md-sidebar--primary{right:-12.1rem}.md-sidebar--primary{background-color:var(--md-default-bg-color);display:block;height:100%;position:fixed;top:0;transform:translateX(0);transition:transform .25s cubic-bezier(.4,0,.2,1),box-shadow .25s;width:12.1rem;z-index:5}[data-md-toggle=drawer]:checked~.md-container .md-sidebar--primary{box-shadow:var(--md-shadow-z3);transform:translateX(12.1rem)}[dir=rtl] [data-md-toggle=drawer]:checked~.md-container .md-sidebar--primary{transform:translateX(-12.1rem)}.md-sidebar--primary .md-sidebar__scrollwrap{bottom:0;left:0;margin:0;overflow:hidden;position:absolute;right:0;scroll-snap-type:none;top:0}}@media screen and (min-width:76.25em){.md-sidebar{height:0}.no-js .md-sidebar{height:auto}.md-header--lifted~.md-container .md-sidebar{top:4.8rem}}.md-sidebar--secondary{display:none;order:2}@media screen and (min-width:60em){.md-sidebar--secondary{height:0}.no-js .md-sidebar--secondary{height:auto}.md-sidebar--secondary:not([hidden]){display:block}.md-sidebar--secondary .md-sidebar__scrollwrap{touch-action:pan-y}}.md-sidebar__scrollwrap{scrollbar-gutter:stable;-webkit-backface-visibility:hidden;backface-visibility:hidden;margin:0 .2rem;overflow-y:auto;scrollbar-color:var(--md-default-fg-color--lighter) #0000;scrollbar-width:thin}.md-sidebar__scrollwrap:hover{scrollbar-color:var(--md-accent-fg-color) #0000}.md-sidebar__scrollwrap::-webkit-scrollbar{height:.2rem;width:.2rem}.md-sidebar__scrollwrap::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-sidebar__scrollwrap::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}@supports selector(::-webkit-scrollbar){.md-sidebar__scrollwrap{scrollbar-gutter:auto}[dir=ltr] .md-sidebar__inner{padding-right:calc(100% - 11.5rem)}[dir=rtl] .md-sidebar__inner{padding-left:calc(100% - 11.5rem)}}@media screen and (max-width:76.1875em){.md-overlay{background-color:#0000008a;height:0;opacity:0;position:fixed;top:0;transition:width 0ms .25s,height 0ms .25s,opacity .25s;width:0;z-index:5}[data-md-toggle=drawer]:checked~.md-overlay{height:100%;opacity:1;transition:width 0ms,height 0ms,opacity .25s;width:100%}}@keyframes facts{0%{height:0}to{height:.65rem}}@keyframes fact{0%{opacity:0;transform:translateY(100%)}50%{opacity:0}to{opacity:1;transform:translateY(0)}}:root{--md-source-forks-icon:url('data:image/svg+xml;charset=utf-8,');--md-source-repositories-icon:url('data:image/svg+xml;charset=utf-8,');--md-source-stars-icon:url('data:image/svg+xml;charset=utf-8,');--md-source-version-icon:url('data:image/svg+xml;charset=utf-8,')}.md-source{-webkit-backface-visibility:hidden;backface-visibility:hidden;display:block;font-size:.65rem;line-height:1.2;outline-color:var(--md-accent-fg-color);transition:opacity .25s;white-space:nowrap}.md-source:hover{opacity:.7}.md-source__icon{display:inline-block;height:2.4rem;vertical-align:middle;width:2rem}[dir=ltr] .md-source__icon svg{margin-left:.6rem}[dir=rtl] .md-source__icon 
svg{margin-right:.6rem}.md-source__icon svg{margin-top:.6rem}[dir=ltr] .md-source__icon+.md-source__repository{margin-left:-2rem}[dir=rtl] .md-source__icon+.md-source__repository{margin-right:-2rem}[dir=ltr] .md-source__icon+.md-source__repository{padding-left:2rem}[dir=rtl] .md-source__icon+.md-source__repository{padding-right:2rem}[dir=ltr] .md-source__repository{margin-left:.6rem}[dir=rtl] .md-source__repository{margin-right:.6rem}.md-source__repository{display:inline-block;max-width:calc(100% - 1.2rem);overflow:hidden;text-overflow:ellipsis;vertical-align:middle}.md-source__facts{display:flex;font-size:.55rem;gap:.4rem;list-style-type:none;margin:.1rem 0 0;opacity:.75;overflow:hidden;padding:0;width:100%}.md-source__repository--active .md-source__facts{animation:facts .25s ease-in}.md-source__fact{overflow:hidden;text-overflow:ellipsis}.md-source__repository--active .md-source__fact{animation:fact .4s ease-out}[dir=ltr] .md-source__fact:before{margin-right:.1rem}[dir=rtl] .md-source__fact:before{margin-left:.1rem}.md-source__fact:before{background-color:currentcolor;content:"";display:inline-block;height:.6rem;-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;vertical-align:text-top;width:.6rem}.md-source__fact:nth-child(1n+2){flex-shrink:0}.md-source__fact--version:before{-webkit-mask-image:var(--md-source-version-icon);mask-image:var(--md-source-version-icon)}.md-source__fact--stars:before{-webkit-mask-image:var(--md-source-stars-icon);mask-image:var(--md-source-stars-icon)}.md-source__fact--forks:before{-webkit-mask-image:var(--md-source-forks-icon);mask-image:var(--md-source-forks-icon)}.md-source__fact--repositories:before{-webkit-mask-image:var(--md-source-repositories-icon);mask-image:var(--md-source-repositories-icon)}.md-tabs{background-color:var(--md-primary-fg-color);color:var(--md-primary-bg-color);display:block;line-height:1.3;overflow:auto;width:100%;z-index:3}@media print{.md-tabs{display:none}}@media screen and (max-width:76.1875em){.md-tabs{display:none}}.md-tabs[hidden]{pointer-events:none}[dir=ltr] .md-tabs__list{margin-left:.2rem}[dir=rtl] .md-tabs__list{margin-right:.2rem}.md-tabs__list{contain:content;list-style:none;margin:0;padding:0;white-space:nowrap}.md-tabs__item{display:inline-block;height:2.4rem;padding-left:.6rem;padding-right:.6rem}.md-tabs__link{-webkit-backface-visibility:hidden;backface-visibility:hidden;display:block;font-size:.7rem;margin-top:.8rem;opacity:.7;outline-color:var(--md-accent-fg-color);outline-offset:.2rem;transition:transform .4s cubic-bezier(.1,.7,.1,1),opacity .25s}.md-tabs__link--active,.md-tabs__link:-webkit-any(:focus,:hover){color:inherit;opacity:1}.md-tabs__link--active,.md-tabs__link:-moz-any(:focus,:hover){color:inherit;opacity:1}.md-tabs__link--active,.md-tabs__link:is(:focus,:hover){color:inherit;opacity:1}.md-tabs__item:nth-child(2) .md-tabs__link{transition-delay:20ms}.md-tabs__item:nth-child(3) .md-tabs__link{transition-delay:40ms}.md-tabs__item:nth-child(4) .md-tabs__link{transition-delay:60ms}.md-tabs__item:nth-child(5) .md-tabs__link{transition-delay:80ms}.md-tabs__item:nth-child(6) .md-tabs__link{transition-delay:.1s}.md-tabs__item:nth-child(7) .md-tabs__link{transition-delay:.12s}.md-tabs__item:nth-child(8) .md-tabs__link{transition-delay:.14s}.md-tabs__item:nth-child(9) .md-tabs__link{transition-delay:.16s}.md-tabs__item:nth-child(10) .md-tabs__link{transition-delay:.18s}.md-tabs__item:nth-child(11) 
.md-tabs__link{transition-delay:.2s}.md-tabs__item:nth-child(12) .md-tabs__link{transition-delay:.22s}.md-tabs__item:nth-child(13) .md-tabs__link{transition-delay:.24s}.md-tabs__item:nth-child(14) .md-tabs__link{transition-delay:.26s}.md-tabs__item:nth-child(15) .md-tabs__link{transition-delay:.28s}.md-tabs__item:nth-child(16) .md-tabs__link{transition-delay:.3s}.md-tabs[hidden] .md-tabs__link{opacity:0;transform:translateY(50%);transition:transform 0ms .1s,opacity .1s}:root{--md-tag-icon:url('data:image/svg+xml;charset=utf-8,')}.md-typeset .md-tags{margin-bottom:.75em;margin-top:-.125em}[dir=ltr] .md-typeset .md-tag{margin-right:.5em}[dir=rtl] .md-typeset .md-tag{margin-left:.5em}.md-typeset .md-tag{background:var(--md-default-fg-color--lightest);border-radius:2.4rem;display:inline-block;font-size:.64rem;font-weight:700;letter-spacing:normal;line-height:1.6;margin-bottom:.5em;padding:.3125em .9375em;vertical-align:middle}.md-typeset .md-tag[href]{-webkit-tap-highlight-color:transparent;color:inherit;outline:none;transition:color 125ms,background-color 125ms}.md-typeset .md-tag[href]:focus,.md-typeset .md-tag[href]:hover{background-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}[id]>.md-typeset .md-tag{vertical-align:text-top}.md-typeset .md-tag-icon:before{background-color:var(--md-default-fg-color--lighter);content:"";display:inline-block;height:1.2em;margin-right:.4em;-webkit-mask-image:var(--md-tag-icon);mask-image:var(--md-tag-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;transition:background-color 125ms;vertical-align:text-bottom;width:1.2em}.md-typeset .md-tag-icon:-webkit-any(a:focus,a:hover):before{background-color:var(--md-accent-bg-color)}.md-typeset .md-tag-icon:-moz-any(a:focus,a:hover):before{background-color:var(--md-accent-bg-color)}.md-typeset .md-tag-icon:is(a:focus,a:hover):before{background-color:var(--md-accent-bg-color)}@keyframes pulse{0%{box-shadow:0 0 0 0 var(--md-default-fg-color--lightest);transform:scale(.95)}75%{box-shadow:0 0 0 .625em #0000;transform:scale(1)}to{box-shadow:0 0 0 0 #0000;transform:scale(.95)}}:root{--md-tooltip-width:20rem}.md-tooltip{-webkit-backface-visibility:hidden;backface-visibility:hidden;background-color:var(--md-default-bg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z2);color:var(--md-default-fg-color);font-family:var(--md-text-font-family);left:clamp(var(--md-tooltip-0,0rem) + .8rem,var(--md-tooltip-x),100vw + var(--md-tooltip-0,0rem) + .8rem - var(--md-tooltip-width) - 2 * .8rem);max-width:calc(100vw - 1.6rem);opacity:0;position:absolute;top:var(--md-tooltip-y);transform:translateY(-.4rem);transition:transform 0ms .25s,opacity .25s,z-index .25s;width:var(--md-tooltip-width);z-index:0}.md-tooltip--active{opacity:1;transform:translateY(0);transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,z-index 0ms;z-index:2}:-webkit-any(.focus-visible>.md-tooltip,.md-tooltip:target){outline:var(--md-accent-fg-color) auto}:-moz-any(.focus-visible>.md-tooltip,.md-tooltip:target){outline:var(--md-accent-fg-color) auto}:is(.focus-visible>.md-tooltip,.md-tooltip:target){outline:var(--md-accent-fg-color) auto}.md-tooltip__inner{font-size:.64rem;padding:.8rem}.md-tooltip__inner.md-typeset>:first-child{margin-top:0}.md-tooltip__inner.md-typeset>:last-child{margin-bottom:0}.md-annotation{font-weight:400;outline:none;white-space:normal}[dir=rtl] 
.md-annotation{direction:rtl}.md-annotation:not([hidden]){display:inline-block;line-height:1.325}.md-annotation__index{cursor:pointer;font-family:var(--md-code-font-family);font-size:.85em;margin:0 1ch;outline:none;position:relative;-webkit-user-select:none;-moz-user-select:none;user-select:none;z-index:0}.md-annotation .md-annotation__index{color:#fff;transition:z-index .25s}.md-annotation .md-annotation__index:-webkit-any(:focus,:hover){color:#fff}.md-annotation .md-annotation__index:-moz-any(:focus,:hover){color:#fff}.md-annotation .md-annotation__index:is(:focus,:hover){color:#fff}.md-annotation__index:after{background-color:var(--md-default-fg-color--lighter);border-radius:2ch;content:"";height:2.2ch;left:-.125em;margin:0 -.4ch;padding:0 .4ch;position:absolute;top:0;transition:color .25s,background-color .25s;width:calc(100% + 1.2ch);width:max(2.2ch,100% + 1.2ch);z-index:-1}@media not all and (prefers-reduced-motion){[data-md-visible]>.md-annotation__index:after{animation:pulse 2s infinite}}.md-tooltip--active+.md-annotation__index:after{animation:none;transition:color .25s,background-color .25s}code .md-annotation__index{font-family:var(--md-code-font-family);font-size:inherit}:-webkit-any(.md-tooltip--active+.md-annotation__index,:hover>.md-annotation__index){color:var(--md-accent-bg-color)}:-moz-any(.md-tooltip--active+.md-annotation__index,:hover>.md-annotation__index){color:var(--md-accent-bg-color)}:is(.md-tooltip--active+.md-annotation__index,:hover>.md-annotation__index){color:var(--md-accent-bg-color)}:-webkit-any(.md-tooltip--active+.md-annotation__index,:hover>.md-annotation__index):after{background-color:var(--md-accent-fg-color)}:-moz-any(.md-tooltip--active+.md-annotation__index,:hover>.md-annotation__index):after{background-color:var(--md-accent-fg-color)}:is(.md-tooltip--active+.md-annotation__index,:hover>.md-annotation__index):after{background-color:var(--md-accent-fg-color)}.md-tooltip--active+.md-annotation__index{animation:none;transition:none;z-index:2}.md-annotation__index [data-md-annotation-id]{display:inline-block;line-height:90%}.md-annotation__index [data-md-annotation-id]:before{content:attr(data-md-annotation-id);display:inline-block;padding-bottom:.1em;transform:scale(1.15);transition:transform .4s cubic-bezier(.1,.7,.1,1);vertical-align:.065em}@media not print{.md-annotation__index [data-md-annotation-id]:before{content:"+"}:focus-within>.md-annotation__index [data-md-annotation-id]:before{transform:scale(1.25) rotate(45deg)}}[dir=ltr] .md-top{margin-left:50%}[dir=rtl] .md-top{margin-right:50%}.md-top{background-color:var(--md-default-bg-color);border-radius:1.6rem;box-shadow:var(--md-shadow-z2);color:var(--md-default-fg-color--light);display:block;font-size:.7rem;outline:none;padding:.4rem .8rem;position:fixed;top:3.2rem;transform:translate(-50%);transition:color 125ms,background-color 125ms,transform 125ms cubic-bezier(.4,0,.2,1),opacity 125ms;z-index:2}@media print{.md-top{display:none}}[dir=rtl] .md-top{transform:translate(50%)}.md-top[hidden]{opacity:0;pointer-events:none;transform:translate(-50%,.2rem);transition-duration:0ms}[dir=rtl] .md-top[hidden]{transform:translate(50%,.2rem)}.md-top:-webkit-any(:focus,:hover){background-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-top:-moz-any(:focus,:hover){background-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-top:is(:focus,:hover){background-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-top 
svg{display:inline-block;vertical-align:-.5em}@keyframes hoverfix{0%{pointer-events:none}}:root{--md-version-icon:url('data:image/svg+xml;charset=utf-8,')}.md-version{flex-shrink:0;font-size:.8rem;height:2.4rem}[dir=ltr] .md-version__current{margin-left:1.4rem;margin-right:.4rem}[dir=rtl] .md-version__current{margin-left:.4rem;margin-right:1.4rem}.md-version__current{color:inherit;cursor:pointer;outline:none;position:relative;top:.05rem}[dir=ltr] .md-version__current:after{margin-left:.4rem}[dir=rtl] .md-version__current:after{margin-right:.4rem}.md-version__current:after{background-color:currentcolor;content:"";display:inline-block;height:.6rem;-webkit-mask-image:var(--md-version-icon);mask-image:var(--md-version-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:.4rem}.md-version__list{background-color:var(--md-default-bg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z2);color:var(--md-default-fg-color);list-style-type:none;margin:.2rem .8rem;max-height:0;opacity:0;overflow:auto;padding:0;position:absolute;scroll-snap-type:y mandatory;top:.15rem;transition:max-height 0ms .5s,opacity .25s .25s;z-index:3}.md-version:-webkit-any(:focus-within,:hover) .md-version__list{max-height:10rem;opacity:1;-webkit-transition:max-height 0ms,opacity .25s;transition:max-height 0ms,opacity .25s}.md-version:-moz-any(:focus-within,:hover) .md-version__list{max-height:10rem;opacity:1;-moz-transition:max-height 0ms,opacity .25s;transition:max-height 0ms,opacity .25s}.md-version:is(:focus-within,:hover) .md-version__list{max-height:10rem;opacity:1;transition:max-height 0ms,opacity .25s}@media (pointer:coarse){.md-version:hover .md-version__list{animation:hoverfix .25s forwards}.md-version:focus-within .md-version__list{animation:none}}.md-version__item{line-height:1.8rem}[dir=ltr] .md-version__link{padding-left:.6rem;padding-right:1.2rem}[dir=rtl] .md-version__link{padding-left:1.2rem;padding-right:.6rem}.md-version__link{cursor:pointer;display:block;outline:none;scroll-snap-align:start;transition:color .25s,background-color .25s;white-space:nowrap;width:100%}.md-version__link:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-version__link:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-version__link:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-version__link:focus{background-color:var(--md-default-fg-color--lightest)}:root{--md-admonition-icon--note:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--abstract:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--info:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--tip:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--success:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--question:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--warning:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--failure:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--danger:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--bug:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--example:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--quote:url('data:image/svg+xml;charset=utf-8,')}.md-typeset .admonition,.md-typeset details{background-color:var(--md-admonition-bg-color);border:.05rem solid 
#448aff;border-radius:.2rem;box-shadow:var(--md-shadow-z1);color:var(--md-admonition-fg-color);display:flow-root;font-size:.64rem;margin:1.5625em 0;padding:0 .6rem;page-break-inside:avoid}@media print{.md-typeset .admonition,.md-typeset details{box-shadow:none}}.md-typeset .admonition>*,.md-typeset details>*{box-sizing:border-box}.md-typeset .admonition :-webkit-any(.admonition,details),.md-typeset details :-webkit-any(.admonition,details){margin-bottom:1em;margin-top:1em}.md-typeset .admonition :-moz-any(.admonition,details),.md-typeset details :-moz-any(.admonition,details){margin-bottom:1em;margin-top:1em}.md-typeset .admonition :is(.admonition,details),.md-typeset details :is(.admonition,details){margin-bottom:1em;margin-top:1em}.md-typeset .admonition .md-typeset__scrollwrap,.md-typeset details .md-typeset__scrollwrap{margin:1em -.6rem}.md-typeset .admonition .md-typeset__table,.md-typeset details .md-typeset__table{padding:0 .6rem}.md-typeset .admonition>.tabbed-set:only-child,.md-typeset details>.tabbed-set:only-child{margin-top:0}html .md-typeset .admonition>:last-child,html .md-typeset details>:last-child{margin-bottom:.6rem}[dir=ltr] .md-typeset .admonition-title,[dir=ltr] .md-typeset summary{padding-left:2rem;padding-right:.6rem}[dir=rtl] .md-typeset .admonition-title,[dir=rtl] .md-typeset summary{padding-left:.6rem;padding-right:2rem}[dir=ltr] .md-typeset .admonition-title,[dir=ltr] .md-typeset summary{border-left-width:.2rem}[dir=rtl] .md-typeset .admonition-title,[dir=rtl] .md-typeset summary{border-right-width:.2rem}[dir=ltr] .md-typeset .admonition-title,[dir=ltr] .md-typeset summary{border-top-left-radius:.1rem}[dir=ltr] .md-typeset .admonition-title,[dir=ltr] .md-typeset summary,[dir=rtl] .md-typeset .admonition-title,[dir=rtl] .md-typeset summary{border-top-right-radius:.1rem}[dir=rtl] .md-typeset .admonition-title,[dir=rtl] .md-typeset summary{border-top-left-radius:.1rem}.md-typeset .admonition-title,.md-typeset summary{background-color:#448aff1a;border:none;font-weight:700;margin:0 -.6rem;padding-bottom:.4rem;padding-top:.4rem;position:relative}html .md-typeset .admonition-title:last-child,html .md-typeset summary:last-child{margin-bottom:0}[dir=ltr] .md-typeset .admonition-title:before,[dir=ltr] .md-typeset summary:before{left:.6rem}[dir=rtl] .md-typeset .admonition-title:before,[dir=rtl] .md-typeset summary:before{right:.6rem}.md-typeset .admonition-title:before,.md-typeset summary:before{background-color:#448aff;content:"";height:1rem;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;position:absolute;top:.625em;width:1rem}.md-typeset .admonition-title code,.md-typeset summary code{box-shadow:0 0 0 .05rem var(--md-default-fg-color--lightest)}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.note){border-color:#448aff}.md-typeset :-moz-any(.admonition,details):-moz-any(.note){border-color:#448aff}.md-typeset :is(.admonition,details):is(.note){border-color:#448aff}.md-typeset :-webkit-any(.note)>:-webkit-any(.admonition-title,summary){background-color:#448aff1a}.md-typeset :-moz-any(.note)>:-moz-any(.admonition-title,summary){background-color:#448aff1a}.md-typeset :is(.note)>:is(.admonition-title,summary){background-color:#448aff1a}.md-typeset 
:-webkit-any(.note)>:-webkit-any(.admonition-title,summary):before{background-color:#448aff;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note)}.md-typeset :-moz-any(.note)>:-moz-any(.admonition-title,summary):before{background-color:#448aff;mask-image:var(--md-admonition-icon--note)}.md-typeset :is(.note)>:is(.admonition-title,summary):before{background-color:#448aff;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note)}.md-typeset :-webkit-any(.note)>:-webkit-any(.admonition-title,summary):after{color:#448aff}.md-typeset :-moz-any(.note)>:-moz-any(.admonition-title,summary):after{color:#448aff}.md-typeset :is(.note)>:is(.admonition-title,summary):after{color:#448aff}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.abstract,.summary,.tldr){border-color:#00b0ff}.md-typeset :-moz-any(.admonition,details):-moz-any(.abstract,.summary,.tldr){border-color:#00b0ff}.md-typeset :is(.admonition,details):is(.abstract,.summary,.tldr){border-color:#00b0ff}.md-typeset :-webkit-any(.abstract,.summary,.tldr)>:-webkit-any(.admonition-title,summary){background-color:#00b0ff1a}.md-typeset :-moz-any(.abstract,.summary,.tldr)>:-moz-any(.admonition-title,summary){background-color:#00b0ff1a}.md-typeset :is(.abstract,.summary,.tldr)>:is(.admonition-title,summary){background-color:#00b0ff1a}.md-typeset :-webkit-any(.abstract,.summary,.tldr)>:-webkit-any(.admonition-title,summary):before{background-color:#00b0ff;-webkit-mask-image:var(--md-admonition-icon--abstract);mask-image:var(--md-admonition-icon--abstract)}.md-typeset :-moz-any(.abstract,.summary,.tldr)>:-moz-any(.admonition-title,summary):before{background-color:#00b0ff;mask-image:var(--md-admonition-icon--abstract)}.md-typeset :is(.abstract,.summary,.tldr)>:is(.admonition-title,summary):before{background-color:#00b0ff;-webkit-mask-image:var(--md-admonition-icon--abstract);mask-image:var(--md-admonition-icon--abstract)}.md-typeset :-webkit-any(.abstract,.summary,.tldr)>:-webkit-any(.admonition-title,summary):after{color:#00b0ff}.md-typeset :-moz-any(.abstract,.summary,.tldr)>:-moz-any(.admonition-title,summary):after{color:#00b0ff}.md-typeset :is(.abstract,.summary,.tldr)>:is(.admonition-title,summary):after{color:#00b0ff}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.info,.todo){border-color:#00b8d4}.md-typeset :-moz-any(.admonition,details):-moz-any(.info,.todo){border-color:#00b8d4}.md-typeset :is(.admonition,details):is(.info,.todo){border-color:#00b8d4}.md-typeset :-webkit-any(.info,.todo)>:-webkit-any(.admonition-title,summary){background-color:#00b8d41a}.md-typeset :-moz-any(.info,.todo)>:-moz-any(.admonition-title,summary){background-color:#00b8d41a}.md-typeset :is(.info,.todo)>:is(.admonition-title,summary){background-color:#00b8d41a}.md-typeset :-webkit-any(.info,.todo)>:-webkit-any(.admonition-title,summary):before{background-color:#00b8d4;-webkit-mask-image:var(--md-admonition-icon--info);mask-image:var(--md-admonition-icon--info)}.md-typeset :-moz-any(.info,.todo)>:-moz-any(.admonition-title,summary):before{background-color:#00b8d4;mask-image:var(--md-admonition-icon--info)}.md-typeset :is(.info,.todo)>:is(.admonition-title,summary):before{background-color:#00b8d4;-webkit-mask-image:var(--md-admonition-icon--info);mask-image:var(--md-admonition-icon--info)}.md-typeset :-webkit-any(.info,.todo)>:-webkit-any(.admonition-title,summary):after{color:#00b8d4}.md-typeset 
:-moz-any(.info,.todo)>:-moz-any(.admonition-title,summary):after{color:#00b8d4}.md-typeset :is(.info,.todo)>:is(.admonition-title,summary):after{color:#00b8d4}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.tip,.hint,.important){border-color:#00bfa5}.md-typeset :-moz-any(.admonition,details):-moz-any(.tip,.hint,.important){border-color:#00bfa5}.md-typeset :is(.admonition,details):is(.tip,.hint,.important){border-color:#00bfa5}.md-typeset :-webkit-any(.tip,.hint,.important)>:-webkit-any(.admonition-title,summary){background-color:#00bfa51a}.md-typeset :-moz-any(.tip,.hint,.important)>:-moz-any(.admonition-title,summary){background-color:#00bfa51a}.md-typeset :is(.tip,.hint,.important)>:is(.admonition-title,summary){background-color:#00bfa51a}.md-typeset :-webkit-any(.tip,.hint,.important)>:-webkit-any(.admonition-title,summary):before{background-color:#00bfa5;-webkit-mask-image:var(--md-admonition-icon--tip);mask-image:var(--md-admonition-icon--tip)}.md-typeset :-moz-any(.tip,.hint,.important)>:-moz-any(.admonition-title,summary):before{background-color:#00bfa5;mask-image:var(--md-admonition-icon--tip)}.md-typeset :is(.tip,.hint,.important)>:is(.admonition-title,summary):before{background-color:#00bfa5;-webkit-mask-image:var(--md-admonition-icon--tip);mask-image:var(--md-admonition-icon--tip)}.md-typeset :-webkit-any(.tip,.hint,.important)>:-webkit-any(.admonition-title,summary):after{color:#00bfa5}.md-typeset :-moz-any(.tip,.hint,.important)>:-moz-any(.admonition-title,summary):after{color:#00bfa5}.md-typeset :is(.tip,.hint,.important)>:is(.admonition-title,summary):after{color:#00bfa5}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.success,.check,.done){border-color:#00c853}.md-typeset :-moz-any(.admonition,details):-moz-any(.success,.check,.done){border-color:#00c853}.md-typeset :is(.admonition,details):is(.success,.check,.done){border-color:#00c853}.md-typeset :-webkit-any(.success,.check,.done)>:-webkit-any(.admonition-title,summary){background-color:#00c8531a}.md-typeset :-moz-any(.success,.check,.done)>:-moz-any(.admonition-title,summary){background-color:#00c8531a}.md-typeset :is(.success,.check,.done)>:is(.admonition-title,summary){background-color:#00c8531a}.md-typeset :-webkit-any(.success,.check,.done)>:-webkit-any(.admonition-title,summary):before{background-color:#00c853;-webkit-mask-image:var(--md-admonition-icon--success);mask-image:var(--md-admonition-icon--success)}.md-typeset :-moz-any(.success,.check,.done)>:-moz-any(.admonition-title,summary):before{background-color:#00c853;mask-image:var(--md-admonition-icon--success)}.md-typeset :is(.success,.check,.done)>:is(.admonition-title,summary):before{background-color:#00c853;-webkit-mask-image:var(--md-admonition-icon--success);mask-image:var(--md-admonition-icon--success)}.md-typeset :-webkit-any(.success,.check,.done)>:-webkit-any(.admonition-title,summary):after{color:#00c853}.md-typeset :-moz-any(.success,.check,.done)>:-moz-any(.admonition-title,summary):after{color:#00c853}.md-typeset :is(.success,.check,.done)>:is(.admonition-title,summary):after{color:#00c853}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.question,.help,.faq){border-color:#64dd17}.md-typeset :-moz-any(.admonition,details):-moz-any(.question,.help,.faq){border-color:#64dd17}.md-typeset :is(.admonition,details):is(.question,.help,.faq){border-color:#64dd17}.md-typeset :-webkit-any(.question,.help,.faq)>:-webkit-any(.admonition-title,summary){background-color:#64dd171a}.md-typeset 
:-moz-any(.question,.help,.faq)>:-moz-any(.admonition-title,summary){background-color:#64dd171a}.md-typeset :is(.question,.help,.faq)>:is(.admonition-title,summary){background-color:#64dd171a}.md-typeset :-webkit-any(.question,.help,.faq)>:-webkit-any(.admonition-title,summary):before{background-color:#64dd17;-webkit-mask-image:var(--md-admonition-icon--question);mask-image:var(--md-admonition-icon--question)}.md-typeset :-moz-any(.question,.help,.faq)>:-moz-any(.admonition-title,summary):before{background-color:#64dd17;mask-image:var(--md-admonition-icon--question)}.md-typeset :is(.question,.help,.faq)>:is(.admonition-title,summary):before{background-color:#64dd17;-webkit-mask-image:var(--md-admonition-icon--question);mask-image:var(--md-admonition-icon--question)}.md-typeset :-webkit-any(.question,.help,.faq)>:-webkit-any(.admonition-title,summary):after{color:#64dd17}.md-typeset :-moz-any(.question,.help,.faq)>:-moz-any(.admonition-title,summary):after{color:#64dd17}.md-typeset :is(.question,.help,.faq)>:is(.admonition-title,summary):after{color:#64dd17}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.warning,.caution,.attention){border-color:#ff9100}.md-typeset :-moz-any(.admonition,details):-moz-any(.warning,.caution,.attention){border-color:#ff9100}.md-typeset :is(.admonition,details):is(.warning,.caution,.attention){border-color:#ff9100}.md-typeset :-webkit-any(.warning,.caution,.attention)>:-webkit-any(.admonition-title,summary){background-color:#ff91001a}.md-typeset :-moz-any(.warning,.caution,.attention)>:-moz-any(.admonition-title,summary){background-color:#ff91001a}.md-typeset :is(.warning,.caution,.attention)>:is(.admonition-title,summary){background-color:#ff91001a}.md-typeset :-webkit-any(.warning,.caution,.attention)>:-webkit-any(.admonition-title,summary):before{background-color:#ff9100;-webkit-mask-image:var(--md-admonition-icon--warning);mask-image:var(--md-admonition-icon--warning)}.md-typeset :-moz-any(.warning,.caution,.attention)>:-moz-any(.admonition-title,summary):before{background-color:#ff9100;mask-image:var(--md-admonition-icon--warning)}.md-typeset :is(.warning,.caution,.attention)>:is(.admonition-title,summary):before{background-color:#ff9100;-webkit-mask-image:var(--md-admonition-icon--warning);mask-image:var(--md-admonition-icon--warning)}.md-typeset :-webkit-any(.warning,.caution,.attention)>:-webkit-any(.admonition-title,summary):after{color:#ff9100}.md-typeset :-moz-any(.warning,.caution,.attention)>:-moz-any(.admonition-title,summary):after{color:#ff9100}.md-typeset :is(.warning,.caution,.attention)>:is(.admonition-title,summary):after{color:#ff9100}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.failure,.fail,.missing){border-color:#ff5252}.md-typeset :-moz-any(.admonition,details):-moz-any(.failure,.fail,.missing){border-color:#ff5252}.md-typeset :is(.admonition,details):is(.failure,.fail,.missing){border-color:#ff5252}.md-typeset :-webkit-any(.failure,.fail,.missing)>:-webkit-any(.admonition-title,summary){background-color:#ff52521a}.md-typeset :-moz-any(.failure,.fail,.missing)>:-moz-any(.admonition-title,summary){background-color:#ff52521a}.md-typeset :is(.failure,.fail,.missing)>:is(.admonition-title,summary){background-color:#ff52521a}.md-typeset :-webkit-any(.failure,.fail,.missing)>:-webkit-any(.admonition-title,summary):before{background-color:#ff5252;-webkit-mask-image:var(--md-admonition-icon--failure);mask-image:var(--md-admonition-icon--failure)}.md-typeset 
:-moz-any(.failure,.fail,.missing)>:-moz-any(.admonition-title,summary):before{background-color:#ff5252;mask-image:var(--md-admonition-icon--failure)}.md-typeset :is(.failure,.fail,.missing)>:is(.admonition-title,summary):before{background-color:#ff5252;-webkit-mask-image:var(--md-admonition-icon--failure);mask-image:var(--md-admonition-icon--failure)}.md-typeset :-webkit-any(.failure,.fail,.missing)>:-webkit-any(.admonition-title,summary):after{color:#ff5252}.md-typeset :-moz-any(.failure,.fail,.missing)>:-moz-any(.admonition-title,summary):after{color:#ff5252}.md-typeset :is(.failure,.fail,.missing)>:is(.admonition-title,summary):after{color:#ff5252}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.danger,.error){border-color:#ff1744}.md-typeset :-moz-any(.admonition,details):-moz-any(.danger,.error){border-color:#ff1744}.md-typeset :is(.admonition,details):is(.danger,.error){border-color:#ff1744}.md-typeset :-webkit-any(.danger,.error)>:-webkit-any(.admonition-title,summary){background-color:#ff17441a}.md-typeset :-moz-any(.danger,.error)>:-moz-any(.admonition-title,summary){background-color:#ff17441a}.md-typeset :is(.danger,.error)>:is(.admonition-title,summary){background-color:#ff17441a}.md-typeset :-webkit-any(.danger,.error)>:-webkit-any(.admonition-title,summary):before{background-color:#ff1744;-webkit-mask-image:var(--md-admonition-icon--danger);mask-image:var(--md-admonition-icon--danger)}.md-typeset :-moz-any(.danger,.error)>:-moz-any(.admonition-title,summary):before{background-color:#ff1744;mask-image:var(--md-admonition-icon--danger)}.md-typeset :is(.danger,.error)>:is(.admonition-title,summary):before{background-color:#ff1744;-webkit-mask-image:var(--md-admonition-icon--danger);mask-image:var(--md-admonition-icon--danger)}.md-typeset :-webkit-any(.danger,.error)>:-webkit-any(.admonition-title,summary):after{color:#ff1744}.md-typeset :-moz-any(.danger,.error)>:-moz-any(.admonition-title,summary):after{color:#ff1744}.md-typeset :is(.danger,.error)>:is(.admonition-title,summary):after{color:#ff1744}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.bug){border-color:#f50057}.md-typeset :-moz-any(.admonition,details):-moz-any(.bug){border-color:#f50057}.md-typeset :is(.admonition,details):is(.bug){border-color:#f50057}.md-typeset :-webkit-any(.bug)>:-webkit-any(.admonition-title,summary){background-color:#f500571a}.md-typeset :-moz-any(.bug)>:-moz-any(.admonition-title,summary){background-color:#f500571a}.md-typeset :is(.bug)>:is(.admonition-title,summary){background-color:#f500571a}.md-typeset :-webkit-any(.bug)>:-webkit-any(.admonition-title,summary):before{background-color:#f50057;-webkit-mask-image:var(--md-admonition-icon--bug);mask-image:var(--md-admonition-icon--bug)}.md-typeset :-moz-any(.bug)>:-moz-any(.admonition-title,summary):before{background-color:#f50057;mask-image:var(--md-admonition-icon--bug)}.md-typeset :is(.bug)>:is(.admonition-title,summary):before{background-color:#f50057;-webkit-mask-image:var(--md-admonition-icon--bug);mask-image:var(--md-admonition-icon--bug)}.md-typeset :-webkit-any(.bug)>:-webkit-any(.admonition-title,summary):after{color:#f50057}.md-typeset :-moz-any(.bug)>:-moz-any(.admonition-title,summary):after{color:#f50057}.md-typeset :is(.bug)>:is(.admonition-title,summary):after{color:#f50057}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.example){border-color:#7c4dff}.md-typeset :-moz-any(.admonition,details):-moz-any(.example){border-color:#7c4dff}.md-typeset 
:is(.admonition,details):is(.example){border-color:#7c4dff}.md-typeset :-webkit-any(.example)>:-webkit-any(.admonition-title,summary){background-color:#7c4dff1a}.md-typeset :-moz-any(.example)>:-moz-any(.admonition-title,summary){background-color:#7c4dff1a}.md-typeset :is(.example)>:is(.admonition-title,summary){background-color:#7c4dff1a}.md-typeset :-webkit-any(.example)>:-webkit-any(.admonition-title,summary):before{background-color:#7c4dff;-webkit-mask-image:var(--md-admonition-icon--example);mask-image:var(--md-admonition-icon--example)}.md-typeset :-moz-any(.example)>:-moz-any(.admonition-title,summary):before{background-color:#7c4dff;mask-image:var(--md-admonition-icon--example)}.md-typeset :is(.example)>:is(.admonition-title,summary):before{background-color:#7c4dff;-webkit-mask-image:var(--md-admonition-icon--example);mask-image:var(--md-admonition-icon--example)}.md-typeset :-webkit-any(.example)>:-webkit-any(.admonition-title,summary):after{color:#7c4dff}.md-typeset :-moz-any(.example)>:-moz-any(.admonition-title,summary):after{color:#7c4dff}.md-typeset :is(.example)>:is(.admonition-title,summary):after{color:#7c4dff}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.quote,.cite){border-color:#9e9e9e}.md-typeset :-moz-any(.admonition,details):-moz-any(.quote,.cite){border-color:#9e9e9e}.md-typeset :is(.admonition,details):is(.quote,.cite){border-color:#9e9e9e}.md-typeset :-webkit-any(.quote,.cite)>:-webkit-any(.admonition-title,summary){background-color:#9e9e9e1a}.md-typeset :-moz-any(.quote,.cite)>:-moz-any(.admonition-title,summary){background-color:#9e9e9e1a}.md-typeset :is(.quote,.cite)>:is(.admonition-title,summary){background-color:#9e9e9e1a}.md-typeset :-webkit-any(.quote,.cite)>:-webkit-any(.admonition-title,summary):before{background-color:#9e9e9e;-webkit-mask-image:var(--md-admonition-icon--quote);mask-image:var(--md-admonition-icon--quote)}.md-typeset :-moz-any(.quote,.cite)>:-moz-any(.admonition-title,summary):before{background-color:#9e9e9e;mask-image:var(--md-admonition-icon--quote)}.md-typeset :is(.quote,.cite)>:is(.admonition-title,summary):before{background-color:#9e9e9e;-webkit-mask-image:var(--md-admonition-icon--quote);mask-image:var(--md-admonition-icon--quote)}.md-typeset :-webkit-any(.quote,.cite)>:-webkit-any(.admonition-title,summary):after{color:#9e9e9e}.md-typeset :-moz-any(.quote,.cite)>:-moz-any(.admonition-title,summary):after{color:#9e9e9e}.md-typeset :is(.quote,.cite)>:is(.admonition-title,summary):after{color:#9e9e9e}:root{--md-footnotes-icon:url('data:image/svg+xml;charset=utf-8,')}.md-typeset .footnote{color:var(--md-default-fg-color--light);font-size:.64rem}[dir=ltr] .md-typeset .footnote>ol{margin-left:0}[dir=rtl] .md-typeset .footnote>ol{margin-right:0}.md-typeset .footnote>ol>li{transition:color 125ms}.md-typeset .footnote>ol>li:target{color:var(--md-default-fg-color)}.md-typeset .footnote>ol>li:focus-within .footnote-backref{opacity:1;transform:translateX(0);transition:none}.md-typeset .footnote>ol>li:-webkit-any(:hover,:target) .footnote-backref{opacity:1;transform:translateX(0)}.md-typeset .footnote>ol>li:-moz-any(:hover,:target) .footnote-backref{opacity:1;transform:translateX(0)}.md-typeset .footnote>ol>li:is(:hover,:target) .footnote-backref{opacity:1;transform:translateX(0)}.md-typeset .footnote>ol>li>:first-child{margin-top:0}.md-typeset .footnote-ref{font-size:.75em;font-weight:700}html .md-typeset .footnote-ref{outline-offset:.1rem}.md-typeset [id^="fnref:"]:target>.footnote-ref{outline:auto}.md-typeset 
.footnote-backref{color:var(--md-typeset-a-color);display:inline-block;font-size:0;opacity:0;transform:translateX(.25rem);transition:color .25s,transform .25s .25s,opacity 125ms .25s;vertical-align:text-bottom}@media print{.md-typeset .footnote-backref{color:var(--md-typeset-a-color);opacity:1;transform:translateX(0)}}[dir=rtl] .md-typeset .footnote-backref{transform:translateX(-.25rem)}.md-typeset .footnote-backref:hover{color:var(--md-accent-fg-color)}.md-typeset .footnote-backref:before{background-color:currentcolor;content:"";display:inline-block;height:.8rem;-webkit-mask-image:var(--md-footnotes-icon);mask-image:var(--md-footnotes-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:.8rem}[dir=rtl] .md-typeset .footnote-backref:before svg{transform:scaleX(-1)}[dir=ltr] .md-typeset .headerlink{margin-left:.5rem}[dir=rtl] .md-typeset .headerlink{margin-right:.5rem}.md-typeset .headerlink{color:var(--md-default-fg-color--lighter);display:inline-block;opacity:0;transition:color .25s,opacity 125ms}@media print{.md-typeset .headerlink{display:none}}.md-typeset .headerlink:focus,.md-typeset :-webkit-any(:hover,:target)>.headerlink{opacity:1;-webkit-transition:color .25s,opacity 125ms;transition:color .25s,opacity 125ms}.md-typeset .headerlink:focus,.md-typeset :-moz-any(:hover,:target)>.headerlink{opacity:1;-moz-transition:color .25s,opacity 125ms;transition:color .25s,opacity 125ms}.md-typeset .headerlink:focus,.md-typeset :is(:hover,:target)>.headerlink{opacity:1;transition:color .25s,opacity 125ms}.md-typeset .headerlink:-webkit-any(:focus,:hover),.md-typeset :target>.headerlink{color:var(--md-accent-fg-color)}.md-typeset .headerlink:-moz-any(:focus,:hover),.md-typeset :target>.headerlink{color:var(--md-accent-fg-color)}.md-typeset .headerlink:is(:focus,:hover),.md-typeset :target>.headerlink{color:var(--md-accent-fg-color)}.md-typeset :target{--md-scroll-margin:3.6rem;--md-scroll-offset:0rem;scroll-margin-top:calc(var(--md-scroll-margin) - var(--md-scroll-offset))}@media screen and (min-width:76.25em){.md-header--lifted~.md-container .md-typeset :target{--md-scroll-margin:6rem}}.md-typeset :-webkit-any(h1,h2,h3):target{--md-scroll-offset:0.2rem}.md-typeset :-moz-any(h1,h2,h3):target{--md-scroll-offset:0.2rem}.md-typeset :is(h1,h2,h3):target{--md-scroll-offset:0.2rem}.md-typeset h4:target{--md-scroll-offset:0.15rem}.md-typeset div.arithmatex{overflow:auto}@media screen and (max-width:44.9375em){.md-typeset div.arithmatex{margin:0 -.8rem}}.md-typeset div.arithmatex>*{margin-left:auto!important;margin-right:auto!important;padding:0 .8rem;touch-action:auto;width:-webkit-min-content;width:-moz-min-content;width:min-content}.md-typeset div.arithmatex>* mjx-container{margin:0!important}.md-typeset :-webkit-any(del,ins,.comment).critic{-webkit-box-decoration-break:clone;box-decoration-break:clone}.md-typeset :-moz-any(del,ins,.comment).critic{box-decoration-break:clone}.md-typeset :is(del,ins,.comment).critic{-webkit-box-decoration-break:clone;box-decoration-break:clone}.md-typeset del.critic{background-color:var(--md-typeset-del-color)}.md-typeset ins.critic{background-color:var(--md-typeset-ins-color)}.md-typeset .critic.comment{color:var(--md-code-hl-comment-color)}.md-typeset .critic.comment:before{content:"/* "}.md-typeset .critic.comment:after{content:" */"}.md-typeset .critic.block{box-shadow:none;display:block;margin:1em 
0;overflow:auto;padding-left:.8rem;padding-right:.8rem}.md-typeset .critic.block>:first-child{margin-top:.5em}.md-typeset .critic.block>:last-child{margin-bottom:.5em}:root{--md-details-icon:url('data:image/svg+xml;charset=utf-8,')}.md-typeset details{display:flow-root;overflow:visible;padding-top:0}.md-typeset details[open]>summary:after{transform:rotate(90deg)}.md-typeset details:not([open]){box-shadow:none;padding-bottom:0}.md-typeset details:not([open])>summary{border-radius:.1rem}[dir=ltr] .md-typeset summary{padding-right:1.8rem}[dir=rtl] .md-typeset summary{padding-left:1.8rem}[dir=ltr] .md-typeset summary{border-top-left-radius:.1rem}[dir=ltr] .md-typeset summary,[dir=rtl] .md-typeset summary{border-top-right-radius:.1rem}[dir=rtl] .md-typeset summary{border-top-left-radius:.1rem}.md-typeset summary{cursor:pointer;display:block;min-height:1rem}.md-typeset summary.focus-visible{outline-color:var(--md-accent-fg-color);outline-offset:.2rem}.md-typeset summary:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}[dir=ltr] .md-typeset summary:after{right:.4rem}[dir=rtl] .md-typeset summary:after{left:.4rem}.md-typeset summary:after{background-color:currentcolor;content:"";height:1rem;-webkit-mask-image:var(--md-details-icon);mask-image:var(--md-details-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;position:absolute;top:.625em;transform:rotate(0deg);transition:transform .25s;width:1rem}[dir=rtl] .md-typeset summary:after{transform:rotate(180deg)}.md-typeset summary::marker{display:none}.md-typeset summary::-webkit-details-marker{display:none}.md-typeset :-webkit-any(.emojione,.twemoji,.gemoji){display:inline-flex;height:1.125em;vertical-align:text-top}.md-typeset :-moz-any(.emojione,.twemoji,.gemoji){display:inline-flex;height:1.125em;vertical-align:text-top}.md-typeset :is(.emojione,.twemoji,.gemoji){display:inline-flex;height:1.125em;vertical-align:text-top}.md-typeset :-webkit-any(.emojione,.twemoji,.gemoji) svg{fill:currentcolor;max-height:100%;width:1.125em}.md-typeset :-moz-any(.emojione,.twemoji,.gemoji) svg{fill:currentcolor;max-height:100%;width:1.125em}.md-typeset :is(.emojione,.twemoji,.gemoji) svg{fill:currentcolor;max-height:100%;width:1.125em}.highlight :-webkit-any(.o,.ow){color:var(--md-code-hl-operator-color)}.highlight :-moz-any(.o,.ow){color:var(--md-code-hl-operator-color)}.highlight :is(.o,.ow){color:var(--md-code-hl-operator-color)}.highlight .p{color:var(--md-code-hl-punctuation-color)}.highlight :-webkit-any(.cpf,.l,.s,.sb,.sc,.s2,.si,.s1,.ss){color:var(--md-code-hl-string-color)}.highlight :-moz-any(.cpf,.l,.s,.sb,.sc,.s2,.si,.s1,.ss){color:var(--md-code-hl-string-color)}.highlight :is(.cpf,.l,.s,.sb,.sc,.s2,.si,.s1,.ss){color:var(--md-code-hl-string-color)}.highlight :-webkit-any(.cp,.se,.sh,.sr,.sx){color:var(--md-code-hl-special-color)}.highlight :-moz-any(.cp,.se,.sh,.sr,.sx){color:var(--md-code-hl-special-color)}.highlight :is(.cp,.se,.sh,.sr,.sx){color:var(--md-code-hl-special-color)}.highlight :-webkit-any(.m,.mb,.mf,.mh,.mi,.il,.mo){color:var(--md-code-hl-number-color)}.highlight :-moz-any(.m,.mb,.mf,.mh,.mi,.il,.mo){color:var(--md-code-hl-number-color)}.highlight :is(.m,.mb,.mf,.mh,.mi,.il,.mo){color:var(--md-code-hl-number-color)}.highlight :-webkit-any(.k,.kd,.kn,.kp,.kr,.kt){color:var(--md-code-hl-keyword-color)}.highlight :-moz-any(.k,.kd,.kn,.kp,.kr,.kt){color:var(--md-code-hl-keyword-color)}.highlight 
:is(.k,.kd,.kn,.kp,.kr,.kt){color:var(--md-code-hl-keyword-color)}.highlight :-webkit-any(.kc,.n){color:var(--md-code-hl-name-color)}.highlight :-moz-any(.kc,.n){color:var(--md-code-hl-name-color)}.highlight :is(.kc,.n){color:var(--md-code-hl-name-color)}.highlight :-webkit-any(.no,.nb,.bp){color:var(--md-code-hl-constant-color)}.highlight :-moz-any(.no,.nb,.bp){color:var(--md-code-hl-constant-color)}.highlight :is(.no,.nb,.bp){color:var(--md-code-hl-constant-color)}.highlight :-webkit-any(.nc,.ne,.nf,.nn){color:var(--md-code-hl-function-color)}.highlight :-moz-any(.nc,.ne,.nf,.nn){color:var(--md-code-hl-function-color)}.highlight :is(.nc,.ne,.nf,.nn){color:var(--md-code-hl-function-color)}.highlight :-webkit-any(.nd,.ni,.nl,.nt){color:var(--md-code-hl-keyword-color)}.highlight :-moz-any(.nd,.ni,.nl,.nt){color:var(--md-code-hl-keyword-color)}.highlight :is(.nd,.ni,.nl,.nt){color:var(--md-code-hl-keyword-color)}.highlight :-webkit-any(.c,.cm,.c1,.ch,.cs,.sd){color:var(--md-code-hl-comment-color)}.highlight :-moz-any(.c,.cm,.c1,.ch,.cs,.sd){color:var(--md-code-hl-comment-color)}.highlight :is(.c,.cm,.c1,.ch,.cs,.sd){color:var(--md-code-hl-comment-color)}.highlight :-webkit-any(.na,.nv,.vc,.vg,.vi){color:var(--md-code-hl-variable-color)}.highlight :-moz-any(.na,.nv,.vc,.vg,.vi){color:var(--md-code-hl-variable-color)}.highlight :is(.na,.nv,.vc,.vg,.vi){color:var(--md-code-hl-variable-color)}.highlight :-webkit-any(.ge,.gr,.gh,.go,.gp,.gs,.gu,.gt){color:var(--md-code-hl-generic-color)}.highlight :-moz-any(.ge,.gr,.gh,.go,.gp,.gs,.gu,.gt){color:var(--md-code-hl-generic-color)}.highlight :is(.ge,.gr,.gh,.go,.gp,.gs,.gu,.gt){color:var(--md-code-hl-generic-color)}.highlight :-webkit-any(.gd,.gi){border-radius:.1rem;margin:0 -.125em;padding:0 .125em}.highlight :-moz-any(.gd,.gi){border-radius:.1rem;margin:0 -.125em;padding:0 .125em}.highlight :is(.gd,.gi){border-radius:.1rem;margin:0 -.125em;padding:0 .125em}.highlight .gd{background-color:var(--md-typeset-del-color)}.highlight .gi{background-color:var(--md-typeset-ins-color)}.highlight .hll{background-color:var(--md-code-hl-color);display:block;margin:0 -1.1764705882em;padding:0 1.1764705882em}.highlight span.filename{background-color:var(--md-code-bg-color);border-bottom:.05rem solid var(--md-default-fg-color--lightest);border-top-left-radius:.1rem;border-top-right-radius:.1rem;display:flow-root;font-size:.85em;font-weight:700;margin-top:1em;padding:.6617647059em 1.1764705882em;position:relative}.highlight span.filename+pre{margin-top:0}.highlight span.filename+pre>code{border-top-left-radius:0;border-top-right-radius:0}.highlight [data-linenos]:before{background-color:var(--md-code-bg-color);box-shadow:-.05rem 0 var(--md-default-fg-color--lightest) inset;color:var(--md-default-fg-color--light);content:attr(data-linenos);float:left;left:-1.1764705882em;margin-left:-1.1764705882em;margin-right:1.1764705882em;padding-left:1.1764705882em;position:-webkit-sticky;position:sticky;-webkit-user-select:none;-moz-user-select:none;user-select:none;z-index:3}.highlight code a[id]{position:absolute;visibility:hidden}.highlight code[data-md-copying] .hll{display:contents}.highlight code[data-md-copying] .md-annotation{display:none}.highlighttable{display:flow-root}.highlighttable :-webkit-any(tbody,td){display:block;padding:0}.highlighttable :-moz-any(tbody,td){display:block;padding:0}.highlighttable :is(tbody,td){display:block;padding:0}.highlighttable tr{display:flex}.highlighttable pre{margin:0}.highlighttable 
th.filename{flex-grow:1;padding:0;text-align:left}.highlighttable th.filename span.filename{margin-top:0}.highlighttable .linenos{background-color:var(--md-code-bg-color);border-bottom-left-radius:.1rem;border-top-left-radius:.1rem;font-size:.85em;padding:.7720588235em 0 .7720588235em 1.1764705882em;-webkit-user-select:none;-moz-user-select:none;user-select:none}.highlighttable .linenodiv{box-shadow:-.05rem 0 var(--md-default-fg-color--lightest) inset;padding-right:.5882352941em}.highlighttable .linenodiv pre{color:var(--md-default-fg-color--light);text-align:right}.highlighttable .code{flex:1;min-width:0}.linenodiv a{color:inherit}.md-typeset .highlighttable{direction:ltr;margin:1em 0}.md-typeset .highlighttable>tbody>tr>.code>div>pre>code{border-bottom-left-radius:0;border-top-left-radius:0}.md-typeset .highlight+.result{border:.05rem solid var(--md-code-bg-color);border-bottom-left-radius:.1rem;border-bottom-right-radius:.1rem;border-top-width:.1rem;margin-top:-1.125em;overflow:visible;padding:0 1em}.md-typeset .highlight+.result:after{clear:both;content:"";display:block}@media screen and (max-width:44.9375em){.md-content__inner>.highlight{margin:1em -.8rem}.md-content__inner>.highlight>.filename,.md-content__inner>.highlight>.highlighttable>tbody>tr>.code>div>pre>code,.md-content__inner>.highlight>.highlighttable>tbody>tr>.filename span.filename,.md-content__inner>.highlight>.highlighttable>tbody>tr>.linenos,.md-content__inner>.highlight>pre>code{border-radius:0}.md-content__inner>.highlight+.result{border-left-width:0;border-radius:0;border-right-width:0;margin-left:-.8rem;margin-right:-.8rem}}.md-typeset .keys kbd:-webkit-any(:before,:after){-moz-osx-font-smoothing:initial;-webkit-font-smoothing:initial;color:inherit;margin:0;position:relative}.md-typeset .keys kbd:-moz-any(:before,:after){-moz-osx-font-smoothing:initial;-webkit-font-smoothing:initial;color:inherit;margin:0;position:relative}.md-typeset .keys kbd:is(:before,:after){-moz-osx-font-smoothing:initial;-webkit-font-smoothing:initial;color:inherit;margin:0;position:relative}.md-typeset .keys span{color:var(--md-default-fg-color--light);padding:0 .2em}.md-typeset .keys .key-alt:before,.md-typeset .keys .key-left-alt:before,.md-typeset .keys .key-right-alt:before{content:"⎇";padding-right:.4em}.md-typeset .keys .key-command:before,.md-typeset .keys .key-left-command:before,.md-typeset .keys .key-right-command:before{content:"⌘";padding-right:.4em}.md-typeset .keys .key-control:before,.md-typeset .keys .key-left-control:before,.md-typeset .keys .key-right-control:before{content:"⌃";padding-right:.4em}.md-typeset .keys .key-left-meta:before,.md-typeset .keys .key-meta:before,.md-typeset .keys .key-right-meta:before{content:"◆";padding-right:.4em}.md-typeset .keys .key-left-option:before,.md-typeset .keys .key-option:before,.md-typeset .keys .key-right-option:before{content:"⌥";padding-right:.4em}.md-typeset .keys .key-left-shift:before,.md-typeset .keys .key-right-shift:before,.md-typeset .keys .key-shift:before{content:"⇧";padding-right:.4em}.md-typeset .keys .key-left-super:before,.md-typeset .keys .key-right-super:before,.md-typeset .keys .key-super:before{content:"❖";padding-right:.4em}.md-typeset .keys .key-left-windows:before,.md-typeset .keys .key-right-windows:before,.md-typeset .keys .key-windows:before{content:"⊞";padding-right:.4em}.md-typeset .keys .key-arrow-down:before{content:"↓";padding-right:.4em}.md-typeset .keys .key-arrow-left:before{content:"←";padding-right:.4em}.md-typeset .keys 
.key-arrow-right:before{content:"→";padding-right:.4em}.md-typeset .keys .key-arrow-up:before{content:"↑";padding-right:.4em}.md-typeset .keys .key-backspace:before{content:"⌫";padding-right:.4em}.md-typeset .keys .key-backtab:before{content:"⇤";padding-right:.4em}.md-typeset .keys .key-caps-lock:before{content:"⇪";padding-right:.4em}.md-typeset .keys .key-clear:before{content:"⌧";padding-right:.4em}.md-typeset .keys .key-context-menu:before{content:"☰";padding-right:.4em}.md-typeset .keys .key-delete:before{content:"⌦";padding-right:.4em}.md-typeset .keys .key-eject:before{content:"⏏";padding-right:.4em}.md-typeset .keys .key-end:before{content:"⤓";padding-right:.4em}.md-typeset .keys .key-escape:before{content:"⎋";padding-right:.4em}.md-typeset .keys .key-home:before{content:"⤒";padding-right:.4em}.md-typeset .keys .key-insert:before{content:"⎀";padding-right:.4em}.md-typeset .keys .key-page-down:before{content:"⇟";padding-right:.4em}.md-typeset .keys .key-page-up:before{content:"⇞";padding-right:.4em}.md-typeset .keys .key-print-screen:before{content:"⎙";padding-right:.4em}.md-typeset .keys .key-tab:after{content:"⇥";padding-left:.4em}.md-typeset .keys .key-num-enter:after{content:"⌤";padding-left:.4em}.md-typeset .keys .key-enter:after{content:"⏎";padding-left:.4em}:root{--md-tabbed-icon--prev:url('data:image/svg+xml;charset=utf-8,');--md-tabbed-icon--next:url('data:image/svg+xml;charset=utf-8,')}.md-typeset .tabbed-set{border-radius:.1rem;display:flex;flex-flow:column wrap;margin:1em 0;position:relative}.md-typeset .tabbed-set>input{height:0;opacity:0;position:absolute;width:0}.md-typeset .tabbed-set>input:target{--md-scroll-offset:0.625em}.md-typeset .tabbed-labels{-ms-overflow-style:none;box-shadow:0 -.05rem var(--md-default-fg-color--lightest) inset;display:flex;max-width:100%;overflow:auto;scrollbar-width:none}@media print{.md-typeset .tabbed-labels{display:contents}}@media screen{.js .md-typeset .tabbed-labels{position:relative}.js .md-typeset .tabbed-labels:before{background:var(--md-accent-fg-color);bottom:0;content:"";display:block;height:2px;left:0;position:absolute;transform:translateX(var(--md-indicator-x));transition:width 225ms,transform .25s;transition-timing-function:cubic-bezier(.4,0,.2,1);width:var(--md-indicator-width)}}.md-typeset .tabbed-labels::-webkit-scrollbar{display:none}.md-typeset .tabbed-labels>label{border-bottom:.1rem solid #0000;border-radius:.1rem .1rem 0 0;color:var(--md-default-fg-color--light);cursor:pointer;flex-shrink:0;font-size:.64rem;font-weight:700;padding:.78125em 1.25em .625em;scroll-margin-inline-start:1rem;transition:background-color .25s,color .25s;white-space:nowrap;width:auto}@media print{.md-typeset .tabbed-labels>label:first-child{order:1}.md-typeset .tabbed-labels>label:nth-child(2){order:2}.md-typeset .tabbed-labels>label:nth-child(3){order:3}.md-typeset .tabbed-labels>label:nth-child(4){order:4}.md-typeset .tabbed-labels>label:nth-child(5){order:5}.md-typeset .tabbed-labels>label:nth-child(6){order:6}.md-typeset .tabbed-labels>label:nth-child(7){order:7}.md-typeset .tabbed-labels>label:nth-child(8){order:8}.md-typeset .tabbed-labels>label:nth-child(9){order:9}.md-typeset .tabbed-labels>label:nth-child(10){order:10}.md-typeset .tabbed-labels>label:nth-child(11){order:11}.md-typeset .tabbed-labels>label:nth-child(12){order:12}.md-typeset .tabbed-labels>label:nth-child(13){order:13}.md-typeset .tabbed-labels>label:nth-child(14){order:14}.md-typeset .tabbed-labels>label:nth-child(15){order:15}.md-typeset 
.tabbed-labels>label:nth-child(16){order:16}.md-typeset .tabbed-labels>label:nth-child(17){order:17}.md-typeset .tabbed-labels>label:nth-child(18){order:18}.md-typeset .tabbed-labels>label:nth-child(19){order:19}.md-typeset .tabbed-labels>label:nth-child(20){order:20}}.md-typeset .tabbed-labels>label:hover{color:var(--md-accent-fg-color)}.md-typeset .tabbed-content{width:100%}@media print{.md-typeset .tabbed-content{display:contents}}.md-typeset .tabbed-block{display:none}@media print{.md-typeset .tabbed-block{display:block}.md-typeset .tabbed-block:first-child{order:1}.md-typeset .tabbed-block:nth-child(2){order:2}.md-typeset .tabbed-block:nth-child(3){order:3}.md-typeset .tabbed-block:nth-child(4){order:4}.md-typeset .tabbed-block:nth-child(5){order:5}.md-typeset .tabbed-block:nth-child(6){order:6}.md-typeset .tabbed-block:nth-child(7){order:7}.md-typeset .tabbed-block:nth-child(8){order:8}.md-typeset .tabbed-block:nth-child(9){order:9}.md-typeset .tabbed-block:nth-child(10){order:10}.md-typeset .tabbed-block:nth-child(11){order:11}.md-typeset .tabbed-block:nth-child(12){order:12}.md-typeset .tabbed-block:nth-child(13){order:13}.md-typeset .tabbed-block:nth-child(14){order:14}.md-typeset .tabbed-block:nth-child(15){order:15}.md-typeset .tabbed-block:nth-child(16){order:16}.md-typeset .tabbed-block:nth-child(17){order:17}.md-typeset .tabbed-block:nth-child(18){order:18}.md-typeset .tabbed-block:nth-child(19){order:19}.md-typeset .tabbed-block:nth-child(20){order:20}}.md-typeset .tabbed-block>.highlight:first-child>pre,.md-typeset .tabbed-block>pre:first-child{margin:0}.md-typeset .tabbed-block>.highlight:first-child>pre>code,.md-typeset .tabbed-block>pre:first-child>code{border-top-left-radius:0;border-top-right-radius:0}.md-typeset .tabbed-block>.highlight:first-child>.filename{border-top-left-radius:0;border-top-right-radius:0;margin:0}.md-typeset .tabbed-block>.highlight:first-child>.highlighttable{margin:0}.md-typeset .tabbed-block>.highlight:first-child>.highlighttable>tbody>tr>.filename span.filename,.md-typeset .tabbed-block>.highlight:first-child>.highlighttable>tbody>tr>.linenos{border-top-left-radius:0;border-top-right-radius:0;margin:0}.md-typeset .tabbed-block>.highlight:first-child>.highlighttable>tbody>tr>.code>div>pre>code{border-top-left-radius:0;border-top-right-radius:0}.md-typeset .tabbed-block>.highlight:first-child+.result{margin-top:-.125em}.md-typeset .tabbed-block>.tabbed-set{margin:0}.md-typeset .tabbed-button{align-self:center;border-radius:100%;color:var(--md-default-fg-color--light);cursor:pointer;display:block;height:.9rem;margin-top:.1rem;pointer-events:auto;transition:background-color .25s;width:.9rem}.md-typeset .tabbed-button:hover{background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-typeset .tabbed-button:after{background-color:currentcolor;content:"";display:block;height:100%;-webkit-mask-image:var(--md-tabbed-icon--prev);mask-image:var(--md-tabbed-icon--prev);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;transition:background-color .25s,transform .25s;width:100%}.md-typeset .tabbed-control{background:linear-gradient(to right,var(--md-default-bg-color) 60%,#0000);display:flex;height:1.9rem;justify-content:start;pointer-events:none;position:absolute;transition:opacity 125ms;width:1.2rem}[dir=rtl] .md-typeset .tabbed-control{transform:rotate(180deg)}.md-typeset .tabbed-control[hidden]{opacity:0}.md-typeset 
.tabbed-control--next{background:linear-gradient(to left,var(--md-default-bg-color) 60%,#0000);justify-content:end;right:0}.md-typeset .tabbed-control--next .tabbed-button:after{-webkit-mask-image:var(--md-tabbed-icon--next);mask-image:var(--md-tabbed-icon--next)}@media screen and (max-width:44.9375em){[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels{padding-left:.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels{padding-right:.8rem}.md-content__inner>.tabbed-set .tabbed-labels{margin:0 -.8rem;max-width:100vw;scroll-padding-inline-start:.8rem}[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels:after{padding-right:.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels:after{padding-left:.8rem}.md-content__inner>.tabbed-set .tabbed-labels:after{content:""}[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--prev{margin-left:-.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--prev{margin-right:-.8rem}[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--prev{padding-left:.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--prev{padding-right:.8rem}.md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--prev{width:2rem}[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--next{margin-right:-.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--next{margin-left:-.8rem}[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--next{padding-right:.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--next{padding-left:.8rem}.md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--next{width:2rem}}@media screen{.md-typeset .tabbed-set>input:first-child:checked~.tabbed-labels>:first-child,.md-typeset .tabbed-set>input:nth-child(10):checked~.tabbed-labels>:nth-child(10),.md-typeset .tabbed-set>input:nth-child(11):checked~.tabbed-labels>:nth-child(11),.md-typeset .tabbed-set>input:nth-child(12):checked~.tabbed-labels>:nth-child(12),.md-typeset .tabbed-set>input:nth-child(13):checked~.tabbed-labels>:nth-child(13),.md-typeset .tabbed-set>input:nth-child(14):checked~.tabbed-labels>:nth-child(14),.md-typeset .tabbed-set>input:nth-child(15):checked~.tabbed-labels>:nth-child(15),.md-typeset .tabbed-set>input:nth-child(16):checked~.tabbed-labels>:nth-child(16),.md-typeset .tabbed-set>input:nth-child(17):checked~.tabbed-labels>:nth-child(17),.md-typeset .tabbed-set>input:nth-child(18):checked~.tabbed-labels>:nth-child(18),.md-typeset .tabbed-set>input:nth-child(19):checked~.tabbed-labels>:nth-child(19),.md-typeset .tabbed-set>input:nth-child(2):checked~.tabbed-labels>:nth-child(2),.md-typeset .tabbed-set>input:nth-child(20):checked~.tabbed-labels>:nth-child(20),.md-typeset .tabbed-set>input:nth-child(3):checked~.tabbed-labels>:nth-child(3),.md-typeset .tabbed-set>input:nth-child(4):checked~.tabbed-labels>:nth-child(4),.md-typeset .tabbed-set>input:nth-child(5):checked~.tabbed-labels>:nth-child(5),.md-typeset .tabbed-set>input:nth-child(6):checked~.tabbed-labels>:nth-child(6),.md-typeset .tabbed-set>input:nth-child(7):checked~.tabbed-labels>:nth-child(7),.md-typeset .tabbed-set>input:nth-child(8):checked~.tabbed-labels>:nth-child(8),.md-typeset .tabbed-set>input:nth-child(9):checked~.tabbed-labels>:nth-child(9){color:var(--md-accent-fg-color)}.md-typeset .no-js .tabbed-set>input:first-child:checked~.tabbed-labels>:first-child,.md-typeset .no-js 
.tabbed-set>input:nth-child(10):checked~.tabbed-labels>:nth-child(10),.md-typeset .no-js .tabbed-set>input:nth-child(11):checked~.tabbed-labels>:nth-child(11),.md-typeset .no-js .tabbed-set>input:nth-child(12):checked~.tabbed-labels>:nth-child(12),.md-typeset .no-js .tabbed-set>input:nth-child(13):checked~.tabbed-labels>:nth-child(13),.md-typeset .no-js .tabbed-set>input:nth-child(14):checked~.tabbed-labels>:nth-child(14),.md-typeset .no-js .tabbed-set>input:nth-child(15):checked~.tabbed-labels>:nth-child(15),.md-typeset .no-js .tabbed-set>input:nth-child(16):checked~.tabbed-labels>:nth-child(16),.md-typeset .no-js .tabbed-set>input:nth-child(17):checked~.tabbed-labels>:nth-child(17),.md-typeset .no-js .tabbed-set>input:nth-child(18):checked~.tabbed-labels>:nth-child(18),.md-typeset .no-js .tabbed-set>input:nth-child(19):checked~.tabbed-labels>:nth-child(19),.md-typeset .no-js .tabbed-set>input:nth-child(2):checked~.tabbed-labels>:nth-child(2),.md-typeset .no-js .tabbed-set>input:nth-child(20):checked~.tabbed-labels>:nth-child(20),.md-typeset .no-js .tabbed-set>input:nth-child(3):checked~.tabbed-labels>:nth-child(3),.md-typeset .no-js .tabbed-set>input:nth-child(4):checked~.tabbed-labels>:nth-child(4),.md-typeset .no-js .tabbed-set>input:nth-child(5):checked~.tabbed-labels>:nth-child(5),.md-typeset .no-js .tabbed-set>input:nth-child(6):checked~.tabbed-labels>:nth-child(6),.md-typeset .no-js .tabbed-set>input:nth-child(7):checked~.tabbed-labels>:nth-child(7),.md-typeset .no-js .tabbed-set>input:nth-child(8):checked~.tabbed-labels>:nth-child(8),.md-typeset .no-js .tabbed-set>input:nth-child(9):checked~.tabbed-labels>:nth-child(9),.no-js .md-typeset .tabbed-set>input:first-child:checked~.tabbed-labels>:first-child,.no-js .md-typeset .tabbed-set>input:nth-child(10):checked~.tabbed-labels>:nth-child(10),.no-js .md-typeset .tabbed-set>input:nth-child(11):checked~.tabbed-labels>:nth-child(11),.no-js .md-typeset .tabbed-set>input:nth-child(12):checked~.tabbed-labels>:nth-child(12),.no-js .md-typeset .tabbed-set>input:nth-child(13):checked~.tabbed-labels>:nth-child(13),.no-js .md-typeset .tabbed-set>input:nth-child(14):checked~.tabbed-labels>:nth-child(14),.no-js .md-typeset .tabbed-set>input:nth-child(15):checked~.tabbed-labels>:nth-child(15),.no-js .md-typeset .tabbed-set>input:nth-child(16):checked~.tabbed-labels>:nth-child(16),.no-js .md-typeset .tabbed-set>input:nth-child(17):checked~.tabbed-labels>:nth-child(17),.no-js .md-typeset .tabbed-set>input:nth-child(18):checked~.tabbed-labels>:nth-child(18),.no-js .md-typeset .tabbed-set>input:nth-child(19):checked~.tabbed-labels>:nth-child(19),.no-js .md-typeset .tabbed-set>input:nth-child(2):checked~.tabbed-labels>:nth-child(2),.no-js .md-typeset .tabbed-set>input:nth-child(20):checked~.tabbed-labels>:nth-child(20),.no-js .md-typeset .tabbed-set>input:nth-child(3):checked~.tabbed-labels>:nth-child(3),.no-js .md-typeset .tabbed-set>input:nth-child(4):checked~.tabbed-labels>:nth-child(4),.no-js .md-typeset .tabbed-set>input:nth-child(5):checked~.tabbed-labels>:nth-child(5),.no-js .md-typeset .tabbed-set>input:nth-child(6):checked~.tabbed-labels>:nth-child(6),.no-js .md-typeset .tabbed-set>input:nth-child(7):checked~.tabbed-labels>:nth-child(7),.no-js .md-typeset .tabbed-set>input:nth-child(8):checked~.tabbed-labels>:nth-child(8),.no-js .md-typeset .tabbed-set>input:nth-child(9):checked~.tabbed-labels>:nth-child(9){border-color:var(--md-accent-fg-color)}}.md-typeset .tabbed-set>input:first-child.focus-visible~.tabbed-labels>:first-child,.md-typeset 
.tabbed-set>input:nth-child(10).focus-visible~.tabbed-labels>:nth-child(10),.md-typeset .tabbed-set>input:nth-child(11).focus-visible~.tabbed-labels>:nth-child(11),.md-typeset .tabbed-set>input:nth-child(12).focus-visible~.tabbed-labels>:nth-child(12),.md-typeset .tabbed-set>input:nth-child(13).focus-visible~.tabbed-labels>:nth-child(13),.md-typeset .tabbed-set>input:nth-child(14).focus-visible~.tabbed-labels>:nth-child(14),.md-typeset .tabbed-set>input:nth-child(15).focus-visible~.tabbed-labels>:nth-child(15),.md-typeset .tabbed-set>input:nth-child(16).focus-visible~.tabbed-labels>:nth-child(16),.md-typeset .tabbed-set>input:nth-child(17).focus-visible~.tabbed-labels>:nth-child(17),.md-typeset .tabbed-set>input:nth-child(18).focus-visible~.tabbed-labels>:nth-child(18),.md-typeset .tabbed-set>input:nth-child(19).focus-visible~.tabbed-labels>:nth-child(19),.md-typeset .tabbed-set>input:nth-child(2).focus-visible~.tabbed-labels>:nth-child(2),.md-typeset .tabbed-set>input:nth-child(20).focus-visible~.tabbed-labels>:nth-child(20),.md-typeset .tabbed-set>input:nth-child(3).focus-visible~.tabbed-labels>:nth-child(3),.md-typeset .tabbed-set>input:nth-child(4).focus-visible~.tabbed-labels>:nth-child(4),.md-typeset .tabbed-set>input:nth-child(5).focus-visible~.tabbed-labels>:nth-child(5),.md-typeset .tabbed-set>input:nth-child(6).focus-visible~.tabbed-labels>:nth-child(6),.md-typeset .tabbed-set>input:nth-child(7).focus-visible~.tabbed-labels>:nth-child(7),.md-typeset .tabbed-set>input:nth-child(8).focus-visible~.tabbed-labels>:nth-child(8),.md-typeset .tabbed-set>input:nth-child(9).focus-visible~.tabbed-labels>:nth-child(9){background-color:var(--md-accent-fg-color--transparent)}.md-typeset .tabbed-set>input:first-child:checked~.tabbed-content>:first-child,.md-typeset .tabbed-set>input:nth-child(10):checked~.tabbed-content>:nth-child(10),.md-typeset .tabbed-set>input:nth-child(11):checked~.tabbed-content>:nth-child(11),.md-typeset .tabbed-set>input:nth-child(12):checked~.tabbed-content>:nth-child(12),.md-typeset .tabbed-set>input:nth-child(13):checked~.tabbed-content>:nth-child(13),.md-typeset .tabbed-set>input:nth-child(14):checked~.tabbed-content>:nth-child(14),.md-typeset .tabbed-set>input:nth-child(15):checked~.tabbed-content>:nth-child(15),.md-typeset .tabbed-set>input:nth-child(16):checked~.tabbed-content>:nth-child(16),.md-typeset .tabbed-set>input:nth-child(17):checked~.tabbed-content>:nth-child(17),.md-typeset .tabbed-set>input:nth-child(18):checked~.tabbed-content>:nth-child(18),.md-typeset .tabbed-set>input:nth-child(19):checked~.tabbed-content>:nth-child(19),.md-typeset .tabbed-set>input:nth-child(2):checked~.tabbed-content>:nth-child(2),.md-typeset .tabbed-set>input:nth-child(20):checked~.tabbed-content>:nth-child(20),.md-typeset .tabbed-set>input:nth-child(3):checked~.tabbed-content>:nth-child(3),.md-typeset .tabbed-set>input:nth-child(4):checked~.tabbed-content>:nth-child(4),.md-typeset .tabbed-set>input:nth-child(5):checked~.tabbed-content>:nth-child(5),.md-typeset .tabbed-set>input:nth-child(6):checked~.tabbed-content>:nth-child(6),.md-typeset .tabbed-set>input:nth-child(7):checked~.tabbed-content>:nth-child(7),.md-typeset .tabbed-set>input:nth-child(8):checked~.tabbed-content>:nth-child(8),.md-typeset .tabbed-set>input:nth-child(9):checked~.tabbed-content>:nth-child(9){display:block}:root{--md-tasklist-icon:url('data:image/svg+xml;charset=utf-8,');--md-tasklist-icon--checked:url('data:image/svg+xml;charset=utf-8,')}.md-typeset 
.task-list-item{list-style-type:none;position:relative}[dir=ltr] .md-typeset .task-list-item [type=checkbox]{left:-2em}[dir=rtl] .md-typeset .task-list-item [type=checkbox]{right:-2em}.md-typeset .task-list-item [type=checkbox]{position:absolute;top:.45em}.md-typeset .task-list-control [type=checkbox]{opacity:0;z-index:-1}[dir=ltr] .md-typeset .task-list-indicator:before{left:-1.5em}[dir=rtl] .md-typeset .task-list-indicator:before{right:-1.5em}.md-typeset .task-list-indicator:before{background-color:var(--md-default-fg-color--lightest);content:"";height:1.25em;-webkit-mask-image:var(--md-tasklist-icon);mask-image:var(--md-tasklist-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;position:absolute;top:.15em;width:1.25em}.md-typeset [type=checkbox]:checked+.task-list-indicator:before{background-color:#00e676;-webkit-mask-image:var(--md-tasklist-icon--checked);mask-image:var(--md-tasklist-icon--checked)}:root>*{--md-mermaid-font-family:var(--md-text-font-family),sans-serif;--md-mermaid-edge-color:var(--md-code-fg-color);--md-mermaid-node-bg-color:var(--md-accent-fg-color--transparent);--md-mermaid-node-fg-color:var(--md-accent-fg-color);--md-mermaid-label-bg-color:var(--md-default-bg-color);--md-mermaid-label-fg-color:var(--md-code-fg-color)}.mermaid{line-height:normal;margin:1em 0}@media screen and (min-width:45em){[dir=ltr] .md-typeset .inline{float:left}[dir=rtl] .md-typeset .inline{float:right}[dir=ltr] .md-typeset .inline{margin-right:.8rem}[dir=rtl] .md-typeset .inline{margin-left:.8rem}.md-typeset .inline{margin-bottom:.8rem;margin-top:0;width:11.7rem}[dir=ltr] .md-typeset .inline.end{float:right}[dir=rtl] .md-typeset .inline.end{float:left}[dir=ltr] .md-typeset .inline.end{margin-left:.8rem;margin-right:0}[dir=rtl] .md-typeset .inline.end{margin-left:0;margin-right:.8rem}} \ No newline at end of file diff --git a/assets/stylesheets/main.975780f9.min.css.map b/assets/stylesheets/main.975780f9.min.css.map new file mode 100644 index 00000000..5e13ffb9 --- /dev/null +++ b/assets/stylesheets/main.975780f9.min.css.map @@ -0,0 +1 @@ 
+{"version":3,"sources":["src/assets/stylesheets/main/extensions/pymdownx/_keys.scss","../../../src/assets/stylesheets/main.scss","src/assets/stylesheets/main/_resets.scss","src/assets/stylesheets/main/_colors.scss","src/assets/stylesheets/main/_icons.scss","src/assets/stylesheets/main/_typeset.scss","src/assets/stylesheets/utilities/_break.scss","src/assets/stylesheets/main/layout/_banner.scss","src/assets/stylesheets/main/layout/_base.scss","src/assets/stylesheets/main/layout/_clipboard.scss","src/assets/stylesheets/main/layout/_consent.scss","src/assets/stylesheets/main/layout/_content.scss","src/assets/stylesheets/main/layout/_dialog.scss","src/assets/stylesheets/main/layout/_feedback.scss","src/assets/stylesheets/main/layout/_footer.scss","src/assets/stylesheets/main/layout/_form.scss","src/assets/stylesheets/main/layout/_header.scss","src/assets/stylesheets/main/layout/_nav.scss","src/assets/stylesheets/main/layout/_search.scss","src/assets/stylesheets/main/layout/_select.scss","src/assets/stylesheets/main/layout/_sidebar.scss","src/assets/stylesheets/main/layout/_source.scss","src/assets/stylesheets/main/layout/_tabs.scss","src/assets/stylesheets/main/layout/_tag.scss","src/assets/stylesheets/main/layout/_tooltip.scss","src/assets/stylesheets/main/layout/_top.scss","src/assets/stylesheets/main/layout/_version.scss","src/assets/stylesheets/main/extensions/markdown/_admonition.scss","node_modules/material-design-color/material-color.scss","src/assets/stylesheets/main/extensions/markdown/_footnotes.scss","src/assets/stylesheets/main/extensions/markdown/_toc.scss","src/assets/stylesheets/main/extensions/pymdownx/_arithmatex.scss","src/assets/stylesheets/main/extensions/pymdownx/_critic.scss","src/assets/stylesheets/main/extensions/pymdownx/_details.scss","src/assets/stylesheets/main/extensions/pymdownx/_emoji.scss","src/assets/stylesheets/main/extensions/pymdownx/_highlight.scss","src/assets/stylesheets/main/extensions/pymdownx/_tabbed.scss","src/assets/stylesheets/main/extensions/pymdownx/_tasklist.scss","src/assets/stylesheets/main/integrations/_mermaid.scss","src/assets/stylesheets/main/_modifiers.scss"],"names":[],"mappings":"AAgGM,gBCo+GN,CCxiHA,KAEE,6BAAA,CAAA,0BAAA,CAAA,qBAAA,CADA,qBDzBF,CC8BA,iBAGE,kBD3BF,CC8BE,gCANF,iBAOI,yBDzBF,CACF,CC6BA,KACE,QD1BF,CC8BA,qBAIE,uCD3BF,CC+BA,EACE,aAAA,CACA,oBD5BF,CCgCA,GAME,QAAA,CAJA,kBAAA,CADA,aAAA,CAEA,aAAA,CAEA,gBAAA,CADA,SD3BF,CCiCA,MACE,aD9BF,CCkCA,QAEE,eD/BF,CCmCA,IACE,iBDhCF,CCoCA,MACE,uBAAA,CACA,gBDjCF,CCqCA,MAEE,eAAA,CACA,kBDlCF,CCsCA,OAKE,gBAAA,CACA,QAAA,CAFA,mBAAA,CADA,iBAAA,CAFA,QAAA,CACA,SD/BF,CCuCA,MACE,QAAA,CACA,YDpCF,CErDA,MAIE,6BAAA,CACA,oCAAA,CACA,mCAAA,CACA,0BAAA,CACA,sCAAA,CAGA,4BAAA,CACA,2CAAA,CACA,yBAAA,CACA,qCFmDF,CEpCA,qCAGE,+BAAA,CACA,sCAAA,CACA,wCAAA,CACA,yCAAA,CACA,0BAAA,CACA,sCAAA,CACA,wCAAA,CACA,yCAAA,CAGA,0BAAA,CACA,0BAAA,CAGA,4BAAA,CACA,iCAAA,CACA,kCAAA,CACA,mCAAA,CACA,mCAAA,CACA,kCAAA,CACA,iCAAA,CACA,+CAAA,CACA,6DAAA,CACA,gEAAA,CACA,4DAAA,CACA,4DAAA,CACA,6DAAA,CAGA,6CAAA,CAGA,+CAAA,CAGA,iCAAA,CAGA,gCAAA,CACA,gCAAA,CAGA,8BAAA,CACA,kCAAA,CACA,qCAAA,CAGA,kCAAA,CAGA,mDAAA,CACA,mDAAA,CAGA,yBAAA,CACA,qCAAA,CACA,uCAAA,CACA,8BAAA,CACA,oCAAA,CAGA,8DAAA,CAKA,8DAAA,CAKA,0DFaF,CGjHE,aAIE,iBAAA,CAHA,aAAA,CAEA,aAAA,CADA,YHsHJ,CI3HA,KACE,kCAAA,CACA,iCAAA,CAGA,uGAAA,CAKA,mFJ4HF,CItHA,WAGE,mCAAA,CACA,sCJyHF,CIrHA,wBANE,6BJmIF,CI7HA,aAIE,4BAAA,CACA,sCJwHF,CIhHA,MACE,0NAAA,CACA,mNAAA,CACA,oNJmHF,CI5GA,YAGE,gCAAA,CAAA,kBAAA,CAFA,eAAA,CACA,eJgHF,CI3GE,aAPF,YAQI,gBJ8GF,CACF,CI3GE,uGAME,iBAAA,CAAA,cJ6GJ,CIzGE,eAEE,uCAAA,CAEA,aAAA,CACA,e
AAA,CAJA,iBJgHJ,CIvGE,8BAPE,eAAA,CAGA,qBJkHJ,CI9GE,eAGE,kBAAA,CACA,eAAA,CAHA,oBJ6GJ,CIrGE,eAGE,gBAAA,CADA,eAAA,CAGA,qBAAA,CADA,eAAA,CAHA,mBJ2GJ,CInGE,kBACE,eJqGJ,CIjGE,eAEE,eAAA,CACA,qBAAA,CAFA,YJqGJ,CI/FE,8BAGE,uCAAA,CAEA,cAAA,CADA,eAAA,CAEA,qBAAA,CAJA,eJqGJ,CI7FE,eACE,wBJ+FJ,CI3FE,eAGE,+DAAA,CAFA,iBAAA,CACA,cJ8FJ,CIzFE,cACE,+BAAA,CACA,qBJ2FJ,CIxFI,mCAEE,sBJyFN,CIrFI,wCAEE,+BJsFN,CInFM,kDACE,uDJqFR,CIhFI,mBACE,kBAAA,CACA,iCJkFN,CI9EI,4BACE,uCAAA,CACA,oBJgFN,CI3EE,iDAGE,6BAAA,CACA,aAAA,CACA,2BJ6EJ,CI1EI,aARF,iDASI,oBJ+EJ,CACF,CI3EE,iBAIE,wCAAA,CACA,mBAAA,CACA,kCAAA,CAAA,0BAAA,CAJA,eAAA,CADA,uBAAA,CAEA,qBJgFJ,CI1EI,qCAEE,uCAAA,CADA,YJ6EN,CIvEE,gBAEE,iBAAA,CACA,eAAA,CAFA,iBJ2EJ,CItEI,qBAQE,kCAAA,CAAA,0BAAA,CADA,eAAA,CANA,aAAA,CACA,QAAA,CAIA,uCAAA,CAFA,aAAA,CADA,oCAAA,CAQA,yDAAA,CADA,oBAAA,CADA,iBAAA,CAJA,iBJ8EN,CIrEM,2BACE,+CJuER,CInEM,wCAEE,YAAA,CADA,WJsER,CIjEM,8CACE,oDJmER,CIhEQ,oDACE,0CJkEV,CI3DE,gBAOE,4CAAA,CACA,mBAAA,CACA,mKACE,CAPF,gCAAA,CAFA,oBAAA,CAGA,eAAA,CAFA,uBAAA,CAGA,uBAAA,CACA,qBJgEJ,CItDE,iBAGE,6CAAA,CACA,kCAAA,CAAA,0BAAA,CAHA,aAAA,CACA,qBJ0DJ,CIpDE,iBAEE,6DAAA,CACA,WAAA,CAFA,oBJwDJ,CInDI,oBANF,iBAOI,iBJsDJ,CInDI,yDAWE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CAKA,mBAAA,CAXA,oBAAA,CAOA,eAAA,CAHA,cAAA,CADA,aAAA,CADA,6BAAA,CAAA,qBAAA,CAGA,mBAAA,CAPA,iBAAA,CAGA,UJ+DN,CInEI,sDAWE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CAKA,mBAAA,CAXA,oBAAA,CAOA,eAAA,CAHA,cAAA,CADA,aAAA,CADA,0BAAA,CAAA,qBAAA,CAGA,mBAAA,CAPA,iBAAA,CAGA,UJ+DN,CInEI,mEAEE,MJiEN,CInEI,gEAEE,MJiEN,CInEI,0DAEE,MJiEN,CInEI,mEAEE,OJiEN,CInEI,gEAEE,OJiEN,CInEI,0DAEE,OJiEN,CInEI,gDAWE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CAKA,mBAAA,CAXA,oBAAA,CAOA,eAAA,CAHA,cAAA,CADA,aAAA,CADA,6BAAA,CAAA,0BAAA,CAAA,qBAAA,CAGA,mBAAA,CAPA,iBAAA,CAGA,UJ+DN,CACF,CIhDE,kBACE,WJkDJ,CI9CE,oDAEE,qBJgDJ,CIlDE,oDAEE,sBJgDJ,CI5CE,iCACE,kBJiDJ,CIlDE,iCACE,mBJiDJ,CIlDE,iCAIE,2DJ8CJ,CIlDE,iCAIE,4DJ8CJ,CIlDE,uBAGE,uCAAA,CADA,aAAA,CAAA,cJgDJ,CI1CE,eACE,oBJ4CJ,CIxCE,kDAEE,kBJ2CJ,CI7CE,kDAEE,mBJ2CJ,CI7CE,8BAGE,SJ0CJ,CIvCI,0DACE,iBJ0CN,CItCI,oCACE,2BJyCN,CItCM,0CACE,2BJyCR,CIpCI,wDAEE,kBJuCN,CIzCI,wDAEE,mBJuCN,CIzCI,oCACE,kBJwCN,CIpCM,kGAEE,aJwCR,CIpCM,0DACE,eJuCR,CInCM,4EACE,kBAAA,CAAA,eJuCR,CIxCM,sEACE,kBAAA,CAAA,eJuCR,CIxCM,gGAEE,kBJsCR,CIxCM,0FAEE,kBJsCR,CIxCM,8EAEE,kBJsCR,CIxCM,gGAEE,mBJsCR,CIxCM,0FAEE,mBJsCR,CIxCM,8EAEE,mBJsCR,CIxCM,0DACE,kBAAA,CAAA,eJuCR,CIhCE,yBAEE,mBJkCJ,CIpCE,yBAEE,oBJkCJ,CIpCE,eACE,mBAAA,CAAA,cJmCJ,CI9BE,kDAIE,WAAA,CADA,cJiCJ,CIzBI,4BAEE,oBJ2BN,CIvBI,6BAEE,oBJyBN,CIrBI,kCACE,YJuBN,CInBI,8EAEE,YJoBN,CIfE,mBACE,iBAAA,CAGA,eAAA,CADA,cAAA,CAEA,iBAAA,CAHA,yBAAA,CAAA,sBAAA,CAAA,iBJoBJ,CIdI,uBACE,aJgBN,CIXE,uBAGE,iBAAA,CADA,eAAA,CADA,eJeJ,CITE,mBACE,cJWJ,CIPE,+BAKE,2CAAA,CACA,iDAAA,CACA,mBAAA,CANA,oBAAA,CAGA,gBAAA,CAFA,cAAA,CACA,aAAA,CAKA,iBJSJ,CINI,aAXF,+BAYI,aJSJ,CACF,CIJI,iCACE,gBJMN,CICM,gEACE,YJCR,CIFM,6DACE,YJCR,CIFM,uDACE,YJCR,CIGM,+DACE,eJDR,CIAM,4DACE,eJDR,CIAM,sDACE,eJDR,CIMI,gEACE,eJJN,CIGI,6DACE,eJJN,CIGI,uDACE,eJJN,CIOM,0EACE,gBJLR,CIIM,uEACE,gBJLR,CIIM,iEACE,gBJLR,CIUI,kCAGE,eAAA,CAFA,cAAA,CACA,sBAAA,CAEA,kBJRN,CIYI,kCAGE,qDAAA,CAFA,sBAAA,CACA,kBJTN,CIcI,wCACE,iCJZN,CIeM,8CACE,iCAAA,CACA,sDJbR,CIkBI,iCACE,iBJhBN,CIqBE,wCACE,cJnBJ,CIsBI,wDAIE,gBJdN,CIUI,wDAIE,iBJdN,CIUI,8CAUE,UAAA,CATA,oBAAA,CAEA,YAAA,CAGA,oDAAA,CAAA,4CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CACA,iCAAA,CAJA,0BAAA,CAHA,WJZN,CIwBI,oDACE,oDJtBN,CI0BI,mEACE,kDAAA,CACA,yDAAA,CAAA,iDJxBN,CI4BI,oEACE,kDAAA,CACA,0DAAA,CAAA,kDJ1BN,CI+BE,wBACE,iBAAA,CACA,eAAA,CACA,iBJ7BJ,CIiCE,mBACE,oBAAA,CACA,kBAAA,CACA,eJ/BJ,CIkCI,aANF,mBAOI,aJ/BJ,CACF,CIkCI,8BACE,aAAA,CAEA,QA
AA,CACA,eAAA,CAFA,UJ9BN,CK7VI,wCD0YF,uBACE,iBJzCF,CI4CE,4BACE,eJ1CJ,CACF,CM/hBA,WAGE,0CAAA,CADA,+BAAA,CADA,aNmiBF,CM9hBE,aANF,WAOI,YNiiBF,CACF,CM9hBE,oBAEE,uCAAA,CADA,gCNiiBJ,CM5hBE,kBAGE,eAAA,CAFA,iBAAA,CACA,eN+hBJ,CM1hBE,6BACE,WN+hBJ,CMhiBE,6BACE,UN+hBJ,CMhiBE,mBAEE,aAAA,CACA,cAAA,CACA,uBN4hBJ,CMzhBI,yBACE,UN2hBN,CO3jBA,KASE,cAAA,CARA,WAAA,CACA,iBP+jBF,CK3ZI,oCEtKJ,KAaI,gBPwjBF,CACF,CKhaI,oCEtKJ,KAkBI,cPwjBF,CACF,COnjBA,KASE,2CAAA,CAPA,YAAA,CACA,qBAAA,CAKA,eAAA,CAHA,eAAA,CAJA,iBAAA,CAGA,UPyjBF,COjjBE,aAZF,KAaI,aPojBF,CACF,CKjaI,wCEhJF,yBAII,cPijBJ,CACF,COxiBA,SAEE,gBAAA,CAAA,iBAAA,CADA,eP4iBF,COviBA,cACE,YAAA,CACA,qBAAA,CACA,WP0iBF,COviBE,aANF,cAOI,aP0iBF,CACF,COtiBA,SACE,WPyiBF,COtiBE,gBACE,YAAA,CACA,WAAA,CACA,iBPwiBJ,COniBA,aACE,eAAA,CAEA,sBAAA,CADA,kBPuiBF,CO7hBA,WACE,YPgiBF,CO3hBA,WAGE,QAAA,CACA,SAAA,CAHA,iBAAA,CACA,OPgiBF,CO3hBE,uCACE,aP6hBJ,COzhBE,+BAEE,uCAAA,CADA,kBP4hBJ,COthBA,SASE,2CAAA,CACA,mBAAA,CAHA,gCAAA,CACA,gBAAA,CAHA,YAAA,CAQA,SAAA,CAFA,uCAAA,CALA,mBAAA,CALA,cAAA,CAWA,2BAAA,CARA,UPgiBF,COphBE,eAGE,SAAA,CADA,uBAAA,CAEA,oEACE,CAJF,UPyhBJ,CO3gBA,MACE,WP8gBF,CQxqBA,MACE,+PR0qBF,CQpqBA,cAQE,mBAAA,CADA,0CAAA,CAIA,cAAA,CALA,YAAA,CAGA,uCAAA,CACA,oBAAA,CATA,iBAAA,CAEA,UAAA,CADA,QAAA,CAUA,qBAAA,CAPA,WAAA,CADA,SR+qBF,CQpqBE,aAfF,cAgBI,YRuqBF,CACF,CQpqBE,kCAEE,uCAAA,CADA,YRuqBJ,CQlqBE,qBACE,uCRoqBJ,CQhqBE,yCACE,+BRkqBJ,CQnqBE,sCACE,+BRkqBJ,CQnqBE,gCACE,+BRkqBJ,CQ7pBE,oBAKE,6BAAA,CAKA,UAAA,CATA,aAAA,CAEA,cAAA,CACA,aAAA,CAEA,2CAAA,CAAA,mCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CAPA,aRuqBJ,CQ3pBE,sBACE,cR6pBJ,CQ1pBI,2BACE,2CR4pBN,CQtpBI,sDAEE,uDAAA,CADA,+BRypBN,CQ1pBI,mDAEE,uDAAA,CADA,+BRypBN,CQ1pBI,6CAEE,uDAAA,CADA,+BRypBN,CS/tBA,mBACE,GAEE,SAAA,CADA,0BTmuBF,CS/tBA,GAEE,SAAA,CADA,uBTkuBF,CACF,CS7tBA,mBACE,GACE,ST+tBF,CS5tBA,GACE,ST8tBF,CACF,CSntBE,qBASE,2BAAA,CADA,mCAAA,CAAA,2BAAA,CAFA,0BAAA,CADA,WAAA,CAEA,SAAA,CANA,cAAA,CACA,KAAA,CAEA,UAAA,CADA,ST2tBJ,CSjtBE,mBAcE,mDAAA,CANA,2CAAA,CACA,QAAA,CACA,mBAAA,CARA,QAAA,CASA,kDACE,CAPF,eAAA,CAEA,aAAA,CADA,SAAA,CALA,cAAA,CAGA,UAAA,CADA,ST4tBJ,CS7sBE,kBACE,aT+sBJ,CS3sBE,sBACE,YAAA,CACA,YT6sBJ,CS1sBI,oCACE,aT4sBN,CSvsBE,sBACE,mBTysBJ,CStsBI,6CACE,cTwsBN,CKlmBI,wCIvGA,6CAKI,aAAA,CAEA,gBAAA,CACA,iBAAA,CAFA,UT0sBN,CACF,CSnsBE,kBACE,cTqsBJ,CUtyBA,YACE,WAAA,CAIA,WVsyBF,CUnyBE,mBACE,qBAAA,CACA,iBVqyBJ,CKzoBI,sCKtJE,4EACE,kBVkyBN,CU9xBI,0JACE,mBVgyBN,CUjyBI,8EACE,kBVgyBN,CACF,CU3xBI,0BAGE,UAAA,CAFA,aAAA,CACA,YV8xBN,CUzxBI,+BACE,eV2xBN,CUrxBE,8BACE,WV0xBJ,CU3xBE,8BACE,UV0xBJ,CU3xBE,8BAGE,iBVwxBJ,CU3xBE,8BAGE,kBVwxBJ,CU3xBE,oBAEE,cAAA,CAEA,SVuxBJ,CUpxBI,aAPF,oBAQI,YVuxBJ,CACF,CUpxBI,gCACE,yCVsxBN,CUlxBI,wBACE,cAAA,CACA,kBVoxBN,CUjxBM,kCACE,oBVmxBR,CWp1BA,qBAEE,WXk2BF,CWp2BA,qBAEE,UXk2BF,CWp2BA,WAOE,2CAAA,CACA,mBAAA,CALA,YAAA,CAMA,8BAAA,CAJA,iBAAA,CAMA,SAAA,CALA,mBAAA,CASA,mBAAA,CAdA,cAAA,CASA,0BAAA,CAEA,wCACE,CATF,SXg2BF,CWl1BE,aAlBF,WAmBI,YXq1BF,CACF,CWl1BE,mBAEE,SAAA,CAIA,mBAAA,CALA,uBAAA,CAEA,kEXq1BJ,CW90BE,kBACE,gCAAA,CACA,eXg1BJ,CYn3BA,aACE,gBAAA,CACA,iBZs3BF,CYn3BE,sBAGE,WAAA,CAFA,QAAA,CACA,SZs3BJ,CYj3BE,oBAEE,eAAA,CADA,eZo3BJ,CY/2BE,oBACE,iBZi3BJ,CY72BE,mBAIE,sBAAA,CAFA,YAAA,CACA,cAAA,CAEA,sBAAA,CAJA,iBZm3BJ,CY52BI,iDACE,yCZ82BN,CY12BI,6BACE,iBZ42BN,CYv2BE,mBAGE,uCAAA,CACA,cAAA,CAHA,aAAA,CACA,cAAA,CAGA,sBZy2BJ,CYt2BI,gDACE,+BZw2BN,CYp2BI,4BACE,0CAAA,CACA,mBZs2BN,CYj2BE,mBAGE,SAAA,CAFA,iBAAA,CACA,2BAAA,CAEA,8DZm2BJ,CY91BI,qBAEE,aAAA,CADA,eZi2BN,CY51BI,6BAEE,SAAA,CADA,uBZ+1BN,Ca76BA,WAEE,0CAAA,CADA,+Bbi7BF,Ca76BE,aALF,WAMI,Ybg7BF,CACF,Ca76BE,kBACE,6BAAA,CAEA,aAAA,CADA,abg7BJ,Ca56BI,gCACE,Yb86BN,Caz6BE,iBACE,YAAA,CAKA,cAAA
,CAIA,uCAAA,CADA,eAAA,CADA,oBAAA,CADA,kBAAA,CAIA,uBbu6BJ,Cap6BI,4CACE,Ubs6BN,Cav6BI,yCACE,Ubs6BN,Cav6BI,mCACE,Ubs6BN,Cal6BI,+BACE,oBbo6BN,CKrxBI,wCQrII,yCACE,Yb65BR,CACF,Cax5BI,iCACE,gBb25BN,Ca55BI,iCACE,iBb25BN,Ca55BI,uBAEE,gBb05BN,Cav5BM,iCACE,eby5BR,Can5BE,kBAEE,WAAA,CAGA,eAAA,CACA,kBAAA,CAHA,6BAAA,CACA,cAAA,CAHA,iBAAA,CAMA,kBbq5BJ,Caj5BE,mBACE,YAAA,CACA,abm5BJ,Ca/4BE,sBAKE,gBAAA,CAHA,MAAA,CACA,gBAAA,CAGA,UAAA,CAFA,cAAA,CAHA,iBAAA,CACA,Obq5BJ,Ca54BA,gBACE,gDb+4BF,Ca54BE,uBACE,YAAA,CACA,cAAA,CACA,6BAAA,CACA,ab84BJ,Ca14BE,kCACE,sCb44BJ,Caz4BI,6DACE,+Bb24BN,Ca54BI,0DACE,+Bb24BN,Ca54BI,oDACE,+Bb24BN,Can4BA,cAIE,wCAAA,CACA,gBAAA,CAHA,iBAAA,CACA,eAAA,CAFA,Ub04BF,CKj2BI,mCQ1CJ,cASI,Ubs4BF,CACF,Cal4BE,yBACE,sCbo4BJ,Ca73BA,WACE,cAAA,CACA,qBbg4BF,CK92BI,mCQpBJ,WAMI,ebg4BF,CACF,Ca73BE,iBACE,oBAAA,CAEA,aAAA,CACA,iBAAA,CAFA,Ybi4BJ,Ca53BI,wBACE,eb83BN,Ca13BI,qBAGE,iBAAA,CAFA,gBAAA,CACA,mBb63BN,CcpiCE,uBAKE,kBAAA,CACA,mBAAA,CAHA,gCAAA,CAIA,cAAA,CANA,oBAAA,CAGA,eAAA,CAFA,kBAAA,CAMA,gEduiCJ,CcjiCI,gCAEE,2CAAA,CACA,uCAAA,CAFA,gCdqiCN,Cc/hCI,kDAEE,0CAAA,CACA,sCAAA,CAFA,+BdmiCN,CcpiCI,+CAEE,0CAAA,CACA,sCAAA,CAFA,+BdmiCN,CcpiCI,yCAEE,0CAAA,CACA,sCAAA,CAFA,+BdmiCN,Cc5hCE,gCAKE,4BdiiCJ,CctiCE,gEAME,6BdgiCJ,CctiCE,gCAME,4BdgiCJ,CctiCE,sBAIE,6DAAA,CAGA,8BAAA,CAJA,eAAA,CAFA,aAAA,CACA,eAAA,CAMA,sCd8hCJ,CczhCI,iDACE,6CAAA,CACA,8Bd2hCN,Cc7hCI,8CACE,6CAAA,CACA,8Bd2hCN,Cc7hCI,wCACE,6CAAA,CACA,8Bd2hCN,CcvhCI,+BACE,UdyhCN,Ce5kCA,WAOE,2CAAA,CAGA,8CACE,CALF,gCAAA,CADA,aAAA,CAFA,MAAA,CAFA,uBAAA,CAAA,eAAA,CAEA,OAAA,CADA,KAAA,CAEA,SfmlCF,CexkCE,aAfF,WAgBI,Yf2kCF,CACF,CexkCE,mBACE,2BAAA,CACA,iEf0kCJ,CepkCE,mBACE,kDACE,CAEF,kEfokCJ,Ce9jCE,kBAEE,kBAAA,CADA,YAAA,CAEA,efgkCJ,Ce5jCE,mBAKE,kBAAA,CAGA,cAAA,CALA,YAAA,CAIA,uCAAA,CAHA,aAAA,CAHA,iBAAA,CAQA,uBAAA,CAHA,qBAAA,CAJA,SfqkCJ,Ce3jCI,yBACE,Uf6jCN,CezjCI,iCACE,oBf2jCN,CevjCI,uCAEE,uCAAA,CADA,Yf0jCN,CerjCI,2BACE,YAAA,CACA,afujCN,CK18BI,wCU/GA,2BAMI,YfujCN,CACF,CepjCM,iDAIE,iBAAA,CAHA,aAAA,CAEA,aAAA,CADA,UfwjCR,Ce1jCM,8CAIE,iBAAA,CAHA,aAAA,CAEA,aAAA,CADA,UfwjCR,Ce1jCM,wCAIE,iBAAA,CAHA,aAAA,CAEA,aAAA,CADA,UfwjCR,CKx+BI,mCUzEA,iCAII,YfijCN,CACF,Ce9iCM,wCACE,YfgjCR,Ce5iCM,+CACE,oBf8iCR,CKn/BI,sCUtDA,iCAII,YfyiCN,CACF,CepiCE,kBAEE,YAAA,CACA,cAAA,CAFA,iBAAA,CAIA,8DACE,CAFF,kBfuiCJ,CejiCI,oCAGE,SAAA,CAIA,mBAAA,CALA,6BAAA,CAEA,8DACE,CAJF,UfuiCN,Ce9hCM,8CACE,8BfgiCR,Ce3hCI,8BACE,ef6hCN,CexhCE,4BAGE,kBf6hCJ,CehiCE,4BAGE,iBf6hCJ,CehiCE,4BAIE,gBf4hCJ,CehiCE,4BAIE,iBf4hCJ,CehiCE,kBACE,WAAA,CAIA,eAAA,CAHA,aAAA,CAIA,kBf0hCJ,CevhCI,4CAGE,SAAA,CAIA,mBAAA,CALA,8BAAA,CAEA,8DACE,CAJF,Uf6hCN,CephCM,sDACE,6BfshCR,CelhCM,8DAGE,SAAA,CAIA,mBAAA,CALA,uBAAA,CAEA,8DACE,CAJF,SfwhCR,Ce7gCI,uCAGE,WAAA,CAFA,iBAAA,CACA,UfghCN,Ce1gCE,mBACE,YAAA,CACA,aAAA,CACA,cAAA,CAEA,+CACE,CAFF,kBf6gCJ,CevgCI,8DACE,WAAA,CACA,SAAA,CACA,oCfygCN,CelgCE,mBACE,YfogCJ,CKzjCI,mCUoDF,6BAQI,gBfogCJ,Ce5gCA,6BAQI,iBfogCJ,Ce5gCA,mBAKI,aAAA,CAEA,iBAAA,CADA,afsgCJ,CACF,CKjkCI,sCUoDF,6BAaI,kBfogCJ,CejhCA,6BAaI,mBfogCJ,CACF,CgB5uCA,MACE,0MAAA,CACA,gMAAA,CACA,yNhB+uCF,CgBzuCA,QACE,eAAA,CACA,ehB4uCF,CgBzuCE,eACE,aAAA,CAGA,eAAA,CADA,eAAA,CADA,eAAA,CAGA,sBhB2uCJ,CgBxuCI,+BACE,YhB0uCN,CgBvuCM,mCAEE,WAAA,CADA,UhB0uCR,CgBluCQ,6DAME,iBAAA,CALA,aAAA,CAGA,aAAA,CADA,cAAA,CAEA,kBAAA,CAHA,UhBwuCV,CgB1uCQ,0DAME,iBAAA,CALA,aAAA,CAGA,aAAA,CADA,cAAA,CAEA,kBAAA,CAHA,UhBwuCV,CgB1uCQ,oDAME,iBAAA,CALA,aAAA,CAGA,aAAA,CADA,cAAA,CAEA,kBAAA,CAHA,UhBwuCV,CgB7tCE,cAGE,eAAA,CAFA,QAAA,CACA,ShBguCJ,CgB3tCE,cACE,ehB6tCJ,CgB1tCI,sCACE,ehB4tCN,CgB7tCI,sCACE,chB4tCN,CgBvtCE,cAEE,kBAAA,CAKA,cAAA,CANA,YAAA,CAEA,6BAAA,CACA,iBAAA,CACA,eAAA,CAIA,uBAAA,CAHA,sBAAA,CAEA,sBhB0tCJ,CgBttC
I,sBACE,uChBwtCN,CgBptCI,oCACE,+BhBstCN,CgBltCI,0CACE,UhBotCN,CgBhtCI,yCACE,+BhBktCN,CgBntCI,sCACE,+BhBktCN,CgBntCI,gCACE,+BhBktCN,CgB9sCI,4BACE,uCAAA,CACA,oBhBgtCN,CgB5sCI,0CACE,YhB8sCN,CgB3sCM,yDAKE,6BAAA,CAJA,aAAA,CAEA,WAAA,CACA,qCAAA,CAAA,6BAAA,CAFA,UhBgtCR,CgBzsCM,kDACE,YhB2sCR,CgBtsCI,gBAEE,cAAA,CADA,YhBysCN,CgBnsCE,cACE,ahBqsCJ,CgBjsCE,gBACE,YhBmsCJ,CKjpCI,wCW3CA,0CASE,2CAAA,CAHA,YAAA,CACA,qBAAA,CACA,WAAA,CAJA,MAAA,CAFA,iBAAA,CAEA,OAAA,CADA,KAAA,CAEA,ShBksCJ,CgBvrCI,4DACE,eAAA,CACA,ehByrCN,CgB3rCI,yDACE,eAAA,CACA,ehByrCN,CgB3rCI,mDACE,eAAA,CACA,ehByrCN,CgBrrCI,gCAOE,qDAAA,CAHA,uCAAA,CAIA,cAAA,CANA,aAAA,CAGA,kBAAA,CAFA,wBAAA,CAFA,iBAAA,CAKA,kBhByrCN,CgBprCM,wDAGE,UhB0rCR,CgB7rCM,wDAGE,WhB0rCR,CgB7rCM,8CAIE,aAAA,CAEA,aAAA,CACA,YAAA,CANA,iBAAA,CACA,SAAA,CAGA,YhBwrCR,CgBnrCQ,oDAIE,6BAAA,CAKA,UAAA,CARA,aAAA,CAEA,WAAA,CAEA,2CAAA,CAAA,mCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CANA,UhB4rCV,CgBhrCM,8CAEE,2CAAA,CACA,gEACE,CAHF,eAAA,CAIA,4BAAA,CACA,kBhBirCR,CgB9qCQ,2DACE,YhBgrCV,CgB3qCM,8CAGE,2CAAA,CAFA,gCAAA,CACA,ehB8qCR,CgBzqCM,yCAIE,aAAA,CADA,UAAA,CAEA,YAAA,CACA,aAAA,CALA,iBAAA,CAEA,WAAA,CADA,ShB+qCR,CgBtqCI,+BACE,MhBwqCN,CgBpqCI,+BAEE,4DAAA,CADA,ShBuqCN,CgBnqCM,qDACE,+BhBqqCR,CgBlqCQ,gFACE,+BhBoqCV,CgBrqCQ,6EACE,+BhBoqCV,CgBrqCQ,uEACE,+BhBoqCV,CgB9pCI,+BACE,YAAA,CACA,mBhBgqCN,CgB7pCM,uDAGE,mBhBgqCR,CgBnqCM,uDAGE,kBhBgqCR,CgBnqCM,6CAIE,gBAAA,CAFA,aAAA,CADA,YhBkqCR,CgB5pCQ,mDAIE,6BAAA,CAKA,UAAA,CARA,aAAA,CAEA,WAAA,CAEA,2CAAA,CAAA,mCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CANA,UhBqqCV,CgBrpCM,+CACE,mBhBupCR,CgB/oCM,4CAEE,wBAAA,CADA,ehBkpCR,CgB9oCQ,oEACE,mBhBgpCV,CgBjpCQ,oEACE,oBhBgpCV,CgB5oCQ,4EACE,iBhB8oCV,CgB/oCQ,4EACE,kBhB8oCV,CgB1oCQ,oFACE,mBhB4oCV,CgB7oCQ,oFACE,oBhB4oCV,CgBxoCQ,4FACE,mBhB0oCV,CgB3oCQ,4FACE,oBhB0oCV,CgBnoCE,mBACE,wBhBqoCJ,CgBjoCE,wBACE,YAAA,CAEA,SAAA,CADA,0BAAA,CAEA,oEhBmoCJ,CgB9nCI,kCACE,2BhBgoCN,CgB3nCE,gCAEE,SAAA,CADA,uBAAA,CAEA,qEhB6nCJ,CgBxnCI,8CAEE,kCAAA,CAAA,0BhBynCN,CACF,CK/xCI,wCW8KA,0CACE,YhBonCJ,CgBjnCI,yDACE,UhBmnCN,CgB/mCI,wDACE,YhBinCN,CgB7mCI,kDACE,YhB+mCN,CgB1mCE,gBAIE,iDAAA,CADA,gCAAA,CAFA,aAAA,CACA,ehB8mCJ,CACF,CK51CM,6DWuPF,6CACE,YhBwmCJ,CgBrmCI,4DACE,UhBumCN,CgBnmCI,2DACE,YhBqmCN,CgBjmCI,qDACE,YhBmmCN,CACF,CKp1CI,mCWyPA,kCAME,qCAAA,CACA,qDAAA,CANA,uBAAA,CAAA,eAAA,CACA,KAAA,CAGA,ShB8lCJ,CgBzlCI,6CACE,uBhB2lCN,CgBvlCI,gDACE,YhBylCN,CACF,CKn2CI,sCW7JJ,QA6aI,oDhBulCF,CgBplCE,gCAME,qCAAA,CACA,qDAAA,CANA,uBAAA,CAAA,eAAA,CACA,KAAA,CAGA,ShBslCJ,CgBjlCI,8CACE,uBhBmlCN,CgBzkCE,sEACE,YhB8kCJ,CgB1kCE,6DACE,ahB4kCJ,CgB7kCE,0DACE,ahB4kCJ,CgB7kCE,oDACE,ahB4kCJ,CgBxkCE,6CACE,YhB0kCJ,CgBtkCE,uBACE,aAAA,CACA,ehBwkCJ,CgBrkCI,kCACE,ehBukCN,CgBnkCI,qCACE,eAAA,CACA,mBhBqkCN,CgBlkCM,mDACE,mBhBokCR,CgBhkCM,mDACE,YhBkkCR,CgB7jCI,+BACE,ahB+jCN,CgB5jCM,2DACE,ShB8jCR,CgBxjCE,cAGE,kBAAA,CADA,YAAA,CAEA,+CACE,CAJF,WhB6jCJ,CgBrjCI,wBACE,wBhBujCN,CgBnjCI,oBACE,uDhBqjCN,CgBjjCI,oBAKE,6BAAA,CAKA,UAAA,CATA,oBAAA,CAEA,WAAA,CAGA,2CAAA,CAAA,mCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CALA,qBAAA,CAFA,UhB2jCN,CgB/iCI,0JAEE,uBhBgjCN,CgBliCI,+HACE,YhBwiCN,CgBriCM,oDACE,aAAA,CACA,ShBuiCR,CgBpiCQ,kEAOE,qCAAA,CACA,qDAAA,CAFA,eAAA,CAFA,YAAA,CACA,eAAA,CAJA,uBAAA,CAAA,eAAA,CACA,KAAA,CACA,ShB2iCV,CgBniCU,4FACE,mBhBqiCZ,CgBjiCU,gFACE,YhBmiCZ,CgB3hCI,2CACE,ahB6hCN,CgB1hCM,iFACE,mBhB4hCR,CgB7hCM,iFACE,kBhB4hCR,CgBnhCI,mFACE,ehBqhCN,CgBlhCM,iGACE,ShBohCR,CgB/gCI,qFAGE,mDhBihCN,CgBphCI,qFAGE,oDhBihCN,CgBphCI,2EACE,aAAA,CACA,oBhBkhCN,CgB9gCM,0FACE,YhBghCR,CACF,CiBroDA,MACE,igBjBwoDF,CiBloDA,WACE,iBjBqoDF,CKv+CI,mCY/JJ,WAK
I,ejBqoDF,CACF,CiBloDE,kBACE,YjBooDJ,CiBhoDE,oBAEE,SAAA,CADA,SjBmoDJ,CKh+CI,wCYpKF,8BAQI,YjB0oDJ,CiBlpDA,8BAQI,ajB0oDJ,CiBlpDA,oBAYI,2CAAA,CACA,kBAAA,CAHA,WAAA,CACA,eAAA,CAOA,mBAAA,CAZA,iBAAA,CACA,SAAA,CAOA,uBAAA,CACA,4CACE,CAPF,UjByoDJ,CiB7nDI,+DACE,SAAA,CACA,oCjB+nDN,CACF,CKtgDI,mCYjJF,8BAiCI,MjBioDJ,CiBlqDA,8BAiCI,OjBioDJ,CiBlqDA,oBAoCI,0BAAA,CACA,cAAA,CAFA,QAAA,CAJA,cAAA,CACA,KAAA,CAMA,sDACE,CALF,OjBgoDJ,CiBtnDI,+DAME,YAAA,CACA,SAAA,CACA,4CACE,CARF,UjB2nDN,CACF,CKrgDI,wCYxGA,+DAII,mBjB6mDN,CACF,CKnjDM,6DY/DF,+DASI,mBjB6mDN,CACF,CKxjDM,6DY/DF,+DAcI,mBjB6mDN,CACF,CiBxmDE,kBAEE,kCAAA,CAAA,0BjBymDJ,CKvhDI,wCYpFF,4BAQI,MjBgnDJ,CiBxnDA,4BAQI,OjBgnDJ,CiBxnDA,kBAWI,QAAA,CAGA,SAAA,CAFA,eAAA,CANA,cAAA,CACA,KAAA,CAMA,wBAAA,CAEA,qGACE,CANF,OAAA,CADA,SjB+mDJ,CiBlmDI,4BACE,yBjBomDN,CiBhmDI,6DAEE,WAAA,CAEA,SAAA,CADA,uBAAA,CAEA,sGACE,CALF,UjBsmDN,CACF,CKlkDI,mCYjEF,4BA2CI,WjBgmDJ,CiB3oDA,4BA2CI,UjBgmDJ,CiB3oDA,kBA6CI,eAAA,CAHA,iBAAA,CAIA,8CAAA,CAFA,ajB+lDJ,CACF,CKjmDM,6DYOF,6DAII,ajB0lDN,CACF,CKhlDI,sCYfA,6DASI,ajB0lDN,CACF,CiBrlDE,iBAIE,2CAAA,CACA,0BAAA,CAFA,aAAA,CAFA,iBAAA,CAKA,2CACE,CALF,SjB2lDJ,CK7lDI,mCYAF,iBAaI,0BAAA,CACA,mBAAA,CAFA,ajBulDJ,CiBllDI,uBACE,0BjBolDN,CACF,CiBhlDI,4DAEE,2CAAA,CACA,6BAAA,CACA,8BAAA,CAHA,gCjBqlDN,CiB7kDE,4BAKE,mBAAA,CAAA,oBjBklDJ,CiBvlDE,4BAKE,mBAAA,CAAA,oBjBklDJ,CiBvlDE,kBAQE,gBAAA,CAFA,eAAA,CAFA,WAAA,CAHA,iBAAA,CAMA,sBAAA,CAJA,UAAA,CADA,SjBqlDJ,CiB5kDI,+BACE,qBjB8kDN,CiB1kDI,kEAEE,uCjB2kDN,CiBvkDI,6BACE,YjBykDN,CK7mDI,wCYaF,kBA8BI,eAAA,CADA,aAAA,CADA,UjB0kDJ,CACF,CKvoDI,mCYgCF,4BAmCI,mBjB0kDJ,CiB7mDA,4BAmCI,oBjB0kDJ,CiB7mDA,kBAoCI,aAAA,CACA,ejBwkDJ,CiBrkDI,+BACE,uCjBukDN,CiBnkDI,mCACE,gCjBqkDN,CiBjkDI,6DACE,kBjBmkDN,CiBhkDM,wJAEE,uCjBikDR,CACF,CiB3jDE,iBAIE,cAAA,CAHA,oBAAA,CAEA,aAAA,CAEA,kCACE,CAJF,YjBgkDJ,CiBxjDI,uBACE,UjB0jDN,CiBtjDI,yCAGE,UjByjDN,CiB5jDI,yCAGE,WjByjDN,CiB5jDI,+BACE,iBAAA,CACA,SAAA,CAEA,SjBwjDN,CiBrjDM,6CACE,oBjBujDR,CK1pDI,wCY2FA,yCAcI,UjBsjDN,CiBpkDE,yCAcI,WjBsjDN,CiBpkDE,+BAaI,SjBujDN,CiBnjDM,+CACE,YjBqjDR,CACF,CKtrDI,mCY8GA,+BAwBI,mBjBojDN,CiBjjDM,8CACE,YjBmjDR,CACF,CiB7iDE,8BAGE,WjBijDJ,CiBpjDE,8BAGE,UjBijDJ,CiBpjDE,oBAKE,mBAAA,CAJA,iBAAA,CACA,SAAA,CAEA,SjBgjDJ,CKlrDI,wCY8HF,8BAUI,WjB+iDJ,CiBzjDA,8BAUI,UjB+iDJ,CiBzjDA,oBASI,SjBgjDJ,CACF,CiB5iDI,gCACE,iBjBkjDN,CiBnjDI,gCACE,kBjBkjDN,CiBnjDI,sBAEE,uCAAA,CAEA,SAAA,CADA,oBAAA,CAEA,+DjB8iDN,CiBziDM,yCAEE,uCAAA,CADA,YjB4iDR,CiBviDM,yFAGE,SAAA,CACA,mBAAA,CAFA,kBjB0iDR,CiBriDQ,8FACE,UjBuiDV,CiBhiDE,8BAOE,mBAAA,CAAA,oBjBuiDJ,CiB9iDE,8BAOE,mBAAA,CAAA,oBjBuiDJ,CiB9iDE,oBAIE,kBAAA,CAIA,yCAAA,CALA,YAAA,CAMA,eAAA,CAHA,WAAA,CAKA,SAAA,CAVA,iBAAA,CACA,KAAA,CAUA,uBAAA,CAFA,kBAAA,CALA,UjByiDJ,CK5uDI,mCY8LF,8BAgBI,mBjBmiDJ,CiBnjDA,8BAgBI,oBjBmiDJ,CiBnjDA,oBAiBI,ejBkiDJ,CACF,CiB/hDI,+DACE,SAAA,CACA,0BjBiiDN,CiB5hDE,6BAKE,+BjB+hDJ,CiBpiDE,0DAME,gCjB8hDJ,CiBpiDE,6BAME,+BjB8hDJ,CiBpiDE,mBAIE,eAAA,CAHA,iBAAA,CAEA,UAAA,CADA,SjBkiDJ,CK3uDI,wCYuMF,mBAWI,QAAA,CADA,UjB+hDJ,CACF,CKpwDI,mCY0NF,mBAiBI,SAAA,CADA,UAAA,CAEA,sBjB8hDJ,CiB3hDI,8DACE,8BAAA,CACA,SjB6hDN,CACF,CiBxhDE,uBAKE,kCAAA,CAAA,0BAAA,CAFA,2CAAA,CAFA,WAAA,CACA,eAAA,CAOA,kBjBshDJ,CiBnhDI,iEAZF,uBAaI,uBjBshDJ,CACF,CKjzDM,6DY6QJ,uBAkBI,ajBshDJ,CACF,CKhyDI,sCYuPF,uBAuBI,ajBshDJ,CACF,CKryDI,mCYuPF,uBA4BI,YAAA,CAEA,yDAAA,CADA,oBjBuhDJ,CiBnhDI,kEACE,ejBqhDN,CiBjhDI,6BACE,+CjBmhDN,CiB/gDI,0CAEE,YAAA,CADA,WjBkhDN,CiB7gDI,gDACE,oDjB+gDN,CiB5gDM,sDACE,0CjB8gDR,CACF,CiBvgDA,kBACE,gCAAA,CACA,qBjB0gDF,CiBvgDE,wBAKE,qDAAA,CAHA,uCAAA,CACA,gBAAA,CACA,kBAAA,CAHA,eAAA,CAKA,uBjBygDJ,CKz0DI,mCY0TF,kCAUI,mBjBygDJ,CiBnhDA,kCAUI,oBjBygDJ,CACF,CiBrgDE,wBAGE,eAAA,CAFA,QAAA,CACA,SAAA,CAGA,w
BAAA,CAAA,qBAAA,CAAA,gBjBsgDJ,CiBlgDE,wBACE,yDjBogDJ,CiBjgDI,oCACE,ejBmgDN,CiB9/CE,wBACE,aAAA,CACA,YAAA,CAEA,uBAAA,CADA,gCjBigDJ,CiB7/CI,mDACE,uDjB+/CN,CiBhgDI,gDACE,uDjB+/CN,CiBhgDI,0CACE,uDjB+/CN,CiB3/CI,gDACE,mBjB6/CN,CiBx/CE,gCAGE,+BAAA,CAGA,cAAA,CALA,aAAA,CAGA,gBAAA,CACA,YAAA,CAHA,mBAAA,CAQA,uBAAA,CAHA,2CjB2/CJ,CKh3DI,mCY8WF,0CAcI,mBjBw/CJ,CiBtgDA,0CAcI,oBjBw/CJ,CACF,CiBr/CI,2DAEE,uDAAA,CADA,+BjBw/CN,CiBz/CI,wDAEE,uDAAA,CADA,+BjBw/CN,CiBz/CI,kDAEE,uDAAA,CADA,+BjBw/CN,CiBn/CI,wCACE,YjBq/CN,CiBh/CI,wDACE,YjBk/CN,CiB9+CI,oCACE,WjBg/CN,CiB3+CE,2BAGE,eAAA,CADA,eAAA,CADA,iBjB++CJ,CKv4DI,mCYuZF,qCAOI,mBjB6+CJ,CiBp/CA,qCAOI,oBjB6+CJ,CACF,CiBv+CM,8DAGE,eAAA,CADA,eAAA,CAEA,eAAA,CAHA,ejB4+CR,CiBn+CE,kCAEE,MjBy+CJ,CiB3+CE,kCAEE,OjBy+CJ,CiB3+CE,wBAME,uCAAA,CAFA,aAAA,CACA,YAAA,CAJA,iBAAA,CAEA,YjBw+CJ,CKv4DI,wCY4ZF,wBAUI,YjBq+CJ,CACF,CiBl+CI,8BAIE,6BAAA,CAKA,UAAA,CARA,oBAAA,CAEA,WAAA,CAEA,+CAAA,CAAA,uCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CANA,UjB2+CN,CiBj+CM,wCACE,oBjBm+CR,CiB79CE,yBAGE,gBAAA,CADA,eAAA,CAEA,eAAA,CAHA,ajBk+CJ,CiB39CE,0BASE,2BAAA,CACA,oBAAA,CALA,uCAAA,CAJA,mBAAA,CAKA,gBAAA,CACA,eAAA,CAJA,aAAA,CADA,eAAA,CAEA,eAAA,CAIA,sBjB+9CJ,CK56DI,wCYqcF,0BAeI,oBAAA,CADA,ejB89CJ,CACF,CK39DM,6DY8eJ,0BAqBI,oBAAA,CADA,ejB89CJ,CACF,CiB19CI,+BAEE,wBAAA,CADA,yBjB69CN,CiBv9CE,yBAEE,gBAAA,CACA,iBAAA,CAFA,ajB29CJ,CiBr9CE,uBAEE,wBAAA,CADA,+BjBw9CJ,CkB9nEA,WACE,iBAAA,CACA,SlBioEF,CkB9nEE,kBAOE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAHA,gCAAA,CAHA,QAAA,CAEA,gBAAA,CADA,YAAA,CAOA,SAAA,CAVA,iBAAA,CACA,sBAAA,CAQA,mCAAA,CAEA,oElBgoEJ,CkB1nEI,+DACE,gBAAA,CAEA,SAAA,CADA,+BAAA,CAEA,sFACE,CADF,8ElB4nEN,CkBhoEI,4DACE,gBAAA,CAEA,SAAA,CADA,+BAAA,CAEA,mFACE,CADF,8ElB4nEN,CkBhoEI,sDACE,gBAAA,CAEA,SAAA,CADA,+BAAA,CAEA,8ElB4nEN,CkBrnEI,wBAUE,+BAAA,CAAA,8CAAA,CAFA,6BAAA,CAAA,8BAAA,CACA,YAAA,CAEA,UAAA,CANA,QAAA,CAFA,QAAA,CAIA,kBAAA,CADA,iBAAA,CALA,iBAAA,CACA,KAAA,CAEA,OlB8nEN,CkBlnEE,iBAOE,mBAAA,CAFA,eAAA,CACA,oBAAA,CAJA,QAAA,CADA,kBAAA,CAGA,aAAA,CADA,SlBwnEJ,CkBhnEE,iBACE,kBlBknEJ,CkB9mEE,2BAGE,kBAAA,CAAA,oBlBonEJ,CkBvnEE,2BAGE,mBAAA,CAAA,mBlBonEJ,CkBvnEE,iBAKE,cAAA,CAJA,aAAA,CAGA,YAAA,CAKA,uBAAA,CAHA,2CACE,CALF,UlBqnEJ,CkB3mEI,4CACE,+BlB6mEN,CkB9mEI,yCACE,+BlB6mEN,CkB9mEI,mCACE,+BlB6mEN,CkBzmEI,uBACE,qDlB2mEN,CmB/rEA,YAIE,qBAAA,CADA,aAAA,CAGA,gBAAA,CALA,uBAAA,CAAA,eAAA,CACA,UAAA,CAGA,anBmsEF,CmB/rEE,aATF,YAUI,YnBksEF,CACF,CKphEI,wCc3KF,+BAMI,anBssEJ,CmB5sEA,+BAMI,cnBssEJ,CmB5sEA,qBAWI,2CAAA,CAHA,aAAA,CAEA,WAAA,CANA,cAAA,CACA,KAAA,CAOA,uBAAA,CACA,iEACE,CALF,aAAA,CAFA,SnBqsEJ,CmB1rEI,mEACE,8BAAA,CACA,6BnB4rEN,CmBzrEM,6EACE,8BnB2rER,CmBtrEI,6CAEE,QAAA,CAAA,MAAA,CACA,QAAA,CAEA,eAAA,CAJA,iBAAA,CACA,OAAA,CAEA,qBAAA,CAFA,KnB2rEN,CACF,CKnkEI,sCctKJ,YAuDI,QnBsrEF,CmBnrEE,mBACE,WnBqrEJ,CmBjrEE,6CACE,UnBmrEJ,CACF,CmB/qEE,uBACE,YAAA,CACA,OnBirEJ,CKllEI,mCcjGF,uBAMI,QnBirEJ,CmB9qEI,8BACE,WnBgrEN,CmB5qEI,qCACE,anB8qEN,CmB1qEI,+CACE,kBnB4qEN,CACF,CmBvqEE,wBAUE,uBAAA,CANA,kCAAA,CAAA,0BAAA,CAHA,cAAA,CACA,eAAA,CASA,yDAAA,CAFA,oBnBsqEJ,CmBjqEI,8BACE,+CnBmqEN,CmB/pEI,2CAEE,YAAA,CADA,WnBkqEN,CmB7pEI,iDACE,oDnB+pEN,CmB5pEM,uDACE,0CnB8pER,CmBhpEE,wCAGE,wBACE,qBnBgpEJ,CmB5oEE,6BACE,kCnB8oEJ,CmB/oEE,6BACE,iCnB8oEJ,CACF,CK1mEI,wCc5BF,YAME,0BAAA,CADA,QAAA,CAEA,SAAA,CANA,cAAA,CACA,KAAA,CAMA,sDACE,CALF,OAAA,CADA,SnB+oEF,CmBpoEE,4CAEE,WAAA,CACA,SAAA,CACA,4CACE,CAJF,UnByoEJ,CACF,CoBtzEA,iBACE,GACE,QpBwzEF,CoBrzEA,GACE,apBuzEF,CACF,CoBnzEA,gBACE,GAEE,SAAA,CADA,0BpBszEF,CoBlzEA,IACE,SpBozEF,CoBjzEA,GAEE,SAAA,CADA,uBpBozEF,CACF,CoB3yEA,MACE,mgBAAA,CACA,oiBAAA,CACA,0nBAAA,CACA,mhBpB6yEF,CoBvyEA,WAOE,kCAAA,CAAA,0BAAA,CANA,aAAA,CACA,gBAAA,CA
CA,eAAA,CAEA,uCAAA,CAGA,uBAAA,CAJA,kBpB6yEF,CoBtyEE,iBACE,UpBwyEJ,CoBpyEE,iBACE,oBAAA,CAEA,aAAA,CACA,qBAAA,CAFA,UpBwyEJ,CoBnyEI,+BAEE,iBpBqyEN,CoBvyEI,+BAEE,kBpBqyEN,CoBvyEI,qBACE,gBpBsyEN,CoBjyEI,kDACE,iBpBoyEN,CoBryEI,kDACE,kBpBoyEN,CoBryEI,kDAEE,iBpBmyEN,CoBryEI,kDAEE,kBpBmyEN,CoB9xEE,iCAGE,iBpBmyEJ,CoBtyEE,iCAGE,kBpBmyEJ,CoBtyEE,uBACE,oBAAA,CACA,6BAAA,CAEA,eAAA,CACA,sBAAA,CACA,qBpBgyEJ,CoB5xEE,kBACE,YAAA,CAMA,gBAAA,CALA,SAAA,CAMA,oBAAA,CAJA,gBAAA,CAKA,WAAA,CAHA,eAAA,CADA,SAAA,CAFA,UpBoyEJ,CoB3xEI,iDACE,4BpB6xEN,CoBxxEE,iBACE,eAAA,CACA,sBpB0xEJ,CoBvxEI,gDACE,2BpByxEN,CoBrxEI,kCAIE,kBpB6xEN,CoBjyEI,kCAIE,iBpB6xEN,CoBjyEI,wBAME,6BAAA,CAIA,UAAA,CATA,oBAAA,CAEA,YAAA,CAIA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CAJA,uBAAA,CAHA,WpB+xEN,CoBnxEI,iCACE,apBqxEN,CoBjxEI,iCACE,gDAAA,CAAA,wCpBmxEN,CoB/wEI,+BACE,8CAAA,CAAA,sCpBixEN,CoB7wEI,+BACE,8CAAA,CAAA,sCpB+wEN,CoB3wEI,sCACE,qDAAA,CAAA,6CpB6wEN,CqBp6EA,SASE,2CAAA,CAFA,gCAAA,CAHA,aAAA,CAIA,eAAA,CAFA,aAAA,CADA,UAAA,CAFA,SrB26EF,CqBl6EE,aAZF,SAaI,YrBq6EF,CACF,CK1vEI,wCgBzLJ,SAkBI,YrBq6EF,CACF,CqBl6EE,iBACE,mBrBo6EJ,CqBh6EE,yBAEE,iBrBs6EJ,CqBx6EE,yBAEE,kBrBs6EJ,CqBx6EE,eAME,eAAA,CADA,eAAA,CAJA,QAAA,CAEA,SAAA,CACA,kBrBo6EJ,CqB95EE,eACE,oBAAA,CACA,aAAA,CACA,kBAAA,CAAA,mBrBg6EJ,CqB35EE,eAOE,kCAAA,CAAA,0BAAA,CANA,aAAA,CAEA,eAAA,CADA,gBAAA,CAMA,UAAA,CAJA,uCAAA,CACA,oBAAA,CAIA,8DrB45EJ,CqBv5EI,iEAEE,aAAA,CACA,SrBw5EN,CqB35EI,8DAEE,aAAA,CACA,SrBw5EN,CqB35EI,wDAEE,aAAA,CACA,SrBw5EN,CqBn5EM,2CACE,qBrBq5ER,CqBt5EM,2CACE,qBrBw5ER,CqBz5EM,2CACE,qBrB25ER,CqB55EM,2CACE,qBrB85ER,CqB/5EM,2CACE,oBrBi6ER,CqBl6EM,2CACE,qBrBo6ER,CqBr6EM,2CACE,qBrBu6ER,CqBx6EM,2CACE,qBrB06ER,CqB36EM,4CACE,qBrB66ER,CqB96EM,4CACE,oBrBg7ER,CqBj7EM,4CACE,qBrBm7ER,CqBp7EM,4CACE,qBrBs7ER,CqBv7EM,4CACE,qBrBy7ER,CqB17EM,4CACE,qBrB47ER,CqB77EM,4CACE,oBrB+7ER,CqBz7EI,gCAEE,SAAA,CADA,yBAAA,CAEA,wCrB27EN,CsBxgFA,MACE,wStB2gFF,CsBlgFE,qBAEE,mBAAA,CADA,kBtBsgFJ,CsBjgFE,8BAEE,iBtB4gFJ,CsB9gFE,8BAEE,gBtB4gFJ,CsB9gFE,oBAUE,+CAAA,CACA,oBAAA,CAVA,oBAAA,CAKA,gBAAA,CADA,eAAA,CAGA,qBAAA,CADA,eAAA,CAJA,kBAAA,CACA,uBAAA,CAKA,qBtBqgFJ,CsBhgFI,0BAGE,uCAAA,CAFA,aAAA,CACA,YAAA,CAEA,6CtBkgFN,CsB7/EM,gEAGE,0CAAA,CADA,+BtB+/ER,CsBz/EI,yBACE,uBtB2/EN,CsBn/EI,gCAME,oDAAA,CAMA,UAAA,CAXA,oBAAA,CAEA,YAAA,CACA,iBAAA,CAGA,qCAAA,CAAA,6BAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CACA,iCAAA,CANA,0BAAA,CAHA,WtB+/EN,CsBj/EI,6DACE,0CtBm/EN,CsBp/EI,0DACE,0CtBm/EN,CsBp/EI,oDACE,0CtBm/EN,CuB5jFA,iBACE,GACE,uDAAA,CACA,oBvB+jFF,CuB5jFA,IACE,6BAAA,CACA,kBvB8jFF,CuB3jFA,GACE,wBAAA,CACA,oBvB6jFF,CACF,CuBrjFA,MACE,wBvBujFF,CuBjjFA,YAwBE,kCAAA,CAAA,0BAAA,CALA,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CACA,sCAAA,CAfA,+IACE,CAYF,8BAAA,CASA,SAAA,CAxBA,iBAAA,CACA,uBAAA,CAoBA,4BAAA,CAIA,uDACE,CAZF,6BAAA,CADA,SvB4jFF,CuB1iFE,oBAGE,SAAA,CADA,uBAAA,CAEA,2EACE,CAJF,SvB+iFJ,CuBriFE,4DACE,sCvBuiFJ,CuBxiFE,yDACE,sCvBuiFJ,CuBxiFE,mDACE,sCvBuiFJ,CuBniFE,mBAEE,gBAAA,CADA,avBsiFJ,CuBliFI,2CACE,YvBoiFN,CuBhiFI,0CACE,evBkiFN,CuB1hFA,eACE,eAAA,CAEA,YAAA,CADA,kBvB8hFF,CuB1hFE,yBACE,avB4hFJ,CuBxhFE,6BACE,oBAAA,CAGA,iBvBwhFJ,CuBphFE,sBAOE,cAAA,CAFA,sCAAA,CADA,eAAA,CADA,YAAA,CAGA,YAAA,CALA,iBAAA,CAOA,wBAAA,CAAA,qBAAA,CAAA,gBAAA,CANA,SvB4hFJ,CuBnhFI,qCACE,UAAA,CACA,uBvBqhFN,CuBlhFM,gEACE,UvBohFR,CuBrhFM,6DACE,UvBohFR,CuBrhFM,uDACE,UvBohFR,CuB5gFI,4BAYE,oDAAA,CACA,iBAAA,CAIA,UAAA,CARA,YAAA,CANA,YAAA,CAOA,cAAA,CACA,cAAA,CAVA,iBAAA,CACA,KAAA,CAYA,2CACE,CARF,wBAAA,CACA,6BAAA,CAJA,UvBuhFN,CuBvgFM,4CAGE,8CACE,2BvBugFR,CACF,CuBngFM,gDAIE,cAAA,CAHA,2CvBsgFR,CuB9/EI,2BAEE,sCAAA,CADA,iBvBigFN,CuB5/EI,qFACE,+BvB8/EN,CuB//EI,kFACE,+
BvB8/EN,CuB//EI,4EACE,+BvB8/EN,CuB3/EM,2FACE,0CvB6/ER,CuB9/EM,wFACE,0CvB6/ER,CuB9/EM,kFACE,0CvB6/ER,CuBx/EI,0CAGE,cAAA,CADA,eAAA,CADA,SvB4/EN,CuBt/EI,8CACE,oBAAA,CACA,evBw/EN,CuBr/EM,qDAME,mCAAA,CALA,oBAAA,CACA,mBAAA,CAEA,qBAAA,CACA,iDAAA,CAFA,qBvB0/ER,CuBn/EQ,iBAVF,qDAWI,WvBs/ER,CuBn/EQ,mEACE,mCvBq/EV,CACF,CwBntFA,kBAKE,exB+tFF,CwBpuFA,kBAKE,gBxB+tFF,CwBpuFA,QASE,2CAAA,CACA,oBAAA,CAEA,8BAAA,CALA,uCAAA,CAHA,aAAA,CAIA,eAAA,CAGA,YAAA,CALA,mBAAA,CALA,cAAA,CACA,UAAA,CAWA,yBAAA,CACA,mGACE,CAZF,SxBiuFF,CwB/sFE,aArBF,QAsBI,YxBktFF,CACF,CwB/sFE,kBACE,wBxBitFJ,CwB7sFE,gBAEE,SAAA,CAEA,mBAAA,CAHA,+BAAA,CAEA,uBxBgtFJ,CwB5sFI,0BACE,8BxB8sFN,CwBzsFE,mCAEE,0CAAA,CADA,+BxB4sFJ,CwB7sFE,gCAEE,0CAAA,CADA,+BxB4sFJ,CwB7sFE,0BAEE,0CAAA,CADA,+BxB4sFJ,CwBvsFE,YACE,oBAAA,CACA,oBxBysFJ,CyB7vFA,oBACE,GACE,mBzBgwFF,CACF,CyBxvFA,MACE,wfzB0vFF,CyBpvFA,YACE,aAAA,CAEA,eAAA,CADA,azBwvFF,CyBpvFE,+BAOE,kBAAA,CAAA,kBzBqvFJ,CyB5vFE,+BAOE,iBAAA,CAAA,mBzBqvFJ,CyB5vFE,qBAQE,aAAA,CAEA,cAAA,CADA,YAAA,CARA,iBAAA,CAKA,UzBsvFJ,CyB/uFI,qCAIE,iBzBuvFN,CyB3vFI,qCAIE,kBzBuvFN,CyB3vFI,2BAKE,6BAAA,CAKA,UAAA,CATA,oBAAA,CAEA,YAAA,CAGA,yCAAA,CAAA,iCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CAPA,WzByvFN,CyB5uFE,kBAUE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CACA,oBAAA,CAJA,kBAAA,CADA,YAAA,CASA,SAAA,CANA,aAAA,CADA,SAAA,CALA,iBAAA,CAgBA,4BAAA,CAfA,UAAA,CAYA,+CACE,CAZF,SzB0vFJ,CyBzuFI,gEACE,gBAAA,CACA,SAAA,CACA,8CACE,CADF,sCzB2uFN,CyB9uFI,6DACE,gBAAA,CACA,SAAA,CACA,2CACE,CADF,sCzB2uFN,CyB9uFI,uDACE,gBAAA,CACA,SAAA,CACA,sCzB2uFN,CyBruFI,wBAGE,oCACE,gCzBquFN,CyBjuFI,2CACE,czBmuFN,CACF,CyB9tFE,kBACE,kBzBguFJ,CyB5tFE,4BAGE,kBAAA,CAAA,oBzBmuFJ,CyBtuFE,4BAGE,mBAAA,CAAA,mBzBmuFJ,CyBtuFE,kBAME,cAAA,CALA,aAAA,CAIA,YAAA,CAKA,uBAAA,CAHA,2CACE,CAJF,kBAAA,CAFA,UzBouFJ,CyBztFI,6CACE,+BzB2tFN,CyB5tFI,0CACE,+BzB2tFN,CyB5tFI,oCACE,+BzB2tFN,CyBvtFI,wBACE,qDzBytFN,C0B1zFA,MAEI,uWAAA,CAAA,8WAAA,CAAA,sPAAA,CAAA,8xBAAA,CAAA,0MAAA,CAAA,gbAAA,CAAA,gMAAA,CAAA,iQAAA,CAAA,0VAAA,CAAA,6aAAA,CAAA,8SAAA,CAAA,gM1Bm1FJ,C0Bv0FE,4CAQE,8CAAA,CACA,2BAAA,CACA,mBAAA,CACA,8BAAA,CANA,mCAAA,CAHA,iBAAA,CAIA,gBAAA,CAHA,iBAAA,CACA,eAAA,CAGA,uB1B80FJ,C0Bv0FI,aAdF,4CAeI,e1B20FJ,CACF,C0Bv0FI,gDACE,qB1B00FN,C0Bt0FI,gHAEE,iBAAA,CADA,c1B00FN,C0B30FI,0GAEE,iBAAA,CADA,c1B00FN,C0B30FI,8FAEE,iBAAA,CADA,c1B00FN,C0Br0FI,4FACE,iB1Bw0FN,C0Bp0FI,kFACE,e1Bu0FN,C0Bn0FI,0FACE,Y1Bs0FN,C0Bl0FI,8EACE,mB1Bq0FN,C0Bh0FE,sEAME,iBAAA,CAAA,mB1Bw0FJ,C0B90FE,sEAME,kBAAA,CAAA,kB1Bw0FJ,C0B90FE,sEAUE,uB1Bo0FJ,C0B90FE,sEAUE,wB1Bo0FJ,C0B90FE,sEAWE,4B1Bm0FJ,C0B90FE,4IAYE,6B1Bk0FJ,C0B90FE,sEAYE,4B1Bk0FJ,C0B90FE,kDAQE,0BAAA,CACA,WAAA,CAFA,eAAA,CAHA,eAAA,CACA,oBAAA,CAAA,iBAAA,CAHA,iB1B40FJ,C0B/zFI,kFACE,e1Bk0FN,C0B9zFI,oFAGE,U1By0FN,C0B50FI,oFAGE,W1By0FN,C0B50FI,gEAME,wBCsIU,CDjIV,UAAA,CANA,WAAA,CAEA,kDAAA,CAAA,0CAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CATA,iBAAA,CACA,UAAA,CAEA,U1Bw0FN,C0B7zFI,4DACE,4D1Bg0FN,C0B3yFE,iEACE,oB1B8yFJ,C0B/yFE,2DACE,oB1B8yFJ,C0B/yFE,+CACE,oB1B8yFJ,C0B1yFE,wEACE,0B1B6yFJ,C0B9yFE,kEACE,0B1B6yFJ,C0B9yFE,sDACE,0B1B6yFJ,C0B1yFI,+EACE,wBAnBG,CAoBH,kDAAA,CAAA,0C1B4yFN,C0B9yFI,yEACE,wBAnBG,CAoBH,0C1B4yFN,C0B9yFI,6DACE,wBAnBG,CAoBH,kDAAA,CAAA,0C1B4yFN,C0BxyFI,8EACE,a1B0yFN,C0B3yFI,wEACE,a1B0yFN,C0B3yFI,4DACE,a1B0yFN,C0B1zFE,oFACE,oB1B6zFJ,C0B9zFE,8EACE,oB1B6zFJ,C0B9zFE,kEACE,oB1B6zFJ,C0BzzFE,2FACE,0B1B4zFJ,C0B7zFE,qFACE,0B1B4zFJ,C0B7zFE,yEACE,0B1B4zFJ,C0BzzFI,kGACE,wBAnBG,CAoBH,sDAAA,CAAA,8C1B2zFN,C0B7zFI,4FACE,wBAnBG,CAoBH,8C1B2zFN,C0B7zFI,gFACE,wBAnBG,CAoBH,sDAAA,CAAA,8C1B2zFN,C0BvzFI,iGACE,a1ByzFN,C0B1zFI,2FACE,a1ByzFN,C0B1zFI,+EACE,a1ByzFN,C0Bz0FE,uE
ACE,oB1B40FJ,C0B70FE,iEACE,oB1B40FJ,C0B70FE,qDACE,oB1B40FJ,C0Bx0FE,8EACE,0B1B20FJ,C0B50FE,wEACE,0B1B20FJ,C0B50FE,4DACE,0B1B20FJ,C0Bx0FI,qFACE,wBAnBG,CAoBH,kDAAA,CAAA,0C1B00FN,C0B50FI,+EACE,wBAnBG,CAoBH,0C1B00FN,C0B50FI,mEACE,wBAnBG,CAoBH,kDAAA,CAAA,0C1B00FN,C0Bt0FI,oFACE,a1Bw0FN,C0Bz0FI,8EACE,a1Bw0FN,C0Bz0FI,kEACE,a1Bw0FN,C0Bx1FE,iFACE,oB1B21FJ,C0B51FE,2EACE,oB1B21FJ,C0B51FE,+DACE,oB1B21FJ,C0Bv1FE,wFACE,0B1B01FJ,C0B31FE,kFACE,0B1B01FJ,C0B31FE,sEACE,0B1B01FJ,C0Bv1FI,+FACE,wBAnBG,CAoBH,iDAAA,CAAA,yC1By1FN,C0B31FI,yFACE,wBAnBG,CAoBH,yC1By1FN,C0B31FI,6EACE,wBAnBG,CAoBH,iDAAA,CAAA,yC1By1FN,C0Br1FI,8FACE,a1Bu1FN,C0Bx1FI,wFACE,a1Bu1FN,C0Bx1FI,4EACE,a1Bu1FN,C0Bv2FE,iFACE,oB1B02FJ,C0B32FE,2EACE,oB1B02FJ,C0B32FE,+DACE,oB1B02FJ,C0Bt2FE,wFACE,0B1By2FJ,C0B12FE,kFACE,0B1By2FJ,C0B12FE,sEACE,0B1By2FJ,C0Bt2FI,+FACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bw2FN,C0B12FI,yFACE,wBAnBG,CAoBH,6C1Bw2FN,C0B12FI,6EACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bw2FN,C0Bp2FI,8FACE,a1Bs2FN,C0Bv2FI,wFACE,a1Bs2FN,C0Bv2FI,4EACE,a1Bs2FN,C0Bt3FE,gFACE,oB1By3FJ,C0B13FE,0EACE,oB1By3FJ,C0B13FE,8DACE,oB1By3FJ,C0Br3FE,uFACE,0B1Bw3FJ,C0Bz3FE,iFACE,0B1Bw3FJ,C0Bz3FE,qEACE,0B1Bw3FJ,C0Br3FI,8FACE,wBAnBG,CAoBH,sDAAA,CAAA,8C1Bu3FN,C0Bz3FI,wFACE,wBAnBG,CAoBH,8C1Bu3FN,C0Bz3FI,4EACE,wBAnBG,CAoBH,sDAAA,CAAA,8C1Bu3FN,C0Bn3FI,6FACE,a1Bq3FN,C0Bt3FI,uFACE,a1Bq3FN,C0Bt3FI,2EACE,a1Bq3FN,C0Br4FE,wFACE,oB1Bw4FJ,C0Bz4FE,kFACE,oB1Bw4FJ,C0Bz4FE,sEACE,oB1Bw4FJ,C0Bp4FE,+FACE,0B1Bu4FJ,C0Bx4FE,yFACE,0B1Bu4FJ,C0Bx4FE,6EACE,0B1Bu4FJ,C0Bp4FI,sGACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bs4FN,C0Bx4FI,gGACE,wBAnBG,CAoBH,6C1Bs4FN,C0Bx4FI,oFACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bs4FN,C0Bl4FI,qGACE,a1Bo4FN,C0Br4FI,+FACE,a1Bo4FN,C0Br4FI,mFACE,a1Bo4FN,C0Bp5FE,mFACE,oB1Bu5FJ,C0Bx5FE,6EACE,oB1Bu5FJ,C0Bx5FE,iEACE,oB1Bu5FJ,C0Bn5FE,0FACE,0B1Bs5FJ,C0Bv5FE,oFACE,0B1Bs5FJ,C0Bv5FE,wEACE,0B1Bs5FJ,C0Bn5FI,iGACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bq5FN,C0Bv5FI,2FACE,wBAnBG,CAoBH,6C1Bq5FN,C0Bv5FI,+EACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bq5FN,C0Bj5FI,gGACE,a1Bm5FN,C0Bp5FI,0FACE,a1Bm5FN,C0Bp5FI,8EACE,a1Bm5FN,C0Bn6FE,0EACE,oB1Bs6FJ,C0Bv6FE,oEACE,oB1Bs6FJ,C0Bv6FE,wDACE,oB1Bs6FJ,C0Bl6FE,iFACE,0B1Bq6FJ,C0Bt6FE,2EACE,0B1Bq6FJ,C0Bt6FE,+DACE,0B1Bq6FJ,C0Bl6FI,wFACE,wBAnBG,CAoBH,oDAAA,CAAA,4C1Bo6FN,C0Bt6FI,kFACE,wBAnBG,CAoBH,4C1Bo6FN,C0Bt6FI,sEACE,wBAnBG,CAoBH,oDAAA,CAAA,4C1Bo6FN,C0Bh6FI,uFACE,a1Bk6FN,C0Bn6FI,iFACE,a1Bk6FN,C0Bn6FI,qEACE,a1Bk6FN,C0Bl7FE,gEACE,oB1Bq7FJ,C0Bt7FE,0DACE,oB1Bq7FJ,C0Bt7FE,8CACE,oB1Bq7FJ,C0Bj7FE,uEACE,0B1Bo7FJ,C0Br7FE,iEACE,0B1Bo7FJ,C0Br7FE,qDACE,0B1Bo7FJ,C0Bj7FI,8EACE,wBAnBG,CAoBH,iDAAA,CAAA,yC1Bm7FN,C0Br7FI,wEACE,wBAnBG,CAoBH,yC1Bm7FN,C0Br7FI,4DACE,wBAnBG,CAoBH,iDAAA,CAAA,yC1Bm7FN,C0B/6FI,6EACE,a1Bi7FN,C0Bl7FI,uEACE,a1Bi7FN,C0Bl7FI,2DACE,a1Bi7FN,C0Bj8FE,oEACE,oB1Bo8FJ,C0Br8FE,8DACE,oB1Bo8FJ,C0Br8FE,kDACE,oB1Bo8FJ,C0Bh8FE,2EACE,0B1Bm8FJ,C0Bp8FE,qEACE,0B1Bm8FJ,C0Bp8FE,yDACE,0B1Bm8FJ,C0Bh8FI,kFACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bk8FN,C0Bp8FI,4EACE,wBAnBG,CAoBH,6C1Bk8FN,C0Bp8FI,gEACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bk8FN,C0B97FI,iFACE,a1Bg8FN,C0Bj8FI,2EACE,a1Bg8FN,C0Bj8FI,+DACE,a1Bg8FN,C0Bh9FE,wEACE,oB1Bm9FJ,C0Bp9FE,kEACE,oB1Bm9FJ,C0Bp9FE,sDACE,oB1Bm9FJ,C0B/8FE,+EACE,0B1Bk9FJ,C0Bn9FE,yEACE,0B1Bk9FJ,C0Bn9FE,6DACE,0B1Bk9FJ,C0B/8FI,sFACE,wBAnBG,CAoBH,mDAAA,CAAA,2C1Bi9FN,C0Bn9FI,gFACE,wBAnBG,CAoBH,2C1Bi9FN,C0Bn9FI,oEACE,wBAnBG,CAoBH,mDAAA,CAAA,2C1Bi9FN,C0B78FI,qFACE,a1B+8FN,C0Bh9FI,+EACE,a1B+8FN,C0Bh9FI,mEACE,a1B+8FN,C4BjnGA,MACE,wM5BonGF,C4B3mGE,sBACE,uCAAA,CACA,gB5B8mGJ,C4B3mGI,mCACE,a5B6mGN,C4B9mGI,mCACE,c5B6mGN,C4BzmGM,4BACE,sB5B2mGR,C4BxmGQ,mCACE,gC5B0mGV,C4BtmGQ,2DAEE,SAAA,CADA,uBAAA,CAEA,e5BwmGV,C4BpmGQ,0EAEE,SAAA,CADA,uB5BumGV,C4B
xmGQ,uEAEE,SAAA,CADA,uB5BumGV,C4BxmGQ,iEAEE,SAAA,CADA,uB5BumGV,C4BlmGQ,yCACE,Y5BomGV,C4B7lGE,0BAEE,eAAA,CADA,e5BgmGJ,C4B5lGI,+BACE,oB5B8lGN,C4BzlGE,gDACE,Y5B2lGJ,C4BvlGE,8BAEE,+BAAA,CADA,oBAAA,CAGA,WAAA,CAGA,SAAA,CADA,4BAAA,CAEA,4DACE,CAJF,0B5B2lGJ,C4BllGI,aAdF,8BAeI,+BAAA,CAEA,SAAA,CADA,uB5BslGJ,CACF,C4BllGI,wCACE,6B5BolGN,C4BhlGI,oCACE,+B5BklGN,C4B9kGI,qCAIE,6BAAA,CAKA,UAAA,CARA,oBAAA,CAEA,YAAA,CAEA,2CAAA,CAAA,mCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CANA,W5BulGN,C4B1kGQ,mDACE,oB5B4kGV,C6B1rGE,kCAEE,iB7BgsGJ,C6BlsGE,kCAEE,kB7BgsGJ,C6BlsGE,wBAGE,yCAAA,CAFA,oBAAA,CAGA,SAAA,CACA,mC7B6rGJ,C6BxrGI,aAVF,wBAWI,Y7B2rGJ,CACF,C6BvrGE,mFAEE,SAAA,CACA,2CACE,CADF,mC7ByrGJ,C6B5rGE,gFAEE,SAAA,CACA,wCACE,CADF,mC7ByrGJ,C6B5rGE,0EAEE,SAAA,CACA,mC7ByrGJ,C6BnrGE,mFAEE,+B7BqrGJ,C6BvrGE,gFAEE,+B7BqrGJ,C6BvrGE,0EAEE,+B7BqrGJ,C6BjrGE,oBACE,yBAAA,CACA,uBAAA,CAGA,yE7BirGJ,CKljGI,sCwBrHE,qDACE,uB7B0qGN,CACF,C6BrqGE,0CACE,yB7BuqGJ,C6BxqGE,uCACE,yB7BuqGJ,C6BxqGE,iCACE,yB7BuqGJ,C6BnqGE,sBACE,0B7BqqGJ,C8BhuGE,2BACE,a9BmuGJ,CK9iGI,wCyBtLF,2BAKI,e9BmuGJ,CACF,C8BhuGI,6BAEE,0BAAA,CAAA,2BAAA,CACA,eAAA,CACA,iBAAA,CAHA,yBAAA,CAAA,sBAAA,CAAA,iB9BquGN,C8B/tGM,2CACE,kB9BiuGR,C+BlvGE,kDACE,kCAAA,CAAA,0B/BqvGJ,C+BtvGE,+CACE,0B/BqvGJ,C+BtvGE,yCACE,kCAAA,CAAA,0B/BqvGJ,C+BjvGE,uBACE,4C/BmvGJ,C+B/uGE,uBACE,4C/BivGJ,C+B7uGE,4BACE,qC/B+uGJ,C+B5uGI,mCACE,a/B8uGN,C+B1uGI,kCACE,a/B4uGN,C+BvuGE,0BAKE,eAAA,CAJA,aAAA,CACA,YAAA,CAEA,aAAA,CADA,kBAAA,CAAA,mB/B2uGJ,C+BtuGI,uCACE,e/BwuGN,C+BpuGI,sCACE,kB/BsuGN,CgCrxGA,MACE,8LhCwxGF,CgC/wGE,oBACE,iBAAA,CAEA,gBAAA,CADA,ahCmxGJ,CgC/wGI,wCACE,uBhCixGN,CgC7wGI,gCAEE,eAAA,CADA,gBhCgxGN,CgCzwGM,wCACE,mBhC2wGR,CgCrwGE,8BAGE,oBhC0wGJ,CgC7wGE,8BAGE,mBhC0wGJ,CgC7wGE,8BAIE,4BhCywGJ,CgC7wGE,4DAKE,6BhCwwGJ,CgC7wGE,8BAKE,4BhCwwGJ,CgC7wGE,oBAME,cAAA,CALA,aAAA,CACA,ehC2wGJ,CgCpwGI,kCACE,uCAAA,CACA,oBhCswGN,CgClwGI,wCAEE,uCAAA,CADA,YhCqwGN,CgChwGI,oCAGE,WhC4wGN,CgC/wGI,oCAGE,UhC4wGN,CgC/wGI,0BAME,6BAAA,CAOA,UAAA,CARA,WAAA,CAEA,yCAAA,CAAA,iCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CATA,iBAAA,CACA,UAAA,CASA,sBAAA,CACA,yBAAA,CARA,UhC2wGN,CgC/vGM,oCACE,wBhCiwGR,CgC5vGI,4BACE,YhC8vGN,CgCzvGI,4CACE,YhC2vGN,CiC90GE,qDACE,mBAAA,CACA,cAAA,CACA,uBjCi1GJ,CiCp1GE,kDACE,mBAAA,CACA,cAAA,CACA,uBjCi1GJ,CiCp1GE,4CACE,mBAAA,CACA,cAAA,CACA,uBjCi1GJ,CiC90GI,yDAGE,iBAAA,CADA,eAAA,CADA,ajCk1GN,CiCn1GI,sDAGE,iBAAA,CADA,eAAA,CADA,ajCk1GN,CiCn1GI,gDAGE,iBAAA,CADA,eAAA,CADA,ajCk1GN,CkCx1GE,gCACE,sClC21GJ,CkC51GE,6BACE,sClC21GJ,CkC51GE,uBACE,sClC21GJ,CkCx1GE,cACE,yClC01GJ,CkC90GE,4DACE,oClCg1GJ,CkCj1GE,yDACE,oClCg1GJ,CkCj1GE,mDACE,oClCg1GJ,CkCx0GE,6CACE,qClC00GJ,CkC30GE,0CACE,qClC00GJ,CkC30GE,oCACE,qClC00GJ,CkCh0GE,oDACE,oClCk0GJ,CkCn0GE,iDACE,oClCk0GJ,CkCn0GE,2CACE,oClCk0GJ,CkCzzGE,gDACE,qClC2zGJ,CkC5zGE,6CACE,qClC2zGJ,CkC5zGE,uCACE,qClC2zGJ,CkCtzGE,gCACE,kClCwzGJ,CkCzzGE,6BACE,kClCwzGJ,CkCzzGE,uBACE,kClCwzGJ,CkClzGE,qCACE,sClCozGJ,CkCrzGE,kCACE,sClCozGJ,CkCrzGE,4BACE,sClCozGJ,CkC7yGE,yCACE,sClC+yGJ,CkChzGE,sCACE,sClC+yGJ,CkChzGE,gCACE,sClC+yGJ,CkCxyGE,yCACE,qClC0yGJ,CkC3yGE,sCACE,qClC0yGJ,CkC3yGE,gCACE,qClC0yGJ,CkCjyGE,gDACE,qClCmyGJ,CkCpyGE,6CACE,qClCmyGJ,CkCpyGE,uCACE,qClCmyGJ,CkC3xGE,6CACE,sClC6xGJ,CkC9xGE,0CACE,sClC6xGJ,CkC9xGE,oCACE,sClC6xGJ,CkClxGE,yDACE,qClCoxGJ,CkCrxGE,sDACE,qClCoxGJ,CkCrxGE,gDACE,qClCoxGJ,CkC/wGE,iCAGE,mBAAA,CAFA,gBAAA,CACA,gBlCkxGJ,CkCpxGE,8BAGE,mBAAA,CAFA,gBAAA,CACA,gBlCkxGJ,CkCpxGE,wBAGE,mBAAA,CAFA,gBAAA,CACA,gBlCkxGJ,CkC9wGE,eACE,4ClCgxGJ,CkC7wGE,eACE,4ClC+wGJ,CkC3wGE,gBAIE,wCAAA,CAHA,aAAA,CACA,wBAAA,CACA,wBlC8wGJ,CkCzwGE,yBAOE,wCAAA,CACA,+DAAA,CACA,4
BAAA,CACA,6BAAA,CARA,iBAAA,CAIA,eAAA,CADA,eAAA,CAFA,cAAA,CACA,oCAAA,CAHA,iBlCoxGJ,CkCxwGI,6BACE,YlC0wGN,CkCvwGM,kCACE,wBAAA,CACA,yBlCywGR,CkCnwGE,iCAWE,wCAAA,CACA,+DAAA,CAFA,uCAAA,CAGA,0BAAA,CAPA,UAAA,CAJA,oBAAA,CAMA,2BAAA,CADA,2BAAA,CAEA,2BAAA,CARA,uBAAA,CAAA,eAAA,CAaA,wBAAA,CAAA,qBAAA,CAAA,gBAAA,CATA,SlC4wGJ,CkC1vGE,sBACE,iBAAA,CACA,iBlC4vGJ,CkCpvGI,sCACE,gBlCsvGN,CkClvGI,gDACE,YlCovGN,CkC1uGA,gBACE,iBlC6uGF,CkCzuGE,uCACE,aAAA,CACA,SlC2uGJ,CkC7uGE,oCACE,aAAA,CACA,SlC2uGJ,CkC7uGE,8BACE,aAAA,CACA,SlC2uGJ,CkCtuGE,mBACE,YlCwuGJ,CkCnuGE,oBACE,QlCquGJ,CkCjuGE,4BACE,WAAA,CACA,SAAA,CACA,elCmuGJ,CkChuGI,0CACE,YlCkuGN,CkC5tGE,yBAIE,wCAAA,CAEA,+BAAA,CADA,4BAAA,CAFA,eAAA,CADA,oDAAA,CAKA,wBAAA,CAAA,qBAAA,CAAA,gBlC8tGJ,CkC1tGE,2BAEE,+DAAA,CADA,2BlC6tGJ,CkCztGI,+BACE,uCAAA,CACA,gBlC2tGN,CkCttGE,sBACE,MAAA,CACA,WlCwtGJ,CkCntGA,aACE,alCstGF,CkC5sGE,4BAEE,aAAA,CADA,YlCgtGJ,CkC5sGI,wDAEE,2BAAA,CADA,wBlC+sGN,CkCzsGE,+BAKE,2CAAA,CAEA,+BAAA,CADA,gCAAA,CADA,sBAAA,CAJA,mBAAA,CAEA,gBAAA,CADA,alCgtGJ,CkCxsGI,qCAEE,UAAA,CACA,UAAA,CAFA,alC4sGN,CK70GI,wC6BgJF,8BACE,iBlCisGF,CkCvrGE,wSAGE,elC6rGJ,CkCzrGE,sCAEE,mBAAA,CACA,eAAA,CADA,oBAAA,CADA,kBAAA,CAAA,mBlC6rGJ,CACF,CDphHI,kDAIE,+BAAA,CACA,8BAAA,CAFA,aAAA,CADA,QAAA,CADA,iBC0hHN,CD3hHI,+CAIE,+BAAA,CACA,8BAAA,CAFA,aAAA,CADA,QAAA,CADA,iBC0hHN,CD3hHI,yCAIE,+BAAA,CACA,8BAAA,CAFA,aAAA,CADA,QAAA,CADA,iBC0hHN,CDlhHI,uBAEE,uCAAA,CADA,cCqhHN,CDh+GM,iHAEE,WAlDkB,CAiDlB,kBC2+GR,CD5+GM,6HAEE,WAlDkB,CAiDlB,kBCu/GR,CDx/GM,6HAEE,WAlDkB,CAiDlB,kBCmgHR,CDpgHM,oHAEE,WAlDkB,CAiDlB,kBC+gHR,CDhhHM,0HAEE,WAlDkB,CAiDlB,kBC2hHR,CD5hHM,uHAEE,WAlDkB,CAiDlB,kBCuiHR,CDxiHM,uHAEE,WAlDkB,CAiDlB,kBCmjHR,CDpjHM,6HAEE,WAlDkB,CAiDlB,kBC+jHR,CDhkHM,yCAEE,WAlDkB,CAiDlB,kBCmkHR,CDpkHM,yCAEE,WAlDkB,CAiDlB,kBCukHR,CDxkHM,0CAEE,WAlDkB,CAiDlB,kBC2kHR,CD5kHM,uCAEE,WAlDkB,CAiDlB,kBC+kHR,CDhlHM,wCAEE,WAlDkB,CAiDlB,kBCmlHR,CDplHM,sCAEE,WAlDkB,CAiDlB,kBCulHR,CDxlHM,wCAEE,WAlDkB,CAiDlB,kBC2lHR,CD5lHM,oCAEE,WAlDkB,CAiDlB,kBC+lHR,CDhmHM,2CAEE,WAlDkB,CAiDlB,kBCmmHR,CDpmHM,qCAEE,WAlDkB,CAiDlB,kBCumHR,CDxmHM,oCAEE,WAlDkB,CAiDlB,kBC2mHR,CD5mHM,kCAEE,WAlDkB,CAiDlB,kBC+mHR,CDhnHM,qCAEE,WAlDkB,CAiDlB,kBCmnHR,CDpnHM,mCAEE,WAlDkB,CAiDlB,kBCunHR,CDxnHM,qCAEE,WAlDkB,CAiDlB,kBC2nHR,CD5nHM,wCAEE,WAlDkB,CAiDlB,kBC+nHR,CDhoHM,sCAEE,WAlDkB,CAiDlB,kBCmoHR,CDpoHM,2CAEE,WAlDkB,CAiDlB,kBCuoHR,CD5nHM,iCAEE,WAPkB,CAMlB,iBC+nHR,CDhoHM,uCAEE,WAPkB,CAMlB,iBCmoHR,CDpoHM,mCAEE,WAPkB,CAMlB,iBCuoHR,CmCztHA,MACE,qMAAA,CACA,mMnC4tHF,CmCntHE,wBAKE,mBAAA,CAHA,YAAA,CACA,qBAAA,CACA,YAAA,CAHA,iBnC0tHJ,CmChtHI,8BAGE,QAAA,CACA,SAAA,CAHA,iBAAA,CACA,OnCotHN,CmC/sHM,qCACE,0BnCitHR,CmClrHE,2BAKE,uBAAA,CADA,+DAAA,CAHA,YAAA,CACA,cAAA,CACA,aAAA,CAGA,oBnCorHJ,CmCjrHI,aATF,2BAUI,gBnCorHJ,CACF,CmCjrHI,cAGE,+BACE,iBnCirHN,CmC9qHM,sCAOE,oCAAA,CALA,QAAA,CAWA,UAAA,CATA,aAAA,CAEA,UAAA,CAHA,MAAA,CAFA,iBAAA,CAOA,2CAAA,CACA,qCACE,CAEF,kDAAA,CAPA,+BnCsrHR,CACF,CmCzqHI,8CACE,YnC2qHN,CmCvqHI,iCAQE,+BAAA,CACA,6BAAA,CALA,uCAAA,CAMA,cAAA,CATA,aAAA,CAKA,gBAAA,CADA,eAAA,CAFA,8BAAA,CAWA,+BAAA,CAHA,2CACE,CALF,kBAAA,CALA,UnCmrHN,CmCpqHM,aAII,6CACE,OnCmqHV,CmCpqHQ,8CACE,OnCsqHV,CmCvqHQ,8CACE,OnCyqHV,CmC1qHQ,8CACE,OnC4qHV,CmC7qHQ,8CACE,OnC+qHV,CmChrHQ,8CACE,OnCkrHV,CmCnrHQ,8CACE,OnCqrHV,CmCtrHQ,8CACE,OnCwrHV,CmCzrHQ,8CACE,OnC2rHV,CmC5rHQ,+CACE,QnC8rHV,CmC/rHQ,+CACE,QnCisHV,CmClsHQ,+CACE,QnCosHV,CmCrsHQ,+CACE,QnCusHV,CmCxsHQ,+CACE,QnC0sHV,CmC3sHQ,+CACE,QnC6sHV,CmC9sHQ,+CACE,QnCgtHV,CmCjtHQ,+CACE,QnCmtHV,CmCptHQ,+CACE,QnCstHV,CmCvtHQ,+CACE,QnCytHV,CmC1tHQ,+CACE,QnC4tHV,CACF,CmCvtHM,uCACE,+BnCytHR,CmCntHE,4BACE,UnCqtHJ,CmCltHI,aAJF,4BAKI,gBnCqtHJ,CACF,CmCjtHE,0BACE,YnCmtHJ,CmChtHI,aAJF,0
BAKI,anCmtHJ,CmC/sHM,sCACE,OnCitHR,CmCltHM,uCACE,OnCotHR,CmCrtHM,uCACE,OnCutHR,CmCxtHM,uCACE,OnC0tHR,CmC3tHM,uCACE,OnC6tHR,CmC9tHM,uCACE,OnCguHR,CmCjuHM,uCACE,OnCmuHR,CmCpuHM,uCACE,OnCsuHR,CmCvuHM,uCACE,OnCyuHR,CmC1uHM,wCACE,QnC4uHR,CmC7uHM,wCACE,QnC+uHR,CmChvHM,wCACE,QnCkvHR,CmCnvHM,wCACE,QnCqvHR,CmCtvHM,wCACE,QnCwvHR,CmCzvHM,wCACE,QnC2vHR,CmC5vHM,wCACE,QnC8vHR,CmC/vHM,wCACE,QnCiwHR,CmClwHM,wCACE,QnCowHR,CmCrwHM,wCACE,QnCuwHR,CmCxwHM,wCACE,QnC0wHR,CACF,CmCpwHI,+FAEE,QnCswHN,CmCnwHM,yGACE,wBAAA,CACA,yBnCswHR,CmC7vHM,2DAEE,wBAAA,CACA,yBAAA,CAFA,QnCiwHR,CmC1vHM,iEACE,QnC4vHR,CmCzvHQ,qLAGE,wBAAA,CACA,yBAAA,CAFA,QnC6vHV,CmCvvHQ,6FACE,wBAAA,CACA,yBnCyvHV,CmCpvHM,yDACE,kBnCsvHR,CmCjvHI,sCACE,QnCmvHN,CmC9uHE,2BAEE,iBAAA,CAKA,kBAAA,CADA,uCAAA,CAEA,cAAA,CAPA,aAAA,CAGA,YAAA,CACA,gBAAA,CAKA,mBAAA,CADA,gCAAA,CANA,WnCuvHJ,CmC7uHI,iCAEE,uDAAA,CADA,+BnCgvHN,CmC3uHI,iCAIE,6BAAA,CAQA,UAAA,CAXA,aAAA,CAEA,WAAA,CAKA,8CAAA,CAAA,sCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CANA,+CACE,CAJF,UnCqvHN,CmCtuHE,4BAME,yEACE,CALF,YAAA,CAGA,aAAA,CAFA,qBAAA,CAUA,mBAAA,CAZA,iBAAA,CAWA,wBAAA,CARA,YnC4uHJ,CmChuHI,sCACE,wBnCkuHN,CmC9tHI,oCACE,SnCguHN,CmC5tHI,kCAGE,wEACE,CAFF,mBAAA,CADA,OnCguHN,CmCttHM,uDACE,8CAAA,CAAA,sCnCwtHR,CKx0HI,wC8B8HF,wDAGE,kBnC+sHF,CmCltHA,wDAGE,mBnC+sHF,CmCltHA,8CAEE,eAAA,CADA,eAAA,CAGA,iCnC8sHF,CmC1sHE,8DACE,mBnC6sHJ,CmC9sHE,8DACE,kBnC6sHJ,CmC9sHE,oDAEE,UnC4sHJ,CmCxsHE,8EAEE,kBnC2sHJ,CmC7sHE,8EAEE,mBnC2sHJ,CmC7sHE,8EAGE,kBnC0sHJ,CmC7sHE,8EAGE,mBnC0sHJ,CmC7sHE,oEACE,UnC4sHJ,CmCtsHE,8EAEE,mBnCysHJ,CmC3sHE,8EAEE,kBnCysHJ,CmC3sHE,8EAGE,mBnCwsHJ,CmC3sHE,8EAGE,kBnCwsHJ,CmC3sHE,oEACE,UnC0sHJ,CACF,CmC5rHE,cAHF,olDAII,+BnC+rHF,CmC5rHE,g8GACE,sCnC8rHJ,CACF,CmCzrHA,4sDACE,uDnC4rHF,CmCxrHA,wmDACE,anC2rHF,CoCxiIA,MACE,mVAAA,CAEA,4VpC4iIF,CoCliIE,4BAEE,oBAAA,CADA,iBpCsiIJ,CoCjiII,sDAGE,SpCmiIN,CoCtiII,sDAGE,UpCmiIN,CoCtiII,4CACE,iBAAA,CACA,SpCoiIN,CoC9hIE,+CAEE,SAAA,CADA,UpCiiIJ,CoC5hIE,kDAGE,WpCsiIJ,CoCziIE,kDAGE,YpCsiIJ,CoCziIE,wCAME,qDAAA,CAKA,UAAA,CANA,aAAA,CAEA,0CAAA,CAAA,kCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CATA,iBAAA,CACA,SAAA,CAEA,YpCqiIJ,CoC1hIE,gEACE,wBTyWa,CSxWb,mDAAA,CAAA,2CpC4hIJ,CqC9kIA,QACE,8DAAA,CAGA,+CAAA,CACA,iEAAA,CACA,oDAAA,CACA,sDAAA,CACA,mDrC+kIF,CqC3kIA,SAEE,kBAAA,CADA,YrC+kIF,CKt7HI,mCiChKA,8BACE,UtC8lIJ,CsC/lIE,8BACE,WtC8lIJ,CsC/lIE,8BAIE,kBtC2lIJ,CsC/lIE,8BAIE,iBtC2lIJ,CsC/lIE,oBAKE,mBAAA,CAFA,YAAA,CADA,atC6lIJ,CsCvlII,kCACE,WtC0lIN,CsC3lII,kCACE,UtC0lIN,CsC3lII,kCAEE,iBAAA,CAAA,ctCylIN,CsC3lII,kCAEE,aAAA,CAAA,kBtCylIN,CACF","file":"main.css"} \ No newline at end of file diff --git a/assets/stylesheets/palette.2505c338.min.css b/assets/stylesheets/palette.2505c338.min.css new file mode 100644 index 00000000..3c005dd6 --- /dev/null +++ b/assets/stylesheets/palette.2505c338.min.css @@ -0,0 +1 @@ +@media 
screen{[data-md-color-scheme=slate]{--md-hue:232;--md-default-fg-color:hsla(var(--md-hue),75%,95%,1);--md-default-fg-color--light:hsla(var(--md-hue),75%,90%,0.62);--md-default-fg-color--lighter:hsla(var(--md-hue),75%,90%,0.32);--md-default-fg-color--lightest:hsla(var(--md-hue),75%,90%,0.12);--md-default-bg-color:hsla(var(--md-hue),15%,21%,1);--md-default-bg-color--light:hsla(var(--md-hue),15%,21%,0.54);--md-default-bg-color--lighter:hsla(var(--md-hue),15%,21%,0.26);--md-default-bg-color--lightest:hsla(var(--md-hue),15%,21%,0.07);--md-code-fg-color:hsla(var(--md-hue),18%,86%,1);--md-code-bg-color:hsla(var(--md-hue),15%,15%,1);--md-code-hl-color:#4287ff26;--md-code-hl-number-color:#e6695b;--md-code-hl-special-color:#f06090;--md-code-hl-function-color:#c973d9;--md-code-hl-constant-color:#9383e2;--md-code-hl-keyword-color:#6791e0;--md-code-hl-string-color:#2fb170;--md-code-hl-name-color:var(--md-code-fg-color);--md-code-hl-operator-color:var(--md-default-fg-color--light);--md-code-hl-punctuation-color:var(--md-default-fg-color--light);--md-code-hl-comment-color:var(--md-default-fg-color--light);--md-code-hl-generic-color:var(--md-default-fg-color--light);--md-code-hl-variable-color:var(--md-default-fg-color--light);--md-typeset-color:var(--md-default-fg-color);--md-typeset-a-color:var(--md-primary-fg-color);--md-typeset-mark-color:#4287ff4d;--md-typeset-kbd-color:hsla(var(--md-hue),15%,94%,0.12);--md-typeset-kbd-accent-color:hsla(var(--md-hue),15%,94%,0.2);--md-typeset-kbd-border-color:hsla(var(--md-hue),15%,14%,1);--md-typeset-table-color:hsla(var(--md-hue),75%,95%,0.12);--md-admonition-fg-color:var(--md-default-fg-color);--md-admonition-bg-color:var(--md-default-bg-color);--md-footer-bg-color:hsla(var(--md-hue),15%,12%,0.87);--md-footer-bg-color--dark:hsla(var(--md-hue),15%,10%,1);--md-shadow-z1:0 0.2rem 0.5rem #0003,0 0 0.05rem #0000001a;--md-shadow-z2:0 0.2rem 0.5rem #0000004d,0 0 0.05rem #00000040;--md-shadow-z3:0 0.2rem 0.5rem #0006,0 0 0.05rem #00000059}[data-md-color-scheme=slate] img[src$="#gh-light-mode-only"],[data-md-color-scheme=slate] img[src$="#only-light"]{display:none}[data-md-color-scheme=slate] img[src$="#gh-dark-mode-only"],[data-md-color-scheme=slate] img[src$="#only-dark"]{display:initial}[data-md-color-scheme=slate][data-md-color-primary=pink]{--md-typeset-a-color:#ed5487}[data-md-color-scheme=slate][data-md-color-primary=purple]{--md-typeset-a-color:#bd78c9}[data-md-color-scheme=slate][data-md-color-primary=deep-purple]{--md-typeset-a-color:#a682e3}[data-md-color-scheme=slate][data-md-color-primary=indigo]{--md-typeset-a-color:#6c91d5}[data-md-color-scheme=slate][data-md-color-primary=teal]{--md-typeset-a-color:#00ccb8}[data-md-color-scheme=slate][data-md-color-primary=green]{--md-typeset-a-color:#71c174}[data-md-color-scheme=slate][data-md-color-primary=deep-orange]{--md-typeset-a-color:#ff9575}[data-md-color-scheme=slate][data-md-color-primary=brown]{--md-typeset-a-color:#c7846b}[data-md-color-scheme=slate][data-md-color-primary=black],[data-md-color-scheme=slate][data-md-color-primary=blue-grey],[data-md-color-scheme=slate][data-md-color-primary=grey],[data-md-color-scheme=slate][data-md-color-primary=white]{--md-typeset-a-color:#6c91d5}[data-md-color-switching] *,[data-md-color-switching] :after,[data-md-color-switching] 
:before{transition-duration:0ms!important}}[data-md-color-accent=red]{--md-accent-fg-color:#ff1947;--md-accent-fg-color--transparent:#ff19471a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}[data-md-color-accent=pink]{--md-accent-fg-color:#f50056;--md-accent-fg-color--transparent:#f500561a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}[data-md-color-accent=purple]{--md-accent-fg-color:#df41fb;--md-accent-fg-color--transparent:#df41fb1a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}[data-md-color-accent=deep-purple]{--md-accent-fg-color:#7c4dff;--md-accent-fg-color--transparent:#7c4dff1a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}[data-md-color-accent=indigo]{--md-accent-fg-color:#526cfe;--md-accent-fg-color--transparent:#526cfe1a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}[data-md-color-accent=blue]{--md-accent-fg-color:#4287ff;--md-accent-fg-color--transparent:#4287ff1a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}[data-md-color-accent=light-blue]{--md-accent-fg-color:#0091eb;--md-accent-fg-color--transparent:#0091eb1a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}[data-md-color-accent=cyan]{--md-accent-fg-color:#00bad6;--md-accent-fg-color--transparent:#00bad61a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}[data-md-color-accent=teal]{--md-accent-fg-color:#00bda4;--md-accent-fg-color--transparent:#00bda41a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}[data-md-color-accent=green]{--md-accent-fg-color:#00c753;--md-accent-fg-color--transparent:#00c7531a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}[data-md-color-accent=light-green]{--md-accent-fg-color:#63de17;--md-accent-fg-color--transparent:#63de171a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}[data-md-color-accent=lime]{--md-accent-fg-color:#b0eb00;--md-accent-fg-color--transparent:#b0eb001a;--md-accent-bg-color:#000000de;--md-accent-bg-color--light:#0000008a}[data-md-color-accent=yellow]{--md-accent-fg-color:#ffd500;--md-accent-fg-color--transparent:#ffd5001a;--md-accent-bg-color:#000000de;--md-accent-bg-color--light:#0000008a}[data-md-color-accent=amber]{--md-accent-fg-color:#fa0;--md-accent-fg-color--transparent:#ffaa001a;--md-accent-bg-color:#000000de;--md-accent-bg-color--light:#0000008a}[data-md-color-accent=orange]{--md-accent-fg-color:#ff9100;--md-accent-fg-color--transparent:#ff91001a;--md-accent-bg-color:#000000de;--md-accent-bg-color--light:#0000008a}[data-md-color-accent=deep-orange]{--md-accent-fg-color:#ff6e42;--md-accent-fg-color--transparent:#ff6e421a;--md-accent-bg-color:#fff;--md-accent-bg-color--light:#ffffffb3}[data-md-color-primary=red]{--md-primary-fg-color:#ef5552;--md-primary-fg-color--light:#e57171;--md-primary-fg-color--dark:#e53734;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-primary=pink]{--md-primary-fg-color:#e92063;--md-primary-fg-color--light:#ec417a;--md-primary-fg-color--dark:#c3185d;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-primary=purple]{--md-primary-fg-color:#ab47bd;--md-primary-fg-color--light:#bb69c9;--md-primary-fg-color--dark:#8c24a8;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-primary=deep-purple]{--md-primary-fg-color:#7e56c2;--md-primary-fg-color--light:#9574cd;--md-primary-fg-color--dark:#673ab6;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-prim
ary=indigo]{--md-primary-fg-color:#4051b5;--md-primary-fg-color--light:#5d6cc0;--md-primary-fg-color--dark:#303fa1;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-primary=blue]{--md-primary-fg-color:#2094f3;--md-primary-fg-color--light:#42a5f5;--md-primary-fg-color--dark:#1975d2;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-primary=light-blue]{--md-primary-fg-color:#02a6f2;--md-primary-fg-color--light:#28b5f6;--md-primary-fg-color--dark:#0287cf;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-primary=cyan]{--md-primary-fg-color:#00bdd6;--md-primary-fg-color--light:#25c5da;--md-primary-fg-color--dark:#0097a8;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-primary=teal]{--md-primary-fg-color:#009485;--md-primary-fg-color--light:#26a699;--md-primary-fg-color--dark:#007a6c;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-primary=green]{--md-primary-fg-color:#4cae4f;--md-primary-fg-color--light:#68bb6c;--md-primary-fg-color--dark:#398e3d;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-primary=light-green]{--md-primary-fg-color:#8bc34b;--md-primary-fg-color--light:#9ccc66;--md-primary-fg-color--dark:#689f38;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-primary=lime]{--md-primary-fg-color:#cbdc38;--md-primary-fg-color--light:#d3e156;--md-primary-fg-color--dark:#b0b52c;--md-primary-bg-color:#000000de;--md-primary-bg-color--light:#0000008a}[data-md-color-primary=yellow]{--md-primary-fg-color:#ffec3d;--md-primary-fg-color--light:#ffee57;--md-primary-fg-color--dark:#fbc02d;--md-primary-bg-color:#000000de;--md-primary-bg-color--light:#0000008a}[data-md-color-primary=amber]{--md-primary-fg-color:#ffc105;--md-primary-fg-color--light:#ffc929;--md-primary-fg-color--dark:#ffa200;--md-primary-bg-color:#000000de;--md-primary-bg-color--light:#0000008a}[data-md-color-primary=orange]{--md-primary-fg-color:#ffa724;--md-primary-fg-color--light:#ffa724;--md-primary-fg-color--dark:#fa8900;--md-primary-bg-color:#000000de;--md-primary-bg-color--light:#0000008a}[data-md-color-primary=deep-orange]{--md-primary-fg-color:#ff6e42;--md-primary-fg-color--light:#ff8a66;--md-primary-fg-color--dark:#f4511f;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-primary=brown]{--md-primary-fg-color:#795649;--md-primary-fg-color--light:#8d6e62;--md-primary-fg-color--dark:#5d4037;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3}[data-md-color-primary=grey]{--md-primary-fg-color:#757575;--md-primary-fg-color--light:#9e9e9e;--md-primary-fg-color--dark:#616161;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3;--md-typeset-a-color:#4051b5}[data-md-color-primary=blue-grey]{--md-primary-fg-color:#546d78;--md-primary-fg-color--light:#607c8a;--md-primary-fg-color--dark:#455a63;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3;--md-typeset-a-color:#4051b5}[data-md-color-primary=light-green]:not([data-md-color-scheme=slate]){--md-typeset-a-color:#72ad2e}[data-md-color-primary=lime]:not([data-md-color-scheme=slate]){--md-typeset-a-color:#8b990a}[data-md-color-primary=yellow]:not([data-md-color-scheme=slate]){--md-typeset-a-color:#b8a500}[data-md-color-primary=amber]:not([data-md-color-scheme=slate]){--md-typeset-a-color:#d19d00}[data-md-color-primary=orange]:not([data-md-color-scheme=slate]){--md-typeset-a-color:#e68a00}[data-md-color-prima
ry=white]{--md-primary-fg-color:#fff;--md-primary-fg-color--light:#ffffffb3;--md-primary-fg-color--dark:#00000012;--md-primary-bg-color:#000000de;--md-primary-bg-color--light:#0000008a;--md-typeset-a-color:#4051b5}[data-md-color-primary=white] .md-button{color:var(--md-typeset-a-color)}[data-md-color-primary=white] .md-button--primary{background-color:var(--md-typeset-a-color);border-color:var(--md-typeset-a-color);color:#fff}@media screen and (min-width:60em){[data-md-color-primary=white] .md-search__form{background-color:#00000012}[data-md-color-primary=white] .md-search__form:hover{background-color:#00000052}[data-md-color-primary=white] .md-search__input+.md-search__icon{color:#000000de}}@media screen and (min-width:76.25em){[data-md-color-primary=white] .md-tabs{border-bottom:.05rem solid #00000012}}[data-md-color-primary=black]{--md-primary-fg-color:#000;--md-primary-fg-color--light:#0000008a;--md-primary-fg-color--dark:#000;--md-primary-bg-color:#fff;--md-primary-bg-color--light:#ffffffb3;--md-typeset-a-color:#4051b5}[data-md-color-primary=black] .md-button{color:var(--md-typeset-a-color)}[data-md-color-primary=black] .md-button--primary{background-color:var(--md-typeset-a-color);border-color:var(--md-typeset-a-color);color:#fff}[data-md-color-primary=black] .md-header{background-color:#000}@media screen and (max-width:59.9375em){[data-md-color-primary=black] .md-nav__source{background-color:#000000de}}@media screen and (min-width:60em){[data-md-color-primary=black] .md-search__form{background-color:#ffffff1f}[data-md-color-primary=black] .md-search__form:hover{background-color:#ffffff4d}}@media screen and (max-width:76.1875em){html [data-md-color-primary=black] .md-nav--primary .md-nav__title[for=__drawer]{background-color:#000}}@media screen and (min-width:76.25em){[data-md-color-primary=black] .md-tabs{background-color:#000}} \ No newline at end of file diff --git a/assets/stylesheets/palette.2505c338.min.css.map b/assets/stylesheets/palette.2505c338.min.css.map new file mode 100644 index 00000000..3aec1903 --- /dev/null +++ b/assets/stylesheets/palette.2505c338.min.css.map @@ -0,0 +1 @@ 
+{"version":3,"sources":["src/assets/stylesheets/palette/_scheme.scss","../../../src/assets/stylesheets/palette.scss","src/assets/stylesheets/palette/_accent.scss","src/assets/stylesheets/palette/_primary.scss","src/assets/stylesheets/utilities/_break.scss"],"names":[],"mappings":"AA2BA,cAGE,6BAKE,YAAA,CAGA,mDAAA,CACA,6DAAA,CACA,+DAAA,CACA,gEAAA,CACA,mDAAA,CACA,6DAAA,CACA,+DAAA,CACA,gEAAA,CAGA,gDAAA,CACA,gDAAA,CAGA,4BAAA,CACA,iCAAA,CACA,kCAAA,CACA,mCAAA,CACA,mCAAA,CACA,kCAAA,CACA,iCAAA,CACA,+CAAA,CACA,6DAAA,CACA,gEAAA,CACA,4DAAA,CACA,4DAAA,CACA,6DAAA,CAGA,6CAAA,CAGA,+CAAA,CAGA,iCAAA,CAGA,uDAAA,CACA,6DAAA,CACA,2DAAA,CAGA,yDAAA,CAGA,mDAAA,CACA,mDAAA,CAGA,qDAAA,CACA,wDAAA,CAGA,0DAAA,CAKA,8DAAA,CAKA,0DCxDF,CD6DE,kHAEE,YC3DJ,CD+DE,gHAEE,eC7DJ,CDoFE,yDACE,4BClFJ,CDiFE,2DACE,4BC/EJ,CD8EE,gEACE,4BC5EJ,CD2EE,2DACE,4BCzEJ,CDwEE,yDACE,4BCtEJ,CDqEE,0DACE,4BCnEJ,CDkEE,gEACE,4BChEJ,CD+DE,0DACE,4BC7DJ,CD4DE,2OACE,4BCjDJ,CDwDA,+FAGE,iCCtDF,CACF,CCjDE,2BACE,4BAAA,CACA,2CAAA,CAOE,yBAAA,CACA,qCD6CN,CCvDE,4BACE,4BAAA,CACA,2CAAA,CAOE,yBAAA,CACA,qCDoDN,CC9DE,8BACE,4BAAA,CACA,2CAAA,CAOE,yBAAA,CACA,qCD2DN,CCrEE,mCACE,4BAAA,CACA,2CAAA,CAOE,yBAAA,CACA,qCDkEN,CC5EE,8BACE,4BAAA,CACA,2CAAA,CAOE,yBAAA,CACA,qCDyEN,CCnFE,4BACE,4BAAA,CACA,2CAAA,CAOE,yBAAA,CACA,qCDgFN,CC1FE,kCACE,4BAAA,CACA,2CAAA,CAOE,yBAAA,CACA,qCDuFN,CCjGE,4BACE,4BAAA,CACA,2CAAA,CAOE,yBAAA,CACA,qCD8FN,CCxGE,4BACE,4BAAA,CACA,2CAAA,CAOE,yBAAA,CACA,qCDqGN,CC/GE,6BACE,4BAAA,CACA,2CAAA,CAOE,yBAAA,CACA,qCD4GN,CCtHE,mCACE,4BAAA,CACA,2CAAA,CAOE,yBAAA,CACA,qCDmHN,CC7HE,4BACE,4BAAA,CACA,2CAAA,CAIE,8BAAA,CACA,qCD6HN,CCpIE,8BACE,4BAAA,CACA,2CAAA,CAIE,8BAAA,CACA,qCDoIN,CC3IE,6BACE,yBAAA,CACA,2CAAA,CAIE,8BAAA,CACA,qCD2IN,CClJE,8BACE,4BAAA,CACA,2CAAA,CAIE,8BAAA,CACA,qCDkJN,CCzJE,mCACE,4BAAA,CACA,2CAAA,CAOE,yBAAA,CACA,qCDsJN,CE3JE,4BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFwJN,CEnKE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFgKN,CE3KE,+BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFwKN,CEnLE,oCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFgLN,CE3LE,+BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFwLN,CEnME,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFgMN,CE3ME,mCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFwMN,CEnNE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFgNN,CE3NE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFwNN,CEnOE,8BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFgON,CE3OE,oCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFwON,CEnPE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAIE,+BAAA,CACA,sCFmPN,CE3PE,+BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAIE,+BAAA,CACA,sCF2PN,CEnQE,8BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAIE,+BAAA,CACA,sCFmQN,CE3QE,+BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAIE,+BAAA,CACA,sCF2QN,CEnRE,oCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFgRN,CE3RE,8BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCFwRN,CEnSE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCAAA,CAKA,4BF4RN,CE5SE,kCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,sCAAA,CAKA,4BFqSN,CEtRE,sEACE,4BFyRJ,CE1RE,+DACE,4BF6RJ,CE9RE,iEACE,4BFiSJ,CElSE,gEACE,4BFqSJ,CEtSE,iEACE,4BFySJ,CEhSA,8BACE,0BAAA,CACA,sCAAA,CACA,qCAAA,CACA,+BAAA,CACA,sCAAA,CAGA,4BFiSF,CE9RE,yCACE,+BFgSJ,CE7RI,kDAEE,0CAAA,CACA,sCAAA,CAFA,UFiSN,CG7MI,mCD1EA,+CACE,0BF0RJ,CEvRI,qDACE,0BFyRN,CEpRE,iEACE,eFsRJ,CACF,CGxNI,sCDvDA,uCACE,oCFkRJ,CACF,CEzQA,8BACE,0BAAA,CACA,sCAAA,CACA,gCAAA,CACA,0BAAA,CACA,sCAAA,CAGA,4BF0QF,CEvQE,yCACE,+BFyQJ,CEtQI,kDAEE,0CAAA,CACA,sCAAA,CAFA,UF0QN,CEnQE,yCACE,qBFqQJ,CG9NI,wCDhCA,8CACE,0BFiQJ,CACF,CGtPI,mCDJA,+CACE,0BF6PJ,CE1PI,qDACE,0BF4PN,CACF,CG
3OI,wCDTA,iFACE,qBFuPJ,CACF,CGnQI,sCDmBA,uCACE,qBFmPJ,CACF","file":"palette.css"} \ No newline at end of file diff --git a/get-started/allocation/adding-a-new-allocation/index.html b/get-started/allocation/adding-a-new-allocation/index.html new file mode 100644 index 00000000..5ba16cc8 --- /dev/null +++ b/get-started/allocation/adding-a-new-allocation/index.html @@ -0,0 +1,3351 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    Adding a new Resource Allocation to the project

    +

If one resource allocation is not sufficient for a project, the PI or project managers may request additional allocations by clicking the "Request Resource Allocation" button in the Allocations section of the project details. This will show the request page, where all existing users for the project are listed at the bottom of the request form. PIs can select the desired user(s) to make the requested resource allocations available in their NERC OpenStack or OpenShift projects.

    +

Here, you can view the Resource Type, information about your Allocated Project, the status and End Date of the allocation, and the Actions button with any pending actions, as shown below:

    +

    Adding a new Resource Allocation

    +

    Adding a new Resource Allocation to your OpenStack project

    +

    Adding a new Resource Allocation to your OpenStack project

    +
    +

    Important: Requested/Approved Allocated OpenStack Storage Quota & Cost

    +

Ensure you choose NERC (OpenStack) in the Resource option and specify your anticipated computing units. Each allocation, whether requested or approved, will be billed based on the pay-as-you-go model. The exception is Storage quotas, where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool. For NERC (OpenStack) Resource Allocations, the Storage quotas are specified by the "OpenStack Volume Quota (GiB)" and "OpenStack Swift Quota (GiB)" allocation attributes. If you have questions or need more information, refer to our Billing FAQs for comprehensive answers. Keep in mind that you can easily scale and expand your current resource allocations within your project later on by following this documentation.
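Once an allocation has been approved and provisioned, you can verify the storage quotas that were actually applied to your OpenStack project from the command line. The following is a minimal sketch, not an official procedure: it assumes you have the OpenStack CLI installed and have sourced credentials (for example an openrc or application credential file) for your NERC project, and that the standard Cinder and Swift quota fields correspond to the ColdFront attributes named above.

```sh
# Block storage (Volume) quota of the current project, e.g. gigabytes and volumes:
openstack quota show | grep -E 'gigabytes|volumes'

# Absolute limits, including maximum vs. used volume gigabytes:
openstack limits show --absolute | grep -i gigabytes

# Object storage (Swift) account usage for the project:
openstack object store account show
```

Comparing these values against your approved "OpenStack Volume Quota (GiB)" and "OpenStack Swift Quota (GiB)" attributes is a quick sanity check before you start creating volumes or containers.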

    +
    +

    Adding a new Resource Allocation to your OpenShift project

    +

    Adding a new Resource Allocation to your OpenShift project

    +
    +

    Important: Requested/Approved Allocated OpenShift Storage Quota & Cost

    +

Ensure you choose NERC-OCP (OpenShift) in the Resource option (Always Remember: the first option, i.e. NERC (OpenStack), is selected by default!) and specify your anticipated computing units. Each allocation, whether requested or approved, will be billed based on the pay-as-you-go model. The exception is Storage quotas, where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool. For NERC-OCP (OpenShift) Resource Allocations, storage quotas are specified by the "OpenShift Request on Storage Quota (GiB)" and "OpenShift Limit on Ephemeral Storage Quota (GiB)" allocation attributes. If you have questions or need more information, refer to our Billing FAQs for comprehensive answers. Keep in mind that you can easily scale and expand your current resource allocations within your project later on by following this documentation.
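After the OpenShift allocation is provisioned, you can inspect the quota that is actually enforced on your project (namespace) from the command line. This is a hedged sketch only: it assumes the `oc` CLI is installed, that you are already logged in to the NERC OpenShift cluster, and that `<your-namespace>` is replaced with your project's namespace; the `requests.storage` and `limits.ephemeral-storage` entries typically correspond to the ColdFront attributes above, but the exact quota names are set by the NERC admins.

```sh
# List the ResourceQuota objects enforced on your project (namespace):
oc get resourcequota -n <your-namespace>

# Show used vs. hard limits in detail; look for requests.storage and
# limits.ephemeral-storage among the listed resources:
oc describe resourcequota -n <your-namespace>
```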

    +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/allocation/adding-a-project/index.html b/get-started/allocation/adding-a-project/index.html new file mode 100644 index 00000000..15642bb2 --- /dev/null +++ b/get-started/allocation/adding-a-project/index.html @@ -0,0 +1,3310 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    A New Project Creation Process

    +

What do PIs need to fill in to request a Project?

    +

Once logged in to NERC's ColdFront, PIs can choose the Projects sub-menu located under the Project menu.

    +

    Projects sub-menu

    +

    Project

    +

    Clicking on the "Add a project" button will show the interface below:

    +

    Add A Project

    +
    +

    Very Important: Project Title Length Limitation

    +

Please ensure that the project title is concise and does not exceed 63 characters.

    +
    +

PIs need to specify an appropriate title (less than 63 characters), a description of the research work that will be performed on NERC (in one or two paragraphs), and the field(s) of science or research domain(s), and then click the "Save" button. Once saved successfully, PIs effectively become the "manager" of the project and are free to add or remove users and request resource allocation(s) for any projects for which they are the PI. PIs are permitted to add users to their group, request new allocations, renew expiring allocations, and provide information such as publications and grant data. PIs can maintain all their research information under one project or, if they prefer, separate the work into multiple projects.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/allocation/allocation-change-request/index.html b/get-started/allocation/allocation-change-request/index.html new file mode 100644 index 00000000..ae7a0bd8 --- /dev/null +++ b/get-started/allocation/allocation-change-request/index.html @@ -0,0 +1,3534 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    Request change to Resource Allocation to an existing project

    +

If a past resource allocation is not sufficient for an existing project, PIs or project managers can request a change by clicking the "Request Change" button on the project's resource allocation detail page, as shown below:

    +

    Request Change Resource Allocation

    +

    Request Change Resource Allocation Attributes for OpenStack Project

    +

    This will bring up the detailed Quota attributes for that project as shown below:

    +

    Request Change Resource Allocation Attributes for OpenStack Project

    +
    +

    Important: Requested/Approved Allocated OpenStack Storage Quota & Cost

    +

For NERC (OpenStack) resource types, the Storage quotas are controlled by the values of the "OpenStack Volume Quota (GiB)" and "OpenStack Swift Quota (GiB)" quota attributes. The Storage cost is determined by your requested and approved allocation values for these quota attributes. If you have questions or need more information, refer to our Billing FAQs for comprehensive answers.

    +
    +

The PI or project managers can provide a new value for each quota attribute they want to change and give a justification for the requested changes, so that the NERC admin can review the change request and approve or deny it based on the justification and the requested quota change. Submitting the change request notifies the NERC admin. Please wait until the NERC admin approves or denies the change request before expecting to see the change in your resource allocation for the selected project.

    +
    +

    Important Information

    +

PI or project managers should enter new values ONLY in the textboxes for the quota attributes they want to change; the others can be left blank, and those quotas will not be changed!

    +

    To use GPU resources on your VM, you need to specify the number of GPUs in the +"OpenStack GPU Quota" attribute. Additionally, ensure that your other quota +attributes, namely "OpenStack Compute vCPU Quota" and "OpenStack Compute RAM +Quota (MiB)" have sufficient resources to meet the vCPU and RAM requirements +for one of the GPU tier-based flavors. Refer to the GPU Tier documentation +for specific requirements and further details on the flavors available for GPU +usage.
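Before submitting the change request, it can help to compare a GPU flavor's requirements against your project's current compute limits. The commands below are an illustrative sketch, assuming the OpenStack CLI is installed and your project credentials are sourced; the flavor name is taken from the GPU Tier examples later on this page, and note that the GPU quota itself is managed through ColdFront and typically does not appear in the standard compute limits output.

```sh
# vCPU and RAM required by a GPU flavor (flavor names come from the GPU Tier docs):
openstack flavor show gpu-su-a100sxm4.1

# Your project's current absolute compute limits (maximum vs. used cores and RAM):
openstack limits show --absolute | grep -i -E 'cores|ram'
```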

    +
    +

    Allocation Change Requests for OpenStack Project

    +

Once the request is processed by the NERC admin, any user can view the change request trail for the project by looking at the "Allocation Change Requests" section, which looks like the following:

    +

    Allocation Change Requests for OpenStack Project

    +

Any user can click the Action button to view the details of the change request, as shown below:

    +

    Allocation Change Request Details for OpenStack Project

    +

    How to Use GPU Resources in your OpenStack Project

    +
    +

    Comparison Between CPU and GPU

    +

    To learn more about the key differences between CPUs and GPUs, please read this.

    +
    +

    A GPU instance is launched in the same way +as any other compute instance, with a few considerations to keep in mind:

    +
      +
    • +

When launching a GPU-based instance, be sure to select one of the GPU Tier based flavors.

      +
    • +
    • +

      You need to have sufficient resource quota to launch the desired flavor. Always +ensure you know which GPU-based flavor you want to use, then submit an +allocation change request +to adjust your current allocation to fit the flavor's resource requirements.

      +
    • +
    +
    +

    Resource Requirements for Launching a VM with "NVIDIA A100 SXM4 40GB" Flavor.

    +

    Based on the GPU Tier documentation, +NERC provides two variations of NVIDIA A100 SXM4 40GB flavors:

    +
      +
    1. gpu-su-a100sxm4.1: Includes 1 NVIDIA A100 GPU
    2. +
    3. gpu-su-a100sxm4.2: Includes 2 NVIDIA A100 GPUs
    4. +
    +

Select the GPU flavor that best fits your resource needs and make sure your OpenStack quotas meet that flavor's required specifications:

    +
      +
    • +

      For the gpu-su-a100sxm4.1 flavor:

      +
        +
      • vCPU: 32
      • +
      • RAM (GiB): 240
      • +
      +
    • +
    • +

      For the gpu-su-a100sxm4.2 flavor:

      +
        +
      • vCPU: 64
      • +
      • RAM (GiB): 480
      • +
      +
    • +
    +

    Ensure that your OpenStack resource quotas are configured as follows:

    +
      +
    • OpenStack GPU Quota: Meets or exceeds the number of GPUs required by the +chosen flavor.
    • +
    • OpenStack Compute vCPU Quota: Meets or exceeds the vCPU requirement.
    • +
    • OpenStack Compute RAM Quota (MiB): Meets or exceeds the RAM requirement.
    • +
    +

Properly configure these quotas to successfully launch a VM with the selected "gpu-su-a100sxm4" flavor (an example launch command is sketched below).

    +
    +
      +
    • We recommend using ubuntu-22.04-x86_64 +as the image for your GPU-based instance because we have tested the NVIDIA driver +with this image and obtained good results. That said, it is possible to run a +variety of other images as well.
    • +
    +
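Putting the pieces above together, a GPU instance is then launched like any other instance once the quotas are in place. The command below is a hedged sketch, not a prescribed procedure: it assumes the OpenStack CLI, and the network name, key pair name, and instance name are placeholders you would replace with your own values; the flavor and image names are the ones mentioned above.

```sh
openstack server create \
  --flavor gpu-su-a100sxm4.1 \
  --image ubuntu-22.04-x86_64 \
  --network <your-network> \
  --key-name <your-keypair> \
  my-gpu-instance
```

After the instance reaches ACTIVE and you have connected over SSH and installed the NVIDIA driver, running `nvidia-smi` on the instance should list one A100 GPU.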

    Request Change Resource Allocation Attributes for OpenShift Project

    +

    Request Change Resource Allocation Attributes for OpenShift Project

    +
    +

    Important: Requested/Approved Allocated OpenShift Storage Quota & Cost

    +

    For NERC-OCP (OpenShift) resource types, the Storage quotas are controlled +by the values of the "OpenShift Request on Storage Quota (GiB)" and "OpenShift +Limit on Ephemeral Storage Quota (GiB)" quota attributes. The Storage cost +is determined by your requested and approved allocation values +for these quota attributes.

    +
    +

The PI or project managers can provide a new value for each quota attribute they want to change and give a justification for the requested changes, so that the NERC admin can review the change request and approve or deny it based on the justification and the requested quota change. Submitting the change request notifies the NERC admin. Please wait until the NERC admin approves or denies the change request before expecting to see the change in your resource allocation for the selected project.

    +
    +

    Important Information

    +

PI or project managers should enter new values ONLY in the textboxes for the quota attributes they want to change; the others can be left blank, and those quotas will not be changed!

    +

    In order to use GPU resources on your pod, you must specify the number of GPUs +you want to use in the "OpenShift Request on GPU Quota" attribute.

    +
    +

    Allocation Change Requests for OpenShift Project

    +

Once the request is processed by the NERC admin, any user can view the change request trail for the project by looking at the "Allocation Change Requests" section, which looks like the following:

    +

    Allocation Change Requests for OpenShift Project

    +

Any user can click the Action button to view the details of the change request, as shown below:

    +

    Allocation Change Request Details for OpenShift Project

    +

    How to Use GPU Resources in your OpenShift Project

    +
    +

    Comparison Between CPU and GPU

    +

    To learn more about the key differences between CPUs and GPUs, please read this.

    +
    +

    For OpenShift pods, we can specify different types of GPUs. Since OpenShift is not +based on flavors, we can customize the resources as needed at the pod level while +still utilizing GPU resources.

    +

    You can read about how to specify a pod to use a GPU here.

    +

    Also, you will be able to select a different GPU device for your workload, as +explained here.
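As a rough illustration of what such a pod definition typically looks like, the snippet below requests a single GPU through the standard `nvidia.com/gpu` resource name used by the NVIDIA device plugin. This is a hedged sketch and not NERC's official example: the pod name, namespace, and container image are placeholders, and the linked documentation above describes how to select a specific GPU type for your workload.

```sh
cat <<'EOF' | oc apply -n <your-namespace> -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      # Illustrative public CUDA base image; any GPU-enabled image can be used.
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # counted against your "OpenShift Request on GPU Quota"
EOF

# Once the pod completes, its logs should list the allocated GPU:
oc logs gpu-smoke-test -n <your-namespace>
```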

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/allocation/allocation-details/index.html b/get-started/allocation/allocation-details/index.html new file mode 100644 index 00000000..28c195e7 --- /dev/null +++ b/get-started/allocation/allocation-details/index.html @@ -0,0 +1,3405 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    Allocation details

    +

Access to ColdFront's allocation details is based on user roles. PIs and managers see the same allocation details as users, and can also add project users to an allocation (if they are not already on it) and remove users from an allocation.

    +

    PI and Manager View

    +

PIs and managers can view important details of the project and its underlying allocations. This view shows all allocations, including start and end dates, creation and last modified dates, users on the allocation, and public allocation attributes. PIs and managers can add or remove users from allocations.

    +

    PI and Manager Allocation View of OpenStack Resource Allocation

    +

    PI and Manager Allocation View of OpenStack Resource Allocation

    +

    PI and Manager Allocation View of OpenShift Resource Allocation

    +

    PI and Manager Allocation View of OpenShift Resource Allocation

    +

    General User View

    +

    General Users who are not PIs or Managers on a project see a read-only view of the +allocation details. If a user is on a project but not a particular allocation, they +will not be able to see the allocation in the Project view nor will they be able +to access the Allocation detail page.

    +

    General User View of OpenStack Resource Allocation

    +

    General User View of OpenStack Resource Allocation

    +

    General User View of OpenShift Resource Allocation

    +

    General User View of OpenShift Resource Allocation

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/allocation/archiving-a-project/index.html b/get-started/allocation/archiving-a-project/index.html new file mode 100644 index 00000000..483066c9 --- /dev/null +++ b/get-started/allocation/archiving-a-project/index.html @@ -0,0 +1,3258 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Archiving an Existing Project

    +

    Only a PI can archive their ColdFront project(s) +by accessing NERC's ColdFront interface.

    +
    +

    Important Note:

    +

If you archive a project, all of its allocations will expire, which will disable your group's access to the resources in those allocations. Also, you cannot make any changes to archived projects.
Alert Archiving a Project

    +
    +

Once archived, a project is no longer visible in your projects list. All archived projects are listed under your archived projects, which can be viewed by clicking the "View archived projects" button, as shown below:

    +

    View Archived Projects

    +

    All your archived projects are displayed here:

    +

    Archived Projects

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/allocation/coldfront/index.html b/get-started/allocation/coldfront/index.html new file mode 100644 index 00000000..316f5799 --- /dev/null +++ b/get-started/allocation/coldfront/index.html @@ -0,0 +1,3372 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    What is NERC's ColdFront?

    +

NERC uses NERC's ColdFront interface, an open source resource allocation management system called ColdFront, to provide a single point of entry for administration, reporting, and measuring the scientific impact of NERC resources for PIs.

    +
    +

    Learning ColdFront

    +

    A collection of animated gifs +showcasing common functions in ColdFront is available, providing helpful +insights into how these features can be utilized.

    +
    +

    How to get access to NERC's ColdFront

    +

Any user who has registered their user account through the MGHPCC Shared Services (MGHPCC-SS) Account Portal, also known as "RegApp", can get access to NERC's ColdFront interface.

    +

General Users who are not PIs or Managers on a project see a read-only view of NERC's ColdFront, as described here.

    +

Once a PI Account request is granted, the PI will receive an email confirming the approval and explaining how to connect to NERC's ColdFront.

    +

PIs or project managers can use NERC's ColdFront as a self-service web portal with an administrative view, as described here, and can do the following tasks:

    +
      +
    • +

Only a PI can add a new project and archive any existing project(s)

      +
    • +
    • +

      Manage existing projects

      +
    • +
    • +

Request allocations, under their projects, for NERC resources such as clusters, cloud resources, servers, storage, and software licenses

      +
    • +
    • +

Add/remove access to allocated resources for users who are members of the project, without requiring system administrator interaction

      +
    • +
    • +

Elevate selected users to 'manager' status, allowing them to handle some of the PI's tasks, such as requesting new resource allocations, adding/removing users to/from resource allocations, and adding project data such as grants and publications

      +
    • +
    • +

      Monitor resource utilization such as storage and cloud usage

      +
    • +
    • +

      Receive email notifications for expiring/renewing access to resources as well as +notifications when allocations change status - i.e. activated, expired, denied

      +
    • +
    • +

Provide information such as grants, publications, and other reportable data for periodic review by the center director to demonstrate the need for the resources

      +
    • +
    +

    How to login to NERC's ColdFront?

    +

NERC's ColdFront interface provides users with a login page, as shown here:

    +

    ColdFront Login Page

    +

    Please click on "Log In" button. Then, it will show the login interface as +shown below:

    +

    ColdFront Login Interface

    +

You need to click on the "Log in via OpenID Connect" button. This will redirect you to the CILogon welcome page, where you can select your appropriate Identity Provider, as shown below:

    +

    CILogon Welcome Page

    +

Once successful, you will be redirected to ColdFront's main dashboard, as shown below:

    +

    ColdFront Dashboard

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/allocation/images/CILogon.png b/get-started/allocation/images/CILogon.png new file mode 100644 index 00000000..8931dd9d Binary files /dev/null and b/get-started/allocation/images/CILogon.png differ diff --git a/get-started/allocation/images/adding_new_resource_allocations.png b/get-started/allocation/images/adding_new_resource_allocations.png new file mode 100644 index 00000000..34e7cf86 Binary files /dev/null and b/get-started/allocation/images/adding_new_resource_allocations.png differ diff --git a/get-started/allocation/images/archived_projects_list.png b/get-started/allocation/images/archived_projects_list.png new file mode 100644 index 00000000..c6f12b21 Binary files /dev/null and b/get-started/allocation/images/archived_projects_list.png differ diff --git a/get-started/allocation/images/archiving_project_alert.png b/get-started/allocation/images/archiving_project_alert.png new file mode 100644 index 00000000..b3b8e852 Binary files /dev/null and b/get-started/allocation/images/archiving_project_alert.png differ diff --git a/get-started/allocation/images/coldfront-activate-expiring-allocation.png b/get-started/allocation/images/coldfront-activate-expiring-allocation.png new file mode 100644 index 00000000..4dcd1adb Binary files /dev/null and b/get-started/allocation/images/coldfront-activate-expiring-allocation.png differ diff --git a/get-started/allocation/images/coldfront-add-a-project.png b/get-started/allocation/images/coldfront-add-a-project.png new file mode 100644 index 00000000..c3c84c2d Binary files /dev/null and b/get-started/allocation/images/coldfront-add-a-project.png differ diff --git a/get-started/allocation/images/coldfront-add-remove-users.png b/get-started/allocation/images/coldfront-add-remove-users.png new file mode 100644 index 00000000..116a21ae Binary files /dev/null and b/get-started/allocation/images/coldfront-add-remove-users.png differ diff --git a/get-started/allocation/images/coldfront-add-user-to-project.png b/get-started/allocation/images/coldfront-add-user-to-project.png new file mode 100644 index 00000000..c2b6b10a Binary files /dev/null and b/get-started/allocation/images/coldfront-add-user-to-project.png differ diff --git a/get-started/allocation/images/coldfront-add-users-to-allocation.png b/get-started/allocation/images/coldfront-add-users-to-allocation.png new file mode 100644 index 00000000..150eee29 Binary files /dev/null and b/get-started/allocation/images/coldfront-add-users-to-allocation.png differ diff --git a/get-started/allocation/images/coldfront-allocation-renewal-requested.png b/get-started/allocation/images/coldfront-allocation-renewal-requested.png new file mode 100644 index 00000000..e9d757ed Binary files /dev/null and b/get-started/allocation/images/coldfront-allocation-renewal-requested.png differ diff --git a/get-started/allocation/images/coldfront-change-user-role.png b/get-started/allocation/images/coldfront-change-user-role.png new file mode 100644 index 00000000..1e664e7d Binary files /dev/null and b/get-started/allocation/images/coldfront-change-user-role.png differ diff --git a/get-started/allocation/images/coldfront-dashboard.png b/get-started/allocation/images/coldfront-dashboard.png new file mode 100644 index 00000000..940b4cc0 Binary files /dev/null and b/get-started/allocation/images/coldfront-dashboard.png differ diff --git a/get-started/allocation/images/coldfront-login-interface.png 
b/get-started/allocation/images/coldfront-login-interface.png new file mode 100644 index 00000000..df0a4b80 Binary files /dev/null and b/get-started/allocation/images/coldfront-login-interface.png differ diff --git a/get-started/allocation/images/coldfront-login-page.png b/get-started/allocation/images/coldfront-login-page.png new file mode 100644 index 00000000..2831ad43 Binary files /dev/null and b/get-started/allocation/images/coldfront-login-page.png differ diff --git a/get-started/allocation/images/coldfront-openshift-allocation-attributes.png b/get-started/allocation/images/coldfront-openshift-allocation-attributes.png new file mode 100644 index 00000000..f73a7fcf Binary files /dev/null and b/get-started/allocation/images/coldfront-openshift-allocation-attributes.png differ diff --git a/get-started/allocation/images/coldfront-openshift-allocation-change-requests.png b/get-started/allocation/images/coldfront-openshift-allocation-change-requests.png new file mode 100644 index 00000000..2a7b47a8 Binary files /dev/null and b/get-started/allocation/images/coldfront-openshift-allocation-change-requests.png differ diff --git a/get-started/allocation/images/coldfront-openshift-allocation-general-user-view.png b/get-started/allocation/images/coldfront-openshift-allocation-general-user-view.png new file mode 100644 index 00000000..74c9f678 Binary files /dev/null and b/get-started/allocation/images/coldfront-openshift-allocation-general-user-view.png differ diff --git a/get-started/allocation/images/coldfront-openshift-allocation-pi-manager-view.png b/get-started/allocation/images/coldfront-openshift-allocation-pi-manager-view.png new file mode 100644 index 00000000..2b3e5f50 Binary files /dev/null and b/get-started/allocation/images/coldfront-openshift-allocation-pi-manager-view.png differ diff --git a/get-started/allocation/images/coldfront-openshift-change-requested-details.png b/get-started/allocation/images/coldfront-openshift-change-requested-details.png new file mode 100644 index 00000000..b9ce9f67 Binary files /dev/null and b/get-started/allocation/images/coldfront-openshift-change-requested-details.png differ diff --git a/get-started/allocation/images/coldfront-openstack-allocation-attributes.png b/get-started/allocation/images/coldfront-openstack-allocation-attributes.png new file mode 100644 index 00000000..8e237fd1 Binary files /dev/null and b/get-started/allocation/images/coldfront-openstack-allocation-attributes.png differ diff --git a/get-started/allocation/images/coldfront-openstack-allocation-change-requests.png b/get-started/allocation/images/coldfront-openstack-allocation-change-requests.png new file mode 100644 index 00000000..0e424244 Binary files /dev/null and b/get-started/allocation/images/coldfront-openstack-allocation-change-requests.png differ diff --git a/get-started/allocation/images/coldfront-openstack-allocation-general-user-view.png b/get-started/allocation/images/coldfront-openstack-allocation-general-user-view.png new file mode 100644 index 00000000..1e252430 Binary files /dev/null and b/get-started/allocation/images/coldfront-openstack-allocation-general-user-view.png differ diff --git a/get-started/allocation/images/coldfront-openstack-allocation-pi-manager-view.png b/get-started/allocation/images/coldfront-openstack-allocation-pi-manager-view.png new file mode 100644 index 00000000..de1bdf59 Binary files /dev/null and b/get-started/allocation/images/coldfront-openstack-allocation-pi-manager-view.png differ diff --git 
a/get-started/allocation/images/coldfront-openstack-change-requested-details.png b/get-started/allocation/images/coldfront-openstack-change-requested-details.png new file mode 100644 index 00000000..a5458748 Binary files /dev/null and b/get-started/allocation/images/coldfront-openstack-change-requested-details.png differ diff --git a/get-started/allocation/images/coldfront-pi-add-users-on-allocation.png b/get-started/allocation/images/coldfront-pi-add-users-on-allocation.png new file mode 100644 index 00000000..447c3947 Binary files /dev/null and b/get-started/allocation/images/coldfront-pi-add-users-on-allocation.png differ diff --git a/get-started/allocation/images/coldfront-project-review-notifications.png b/get-started/allocation/images/coldfront-project-review-notifications.png new file mode 100644 index 00000000..0ca8dec8 Binary files /dev/null and b/get-started/allocation/images/coldfront-project-review-notifications.png differ diff --git a/get-started/allocation/images/coldfront-project-review-pending-status.png b/get-started/allocation/images/coldfront-project-review-pending-status.png new file mode 100644 index 00000000..6936749c Binary files /dev/null and b/get-started/allocation/images/coldfront-project-review-pending-status.png differ diff --git a/get-started/allocation/images/coldfront-project-review-steps.png b/get-started/allocation/images/coldfront-project-review-steps.png new file mode 100644 index 00000000..340520a5 Binary files /dev/null and b/get-started/allocation/images/coldfront-project-review-steps.png differ diff --git a/get-started/allocation/images/coldfront-project-review.png b/get-started/allocation/images/coldfront-project-review.png new file mode 100644 index 00000000..5f1a5af9 Binary files /dev/null and b/get-started/allocation/images/coldfront-project-review.png differ diff --git a/get-started/allocation/images/coldfront-project.png b/get-started/allocation/images/coldfront-project.png new file mode 100644 index 00000000..8f52a50b Binary files /dev/null and b/get-started/allocation/images/coldfront-project.png differ diff --git a/get-started/allocation/images/coldfront-projects-sub-menu.png b/get-started/allocation/images/coldfront-projects-sub-menu.png new file mode 100644 index 00000000..c272cd08 Binary files /dev/null and b/get-started/allocation/images/coldfront-projects-sub-menu.png differ diff --git a/get-started/allocation/images/coldfront-remove-users-from-a-project.png b/get-started/allocation/images/coldfront-remove-users-from-a-project.png new file mode 100644 index 00000000..bdd38b0b Binary files /dev/null and b/get-started/allocation/images/coldfront-remove-users-from-a-project.png differ diff --git a/get-started/allocation/images/coldfront-remove-users-from-allocation.png b/get-started/allocation/images/coldfront-remove-users-from-allocation.png new file mode 100644 index 00000000..0c09293c Binary files /dev/null and b/get-started/allocation/images/coldfront-remove-users-from-allocation.png differ diff --git a/get-started/allocation/images/coldfront-renewed-allocation.png b/get-started/allocation/images/coldfront-renewed-allocation.png new file mode 100644 index 00000000..5143b977 Binary files /dev/null and b/get-started/allocation/images/coldfront-renewed-allocation.png differ diff --git a/get-started/allocation/images/coldfront-request-a-new-openshift-allocation.png b/get-started/allocation/images/coldfront-request-a-new-openshift-allocation.png new file mode 100644 index 00000000..9df1fdc4 Binary files /dev/null and 
b/get-started/allocation/images/coldfront-request-a-new-openshift-allocation.png differ diff --git a/get-started/allocation/images/coldfront-request-a-new-openstack-allocation.png b/get-started/allocation/images/coldfront-request-a-new-openstack-allocation.png new file mode 100644 index 00000000..cdf3ec6a Binary files /dev/null and b/get-started/allocation/images/coldfront-request-a-new-openstack-allocation.png differ diff --git a/get-started/allocation/images/coldfront-request-change-allocation.png b/get-started/allocation/images/coldfront-request-change-allocation.png new file mode 100644 index 00000000..764ecad3 Binary files /dev/null and b/get-started/allocation/images/coldfront-request-change-allocation.png differ diff --git a/get-started/allocation/images/coldfront-request-new-openshift-allocation-with-users.png b/get-started/allocation/images/coldfront-request-new-openshift-allocation-with-users.png new file mode 100644 index 00000000..62060e56 Binary files /dev/null and b/get-started/allocation/images/coldfront-request-new-openshift-allocation-with-users.png differ diff --git a/get-started/allocation/images/coldfront-request-new-openshift-allocation.png b/get-started/allocation/images/coldfront-request-new-openshift-allocation.png new file mode 100644 index 00000000..b50228b3 Binary files /dev/null and b/get-started/allocation/images/coldfront-request-new-openshift-allocation.png differ diff --git a/get-started/allocation/images/coldfront-request-new-openstack-allocation-with-users.png b/get-started/allocation/images/coldfront-request-new-openstack-allocation-with-users.png new file mode 100644 index 00000000..d715d132 Binary files /dev/null and b/get-started/allocation/images/coldfront-request-new-openstack-allocation-with-users.png differ diff --git a/get-started/allocation/images/coldfront-request-new-openstack-allocation.png b/get-started/allocation/images/coldfront-request-new-openstack-allocation.png new file mode 100644 index 00000000..051328c6 Binary files /dev/null and b/get-started/allocation/images/coldfront-request-new-openstack-allocation.png differ diff --git a/get-started/allocation/images/coldfront-search-multiple-users.png b/get-started/allocation/images/coldfront-search-multiple-users.png new file mode 100644 index 00000000..8f23e99c Binary files /dev/null and b/get-started/allocation/images/coldfront-search-multiple-users.png differ diff --git a/get-started/allocation/images/coldfront-submit-allocation-activation.png b/get-started/allocation/images/coldfront-submit-allocation-activation.png new file mode 100644 index 00000000..c6f88003 Binary files /dev/null and b/get-started/allocation/images/coldfront-submit-allocation-activation.png differ diff --git a/get-started/allocation/images/coldfront-user-details.png b/get-started/allocation/images/coldfront-user-details.png new file mode 100644 index 00000000..e643edea Binary files /dev/null and b/get-started/allocation/images/coldfront-user-details.png differ diff --git a/get-started/allocation/images/coldfront-user-search.png b/get-started/allocation/images/coldfront-user-search.png new file mode 100644 index 00000000..eca6e454 Binary files /dev/null and b/get-started/allocation/images/coldfront-user-search.png differ diff --git a/get-started/allocation/images/coldfront-users-notification.png b/get-started/allocation/images/coldfront-users-notification.png new file mode 100644 index 00000000..454371a9 Binary files /dev/null and b/get-started/allocation/images/coldfront-users-notification.png differ diff --git 
a/get-started/allocation/images/new_resource_allocation.png b/get-started/allocation/images/new_resource_allocation.png new file mode 100644 index 00000000..19d81582 Binary files /dev/null and b/get-started/allocation/images/new_resource_allocation.png differ diff --git a/get-started/allocation/images/renew_expiring_allocation.png b/get-started/allocation/images/renew_expiring_allocation.png new file mode 100644 index 00000000..216599d4 Binary files /dev/null and b/get-started/allocation/images/renew_expiring_allocation.png differ diff --git a/get-started/allocation/images/view_archived_projects.png b/get-started/allocation/images/view_archived_projects.png new file mode 100644 index 00000000..c1dd112d Binary files /dev/null and b/get-started/allocation/images/view_archived_projects.png differ diff --git a/get-started/allocation/manage-users-to-a-project/index.html b/get-started/allocation/manage-users-to-a-project/index.html new file mode 100644 index 00000000..ef75f38a --- /dev/null +++ b/get-started/allocation/manage-users-to-a-project/index.html @@ -0,0 +1,3427 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Managing Users in the Project

    +

    Add/Remove User(s) to/from a Project

    +

    A user can only view projects they are on. PIs or managers can add or remove users +from their respective projects by navigating to the Users section of the project.

    +

    Add/Remove Users from Project

    +

    Once we click on the "Add Users" button, it will show us the following search interface:

    +

    User Search Interface

    +
    +

    Searching multiple users at once!

    +

If you want to search for multiple users in the system at once, you can enter multiple usernames separated by spaces or newlines, as shown below: Searching Multiple User(s) NOTE: This will return a list of all users matching the provided usernames only if they exist.

    +
    +

They can search for any users in the system who are not already part of the project by providing an exact username or partial text matching several other fields. The search results show details about the user account such as email address, username, first name, last name, etc., as shown below:

    +

    Add User(s) To Project

    +
    +

    Delegating user as 'Manager'

    +

    When adding a user to your project you can optionally designate them as a +"Manager" by selecting their role using the drop down next to their email. +Read more about user roles +here.

    +
    +

The user(s) found can be selected and assigned directly to the available resource allocation(s) on the given project using this interface. While adding users, their Role can also be selected from the dropdown options as either User or Manager. Once you have confirmed the selected user(s), their roles, and their allocations, click the "Add Selected Users to Project" button.

    +

Removing users from the project is straightforward: just click the "Remove Users" button, which shows the following interface:

    +

    Remove User(s) From A Project

    +

    PI or project managers can select the user(s) and then click on the "Remove Selected +Users From Project" button.

    +

    User Roles

    +

Access to ColdFront is role based, so users see a read-only view of the allocation details for any allocations they are on. PIs see the same allocation details as general users and can also add project users to the allocation if they're not already on it. Initially, PIs add any user to the project with the User role. Later, the PI or project managers can delegate users on their project to the 'Manager' role; this allows multiple managers on the same project and gives those users the same access and abilities as the PI. A "Manager" is a user who has the same permissions as the PI to add/remove users, request/renew allocations, and add/remove project info such as grants, publications, and research output. Managers may also complete the annual project review.

    +
    +

    What can a PI do that a manager can't?

    +

The only tasks a PI can perform that a manager cannot are creating a new project and archiving existing project(s). All other project-related actions that a PI can perform can also be accomplished by any one of the managers assigned to that project.

    +
    +

General User accounts are not able to create/update projects or request Resource Allocations. Instead, these accounts must be associated with a Project that has Resources. General User accounts that are associated with a Project have access to view their project details and use all the resources associated with the Project on NERC.

    +

General Users (not PIs or Managers) can turn off email notifications at the project level. PIs also have 'Manager' status on a project, and Managers can't turn off their notifications. This ensures they continue to receive allocation expiration notification emails.

    +

    Delegating User to Manager Role

    +

You can also modify the role of existing project users at any time by clicking on the Edit button next to the user's name.

    +

    To change a user's role to 'manager' click on the edit icon next to the user's name +on the Project Detail page:

    +

    Change User Role

    +

    Then toggle the "Role" from User to Manager:

    +

    User Details

    +
    +

    Very Important

    +

    Make sure to click the "Update" button to save the change.

    +

    This delegation of "Manager" role can also be done when adding a user to your +project. You can optionally designate them as a "Manager" by selecting their +role using the drop down next to their email as described here.

    +
    +

    Notifications

    +

    All users on a project will receive notifications about allocations including +reminders of upcoming expiration dates and status changes. Users may uncheck +the box next to their username to turn off notifications. Managers and PIs on +the project are not able to turn off notifications.

    +

    User Notifications

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/allocation/managing-users-to-an-allocation/index.html b/get-started/allocation/managing-users-to-an-allocation/index.html new file mode 100644 index 00000000..dc003239 --- /dev/null +++ b/get-started/allocation/managing-users-to-an-allocation/index.html @@ -0,0 +1,3258 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Adding and removing project Users to project Resource Allocation

    +

Any available users on a given project who were not previously added to the resource allocation can be added to it by clicking on the "Add Users" button as shown below:

    +

    Adding and removing project User(s) to project Allocation

    +

Once clicked, it will show the following interface, where PIs can select the available user(s) using the checkboxes and click on the "Add Selected Users to Allocation" button.

    +

    Add Selected User(s) to Allocation

    +
    +

    Very Important

    +

    The desired user must already be on the project to be added to the allocation.

    +
    +

Removing users from the resource allocation is straightforward: just click the "Remove Users" button, which shows the following interface:

    +

    Removing User(s) from the Resource Allocation

    +

The PI or project managers can select the user(s) using the checkboxes and then click on the "Remove Selected Users From Project" button.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/allocation/project-and-allocation-review/index.html b/get-started/allocation/project-and-allocation-review/index.html new file mode 100644 index 00000000..ac914bc9 --- /dev/null +++ b/get-started/allocation/project-and-allocation-review/index.html @@ -0,0 +1,3436 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    + +
    + + + +
    +
    + + + + + + + + + +

    Project and Individual Allocation Annual Review Process

    +

    Project Annual Review Process

    +

NERC's ColdFront enables annual project reviews for NERC admins by requiring PIs to assess and update their projects. With the Project Review feature activated, each project undergoes a mandatory review every 365 days. During this process, PIs update project details, confirm project members, and input publications, grants, and research outcomes from the preceding year.

    +
    +

    Required Project Review

    +

The PI or any manager(s) of a project must complete the project review once every 365 days. ColdFront does not send notifications to PIs when project reviews are due. Instead, when the PI or Manager(s) of a project views their project, they will find a notification that the project review is due. Additionally, while the project review is pending, PIs or Project Manager(s) cannot request new allocations, renew expiring allocations, or submit change requests to update allocation attribute values. This enforces the requirement that PIs review their projects annually. The PI or any managers on the project are able to complete the project review process.

    +
    +

    Project Reviews by PIs or Project Manager(s)

    +

    When a PI or any Project Manager(s) of a project logs into NERC's ColdFront web +console and their project review is due, they will see a banner next to the +project name on the home page:

    +

    Project Review

    +

If they try to request a new allocation, renew an expiring allocation, or submit a change request to update allocation attribute values, they will get an error message:

    +

    Project Review Pending Notification

    +

    Project Review Steps

    +

    When they click on the "Review Project" link they're presented with the requirements +and a description of why we're asking for this update:

    +

    Project Review Submit Details

    +

The links in each step direct them to different parts of their Project Detail page. This review page lists the dates when grants and publications were last updated. If there are no grants or publications, or if at least one of them hasn't been updated in the last year, we ask for a reason they're not updating the project information. This helps encourage PIs to provide updates if they have them. If not, they provide a reason, and this is displayed for the NERC admins as part of the review process.

    +

    Once the project review page is completed, the PI is redirected to the project +detail page and they see the status change to "project review pending".

    +

    Project Review Pending Status

    +

    Allocation Renewals

    +

When the requested allocation is approved, it must have an expiration date, which is normally 365 days (1 year) from the date it is approved. Automated emails are triggered to all users on an allocation 60 days, 30 days, and 7 days before the expiration date, and again when the allocation expires, unless the user turns off notifications on the project.

    +
    +

    Very Important: Urgent Allocation Renewal is Required Before Expiration

    +

    If the allocation renewal isn't processed prior to the original allocation +expiration date by the PI or Manager, the allocation will expire and the +allocation users will get a notification email letting them know the allocation +has expired! +Allocation Renewal Prior Expiration

    +

    Currently, a project will continue to be able to utilize expired allocations. +So this will continue to incur costs for you.

    +
    +

    Allocation renewals may not require any additions or changes to the allocation +attributes from the PI or Manager. By default, if the PI or Manager clicks on +the 'Activate' button as shown below:

    +

    ColdFront Activate Expiring Allocation

    +

Then it will prompt for confirmation and allow them to review and submit the activation request by clicking on the 'Submit' button as shown below:

    +

    ColdFront Allocation Renewal Submit

    +

    Emails are sent to all allocation users letting them know the renewal request has +been submitted.

    +

    Then the allocation status will change to "Renewal Requested" as shown below:

    +

    ColdFront Allocation Renewal Requested

    +

Once the renewal request is reviewed and approved by NERC admins, it will change to "Active" status and the expiration date is extended by another 365 days, as shown below:

    +

    ColdFront Allocation Renewal Successful

    +

    Then an automated email notification will be sent to the PI and all users on the +allocation that have enabled email notifications.

    +

    Cost Associated with Expired Allocations

    +

Currently, a project will continue to be able to utilize expired allocations, so these will continue to incur costs for you. In the future, we plan to change this behavior so that an expired allocation will prevent its associated VMs/pods from starting and may cause associated active VMs/pods to cease running.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/allocation/requesting-an-allocation/index.html b/get-started/allocation/requesting-an-allocation/index.html new file mode 100644 index 00000000..b9f61be6 --- /dev/null +++ b/get-started/allocation/requesting-an-allocation/index.html @@ -0,0 +1,3525 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    How to request a new Resource Allocation

    +

    On the Project Detail page the project PI/manager(s) can request an allocation +by clicking the "Request Resource Allocation" button as shown below:

    +

    Requesting an Allocation

    +

    On the shown page, you will be able to choose either OpenStack Resource Allocation +or OpenShift Resource Allocation by specifying either NERC (OpenStack) or +NERC-OCP (OpenShift) in the Resource dropdown option. Note: The +first option i.e. NERC (OpenStack), is selected by default.

    +
    +

    Default GPU Resource Quota for Initial Allocation Requests

    +

By default, the GPU resource quota is set to 0 for the initial resource allocation request for both OpenStack and OpenShift Resource Types. However, you will be able to submit a change request to adjust the corresponding GPU quotas for both after they are approved for the first time. For NERC's OpenStack, please follow this guide on how to utilize GPU resources in your OpenStack project. For NERC's OpenShift, refer to this reference to learn how to use GPU resources at the pod level.

    +
    +

    Request A New OpenStack Resource Allocation for an OpenStack Project

    +

    Request A New OpenStack Resource Allocation

    +

    If users have already been added to the project as +described here, the Users selection section +will be displayed as shown below:

    +

    Request A New OpenStack Resource Allocation Selecting Users

    +

    In this section, the project PI/manager(s) can choose user(s) from the project +to be included in this allocation before clicking the "Submit" button.

    +
    +

    Read the End User License Agreement Before Submission

    +

    You should read the shown End User License Agreement (the "Agreement"). +By clicking the "Submit" button, you agree to the Terms and Conditions.

    +
    +
    +

    Important: Requested/Approved Allocated OpenStack Storage Quota & Cost

    +

    Ensure you choose NERC (OpenStack) in the Resource option and specify your +anticipated computing units. Each allocation, whether requested or approved, +will be billed based on the pay-as-you-go model. The exception is for +Storage quotas, where the cost is determined by your requested and approved +allocation values +to reserve storage from the total NESE storage pool. For NERC (OpenStack) +Resource Allocations, the Storage quotas are specified by the "OpenStack +Volume Quota (GiB)" and "OpenStack Swift Quota (GiB)" allocation attributes. +If you have common questions or need more information, refer to our +Billing FAQs for comprehensive +answers. Keep in mind that you can easily scale and expand your current resource +allocations within your project by following this documentation +later on.

    +
    +

    Resource Allocation Quotas for OpenStack Project

    +

The amount of quota with which a resource allocation starts after approval can be specified using an integer field in the resource allocation request form, as shown above. The quotas granted scale with the number of units the PI or project managers request. The basic unit of computational resources is defined as an integer value that corresponds to multiple OpenStack resource quotas. For example, 1 Unit corresponds to:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Resource Name        | Quota Amount x Unit |
| -------------------- | ------------------- |
| Instances            | 1                   |
| vCPUs                | 1                   |
| GPU                  | 0                   |
| RAM (MiB)            | 4096                |
| Volumes              | 2                   |
| Volume Storage (GiB) | 20                  |
| Object Storage (GiB) | 1                   |
    +
    +

    Information

    +

By default, 2 OpenStack Floating IPs, 10 Volume Snapshots, and 10 Security Groups are provided to each approved project, regardless of the number of quota units requested.

    +
    +
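For illustration, here is a minimal sketch (not an official NERC tool) of how the per-unit quotas in the table above scale with the number of requested units; the per-unit values are taken directly from the table, and the function simply multiplies them:

```python
# Hypothetical helper (not a NERC tool) showing how NERC (OpenStack) quotas
# scale with the number of requested units, using the per-unit table above.
OPENSTACK_QUOTA_PER_UNIT = {
    "Instances": 1,
    "vCPUs": 1,
    "GPU": 0,
    "RAM (MiB)": 4096,
    "Volumes": 2,
    "Volume Storage (GiB)": 20,
    "Object Storage (GiB)": 1,
}

def quotas_for_units(units: int) -> dict:
    """Return the total quota of each resource when `units` are requested."""
    return {name: per_unit * units for name, per_unit in OPENSTACK_QUOTA_PER_UNIT.items()}

if __name__ == "__main__":
    # Requesting 3 units would start the allocation with 3 instances, 3 vCPUs,
    # 12288 MiB RAM, 6 volumes, 60 GiB volume storage, and 3 GiB object storage.
    for resource, amount in quotas_for_units(3).items():
        print(f"{resource}: {amount}")
```

The same multiplication applies to the OpenShift per-unit table in the next section, just with that table's resource names and values.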

    Request A New OpenShift Resource Allocation for an OpenShift project

    +

    Request A New OpenShift Resource Allocation

    +

    If users have already been added to the project as +described here, the Users selection section +will be displayed as shown below:

    +

    Request A New OpenShift Resource Allocation Selecting Users

    +

    In this section, the project PI/manager(s) can choose user(s) from the project +to be included in this allocation before clicking the "Submit" button.

    +
    +

    Read the End User License Agreement Before Submission

    +

    You should read the shown End User License Agreement (the "Agreement"). +By clicking the "Submit" button, you agree to the Terms and Conditions.

    +
    +

    Resource Allocation Quotas for OpenShift Project

    +

The amount of quota with which a resource allocation starts after approval can be specified using an integer field in the resource allocation request form, as shown above. The quotas granted scale with the number of units the PI or project managers request. The basic unit of computational resources is defined as an integer value that corresponds to multiple OpenShift resource quotas. For example, 1 Unit corresponds to:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Resource Name                  | Quota Amount x Unit |
| ------------------------------ | ------------------- |
| vCPUs                          | 1                   |
| GPU                            | 0                   |
| RAM (MiB)                      | 4096                |
| Persistent Volume Claims (PVC) | 2                   |
| Storage (GiB)                  | 20                  |
| Ephemeral Storage (GiB)        | 5                   |
    +
    +

    Important: Requested/Approved Allocated OpenShift Storage Quota & Cost

    +

    Ensure you choose NERC-OCP (OpenShift) in the Resource option (Always Remember: +the first option, i.e. NERC (OpenStack) is selected by default!) and specify +your anticipated computing units. Each allocation, whether requested or approved, +will be billed based on the pay-as-you-go model. The exception is for +Storage quotas, where the cost is determined by +your requested and approved allocation values +to reserve storage from the total NESE storage pool. For NERC-OCP (OpenShift) +Resource Allocations, storage quotas are specified by the "OpenShift Request +on Storage Quota (GiB)" and "OpenShift Limit on Ephemeral Storage Quota (GiB)" +allocation attributes. If you have common questions or need more information, +refer to our Billing FAQs +for comprehensive answers. Keep in mind that you can easily scale and expand +your current resource allocations within your project by following +this documentation +later on.

    +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/best-practices/best-practices-for-bu/index.html b/get-started/best-practices/best-practices-for-bu/index.html new file mode 100644 index 00000000..215bb659 --- /dev/null +++ b/get-started/best-practices/best-practices-for-bu/index.html @@ -0,0 +1,3293 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + + + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/best-practices/best-practices-for-harvard/index.html b/get-started/best-practices/best-practices-for-harvard/index.html new file mode 100644 index 00000000..5fe31339 --- /dev/null +++ b/get-started/best-practices/best-practices-for-harvard/index.html @@ -0,0 +1,3715 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    Securing Your Public Facing Server

    +

    Overview

    +

This document aims to provide you with a few concrete actions you can take to significantly enhance the security of your devices. This advice can be applied even if your servers are not public facing. However, we strongly recommend implementing these steps if your servers are intended to be accessible to the internet at large.

    +

All recommendations and guidance are based on our policy, which has specific requirements; the current policy/requirements for servers at NERC can be found here.

    +
    +

    Harvard University Security Policy Information

    +

Please note that all assets deployed to your NERC project must be compliant with University Security policies. Please familiarize yourself with the Harvard University Information Security Policy and your role in securing data. If you have any questions about how security should be implemented in the Cloud, please contact your school security officer: "Harvard Security Officer".

    +
    +

    Know Your Data

    +

Depending on the data that exists on your servers, you may have to take additional or specific steps to safeguard that data. At Harvard, we developed a scale of data classification ranging from 1 to 5 in order of increasing data sensitivity.

    +

We have prepared additional guidance with examples for both Administrative Data and Research Data.

    +

Additionally, if your work involves individuals situated in the European Economic Area, you may be subject to the requirements of the General Data Protection Regulation (GDPR), and more information about your responsibilities can be found here.

    +

    Host Protection

    +

    The primary focus of this guide is to provide you with security essentials that +we support and that you can implement with little effort.

    +

    Endpoint Protection

    +

Harvard University uses the endpoint protection service Crowdstrike, which actively checks a machine for indications of malicious activity and will act to both block the activity and remediate the issue. This service is offered free to our community members and requires the installation of an agent that runs transparently on the server. This software enables the Harvard security team to review security events and act as needed.

    +

Crowdstrike can be downloaded from our repository at agents.itsec.harvard.edu. This software is required for all devices owned by Harvard staff/faculty and is available for all operating systems.

    +
    +

    Please note

    +

To access this repository, you need to be on the Harvard Campus Network.

    +
    +

    Patch/Update Regularly

    +

It is common for vendors/developers to announce that they have discovered a new vulnerability in the software you may be using. Many of these vulnerabilities are addressed by new releases that the developer issues. Keeping your software and server operating system up to date with current versions ensures that you are using a version of the software that does not have any known/published vulnerabilities.

    +

    Vulnerability Management

    +

    Various software versions have historically been found to be vulnerable to specific +attacks and exploits. The risk of running older versions of software is that you +may be exposing your machine to a possible known method of attack.

    +

    To assess which attacks you might be vulnerable to and be provided with specific +remediation guidance, we recommend enrolling your servers with our Tenable service +which periodically scans the software on your server and correlates the software +information with a database of published vulnerabilities. This service will enable +you to prioritize which component you need to upgrade or otherwise define which +vulnerabilities you may be exposed to.

    +

The Tenable agent runs transparently and can be enabled to work according to the parameters set for your school; the agent can be downloaded here, and configuration support can be found by filing a support request via the HUIT support ticketing system: ServiceNow.

    +

Safer Applications/Development

    +

Every application has its own unique operational constraints/requirements, and the advice below cannot be comprehensive; however, we can offer a few general recommendations.

    +

    Secure Credential Management

    +

    Credentials should not be kept on the server, nor should they be included directly +in your programming logic.

    +

Attackers often review running code on the server to see if they can obtain any sensitive credentials that may have been included in the scripts. To better manage your credentials, we recommend using either of the following (a short sketch of the general pattern follows this list):

    +

    1password Credential Manager

    +

    AWS Secrets

    +
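As a general illustration (not a recommendation of a specific tool beyond those listed above), the sketch below shows the pattern these credential managers enable: the secret lives outside the code and is read at runtime, here from an environment variable. The variable name DB_PASSWORD is only an example:

```python
import os
import sys

def get_db_password() -> str:
    """Read a secret from the environment instead of hard-coding it.

    In practice the environment variable would typically be populated by a
    credential manager (e.g. 1Password CLI or AWS Secrets Manager) rather
    than being stored on the server in plain text.
    """
    password = os.environ.get("DB_PASSWORD")  # example variable name
    if not password:
        sys.exit("DB_PASSWORD is not set; refusing to fall back to a hard-coded value.")
    return password

if __name__ == "__main__":
    # Never print real secrets; this only confirms that one was provided.
    print("Credential loaded:", bool(get_db_password()))
```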

    Not Running the Application as the Root/Superuser

    +

Frequently an application needs special permissions and access, and it is often easiest to run the application under the root/superuser account. This is a dangerous practice, since a compromised application then gives attackers an account with full administrative privileges. Instead, configuring the application to run with an account that has only the permissions it needs is a way to minimize the impact of a given compromise.

    +

    Safer Networking

    +

    The goal in safer networking is to minimize the areas that an attacker can target.

    +

    Minimize Publicly Exposed Services

    +

Every port/service open to the internet will be scanned by attackers attempting to access your servers. We recommend that any service/port that does not need to be accessed by the public be placed behind the campus firewall. This will significantly reduce the number of attempts by attackers to compromise your servers.

    +

In practice this usually means that you only expose ports 80/443, which enables you to serve websites, while you keep all other services such as SSH, WordPress logins, etc. behind the campus firewall (a sketch of expressing this as an OpenStack security group follows below).

    +
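For NERC OpenStack projects, one way to apply this is with a security group that only allows inbound 80/443. The sketch below uses the openstacksdk Python client and assumes your OpenStack credentials are already available (for example via a clouds.yaml entry or OS_* environment variables); the group name web-only is just an example:

```python
import openstack

# Connects using credentials from clouds.yaml or OS_* environment variables.
conn = openstack.connect()

# Create a security group that only admits public web traffic.
sg = conn.network.create_security_group(
    name="web-only",  # example name
    description="Allow inbound HTTP/HTTPS only; keep everything else closed",
)

for port in (80, 443):
    conn.network.create_security_group_rule(
        security_group_id=sg.id,
        direction="ingress",
        ethertype="IPv4",
        protocol="tcp",
        port_range_min=port,
        port_range_max=port,
        remote_ip_prefix="0.0.0.0/0",
    )

# SSH and other management ports are intentionally not opened here; reach them
# from behind the campus firewall or over a VPN instead.
```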

    Strengthen SSH Logins

    +

Where possible, and if needed, logins to a Harvard service should be placed behind HarvardKey. For researchers, however, the preferred login method is usually SSH, and we recommend the following ways to strengthen your SSH accounts:

    +

● Disable password-only logins

    +
      +
    • +

In the file /etc/ssh/sshd_config, change PasswordAuthentication to no to disable tunneled clear-text passwords, i.e. PasswordAuthentication no.

      +
    • +
    • +

Uncomment the permit-empty-passwords option and, if needed, change yes to no, i.e. PermitEmptyPasswords no.

      +
    • +
    • +

Then run service ssh restart. (A small sketch that audits these settings follows this list.)

      +
    • +
    +

    ● Use SSH keys with passwords enabled on them

    +

    ● If possible, enroll the SSH service with a Two-factor authentication provider +such as DUO or YubiKey.

    +
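As a convenience, here is a minimal, hypothetical sketch (not a NERC-provided tool) that reads an sshd_config file and flags the two settings discussed above if they are missing or not set to no; adjust the path or checks to your environment:

```python
from pathlib import Path

# Settings discussed above and the value we expect each to have.
EXPECTED = {"passwordauthentication": "no", "permitemptypasswords": "no"}

def audit_sshd_config(path: str = "/etc/ssh/sshd_config") -> list[str]:
    """Return a list of warnings for settings that are missing or not 'no'."""
    found: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0].lower() in EXPECTED:
            found[parts[0].lower()] = parts[1].strip().lower()
    return [
        f"{key} should be 'no' (currently: {found.get(key, 'not set')})"
        for key, expected in EXPECTED.items()
        if found.get(key) != expected
    ]

if __name__ == "__main__":
    for warning in audit_sshd_config():
        print("WARNING:", warning)
```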

    Attack Detection

    +

Despite the best protection, a sophisticated attacker may still find a way to compromise your servers, and in those scenarios we want to enhance your ability to detect activity that may be suspicious.

    +

    Install Crowdstrike

    +

    As stated above, Crowdstrike is both an endpoint protection service and also an +endpoint detection service. This software understands activities that might be +benign in isolation but coupled with other actions on the device may be +indicative of a compromise. It also enables the quickest security response.

    +

Crowdstrike can be downloaded from our repository at agents.itsec.harvard.edu. This software is needed for all devices owned by Harvard staff/faculty and is available for all operating systems.

    +

    Safeguard your System Logs

    +

System logs are records that track activity on your servers, including logins, installed applications, errors, and more.

    +

Sophisticated attackers will try to delete these logs to frustrate investigations and prevent discovery of their attacks. To ensure that your logs remain accessible and available for review, we recommend that you configure them to be sent to a system separate from your servers. This can be done either by sending logs to an external file storage repository or by configuring a separate logging system using Splunk.

    +

    For help setting up logging please file a support request via our support +ticketing system: ServiceNow.

    +

    Escalating an Issue

    +

There are several ways you can report a security issue, and they are all documented on the HUIT Internet Security and Data Privacy group site.

    +

In the event you suspect a security issue has occurred or want someone to supply a security assessment, please feel free to reach out to the HUIT Internet Security and Data Privacy group, specifically the Operations & Engineering team.

    +

    Email Harvard ITSEC-OPS

    +

    Service Queue

    +

    Harvard HUIT Slack Channel: #isdp-public

    +

    Further References

    +

    https://policy.security.harvard.edu/all-servers

    +

    https://enterprisearchitecture.harvard.edu/security-minimal-viable-product-requirements-huit-hostedmanaged-server-instances

    +

    https://policy.security.harvard.edu/security-requirements

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/best-practices/best-practices-for-my-institution/index.html b/get-started/best-practices/best-practices-for-my-institution/index.html new file mode 100644 index 00000000..380fbdaf --- /dev/null +++ b/get-started/best-practices/best-practices-for-my-institution/index.html @@ -0,0 +1,3309 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Best Practices for My Institution

    +

    Institutions with the Best Practices outlines

    +

    The following institutions using our services have already provided guidelines +for best practices:

    +
      +
    1. +

      Harvard University

      +
    2. +
    3. +

      Boston University

      +
    4. +
    +
    +

    Upcoming Best Practices for other institutions

    +

    We are in the process of obtaining Best Practices for institutions not listed +above.

    +
    +

If your institution has already outlined Best Practices guidelines with your internal IT department, please contact us to have them listed here by emailing us at help@nerc.mghpcc.org or by submitting a new ticket at the NERC's Support Ticketing System.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/best-practices/best-practices/index.html b/get-started/best-practices/best-practices/index.html new file mode 100644 index 00000000..920b8d48 --- /dev/null +++ b/get-started/best-practices/best-practices/index.html @@ -0,0 +1,3308 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Best Practices for the NERC Users

    +

By 2025, according to Gartner's forecast, the responsibility for approximately 99% of cloud security failures will likely lie with customers. These failures can be attributed to the difficulties in gauging and overseeing risks associated with on-prem cloud security. The MGHPCC will enter into a lightweight Memorandum of Understanding (MOU) with each institutional customer that consumes NERC services, and that MOU will also clearly explain the security risks and some of the shared responsibilities customers take on while using the NERC. This ensures roles and responsibilities are distinctly understood by each party.

    +

    NERC Principal Investigators (PIs): PIs are ultimately responsible for their +end-users and the security of the systems and applications that are deployed as +part of their project(s) on NERC. This includes being responsible for the security +of their data hosted on the NERC as well as users, accounts and access management.

    +

    Every individual user needs to comply with your Institution’s Security +and Privacy policies to protect +their Data, Endpoints, Accounts and Access management. They +must ensure any data created on or uploaded to the NERC is adequately secured. +Each customer has complete control over their systems, networks and assets. It +is essential to restrict access to the NERC provided user environment only to +authorized users by using secure identity and access management. Furthermore, +users have authority over various credential-related aspects, including secure +login mechanisms, single sign-on (SSO), and multifactor authentication.

    +

Under this model, we are responsible for operating the physical infrastructure, which includes responsibility for protecting, patching, and maintaining the underlying virtualization layer, servers, disks, storage, network gear, and other hardware and software. NERC users, in contrast, are responsible for the security of the guest operating system (OS) and the software stack (i.e., databases) used to run their applications and data. They are also entrusted with safeguarding middleware, containers, workloads, and any code or data generated by the platform.

    +

    All NERC users are responsible for their use of NERC services, which include:

    +
      +
    • +

      Following the best practices for security on NERC services. Please review your +institutional guidelines next.

      +
    • +
    • +

      Complying with security policies regarding VMs and containers. NERC admins are +not responsible for maintaining or deploying VMs or containers created by PIs for +their projects. See Harvard University and Boston University policies +here. We will be adding more +institutions under this page soon. Without prior notice, NERC reserves the right +to shut down any VM or container that is causing internal or external problems +or violating these policies.

      +
    • +
    • +

Adhering to institutional restrictions and compliance policies around the data they upload and provide access to/from NERC. At NERC, we only support storing internal data, i.e., information you choose to keep confidential but whose disclosure would not cause material harm to you, your users, or your institution. Your institution may have already classified and categorized data and implemented security policies and guidance for each category. If your project includes sensitive data and information, then you should contact NERC's admins as soon as possible to discuss other potential options.

      +
    • +
    • +

Backups and/or snapshots of volumes/data, configurations, objects, and their state are the user's responsibility; these are useful in case users accidentally delete or lose their data. NERC admins cannot recover lost data. In addition, while NERC stores data with high redundancy to deal with computer or disk failures, PIs should ensure they have off-site backups for disaster recovery, e.g., to deal with occasional disruptions and outages due to natural disasters that impact the MGHPCC data center.

      +
    • +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/cost-billing/billing-faqs/index.html b/get-started/cost-billing/billing-faqs/index.html new file mode 100644 index 00000000..efe777bb --- /dev/null +++ b/get-started/cost-billing/billing-faqs/index.html @@ -0,0 +1,3424 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Billing Frequently Asked Questions (FAQs)

    +

    Our primary focus is to deliver outstanding on-prem cloud services, prioritizing +reliability, security, and cutting-edge solutions to meet your research and teaching +requirements. To achieve this, we have implemented a cost-effective pricing model +that enables us to maintain, enhance, and sustain the quality of our services. By +adopting consistent cost structures across all institutions, we can make strategic +investments in infrastructure, expand our service portfolio, and enhance our +support capabilities for a seamless user experience.

    +

Most of the institutions using our services have an MOU (Memorandum of Understanding) with us to be better aligned with a number of research regulations, policies, and requirements. If your institution does not have an MOU with us, please have someone from your faculty or administration contact us soon to discuss it by emailing us at help@nerc.mghpcc.org or by submitting a new ticket at the NERC's Support Ticketing System.

    +

    Questions & Answers

    +
    +1. As a new NERC PI for the first time, am I entitled to any credits? +
      +
    • +

      Yes, you will receive up to $1000 of credit for the first month only.

      +
    • +
    • +

      This credit is not transferable to subsequent months.

      +
    • +
    • +

      This does not apply to the usage of GPU resources.

      +
    • +
    +
    +
    +2. How often will I be billed? +

    You or your institution will be billed monthly within the first week of each +month.

    +
    +
    +3. If I have an issue with my bill, who do I contact? +

    Please send your requests by emailing us at +help@nerc.mghpcc.org +or, by submitting a new ticket at the NERC's Support Ticketing System.

    +
    +
    +4. How do I control costs? +

    Upon creating a project, you will set these resource limits (quotas) for +OpenStack (VMs), OpenShift (containers), and storage through +ColdFront. This is the maximum +amount of resources you can consume at one time.

    +
    +
    +5. Are we invoicing for CPUs/GPUs only when the VM or Pod is active? +

    Yes. You will only be billed based on your utilization (cores, memory, GPU) +when VMs exist (even if they are Stopped!) or when pods are running. +Utilization will be translated into billable Service Units (SUs).

    +

    Persistent storage related to an OpenStack VM or OpenShift Pod will continue +to be billed even when the VM is stopped or the Pod is not running.

    +
    +
    +6. Am I going to incur costs for expired allocations? +

    Currently, a project will continue to be able to utilize expired allocations. +So this will continue to incur costs for you.

    +
    +
    +7. Are VMs invoiced even when shut down? +

    Yes, as long as VMs are using resources they are invoiced. In order not to be +billed for a VM you must delete +the Instance/VM. It is a good idea to create a snapshot of your VM +prior to deleting it.

    +
    +
    +8. Will OpenStack & OpenShift show on a single invoice? +

    Yes. In the near future customers of NERC will be able to view per project service +utilization via the XDMoD tool.

    +
    +
    +9. What happens when a Flavor is expanded during the month? +

    a. Flavors cannot be expanded.

    +

    b. You can create a snapshot of an existing VM/Instance and, with that snapshot, +deploy a new flavor of VM/Instance.

    +
    +
    +10. Is storage charged separately? +

    Yes, but on the same invoice. To learn more, see our page on Storage.

    +
    +
    +11. Will I be charged for storage attached to shut-off instances? +

    Yes.

    +
    +
    +12. Are we Invoicing Storage using ColdFront Requests or resource usage? +

    a. Storage is invoiced based on Coldfront Requests.

    +

    b. When you request additional storage through Coldfront, invoicing on that +additional storage will occur when your request is fulfilled. When you request +a decrease in storage through +Request change using ColdFront, +your invoicing will adjust accordingly when your request is made. In both cases +'invoicing' means 'accumulate hours for whatever storage quantity was added +or removed'.

    +

    For example:

    +
      +
    1. +

      I request an increase in storage, the request is approved and processed.

      +
        +
      • At this point we start Invoicing.
      • +
      +
    2. +
    3. +

      I request a decrease in storage.

      +
        +
      • The invoicing for that storage stops immediately.
      • +
      +
    4. +
    +
    +
    +13. For OpenShift, what values are we using to track CPU & Memory? +

    a. For invoicing we utilize requests.cpu for tracking CPU utilization & +requests.memory for tracking memory utilization.

    +

    b. Utilization will be capped based on the limits you set in ColdFront for +your resource allocations.

    +
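For readers who define workloads programmatically, the sketch below illustrates where requests.cpu and requests.memory live in a pod's container spec, using the official Kubernetes Python client; the container name, image, and values shown are only examples, and your ColdFront allocation limits cap what can actually be requested:

```python
from kubernetes import client

# Example values only; NERC invoicing tracks the "requests" values,
# while ColdFront allocation limits cap what you may set here.
resources = client.V1ResourceRequirements(
    requests={"cpu": "2", "memory": "4Gi"},
    limits={"cpu": "2", "memory": "4Gi"},
)

container = client.V1Container(
    name="worker",                        # example name
    image="quay.io/example/app:latest",   # example image
    resources=resources,
)

pod_spec = client.V1PodSpec(containers=[container])
print(pod_spec.containers[0].resources.requests)
```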
    +
    +14. If a single Pod exceeds the resources for a GPU SU, how is it invoiced? +

    It will be invoiced as 2 or more GPU SU's depending on how many multiples of +the resources it exceeds.

    +
    +
    +15. How often will we change the pricing? +

    a. Our current plan is no more than once a year for existing offerings.

    +

    b. Additional offerings may be added throughout the year (i.e. new types of +hardware or storage).

    +
    +
    +16. Is there any NERC Pricing Calculator? +

    Yes. Start your estimate with no commitment based on your resource needs by +using this online tool. For more information about how to use this tool, see +How to use the NERC Pricing Calculator.

    +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/cost-billing/billing-process-for-bu/index.html b/get-started/cost-billing/billing-process-for-bu/index.html new file mode 100644 index 00000000..7210ca16 --- /dev/null +++ b/get-started/cost-billing/billing-process-for-bu/index.html @@ -0,0 +1,3307 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Billing Process for Boston University

    +

Boston University has elected to receive a centralized invoice for its university investigators' and their designated users' use of NERC services. IS&T will then internally recover the cost from investigators. The process for cost recovery is currently being implemented, and we will reach out to investigators once the process is complete to obtain internal funding information to process your monthly bill.

    +

    Subsidization of Boston University’s Use of NERC

    +

    Boston University will subsidize a portion of NERC usage by its investigators. +The University will subsidize $100 per month of an investigator’s total usage on +NERC, regardless of the number of NERC projects an investigator has established. +Monthly subsidies cannot be carried over to subsequent months. The subsidized +amount and method are subject to change, and any adjustments will be conveyed +directly to investigators and updated on this page.

    +

Please direct any questions about BU's billing process to us by emailing help@nerc.mghpcc.org or submitting a new ticket to the NERC's Support Ticketing System. Questions about a specific invoice that you have received can be sent to IST-ISR-NERC@bu.edu.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/cost-billing/billing-process-for-harvard/index.html b/get-started/cost-billing/billing-process-for-harvard/index.html new file mode 100644 index 00000000..6546e744 --- /dev/null +++ b/get-started/cost-billing/billing-process-for-harvard/index.html @@ -0,0 +1,3341 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Billing Process for Harvard University

    +

Direct Billing for NERC is a convenience service for Harvard Faculty and Departments. HUIT will pay the monthly invoices and then allocate the monthly usage costs on the Harvard University General Ledger. This follows a similar pattern to how other Public Cloud Provider (AWS, Azure, GCP) accounts are billed and leverages the HUIT Central Billing Portal. Your HUIT Customer Code will be matched to your NERC Project Allocation Name as a Billing Asset. In this process you will be asked for your GL billing code, which you can change as needed per project. Please be aware that only a single billing code is allowed per billing asset. Therefore, if you have multiple projects with different funds, please create a separate project for each fund if you are able; otherwise, you will need to take care of this with internal journals inside of your department or lab. During each monthly billing cycle, the NERC team will upload the billing comma-separated values (CSV) files to an AWS Object Storage (S3) bucket accessible to the HUIT Central Billing system. The HUIT Central Billing system ingests the billing data files provided by NERC, maps the usage costs to HUIT Billing customers (and GL codes), and then includes those amounts in HUIT's monthly billing of all customers. This is an automated process.

    +

    Please follow these two steps to ensure proper billing setup:

    +
      +
    1. +

      Each Harvard PI must have a HUIT billing account linked to their NetID (abc123), +and NERC requires a HUIT "Customer Code" for billing purposes. To create a +HUIT billing account, sign up here +with your HarvardKey. The PI's submission of the corresponding HUIT +"Customer Code" is now seamlessly integrated into the PI user account role +submission process. This means that PIs can provide the corresponding HUIT +"Customer Code" either while submitting NERC's PI Request Form +or by submitting a new ticket at NERC's Support Ticketing System +under the "NERC PI Account Request" option in the Help Topic dropdown menu.

      +
      +

      What if you already have an existing Customer Code?

      +

      Please note that if you already have an existing active NERC account, you +need to provide your HUIT Customer Code to NERC. If you think your department +may already have a HUIT account but you don’t know the corresponding Customer +Code then you can contact HUIT Billing +to get the required Customer Code.

      +
      +
    2. +
    3. +

      During the Resource Allocation review and approval process, we will utilize the +HUIT "Customer Code" provided by the PI in step #1 to align it with the approved +allocation. Before confirming the mapping of the Customer Code to the Resource +Allocation, we will send an email to the PI to confirm its accuracy and then approve +the requested allocation. Subsequently, after the allocation is approved, we will +request the PI to initiate a change request +to input the correct "Customer Code" into the allocation's "Institution-Specific +Code" attribute's value.

      +
      +

      Very Important Note

      +

      We recommend keeping your "Institution-Specific Code" updated at all +times, ensuring it accurately reflects your current and valid Customer +Code. The PI or project manager(s) have the authority to request changes +for updating the "Institution-Specific Code" attribute for each resource +allocation. They can do so by submitting a Change Request as outlined here.

      +
      +

      How to view Project Name, Project ID & Institution-Specific Code?

      +

      By clicking on the Allocation detail page through ColdFront, you can access +information about the allocation of each resource, including OpenStack and +OpenShift as described here. +You can review and verify Allocated Project Name, Allocated Project +ID and Institution-Specific Code attributes, which are located under +the "Allocation Attributes" section on the detail page as +described here.

      +
      +
      +

      Once we confirm the six-digit HUIT Customer Code for the PI and the correct +resource allocation, the NERC admin team will initiate the creation of a new +ServiceNow ticket. This will be done by reaching out to +HUIT Billing +or directly emailing HUIT Billing at huit-billing@harvard.edu +for the approved and active allocation request.

      +

      In this email, the NERC admin needs to specify the Allocated Project ID, +Allocated Project Name, Customer Code, and PI's Email address. +Then, the HUIT billing team will generate a unique Asset ID to be utilized +by the Customer's HUIT billing portal.

      +
      +

      Important Information regarding HUIT Billing SLA

      +

      Please note that we will require the PI or Manager(s) to repeat step #2 +for any new resource allocation(s) as well as renewed allocation(s). +Additionally, the HUIT Billing SLA for new Cloud Billing assets is 2 +business days, although most requests are typically completed within +8 hours.

      +
      +
      +

      Harvard University Security Policy Information

      +

Please note that all assets deployed to your NERC project must be compliant with University Security policies as described here. Please familiarize yourself with the Harvard University Information Security Policy and your role in securing data. If you have any questions about how security should be implemented in the Cloud, please contact your school security officer: "Harvard Security Officer".

      +
      +
    4. +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/cost-billing/billing-process-for-my-institution/index.html b/get-started/cost-billing/billing-process-for-my-institution/index.html new file mode 100644 index 00000000..7921b52e --- /dev/null +++ b/get-started/cost-billing/billing-process-for-my-institution/index.html @@ -0,0 +1,3342 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Billing Process for My Institution

    +

    Memorandum of Understanding (MOU)

    +

    The New England Research Cloud (NERC) is a shared service offered through the +Massachusetts Green High Performance Computing Center (MGHPCC). The MGHPCC will +enter into a lightweight Memorandum of Understanding (MOU) with each institutional +customer that consumes NERC services. The MOU is intended to ensure the institution +maintains access to valuable and relevant cloud services provided by the MGHPCC +via the NERC to be better aligned to a number of research regulations, policies, +and requirements and also ensure NERC remains sustainable over time.

    +

    Institutions with established MOUs and Billing Processes

    +

For cost recovery purposes, institutional customers may elect to receive one invoice for the usage of NERC services by their PIs and recover the costs internally. Every month, the NERC team will export, back up, and securely store the billing data for all PIs in the form of comma-separated values (CSV) files and provide it to the MGHPCC for billing purposes.

    +

    The following institutions using our services have established MOU as well as +billing processes with us:

    +
      +
    1. +

      Harvard University

      +
    2. +
    3. +

      Boston University

      +
    4. +
    +
    +

    Upcoming MOU with other institutions

    +

    We are in the process of establishing MOUs for institutions not listed above.

    +
    +

PIs from other institutions not listed above can still utilize NERC services with the understanding that they are directly accountable for managing their usage and ensuring all service charges are paid promptly. If you have any common questions or need further information, see our Billing FAQs for comprehensive answers.

    +

    If your institution does not have an MOU with us, please have someone from your +faculty or administration contact us to discuss it soon by emailing us at +help@nerc.mghpcc.org +or, by submitting a new ticket at the NERC's Support Ticketing System.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/cost-billing/how-pricing-works/index.html b/get-started/cost-billing/how-pricing-works/index.html new file mode 100644 index 00000000..d552d704 --- /dev/null +++ b/get-started/cost-billing/how-pricing-works/index.html @@ -0,0 +1,3628 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    How does NERC pricing work?

    +
    +

    As a new PI using NERC for the first time, am I entitled to any credits?

    +

As a new PI using NERC for the first time, you might wonder if you get any credits. Yes, you'll receive up to $1000 of credit for the first month only. Remember, this credit cannot be used in the following months, and it does not apply to GPU resource usage.

    +
    +

NERC offers a pay-as-you-go approach to pricing for our cloud infrastructure offerings (Tiers of Service), including Infrastructure-as-a-Service (IaaS) – Red Hat OpenStack and Platform-as-a-Service (PaaS) – Red Hat OpenShift. The exception is the Storage quotas in the NERC Storage Tiers, where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool. For NERC (OpenStack) Resource Allocations, storage quotas are specified by the "OpenStack Volume Quota (GiB)" and "OpenStack Swift Quota (GiB)" allocation attributes, whereas for NERC-OCP (OpenShift) Resource Allocations, storage quotas are specified by the "OpenShift Request on Storage Quota (GiB)" and "OpenShift Limit on Ephemeral Storage Quota (GiB)" allocation attributes. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers.

NERC offers a flexible cost model where an institution (with a per-project breakdown) is billed solely for the duration of the specific services required. Access is based on project-approved resource quotas, eliminating runaway usage and charges. There are no long-term contract obligations or complicated licensing agreements. Each institution will enter a lightweight MOU with MGHPCC that defines the services and billing model.

    +

    Calculations

    +

    Service Units (SUs)

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Name | vGPU | vCPU | RAM (GiB) | Current Price |
| ------------ | ---- | ---- | --------- | ------------- |
| CPU | 0 | 1 | 4 | $0.013 |
| A100 GPU | 1 | 24 | 74 | $1.803 |
| A100sxm4 GPU | 1 | 32 | 240 | $2.078 |
| V100 GPU | 1 | 48 | 192 | $1.214 |
| K80 GPU | 1 | 6 | 28.5 | $0.463 |
    +

    Breakdown

    +

    CPU/GPU SUs

    +

Service Units (SUs) can only be purchased in whole units. We will charge for Pods (summed up by Project) and VMs on a per-hour basis for any portion of an hour they are used, and any VM "flavor"/Pod reservation is charged as a multiple of the base SU for the largest resource it reserves. A short calculation sketch follows the examples below.

    +

    GPU SU Example:

    +
      +
    • +

      A Project or VM with:

      +

      1 A100 GPU, 24 vCPUs, 95MiB RAM, 199.2hrs

      +
    • +
    • +

      Will be charged:

      +

      1 A100 GPU SUs x 200hrs (199.2 rounded up) x $1.803

      +

      $360.60

      +
    • +
    +

    OpenStack CPU SU Example:

    +
      +
    • +

      A Project or VM with:

      +

      3 vCPU, 20 GiB RAM, 720hrs (24hr x 30days)

      +
    • +
    • +

      Will be charged:

      +

      5 CPU SUs due to the extra RAM (20GiB vs. 12GiB(3 x 4GiB)) x 720hrs x $0.013

      +

      $46.80

      +
    • +
    +
    +
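To make the rounding and multiplier rules above concrete, here is a minimal Python sketch that reproduces the two worked examples. It is my own illustration, not NERC's billing code; the SU shapes come from the Service Units table above, and the "largest reserved resource" interpretation is an assumption based on the breakdown described here.

```python
# Unofficial sketch of the SU charge described above.
import math

SU_SHAPES = {
    # name: (vGPU, vCPU, RAM in GiB, price per SU-hour) -- from the table above
    "CPU":      (0, 1, 4, 0.013),
    "A100 GPU": (1, 24, 74, 1.803),
}

def vm_charge(su_name, vgpu, vcpu, ram_gib, hours):
    """Charge = (whole SUs covering the largest reserved resource) x (hours rounded up) x rate."""
    su_gpu, su_vcpu, su_ram, rate = SU_SHAPES[su_name]
    ratios = [vcpu / su_vcpu, ram_gib / su_ram]
    if su_gpu:
        ratios.append(vgpu / su_gpu)
    su_count = math.ceil(max(ratios))          # SUs are purchased in whole units
    return su_count * math.ceil(hours) * rate  # any portion of an hour counts as a full hour

# GPU SU example: 1 A100 GPU, 24 vCPUs, 199.2 hrs -> 1 SU x 200 hrs x $1.803 ~= $360.60
print(vm_charge("A100 GPU", vgpu=1, vcpu=24, ram_gib=74, hours=199.2))  # RAM within one A100 SU
# OpenStack CPU SU example: 3 vCPU, 20 GiB RAM, 720 hrs -> 5 SUs x 720 hrs x $0.013 ~= $46.80
print(vm_charge("CPU", vgpu=0, vcpu=3, ram_gib=20, hours=720))
```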

    Are VMs invoiced even when shut down?

    +

    Yes, VMs are invoiced as long as they are utilizing resources. In order not +to be billed for a VM, you must delete +your Instance/VM. It is advisable to create a snapshot +of your VM prior to deleting it, ensuring you have a backup of your data and +configurations. By proactively managing your VMs and resources, you can +optimize your usage and minimize unnecessary costs.

    +

    If you have common questions or need more information, refer to our +Billing FAQs for comprehensive +answers.

    +
    +

    OpenShift CPU SU Example:

    +
      +
    • +

      Project with 3 Pods with:

      +

      i. 1 vCPU, 3 GiB RAM, 720hrs (24hr*30days)

      +

      ii. 0.1 vCPU, 8 GiB RAM, 720hrs (24hr*30days)

      +

      iii. 2 vCPU, 4 GiB RAM, 720hrs (24hr*30days)

      +
    • +
    • +

      Project Will be charged:

      +

      RoundUP(Sum(

      +

1 CPU SU due to the first pod * 720hrs * $0.013

      +

      2 CPU SUs due to extra RAM (8GiB vs 0.4GiB(0.1*4GiB)) * 720hrs * $0.013

      +

      2 CPU SUs due to more CPU (2vCPU vs 1vCPU(4GiB/4)) * 720hrs * $0.013

      +

      ))

      +

      =RoundUP(Sum(720(1+2+2)))*0.013

      +

      $46.80

      +
    • +
    +
    +

    How to calculate cost for all running OpenShift pods?

    +

If you prefer a formula for the OpenShift pods, here it is:

    +

Project SU hour count = RoundUp(Sum(Pod1 SU hour count + Pod2 SU hour count + ...))

    +
    +

    OpenShift Pods are summed up to the project level so that fractions of CPU/RAM +that some pods use will not get overcharged. There will be a split between CPU and +GPU pods, as GPU pods cannot currently share resources with CPU pods.

    +
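Here is the same project-level roll-up as a short Python sketch. Again, this is my own unofficial illustration, assuming the 1 vCPU / 4 GiB CPU SU shape used in the example above.

```python
# Unofficial sketch of the project-level OpenShift SU-hour formula above.
import math

def pod_su(vcpu, ram_gib, su_vcpu=1, su_ram_gib=4):
    """SUs reserved by one pod: driven by whichever of vCPU or RAM needs more whole SUs."""
    return math.ceil(max(vcpu / su_vcpu, ram_gib / su_ram_gib))

def project_su_hours(pods):
    """pods: list of (vcpu, ram_gib, hours). Summed per pod, rounded up once at the project level."""
    return math.ceil(sum(pod_su(vcpu, ram) * hours for vcpu, ram, hours in pods))

# The three pods from the OpenShift CPU SU example above:
pods = [(1, 3, 720), (0.1, 8, 720), (2, 4, 720)]
su_hours = project_su_hours(pods)        # RoundUp(720 x (1 + 2 + 2)) = 3600
print(su_hours, su_hours * 0.013)        # ~= 3600 SU-hours, about $46.80
```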

    Storage

    +

Storage is charged separately at a rate of $0.009 TiB/hr (about $9.00E-6 GiB/hr). OpenStack volumes remain provisioned until they are deleted. VMs reserve volumes, and you can also create extra volumes yourself. For OpenShift pods, ephemeral storage is only provisioned while the pod is active, while persistent volumes remain provisioned until they are deleted. A short calculation sketch follows the storage examples below.

    +
    +

    Very Important: Requested/Approved Allocated Storage Quota and Cost

    +

The Storage cost is determined by your requested and approved allocation values. Once approved, these Storage quotas will need to be reserved from the total NESE storage pool for both NERC (OpenStack) and NERC-OCP (OpenShift) resources. For NERC (OpenStack) Resource Allocations, storage quotas are specified by the "OpenStack Volume Quota (GiB)" and "OpenStack Swift Quota (GiB)" allocation attributes, whereas for NERC-OCP (OpenShift) Resource Allocations, storage quotas are specified by the "OpenShift Request on Storage Quota (GiB)" and "OpenShift Limit on Ephemeral Storage Quota (GiB)" allocation attributes.

    +

Even if you have deleted all volumes, snapshots, and object storage buckets and objects in your OpenStack and OpenShift projects, it is essential to adjust the approved values for your NERC (OpenStack) and NERC-OCP (OpenShift) resource allocations to zero (0); otherwise, you will still incur a charge for the approved storage, as explained in Billing FAQs.

    +

    Keep in mind that you can easily scale and expand your current resource +allocations within your project. Follow this guide +on how to use NERC's ColdFront to reduce your Storage quotas for NERC (OpenStack) +allocations and this guide +for NERC-OCP (OpenShift) allocations.

    +
    +

    Storage Example 1:

    +
      +
    • +

      Volume or VM with:

      +

      500GiB for 699.2hrs

      +
    • +
    • +

      Will be charged:

      +

      .5 Storage TiB SU (.5 TiB x 700hrs) x $0.009 TiB/hr

      +

      $3.15

      +
    • +
    +

    Storage Example 2:

    +
      +
    • +

      Volume or VM with:

      +

      10TiB for 720hrs (24hr x 30days)

      +
    • +
    • +

      Will be charged:

      +

      10 Storage TiB SU (10TiB x 720 hrs) x $0.009 TiB/hr

      +

      $64.80

      +
    • +
    +

Storage includes all types of storage: Object, Block, Ephemeral & Image.

    +
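For completeness, here is the storage arithmetic from the two examples above as a short unofficial Python sketch (the $0.009 TiB/hr rate is quoted above; hours are rounded up to whole hours):

```python
# Unofficial sketch of the storage charge described above.
import math

STORAGE_RATE_PER_TIB_HR = 0.009

def storage_charge(tib, hours):
    """Charge = provisioned TiB x hours (rounded up) x $0.009/TiB/hr."""
    return tib * math.ceil(hours) * STORAGE_RATE_PER_TIB_HR

print(storage_charge(0.5, 699.2))  # Storage Example 1: 500 GiB for 699.2 hrs -> ~$3.15
print(storage_charge(10, 720))     # Storage Example 2: 10 TiB for 720 hrs    -> ~$64.80
```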

    High-Level Function

    +

To provide a more practical way to calculate your usage, here is how the calculation works for OpenStack and OpenShift.

    +
      +
    1. +

      OpenStack = (Resource (vCPU/RAM/vGPU) assigned to VM flavor converted to +number of equivalent SUs) * (time VM has been running), rounded up to a whole +hour + Extra storage.

      +
      +

      NERC's OpenStack Flavor List

      +

You can find the most up-to-date information on NERC's current OpenStack flavors with corresponding SUs by referring to this page.

      +
      +
    2. +
    3. +

OpenShift = (Resource (vCPU/RAM) requested by Pod converted to the number of SUs) * (time Pod was running), summed up to the project level and rounded up to the whole hour.

      +
    4. +
    +

    How to Pay?

    +

    To ensure a comprehensive understanding of the billing process and payment options +for NERC offerings, we advise PIs/Managers to visit individual pages designated +for each institution. These pages provide +detailed information specific to each organization's policies and procedures +regarding their billing. By exploring these dedicated pages, you can gain insights +into the preferred payment methods, invoicing cycles, breakdowns of cost components, +and any available discounts or offers. Understanding the institution's unique +approach to billing ensures accurate planning, effective financial management, +and a transparent relationship with us.

    +

If you have any questions or need further information, see our Billing FAQs for comprehensive answers.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/cost-billing/images/cost-estimator-bottom-sheets.png b/get-started/cost-billing/images/cost-estimator-bottom-sheets.png new file mode 100644 index 00000000..d34e0102 Binary files /dev/null and b/get-started/cost-billing/images/cost-estimator-bottom-sheets.png differ diff --git a/get-started/cost-billing/images/su.png b/get-started/cost-billing/images/su.png new file mode 100644 index 00000000..31c7da53 Binary files /dev/null and b/get-started/cost-billing/images/su.png differ diff --git a/get-started/cost-billing/nerc-pricing-calculator/index.html b/get-started/cost-billing/nerc-pricing-calculator/index.html new file mode 100644 index 00000000..afbdc785 --- /dev/null +++ b/get-started/cost-billing/nerc-pricing-calculator/index.html @@ -0,0 +1,3273 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    NERC Pricing Calculator

    +

The NERC Pricing Calculator is a Google Sheets-based tool for estimating the cost of utilizing various NERC resources in different NERC service offerings. It offers a user-friendly interface, allowing users to input their requirements and customize configurations to generate accurate and tailored cost estimates for optimal budgeting and resource allocation.

    +

    Start your estimate with no commitment, and explore NERC services and pricing for +your research needs by using this online tool.

    +
    +

    How to use the NERC Pricing Calculator?

    +

Please note: you need to make a copy of this tool before estimating the cost. Once copied, you can update the corresponding resource type columns' values on your own working sheet, which will reflect your potential Service Units (SU), Rate, and cost per Hour, Month, and Year. This tool has 4 sheets at the bottom, as shown here: Estimator Available Sheets. If you would like to calculate your cost estimates based on the available NERC OpenStack flavors (which define the compute, memory, and storage capacity of your dedicated instances), you can use the second sheet, titled "OpenStack Flavor". For cost estimating the NERC OpenShift resources, you can use the first sheet, titled "Calculate SU", and input pod-specific resource requests in each row. If you are scaling a pod to more than one replica, enter a new row for each scaled pod. For Storage cost, use the third sheet, titled "Calculate Storage". The total cost is then reflected in the last sheet, titled "Total Cost".

    +
    +

For more information about how NERC pricing works, see How does NERC pricing work, and to learn more about the billing process for your own institution, see Billing Process for My Institution.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/create-a-user-portal-account/index.html b/get-started/create-a-user-portal-account/index.html new file mode 100644 index 00000000..0be5674f --- /dev/null +++ b/get-started/create-a-user-portal-account/index.html @@ -0,0 +1,3444 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    User Account Types

    +

    NERC offers two types of user accounts: a Principal Investigator (PI) Account +and a General User Account. All General Users must be assigned to their project +by an active NERC PI or by one of the delegated project manager(s), as +described here. Then, those project +users can be added to the resource allocation during a new allocation request or +at a later time.

    +
    +

    Principal Investigator Eligibility Information

    +
      +
    • +

MGHPCC consortium members, whereby they enter into a service agreement with MGHPCC for the NERC services.

      +
    • +
    • +

      Non-members of MGHPCC can also be PIs of NERC Services, but must also have +an active non-member agreement with MGHPCC.

      +
    • +
    • +

      External research focused institutions will be considered on a case-by-case +basis and are subject to an external customer cost structure.

      +
    • +
    +
    +

A PI account can request allocations of NERC resources, grant access to other general users enabling them to log into NERC's computational project space, and delegate its responsibilities to other collaborators from the same institution or elsewhere as managers using NERC's ColdFront interface, as described here.

    +

    Getting Started

    +

Any faculty, staff, student, or external collaborator must request a user account through the MGHPCC Shared Services (MGHPCC-SS) Account Portal, also known as "RegApp". This is a web-based, single point-of-entry to the NERC system that displays a user welcome page. The welcome page of the account registration site displays instructions on how to register a General User account on NERC, as shown in the image below:

    +

    MGHPCC Shared Services (MGHPCC-SS) Account Portal Welcome Page

    +

There are two options: either register for a new account or manage an existing one. If you are new to NERC and want to register as a new MGHPCC-SS user, click on the "Register for an Account" button. This will redirect you to a new web page which shows details about how to register for a new MGHPCC-SS user account. NERC uses CILogon, which supports login using either your institutional or a commercial identity provider (IdP).

    +

    Clicking the "Begin MGHPCC-SS Account Creation Process" button will initiate the +account creation process. You will be redirected to a site managed by CILogon +where you will select your institutional or commercial identity provider, as +shown below:

    +

    CILogon Page

    +

    Once selected, you will be redirected to your institutional or commercial identity +provider, where you will log in, as shown here:

    +

    Institutional IdP Login Page

    +

    After a successful log on, your browser will be redirected back to the MGHPCC-SS +Registration Page and ask for a review and confirmation of creating your account +with fetched information to complete the account creation process.

    +

    User Account Review Before Creation Page

    +
    +

    Very Important

    +

If you don't click the "Create MGHPCC-SS Account" button, your account will not be created, so this is a very important step. Review the information carefully, make any corrections that you need, and fill in any blank/missing fields such as "Research Domain". Please read the End User Level Agreement (EULA) and accept the terms by checking the checkbox in this form.

    +
    +

    Once you have reviewed and verified that all your user information in this form +is correct, only then click the "Create MGHPCC-SS Account" button. This will +automatically send an email to your email address with a link to validate and +confirm your account information.

    +

    User Account Email Verification Page

    +

    Once you receive an "MGHPCC-SS Account Creation Validation" email, review your +user account information to ensure it is correct. Then, click on the provided +validation web link and enter the unique account creation Confirmation Code +provided in the email as shown below:

    +

    MGHPCC-SS Account Creation Validation

    +

    Once validated, you need to ensure that your user account is created and valid +by viewing the following page:

    +

    Successful Account Validation Page

    +
    +

    Important Note

    +

    If you have an institutional identity, it's preferable to use that identity +to create your MGHPCC-SS account. Institutional identities are vetted by identity +management teams and provide a higher level of confidence to resource owners +when granting access to resources. You can only link one university account +to an MGHPCC-SS account; if you have multiple university accounts, you will +only be able to link one of those accounts to your MGHPCC-SS account. If, at +a later date, you want to change which account is connected to your MGHPCC-SS +identity, you can do so by contacting help@mghpcc.org.

    +
    +

    How to update and modify your MGHPCC-SS account information?

    +
      +
    1. +

      Log in to the RegApp using your MGHPCC-SS account.

      +
    2. +
    3. +

      Click on "Manage Your MGHPCC-SS Account" button as shown below:

      +

      MGHPCC-SS Account Update

      +
    4. +
    5. +

      Review your currently saved account information, make any necessary corrections +or updates to fields, and then click on the "Update MGHPCC-SS Account" button.

      +
    6. +
    7. +

      This will send an email to verify your updated account information, so please +check your email address.

      +
    8. +
    9. +

      Confirm and validate the new account details by clicking the provided validation +web link and entering the unique Confirmation Code provided in the email as +shown below:

      +

      MGHPCC-SS Account Update Validation

      +
    10. +
    +

    How to request a Principal Investigator (PI) Account?

    +

    The process for requesting and obtaining a PI Account is relatively simple. +You can fill out this NERC Principal Investigator (PI) Account Request form +to initiate the process.

    +

    Alternatively, users can request a Principal Investigator (PI) user account +by submitting a new ticket at the NERC's Support Ticketing System +under the "NERC PI Account Request" option in the Help Topic dropdown menu, +as shown in the image below:

    +

    the NERC's Support Ticketing System PI Ticket

    +
    +

    Information

    +

    Once your PI user request is reviewed and approved by the NERC's admin, you +will receive an email confirmation from NERC's support system, i.e., +help@nerc.mghpcc.org. +Then, you can access NERC's ColdFront resource allocation management portal +using the PI user role, as described here.

    +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/get-started/images/CILogon.png b/get-started/images/CILogon.png new file mode 100644 index 00000000..8931dd9d Binary files /dev/null and b/get-started/images/CILogon.png differ diff --git a/get-started/images/account-email-verification-page.png b/get-started/images/account-email-verification-page.png new file mode 100644 index 00000000..84c8826f Binary files /dev/null and b/get-started/images/account-email-verification-page.png differ diff --git a/get-started/images/account_creation_confirmation.png b/get-started/images/account_creation_confirmation.png new file mode 100644 index 00000000..58ddbb02 Binary files /dev/null and b/get-started/images/account_creation_confirmation.png differ diff --git a/get-started/images/account_update.png b/get-started/images/account_update.png new file mode 100644 index 00000000..89af616d Binary files /dev/null and b/get-started/images/account_update.png differ diff --git a/get-started/images/account_update_confirmation.png b/get-started/images/account_update_confirmation.png new file mode 100644 index 00000000..ffbd9125 Binary files /dev/null and b/get-started/images/account_update_confirmation.png differ diff --git a/get-started/images/institutional_idp.png b/get-started/images/institutional_idp.png new file mode 100644 index 00000000..84cb4d94 Binary files /dev/null and b/get-started/images/institutional_idp.png differ diff --git a/get-started/images/osticket-pi-request.png b/get-started/images/osticket-pi-request.png new file mode 100644 index 00000000..d10919ba Binary files /dev/null and b/get-started/images/osticket-pi-request.png differ diff --git a/get-started/images/regapp-welcome-page.png b/get-started/images/regapp-welcome-page.png new file mode 100644 index 00000000..039a3582 Binary files /dev/null and b/get-started/images/regapp-welcome-page.png differ diff --git a/get-started/images/successful-account-validation.png b/get-started/images/successful-account-validation.png new file mode 100644 index 00000000..cb62ccfd Binary files /dev/null and b/get-started/images/successful-account-validation.png differ diff --git a/get-started/images/user-account-review-page.png b/get-started/images/user-account-review-page.png new file mode 100644 index 00000000..94dbf4eb Binary files /dev/null and b/get-started/images/user-account-review-page.png differ diff --git a/get-started/images/user-flow-NERC.png b/get-started/images/user-flow-NERC.png new file mode 100644 index 00000000..711bf855 Binary files /dev/null and b/get-started/images/user-flow-NERC.png differ diff --git a/get-started/user-onboarding-on-NERC/index.html b/get-started/user-onboarding-on-NERC/index.html new file mode 100644 index 00000000..a10eb455 --- /dev/null +++ b/get-started/user-onboarding-on-NERC/index.html @@ -0,0 +1,3306 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    User Onboarding Process Overview

    +

NERC's Research allocations are available to faculty members and researchers, including postdoctoral researchers and students, at U.S.-based institutions in New England. In order to get access to resources provided by NERC's computational infrastructure, you must first register and obtain a user account.

    +

    The overall user flow can be summarized using the following sequence diagram:

    +

    NERC user flow

    +
      +
    1. +

      All users including PI need to register to NERC via: https://regapp.mss.mghpcc.org/.

      +
    2. +
    3. +

      PI will send a request for a Principal Investigator (PI) user account role +by submitting: NERC's PI Request Form.

      +

      Alternatively, users can request a Principal Investigator (PI) user account +by submitting a new ticket at the NERC's Support Ticketing System +under the "NERC PI Account Request" option in the Help Topic dropdown menu, +as shown in the image below:

      +

      the NERC's Support Ticketing System PI Ticket

      +
      +

      Principal Investigator Eligibility Information

      +
        +
      • +

MGHPCC consortium members, whereby they enter into a service agreement with MGHPCC for the NERC services.

        +
      • +
      • +

        Non-members of MGHPCC can also be PIs of NERC Services, but must also have an active non-member agreement with MGHPCC.

        +
      • +
      • +

        External research focused institutions will be considered on a case-by-case basis and are subject to an external customer cost structure.

        +
      • +
      +
      +
    4. +
    5. +

      Wait until the PI request gets approved by the NERC's admin.

      +
    6. +
    7. +

Once a PI request is approved, the PI can add a new project and also search for and add user(s) to the project. Other general user(s) can see the project(s) once they are added to a project via: https://coldfront.mss.mghpcc.org.

      +
    8. +
    9. +

The PI or project manager can request a resource allocation, either NERC (OpenStack) or NERC-OCP (OpenShift), for the newly added project and select which user(s) can use the requested allocation.

      +
      +

      As a new NERC PI for the first time, am I entitled to any credits?

      +

As a new PI using NERC for the first time, you might wonder if you get any credits. Yes, you'll receive up to $1000 of credit for the first month only. Remember, this credit cannot be used in the following months, and it does not apply to GPU resource usage.

      +
      +
    10. +
    11. +

      Wait until the requested resource allocation gets approved by the NERC's admin.

      +
    12. +
    13. +

Once approved, the PI and the corresponding project users can go to either the NERC OpenStack Horizon web interface: https://stack.nerc.mghpcc.org or the NERC OpenShift web console: https://console.apps.shift.nerc.mghpcc.org, based on the approved Resource Type, and start using NERC's resources within the approved project quotas.

      +
    14. +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/images/NERC-Diagram-MOC.png b/images/NERC-Diagram-MOC.png new file mode 100644 index 00000000..8bc1f88f Binary files /dev/null and b/images/NERC-Diagram-MOC.png differ diff --git a/index.html b/index.html new file mode 100644 index 00000000..cc77c41a --- /dev/null +++ b/index.html @@ -0,0 +1,3246 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    + +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/javascripts/extra.js b/javascripts/extra.js new file mode 100644 index 00000000..e69de29b diff --git a/migration-moc-to-nerc/Step1/index.html b/migration-moc-to-nerc/Step1/index.html new file mode 100644 index 00000000..1fadf3b6 --- /dev/null +++ b/migration-moc-to-nerc/Step1/index.html @@ -0,0 +1,3411 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Creating NERC Project and Networks

    +

This process includes some waiting for emails and approvals. It is advised to start this process, then move to step 2, and continue with these steps once you receive approval.

    +

    Account Creation & Quota Request

    +
      +
    1. +

      Register for your new NERC account +here.

      +
        +
      1. Wait for an approval email.
      2. +
      +
    2. +
    3. +

      Register to be a PI for a NERC account +here.

      +
        +
      1. Wait for an approval email.
      2. +
      +
    4. +
    5. +

      Request the quota necessary for all of your MOC Projects to be added +to NERC here +(link also in PI approval email).

      +

      ColdFront_Login

      +
        +
      1. +

        Log in with your institution login by clicking on +Log in via OpenID Connect (highlighted in yellow above).

        +

        ColdFront_Projects

        +
      2. +
      3. +

        Under Projects>> Click on the name of your project +(highlighted in yellow above).

        +

        ColdFront_Projects

        +
      4. +
      5. +

        Scroll down until you see Request Resource Allocation +(highlighted in yellow above) and click on it.

        +

        ColdFront_Allocation

        +
      6. +
      7. +

        Fill out the Justification (highlighted in purple above) for +the quota allocation.

        +
      8. +
      9. +

Using the “MOC Instance Information” table you gathered from your MOC project, calculate the total number of Instances, VCPUs, and RAM, and use your “MOC Volume Information” table to calculate the Disk space you will need (a small calculation sketch follows this list).

        +
      10. +
      11. +

Using the up and down arrows (highlighted in yellow above), or by entering the number manually, select the multiple of 1 Instance, 2 vCPUs, 0 GPUs, 4GB RAM, 2 Volumes, 100GB Disk, and 1GB Object Storage that you will need.

        +
          +
1. For example, if I need 2 instances, 2 vCPUs, 3GB RAM, 3 Volumes, and 30GB of storage, I would type in 2 or click the up arrow once to select 2 units.
        2. +
        +
      12. +
      13. +

        Click Submit (highlighted in green above).

        +
      14. +
      +
    6. +
    7. +

      Wait for your allocation approval email.

      +
    8. +
    +
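If it helps, here is a small unofficial Python sketch of the unit calculation described above. It assumes the per-unit amounts listed in the allocation form (1 Instance, 2 vCPUs, 4GB RAM, 2 Volumes, 100GB Disk; object storage is omitted for simplicity) and simply takes the largest multiple that any single resource requires.

```python
# Unofficial helper for picking the ColdFront allocation multiple described above.
import math

PER_UNIT = {"instances": 1, "vcpus": 2, "ram_gb": 4, "volumes": 2, "disk_gb": 100}

def units_needed(totals):
    """totals: sums from your MOC Instance/Volume tables. Returns the multiple to request."""
    return max(math.ceil(totals[key] / PER_UNIT[key]) for key in PER_UNIT)

# The example above: 2 instances, 2 vCPUs, 3GB RAM, 3 Volumes, 30GB Disk -> 2 units
print(units_needed({"instances": 2, "vcpus": 2, "ram_gb": 3, "volumes": 3, "disk_gb": 30}))
```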

    Setup

    +

    Login to the Dashboard

    +
      +
    1. +

      Log into the +NERC OpenStack Dashboard +using your OpenID Connect password.

      +

      Dashboard_Login

      +
        +
      1. +

        Click Connect.

        +

        Dashboard_Login_CILogon

        +
      2. +
      3. +

        Select your institution from the drop down (highlighted in yellow +above).

        +
      4. +
      5. +

        Click Log On (highlighted in purple).

        +
      6. +
      7. +

        Follow your institution's log on instructions.

        +
      8. +
      +
    2. +
    +

    Setup NERC Network

    +
      +
    1. +

      You are then brought to the Project>Compute>Overview location of +the Dashboard.

      +

      Project_Comp_Overview

      +
        +
      1. +

        This will look very familiar as the MOC and NERC Dashboard are quite +similar.

        +
      2. +
      3. +

        Follow the instructions +here +to set up your network/s (you may also use the default_network +if you wish).

        +
          +
        1. The networks don't have to exactly match the MOC. You only need the +networks for creating your new instances (and accessing them once we +complete the migration).
        2. +
        +
      4. +
      5. +

        Follow the instructions +here +to set up your router/s (you may also use the default_router if you wish).

        +
      6. +
      7. +

        Follow the instructions +here +to set up your Security Group/s.

        +
          +
        1. This is where you can use your “MOC Security Group Information” +table to create similar Security Groups to the ones you had in the MOC.
        2. +
        +
      8. +
      9. +

        Follow the instructions +here +to set up your SSH Key-pair/s.

        +
      10. +
      +
    2. +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/migration-moc-to-nerc/Step2/index.html b/migration-moc-to-nerc/Step2/index.html new file mode 100644 index 00000000..06652855 --- /dev/null +++ b/migration-moc-to-nerc/Step2/index.html @@ -0,0 +1,3667 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    Identify Volumes, Instances & Security Groups on the MOC that need to be Migrated to the NERC

    +

    Please read the instructions in their entirety before proceeding. +Allow yourself enough time to complete them.

    +

Volume Snapshots will not be migrated. If you have a Snapshot you wish to back up, please “Create Volume” from it first.

    +

    Confirm Access and Login to MOC Dashboard

    +
      +
    1. Go to the MOC Dashboard.
    2. +
    +

    SSO / Google Login

    +
      +
    1. +

If you have SSO through your institution or Google, select Institution Account from the dropdown.

      +

      Login1

      +
    2. +
    3. +

      Click Connect.

      +
    4. +
    5. +

      Click on University Logins (highlighted in yellow below) +if you are using SSO with your Institution.

      +

      Login2

      +
        +
      1. Follow your Institution's login steps after that, and skip to +Gathering MOC information for the +Migration.
      2. +
      +
    6. +
    7. +

      Click Google (highlighted in purple above) if your SSO +is through Google.

      +
        +
      1. Follow standard Google login steps to get in this +way, and skip to Gathering MOC information for the +Migration.
      2. +
      +
    8. +
    +

    Keystone Credentials

    +
      +
    1. +

      If you have a standard login and password leave the dropdown +as Keystone Credentials.

      +

      Login3

      +
    2. +
    3. +

      Enter your User Name.

      +
    4. +
    5. +

      Enter your Password.

      +
    6. +
    7. +

      Click Connect.

      +
    8. +
    +

    Don't know your login?

    +
      +
    1. +

      If you do not know your login information please create a +Password Reset ticket.

      +

      OSticket1

      +
    2. +
    3. +

      Click Open a New Ticket (highlighted in yellow above).

      +

      OSticket2

      +
    4. +
    5. +

      Click the dropdown and select Forgot Pass & SSO Account +Link (highlighted in blue above).

      +
    6. +
    7. +

      In the text field (highlighted in purple above) provide +the Institution email, project you are working on and the email +address you used to create the account.

      +
    8. +
    9. +

      Click Create Ticket (highlighted in yellow above) and +wait for the pinwheel.

      +
    10. +
    11. +

      You will receive an email to let you know that the MOC support +staff will get back to you.

      +
    12. +
    +

    Gathering MOC information for the Migration

    +
      +
    1. +

      You are then brought to the Project>Compute>Overview location of the +Dashboard.

      +

      Project_Compute_Instance

      +
    2. +
    +

    Create Tables to hold your information

    +

Create 3 tables of all of your Instances, Volumes, and Security Groups. For example, if you have 2 instances, 3 volumes, and 2 Security Groups like the samples below, your lists might look like this (a scripted alternative for gathering this information is sketched at the end of these instructions):

    +

    MOC Instance Information Table

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Instance Name | MOC VCPUs | MOC Disk | MOC RAM | MOC UUID |
| ------------- | --------- | -------- | ------- | -------- |
| Fedora_test | 1 | 10GB | 1GB | 16a1bfc2-8c90-4361-8c13-64ab40bb6207 |
| Ubuntu_Test | 1 | 10GB | 2GB | 6a40079a-59f7-407c-9e66-23bc5b749a95 |
| total | 2 | 20GB | 3GB | |
    +

    MOC Volume Information Table

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| MOC Volume Name | MOC Disk | MOC Attached To | Bootable | MOC UUID | NERC Volume Name |
| --------------- | -------- | --------------- | -------- | -------- | ---------------- |
| Fedora | 10GiB | Fedora_test | Yes | ea45c20b-434a-4c41-8bc6-f48256fc76a8 | |
| 9c73295d-fdfa-4544-b8b8-a876cc0a1e86 | 10GiB | Ubuntu_Test | Yes | 9c73295d-fdfa-4544-b8b8-a876cc0a1e86 | |
| Snapshot of Fed_Test | 10GiB | Fedora_test | No | ea45c20b-434a-4c41-8bc6-f48256fc76a8 | |
| total | 30GiB | | | | |
    +

    MOC Security Group Information Table

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Security Group Name | Direction | Ether Type | IP Protocol | Port Range | Remote IP Prefix |
| ------------------- | --------- | ---------- | ----------- | ---------- | ---------------- |
| ssh_only_test | Ingress | IPv4 | TCP | 22 | 0.0.0.0/0 |
| ping_only_test | Ingress | IPv4 | ICMP | Any | 0.0.0.0/0 |
    +

    Gather the Instance Information

    +

    Gather the Instance UUIDs (of only the instances that you need to migrate +to the NERC).

    +
      +
    1. +

      Click +Instances +(highlighted in pink in image above)

      +

      Project_Instance_Name

      +
    2. +
    3. +

      Click the Instance Name (highlighted in Yellow above) of the first +instance you would like to gather data on.

      +

      Project_Inst_Details

      +
    4. +
    5. +

      Locate the ID row (highlighted in green above) and copy and save the ID +(highlighted in purple above).

      +
        +
      1. This is the UUID of your first Instance.
      2. +
      +
    6. +
    7. +

      Locate the RAM, VCPUs & Disk rows (highlighted in yellow) and copy and +save the associated values (highlighted in pink).

      +
    8. +
    9. +

      Repeat this section for each +Instance you have.

      +
    10. +
    +

    Gather the Volume Information

    +

    Gather the Volume UUIDs (of only the volumes that you need to migrate +to the NERC).

    +

    Project_Volumes_Volumes

    +
      +
    1. +

      Click Volumes dropdown.

      +
    2. +
    3. +

      Select Volumes +(highlighted in purple above).

      +

      Project_Volumes_Names

      +
    4. +
    5. +

      Click the Volume Name (highlighted in yellow above) of the first +volume you would like to gather data on.

      +
        +
      1. +

        The name might be the same as the ID (highlighted in blue above).

        +

        Project_Volumes_Details

        +
      2. +
      +
    6. +
    7. +

      Locate the ID row (highlighted in green above) and copy and save the ID +(highlighted in purple above).

      +
        +
      1. This is the UUID of your first Volume.
      2. +
      +
    8. +
    9. +

      Locate the Size row (highlighted in yellow above) and copy and save +the Volume size (highlighted in pink above).

      +
    10. +
    11. +

Locate the Bootable row (highlighted in gray above) and copy and save the Bootable value (highlighted in red above).

      +
    12. +
    13. +

      Locate the Attached To row (highlighted in blue above) and copy and save +the Instance this Volume is attached to (highlighted in orange above).

      +
        +
      1. If the volume is not attached to an image it will state +“Not attached”.
      2. +
      +
    14. +
    15. +

      Repeat this section for each Volume +you have.

      +
    16. +
    +

    Gather your Security Group Information

    +

If you already have all of your Security Group information outside of the OpenStack Dashboard, you can skip this section.

    +

    Gather the Security Group information (of only the security groups that you +need to migrate to the NERC).

    +

    Project_Network_SecGroup

    +
      +
    1. +

      Click Network dropdown

      +
    2. +
    3. +

      Click +Security +Groups (highlighted in yellow above).

      +

      Ntwrk_ScGrp_Names

      +
    4. +
    5. +

      Click Manage Rules (highlighted in yellow above) of the first +Security Group you would like to gather data on.

      +

      Ntwrk_SGp_Detal

      +
    6. +
    7. +

      Ignore the first 2 lines (highlighted in yellow above).

      +
    8. +
    9. +

      Write down the important information for all lines after (highlighted in +blue above).

      +
        +
      1. Direction, Ether Type, IP Protocol, Port Range, Remote IP Prefix, +Remote Security Group.
      2. +
      +
    10. +
    11. +

      Repeat this section +for each security group you have.

      +
    12. +
    +
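If you prefer the command line to clicking through the dashboard, the following unofficial Python sketch prints the same instance, volume, and security-group details you would record in the tables above. It assumes the openstacksdk package is installed (an extra install, not part of these instructions) and that you have a clouds.yaml entry named moc, such as the one set up in Step 3.

```python
# Unofficial sketch: list MOC instances, volumes, and security group rules with openstacksdk.
# Assumes `pip install openstacksdk` and a clouds.yaml entry named "moc" (see Step 3).
import openstack

conn = openstack.connect(cloud="moc")

print("--- MOC Instance Information ---")
for server in conn.compute.servers():
    # The flavor field shows vCPU/RAM/Disk on recent compute API versions;
    # otherwise read those values from the dashboard as described above.
    print(server.name, server.id, server.flavor)

print("--- MOC Volume Information ---")
for volume in conn.block_storage.volumes():
    attached_to = [a.get("server_id") for a in (volume.attachments or [])]
    print(volume.name or volume.id, f"{volume.size}GiB", "Bootable:", volume.is_bootable,
          volume.id, "Attached to:", attached_to)

print("--- MOC Security Group Information ---")
for group in conn.network.security_groups():
    for rule in group.security_group_rules:
        print(group.name, rule.get("direction"), rule.get("ethertype"), rule.get("protocol"),
              rule.get("port_range_min"), rule.get("port_range_max"), rule.get("remote_ip_prefix"))
```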
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/migration-moc-to-nerc/Step3/index.html b/migration-moc-to-nerc/Step3/index.html new file mode 100644 index 00000000..e01065cc --- /dev/null +++ b/migration-moc-to-nerc/Step3/index.html @@ -0,0 +1,3853 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    Steps to Migrate Volumes from MOC to NERC

    +

    Create a spreadsheet to track the values you will need

    +
      +
    1. +

The values you will want to keep track of are:

      + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Label | Value |
| ----- | ----- |
| MOCAccess | |
| MOCSecret | |
| NERCAccess | |
| NERCSecret | |
| MOCEndPoint | https://kzn-swift.massopen.cloud |
| NERCEndPoint | https://stack.nerc.mghpcc.org:13808 |
| MinIOVolume | |
| MOCVolumeBackupID | |
| ContainerName | |
| NERCVolumeBackupID | |
| NERCVolumeName | |
      +
    2. +
    3. +

      It is also helpful to have a text editor open so that you can insert +the values from the spreadsheet into the commands that need to be run.

      +
    4. +
    +

    Create a New MOC Mirror to NERC Instance

    +
      +
    1. +

      Follow the instructions +here +to set up your instance.

      +

      Image Selection

      +
        +
      1. +

        When selecting the Image please select moc-nerc-migration +(highlighted in yellow above).

        +
      2. +
      3. +

Once the Instance is Running, move on to the next step.

        +
      4. +
      +
    2. +
    3. +

      Name your new instance something you will remember, MirrorMOC2NERC +for example.

      +
    4. +
    5. +

      Assign a Floating IP to your new instance. If you need assistance please +review the Floating IP steps here.

      +
        +
      1. Your floating IPs will not be the same as the ones you had in the +MOC. Please claim new floating IPs to use.
      2. +
      +
    6. +
    7. +

      SSH into the MirrorMOC2NERC Instance. The user to use for login is centos. +If you have any trouble please review the SSH steps here.

      +
    8. +
    +

    Setup Application Credentials

    +

    Gather MOC Application Credentials

    +
      +
    1. +

      Follow the instructions here to create your Application +Credentials.

      +
        +
      1. Make sure to save the clouds.yaml as clouds_MOC.yaml.
      2. +
      +
    2. +
    +

    Gathering NERC Application Credentials

    +
      +
    1. +

      Follow the instructions under the header Command Line setup +here to create your Application Credentials.

      +
        +
      1. Make sure to save the clouds.yaml as clouds_NERC.yaml.
      2. +
      +
    2. +
    +

    Combine the two clouds.yaml files

    +
      +
    1. +

      Make a copy of clouds_MOC.yaml and save as clouds.yaml

      +
    2. +
    3. +

      Open clouds.yaml in a text editor of your choice.

      +

      clouds.yaml MOC

      +
        +
      1. Change the openstack (highlighted in yellow above) value to moc +(highlighted in yellow two images below).
      2. +
      +
    4. +
    5. +

      Open clouds_NERC.yaml in a text editor of your choice.

      +

      clouds.yaml NERC

      +
        +
      1. +

        Change the openstack (highlighted in yellow above) value to nerc +(highlighted in green below).

        +
      2. +
      3. +

        Highlight and copy everything from nerc to the end of the line that +starts with auth_type

        +
      4. +
      +

      clouds.yaml Combined

      +
        +
      1. Paste the copied text into clouds.yaml below the line that starts +with auth_type. Your new clouds.yaml will look similar to the image +above.
      2. +
      +
    6. +
    7. +

For further instructions on clouds.yaml files, go Here. (A scripted alternative to the manual merge is sketched after this list.)

      +
    8. +
    +
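If you would rather script this merge than copy and paste by hand, here is a small unofficial Python sketch using the PyYAML package (an extra install, not mentioned in these instructions). It assumes the two files are named clouds_MOC.yaml and clouds_NERC.yaml as above and that their entries have already been renamed to moc and nerc.

```python
# Unofficial sketch: merge the "moc" and "nerc" entries into a single clouds.yaml.
# Assumes `pip install pyyaml` and the renamed clouds_MOC.yaml / clouds_NERC.yaml described above.
import yaml

merged = {"clouds": {}}
for path in ("clouds_MOC.yaml", "clouds_NERC.yaml"):
    with open(path) as handle:
        data = yaml.safe_load(handle) or {}
    merged["clouds"].update(data.get("clouds", {}))  # expects entries already renamed to moc / nerc

with open("clouds.yaml", "w") as handle:
    yaml.safe_dump(merged, handle, default_flow_style=False)

print("clouds.yaml now contains:", ", ".join(merged["clouds"]))  # expect: moc, nerc
```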

    Moving Application Credentials to VM

    +
      +
    1. +

      SSH into the VM created at the top of this page for example MirrorMOC2NERC.

      +
    2. +
    3. +

      Create the openstack config folder and empty clouds.yaml file.

      +
      mkdir -p ~/.config/openstack
      +cd ~/.config/openstack
      +touch clouds.yaml
      +
      +
    4. +
    5. +

      Open the clouds.yaml file in your favorite text editor. +(vi is preinstalled).

      +
    6. +
    7. +

      Copy the entire text inside the clouds.yaml file on your local computer.

      +
    8. +
    9. +

      Paste the contents of the local clouds.yaml file into the clouds.yaml +on the VM.

      +
    10. +
    11. +

      Save and exit your VM text editor.

      +
    12. +
    +

    Confirm the Instances are Shut Down

    +
      +
    1. +

      Confirm the instances are Shut Down. This is a very important step +because we will be using the force modifier when we make our backup. The +volume can become corrupted if the Instance is not in a Shut Down state.

      +
    2. +
    3. +

      Log into the Instance page of the +MOC Dashboard

      +

      Instance Shutdown

      +
    4. +
    5. +

      Check the Power State of all of the instances you plan to migrate volumes +from are set to Shut Down (highlighted in yellow in image above).

      +
        +
      1. +

        If they are not please do so from the Actions Column.

        +

        Shut Off Instance

        +
          +
        1. +

          Click the drop down arrow under actions.

          +
        2. +
        3. +

          Select Shut Off Instance (blue arrow pointing to it in image +above).

          +
        4. +
        +
      2. +
      +
    6. +
    +

    Backup and Move Volume Data from MOC to NERC

    +
      +
    1. SSH into the VM created at the top of this page. For steps on how to do +this please see instructions here.
    2. +
    +

    Create EC2 credentials in MOC & NERC

    +
      +
    1. +

      Generate credentials for Kaizen with the command below.

      +
      openstack --os-cloud moc ec2 credentials create
      +
      +

      EC2 for MOC

      +
        +
      1. Copy the access (circled in red above) and secret (circled in blue +above) values into your table as <MOCAccess> and <MOCSecret>.
      2. +
      +
    2. +
    3. +

      Generate credentials for the NERC with the command below.

      +
      openstack --os-cloud nerc ec2 credentials create
      +
      +

      EC2 for NERC

      +
        +
1. Copy the access (circled in red above) and secret (circled in blue above) values into your table as <NERCAccess> and <NERCSecret>.
      2. +
      +
    4. +
    +

    Find Object Store Endpoints

    +
      +
    1. +

      Look up information on the object-store service in MOC with the command +below.

      +
      openstack --os-cloud moc catalog show object-store -c endpoints
      +
      +

      MOC URL

      +
        +
      1. If the value is different than https://kzn-swift.massopen.cloud copy +the base URL for this service (circled in red above).
      2. +
      +
    2. +
    3. +

      Look up information on the object-store service in NERC with the command +below.

      +
      openstack --os-cloud nerc catalog show object-store -c endpoints
      +
      +

      NERC URL

      +
        +
      1. If the value is different than https://stack.nerc.mghpcc.org:13808 +copy the base URL for this service (circled in red above).
      2. +
      +
    4. +
    +

    Configure minio client aliases

    +
      +
    1. +

      Create a MinIO alias for MOC using the base URL of the "public" +interface of the object-store service <MOCEndPoint> and the EC2 access key +(ex. <MOCAccess>) & secret key (ex. <MOCSecret>) from your table.

      +
      $ mc alias set moc https://kzn-swift.massopen.cloud <MOCAccess> <MOCSecret>
      +mc: Configuration written to `/home/centos/.mc/config.json`. Please update your access credentials.
      +mc: Successfully created `/home/centos/.mc/share`.
      +mc: Initialized share uploads `/home/centos/.mc/share/uploads.json` file.
      +mc: Initialized share downloads `/home/centos/.mc/share/downloads.json` file.
      +Added `moc` successfully.
      +
      +
    2. +
    3. +

      Create a MinIO alias for NERC using the base URL of the "public" +interface of the object-store service <NERCEndPoint> and the EC2 access key (ex. +<NERCAccess>) & secret key (ex. <NERCSecret>) from your table.

      +
      $ mc alias set nerc https://stack.nerc.mghpcc.org:13808 <NERCAccess> <NERCSecret>
      +Added `nerc` successfully.
      +
      +
    4. +
    +

    Backup MOC Volumes

    +
      +
    1. +

      Locate the desired Volume UUID from the table you created in +Step 2 Gathering MOC Information.

      +
    2. +
    3. +

      Add the first Volume ID from your table to the code below in the +<MOCVolumeID> field and create a Container Name to replace the +<ContainerName> field. Container Name should be easy to remember as well +as unique so include your name. Maybe something like thomasa-backups.

      +
      openstack --os-cloud moc volume backup create --force --container <ContainerName> <MOCVolumeID>
      ++-------+---------------------+
      +| Field | Value               |
      ++-------+---------------------+
      +| id    | <MOCVolumeBackupID> |
      +| name  | None                |
      +
      +
        +
      1. Copy down your <MOCVolumeBackupID> to your table.
      2. +
      +
    4. +
    5. +

Wait for the backup to become available. You can run the command below to check on the status. If your volume is 25 GiB or larger, this might be a good time to go get a warm beverage or lunch.

      +
      openstack --os-cloud moc volume backup list
      ++---------------------+------+-------------+-----------+------+
      +| ID                  | Name | Description | Status    | Size |
      ++---------------------+------+-------------+-----------+------+
      +| <MOCVolumeBackupID> | None | None        | creating  |   10 |
      +...
      +openstack --os-cloud moc volume backup list
      ++---------------------+------+-------------+-----------+------+
      +| ID                  | Name | Description | Status    | Size |
      ++---------------------+------+-------------+-----------+------+
      +| <MOCVolumeBackupID> | None | None        | available |   10 |
      +
      +
    6. +
    +

    Gather MinIO Volume data

    +
      +
    1. Get the volume information for future commands. Use the same +<ContainerName> from when you created the volume backup. It is worth +noting that this value shares the ID number with the VolumeID.
      $ mc ls moc/<ContainerName>
      +[2022-04-29 09:35:16 EDT]     0B <MinIOVolume>/
      +
      +
    2. +
    +

    Create a Container on NERC

    +
      +
    1. Create the NERC container that we will send the volume to. Use +the same <ContainerName> from when you created the volume backup.
      $ mc mb nerc/<ContainerName>
      +Bucket created successfully `nerc/<ContainerName>`.
      +
      +
    2. +
    +

    Mirror the Volume from MOC to NERC

    +
      +
    1. Using the volume label from MinIO <MinIOVolume> and the <ContainerName> +for the command below you will kick off the move of your volume. This takes +around 30 sec per GB of data in your volume.
      $ mc mirror moc/<ContainerName>/<MinIOVolume> nerc/<ContainerName>/<MinIOVolume>
      +...123a30e_sha256file:  2.61GB / 2.61GB [=========...=========] 42.15Mib/s 1m3s
      +
      +
    2. +
    +

    Copy the Backup Record from MOC to NERC

    +
      +
    1. +

      Now that we've copied the backup data into the NERC environment, we need +to register the backup with the NERC backup service. We do this by copying +metadata from MOC. You will need the original <MOCVolumeBackupID> you used to +create the original Backup.

      +
      openstack --os-cloud moc volume backup record export -f value <MOCVolumeBackupID> > record.txt
      +
      +
    2. +
    3. +

      Next we will import the record into NERC.

      +
      openstack --os-cloud nerc volume backup record import -f value $(cat record.txt)
      +<NERCVolumeBackupID>
      +None
      +
      +
        +
      1. Copy <NERCVolumeBackupID> value into your table.
      2. +
      +
    4. +
    +

    Create an Empty Volume on NERC to Receive the Backup

    +
      +
1. Create a volume in the NERC environment to receive the backup. This must be the same size or larger than the original volume, which can be set by modifying the <size> field. Remove the "--bootable" flag if you are not creating a bootable volume. The <NERCVolumeName> field can be any name you want; I would suggest something that will help you keep track of which instance you want to attach it to. Make sure to fill in the table you created in Step 2 with the <NERCVolumeName> value in the NERC Volume Name column.
      openstack --os-cloud nerc volume create --bootable --size <size> <NERCVolumeName>
      ++---------------------+----------------+
      +| Field               | Value          |
      ++---------------------+----------------+
      +| attachments         | []             |
      +| availability_zone   | nova           |
      +...
      +| id                  | <NERCVolumeID> |
      +...
      +| size                | <size>         |
      ++---------------------+----------------+
      +
      +
    2. +
    +

    Restore the Backup

    +
      +
    1. +

      Restore the Backup to the Volume you just created.

      +
      openstack --os-cloud nerc volume backup restore <NERCVolumeBackupID> <NERCVolumeName>
      +
      +
    2. +
    3. +

      Wait for the volume to shift from restoring-backup to available.

      +
      openstack --os-cloud nerc volume list
      ++----------------+------------+------------------+------+-------------+
      +| ID             | Name       | Status           | Size | Attached to |
      ++----------------+------------+------------------+------+-------------+
      +| <NERCVolumeID> | MOC Volume | restoring-backup |    3 | Migration   |
      +openstack --os-cloud nerc volume list
      ++----------------+------------+-----------+------+-------------+
      +| ID             | Name       | Status    | Size | Attached to |
      ++----------------+------------+-----------+------+-------------+
      +| <NERCVolumeID> | MOC Volume | available |    3 | Migration   |
      +
      +
    4. +
    5. +

      Repeat these Backup and Move Volume +Data +steps for each volume you need to migrate.

      +
    6. +
    +

    Create NERC Instances Using MOC Volumes

    +
      +
    1. +

      If you have volumes that need to be attached to an instance please follow +the next steps.

      +
    2. +
    3. +

      Follow the instructions here to set up your instance/s.

      +
        +
      1. +

        Instead of using an Image for your Boot Source you will use a Volume +(orange arrow in image below).

        +

        Volume Selection

        +
          +
1. Select the <NERCVolumeName> you created in step Create an Empty Volume on NERC to Receive the Backup
        2. +
        +
      2. +
      3. +

The Flavor will be important, as this decides how many vCPUs and how much RAM and Disk this instance will consume from your total.

        +
          +
        1. If for some reason the earlier approved resource quota is not +sufficient you can request further quota by following +these steps.
        2. +
        +
      4. +
      +
    4. +
    5. +

      Repeat this section +for each instance you need to create.

      +
    6. +
    +
diff --git a/migration-moc-to-nerc/Step4/index.html b/migration-moc-to-nerc/Step4/index.html

    Remove Volume Backups to Conserve Storage

    +

    If you find yourself low on Volume Storage, please follow the steps below to
    remove your old Volume Backups. If you are very low on space, you can do this
    every time you finish copying a new volume to the NERC. If, on the other hand,
    you have plenty of remaining space, feel free to leave all of your Volume
    Backups as they are.

    1. SSH into the MirrorMOC2NERC Instance. The user to use for login is
       centos. If you have any trouble, please review the SSH steps here.

    Check Remaining MOC Volume Storage

    1. Log into the MOC Dashboard and go to Project > Compute > Overview.

       Volume Storage

    2. Look at the Volume Storage meter (highlighted in yellow in the image above).

    Delete MOC Volume Backups

    1. Gather a list of current MOC Volume Backups with the command below (see the
       optional listing sketch after this list).

      openstack --os-cloud moc volume backup list
      +---------------------+------+-------------+-----------+------+
      | ID                  | Name | Description | Status    | Size |
      +---------------------+------+-------------+-----------+------+
      | <MOCVolumeBackupID> | None | None        | available |   10 |

    2. Only remove Volume Backups you are sure have been moved to the NERC. With
       the command below you can delete Volume Backups.

      openstack --os-cloud moc volume backup delete <MOCVolumeBackupID>

    3. Repeat the MOC Volume Backup section for all MOC Volume Backups you wish
       to remove.
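    If you are tracking many volumes, the optional, read-only sketch below prints
    each MOC Volume Backup's ID, name, status, and size so you can compare them
    against your table from Step 2 before deleting anything. It assumes the
    openstack CLI and the "moc" entry in your clouds.yaml are configured as
    described earlier in this guide:

      import json
      import subprocess

      # List MOC volume backups in machine-readable form (-f json).
      result = subprocess.run(
          ["openstack", "--os-cloud", "moc", "volume", "backup", "list", "-f", "json"],
          capture_output=True, text=True, check=True,
      )
      for backup in json.loads(result.stdout):
          # Field names follow the CLI's column headers: ID, Name, Status, Size.
          print(backup["ID"], backup.get("Name"), backup["Status"], backup["Size"])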

    Delete MOC Container <ContainerName>

    Remove the container (i.e. <ContainerName>) that was created with a unique name
    on the MOC side during migration. Replace the <ContainerName> field with the
    container name you created during the migration process:

        openstack --os-cloud moc container delete --recursive <ContainerName>

    Verify the <ContainerName> is removed from MOC:

        openstack --os-cloud moc container list

    Check Remaining NERC Volume Storage

    1. Log into the NERC Dashboard and go to Project > Compute > Overview.

       Volume Storage

    2. Look at the Volume Storage meter (highlighted in yellow in the image above).

    Delete NERC Volume Backups

    1. Gather a list of current NERC Volume Backups with the command below.

      openstack --os-cloud nerc volume backup list
      +---------------------+------+-------------+-----------+------+
      | ID                  | Name | Description | Status    | Size |
      +---------------------+------+-------------+-----------+------+
      | <MOCVolumeBackupID> | None | None        | available |   3  |

    2. Only remove Volume Backups you are sure have been migrated to NERC Volumes.
       Keep in mind that you might not have named the volume the same as on the
       MOC, so check your table from Step 2 to confirm. You can confirm what
       Volumes you have in NERC with the following command.

      openstack --os-cloud nerc volume list
      +----------------+------------------+--------+------+----------------------------------+
      | ID             | Name             | Status | Size | Attached to                      |
      +----------------+------------------+--------+------+----------------------------------+
      | <NERCVolumeID> | <NERCVolumeName> | in-use |    3 | Attached to MOC2NERC on /dev/vda |

    3. To remove volume backups, please use the command below.

      openstack --os-cloud nerc volume backup delete <MOCVolumeBackupID>

    4. Repeat the NERC Volume Backup section for all NERC Volume Backups you wish
       to remove.

    Delete NERC Container <ContainerName>

    Remove the container (i.e. <ContainerName>) that was created with a unique name
    on the NERC side during migration to mirror the Volume from MOC to NERC. Replace
    the <ContainerName> field with the container name you created during the
    migration process:

        openstack --os-cloud nerc container delete --recursive <ContainerName>

    Verify the <ContainerName> is removed from NERC:

        openstack --os-cloud nerc container list
    + + + + + + + + + + + + \ No newline at end of file diff --git a/migration-moc-to-nerc/images/S1_ColdFront_Allocation.png b/migration-moc-to-nerc/images/S1_ColdFront_Allocation.png new file mode 100644 index 00000000..f9bc4643 Binary files /dev/null and b/migration-moc-to-nerc/images/S1_ColdFront_Allocation.png differ diff --git a/migration-moc-to-nerc/images/S1_ColdFront_Login.png b/migration-moc-to-nerc/images/S1_ColdFront_Login.png new file mode 100644 index 00000000..ba0b3554 Binary files /dev/null and b/migration-moc-to-nerc/images/S1_ColdFront_Login.png differ diff --git a/migration-moc-to-nerc/images/S1_ColdFront_ManageProject.png b/migration-moc-to-nerc/images/S1_ColdFront_ManageProject.png new file mode 100644 index 00000000..0864ea53 Binary files /dev/null and b/migration-moc-to-nerc/images/S1_ColdFront_ManageProject.png differ diff --git a/migration-moc-to-nerc/images/S1_ColdFront_Projects.png b/migration-moc-to-nerc/images/S1_ColdFront_Projects.png new file mode 100644 index 00000000..4d07beb1 Binary files /dev/null and b/migration-moc-to-nerc/images/S1_ColdFront_Projects.png differ diff --git a/migration-moc-to-nerc/images/S1_Dashboard_Instance.png b/migration-moc-to-nerc/images/S1_Dashboard_Instance.png new file mode 100644 index 00000000..aa447f2d Binary files /dev/null and b/migration-moc-to-nerc/images/S1_Dashboard_Instance.png differ diff --git a/migration-moc-to-nerc/images/S1_Dashboard_Instance_Details.png b/migration-moc-to-nerc/images/S1_Dashboard_Instance_Details.png new file mode 100644 index 00000000..cfed4be7 Binary files /dev/null and b/migration-moc-to-nerc/images/S1_Dashboard_Instance_Details.png differ diff --git a/migration-moc-to-nerc/images/S1_Dashboard_Instance_Name.png b/migration-moc-to-nerc/images/S1_Dashboard_Instance_Name.png new file mode 100644 index 00000000..11c5344f Binary files /dev/null and b/migration-moc-to-nerc/images/S1_Dashboard_Instance_Name.png differ diff --git a/migration-moc-to-nerc/images/S1_Dashboard_Login.png b/migration-moc-to-nerc/images/S1_Dashboard_Login.png new file mode 100644 index 00000000..403c7ea5 Binary files /dev/null and b/migration-moc-to-nerc/images/S1_Dashboard_Login.png differ diff --git a/migration-moc-to-nerc/images/S1_Dashboard_Login_CILogon.png b/migration-moc-to-nerc/images/S1_Dashboard_Login_CILogon.png new file mode 100644 index 00000000..2b20c694 Binary files /dev/null and b/migration-moc-to-nerc/images/S1_Dashboard_Login_CILogon.png differ diff --git a/migration-moc-to-nerc/images/S1_Dashboard_Project_Compute_Overview.png b/migration-moc-to-nerc/images/S1_Dashboard_Project_Compute_Overview.png new file mode 100644 index 00000000..aca160fe Binary files /dev/null and b/migration-moc-to-nerc/images/S1_Dashboard_Project_Compute_Overview.png differ diff --git a/migration-moc-to-nerc/images/S1_Dashboard_Project_VolumeBootable1.png b/migration-moc-to-nerc/images/S1_Dashboard_Project_VolumeBootable1.png new file mode 100644 index 00000000..e8e97881 Binary files /dev/null and b/migration-moc-to-nerc/images/S1_Dashboard_Project_VolumeBootable1.png differ diff --git a/migration-moc-to-nerc/images/S1_Dashboard_Project_VolumeBootable2.png b/migration-moc-to-nerc/images/S1_Dashboard_Project_VolumeBootable2.png new file mode 100644 index 00000000..5d3c9352 Binary files /dev/null and b/migration-moc-to-nerc/images/S1_Dashboard_Project_VolumeBootable2.png differ diff --git a/migration-moc-to-nerc/images/S1_Dashboard_Volume.png b/migration-moc-to-nerc/images/S1_Dashboard_Volume.png new file mode 100644 index 
00000000..76e7995e Binary files /dev/null and b/migration-moc-to-nerc/images/S1_Dashboard_Volume.png differ diff --git a/migration-moc-to-nerc/images/S1_Dashboard_Volume_Details.png b/migration-moc-to-nerc/images/S1_Dashboard_Volume_Details.png new file mode 100644 index 00000000..541e8c7d Binary files /dev/null and b/migration-moc-to-nerc/images/S1_Dashboard_Volume_Details.png differ diff --git a/migration-moc-to-nerc/images/S1_Dashboard_Volume_Name.png b/migration-moc-to-nerc/images/S1_Dashboard_Volume_Name.png new file mode 100644 index 00000000..181d6f6d Binary files /dev/null and b/migration-moc-to-nerc/images/S1_Dashboard_Volume_Name.png differ diff --git a/migration-moc-to-nerc/images/S2_Login1.png b/migration-moc-to-nerc/images/S2_Login1.png new file mode 100644 index 00000000..eb39ce09 Binary files /dev/null and b/migration-moc-to-nerc/images/S2_Login1.png differ diff --git a/migration-moc-to-nerc/images/S2_Login2.png b/migration-moc-to-nerc/images/S2_Login2.png new file mode 100644 index 00000000..e75f654c Binary files /dev/null and b/migration-moc-to-nerc/images/S2_Login2.png differ diff --git a/migration-moc-to-nerc/images/S2_Login3.png b/migration-moc-to-nerc/images/S2_Login3.png new file mode 100644 index 00000000..70c9cd95 Binary files /dev/null and b/migration-moc-to-nerc/images/S2_Login3.png differ diff --git a/migration-moc-to-nerc/images/S2_OSticket1.png b/migration-moc-to-nerc/images/S2_OSticket1.png new file mode 100644 index 00000000..a5183c02 Binary files /dev/null and b/migration-moc-to-nerc/images/S2_OSticket1.png differ diff --git a/migration-moc-to-nerc/images/S2_OSticket2.png b/migration-moc-to-nerc/images/S2_OSticket2.png new file mode 100644 index 00000000..ee3217af Binary files /dev/null and b/migration-moc-to-nerc/images/S2_OSticket2.png differ diff --git a/migration-moc-to-nerc/images/S2_Project_Compute_Instance.png b/migration-moc-to-nerc/images/S2_Project_Compute_Instance.png new file mode 100644 index 00000000..65d3ad8f Binary files /dev/null and b/migration-moc-to-nerc/images/S2_Project_Compute_Instance.png differ diff --git a/migration-moc-to-nerc/images/S2_Project_Compute_Instance_Details.png b/migration-moc-to-nerc/images/S2_Project_Compute_Instance_Details.png new file mode 100644 index 00000000..a1c619bf Binary files /dev/null and b/migration-moc-to-nerc/images/S2_Project_Compute_Instance_Details.png differ diff --git a/migration-moc-to-nerc/images/S2_Project_Compute_Instance_Name.png b/migration-moc-to-nerc/images/S2_Project_Compute_Instance_Name.png new file mode 100644 index 00000000..d423b703 Binary files /dev/null and b/migration-moc-to-nerc/images/S2_Project_Compute_Instance_Name.png differ diff --git a/migration-moc-to-nerc/images/S2_Project_Network_SecurityGroup.png b/migration-moc-to-nerc/images/S2_Project_Network_SecurityGroup.png new file mode 100644 index 00000000..8ae5dcf3 Binary files /dev/null and b/migration-moc-to-nerc/images/S2_Project_Network_SecurityGroup.png differ diff --git a/migration-moc-to-nerc/images/S2_Project_Network_SecurityGroup_Details.png b/migration-moc-to-nerc/images/S2_Project_Network_SecurityGroup_Details.png new file mode 100644 index 00000000..cc0de0c2 Binary files /dev/null and b/migration-moc-to-nerc/images/S2_Project_Network_SecurityGroup_Details.png differ diff --git a/migration-moc-to-nerc/images/S2_Project_Network_SecurityGroup_Names.png b/migration-moc-to-nerc/images/S2_Project_Network_SecurityGroup_Names.png new file mode 100644 index 00000000..6223b499 Binary files /dev/null and 
b/migration-moc-to-nerc/images/S2_Project_Network_SecurityGroup_Names.png differ diff --git a/migration-moc-to-nerc/images/S2_Project_Volumes_Details.png b/migration-moc-to-nerc/images/S2_Project_Volumes_Details.png new file mode 100644 index 00000000..c4e0d4f0 Binary files /dev/null and b/migration-moc-to-nerc/images/S2_Project_Volumes_Details.png differ diff --git a/migration-moc-to-nerc/images/S2_Project_Volumes_Names.png b/migration-moc-to-nerc/images/S2_Project_Volumes_Names.png new file mode 100644 index 00000000..1cdd03b2 Binary files /dev/null and b/migration-moc-to-nerc/images/S2_Project_Volumes_Names.png differ diff --git a/migration-moc-to-nerc/images/S2_Project_Volumes_Volumes.png b/migration-moc-to-nerc/images/S2_Project_Volumes_Volumes.png new file mode 100644 index 00000000..ebfe2a69 Binary files /dev/null and b/migration-moc-to-nerc/images/S2_Project_Volumes_Volumes.png differ diff --git a/migration-moc-to-nerc/images/S3_CloudyamlCombined.png b/migration-moc-to-nerc/images/S3_CloudyamlCombined.png new file mode 100644 index 00000000..d5442381 Binary files /dev/null and b/migration-moc-to-nerc/images/S3_CloudyamlCombined.png differ diff --git a/migration-moc-to-nerc/images/S3_CloudyamlMOC.png b/migration-moc-to-nerc/images/S3_CloudyamlMOC.png new file mode 100644 index 00000000..c91cc4e9 Binary files /dev/null and b/migration-moc-to-nerc/images/S3_CloudyamlMOC.png differ diff --git a/migration-moc-to-nerc/images/S3_CloudyamlNERC.png b/migration-moc-to-nerc/images/S3_CloudyamlNERC.png new file mode 100644 index 00000000..5f628075 Binary files /dev/null and b/migration-moc-to-nerc/images/S3_CloudyamlNERC.png differ diff --git a/migration-moc-to-nerc/images/S3_EC2CredMOC.png b/migration-moc-to-nerc/images/S3_EC2CredMOC.png new file mode 100644 index 00000000..acee8179 Binary files /dev/null and b/migration-moc-to-nerc/images/S3_EC2CredMOC.png differ diff --git a/migration-moc-to-nerc/images/S3_EC2CredNERC.png b/migration-moc-to-nerc/images/S3_EC2CredNERC.png new file mode 100644 index 00000000..00a5e254 Binary files /dev/null and b/migration-moc-to-nerc/images/S3_EC2CredNERC.png differ diff --git a/migration-moc-to-nerc/images/S3_ImageSelection.png b/migration-moc-to-nerc/images/S3_ImageSelection.png new file mode 100644 index 00000000..50f79b19 Binary files /dev/null and b/migration-moc-to-nerc/images/S3_ImageSelection.png differ diff --git a/migration-moc-to-nerc/images/S3_InstanceShutdown.png b/migration-moc-to-nerc/images/S3_InstanceShutdown.png new file mode 100644 index 00000000..4f68d428 Binary files /dev/null and b/migration-moc-to-nerc/images/S3_InstanceShutdown.png differ diff --git a/migration-moc-to-nerc/images/S3_MOCEndpoint.png b/migration-moc-to-nerc/images/S3_MOCEndpoint.png new file mode 100644 index 00000000..4974cd8a Binary files /dev/null and b/migration-moc-to-nerc/images/S3_MOCEndpoint.png differ diff --git a/migration-moc-to-nerc/images/S3_NERCEndpoint.png b/migration-moc-to-nerc/images/S3_NERCEndpoint.png new file mode 100644 index 00000000..e2ed009a Binary files /dev/null and b/migration-moc-to-nerc/images/S3_NERCEndpoint.png differ diff --git a/migration-moc-to-nerc/images/S3_ShutOffInstance.png b/migration-moc-to-nerc/images/S3_ShutOffInstance.png new file mode 100644 index 00000000..182f60c2 Binary files /dev/null and b/migration-moc-to-nerc/images/S3_ShutOffInstance.png differ diff --git a/migration-moc-to-nerc/images/S3_VolumeSelect.png b/migration-moc-to-nerc/images/S3_VolumeSelect.png new file mode 100644 index 00000000..2cb77f4d Binary files 
/dev/null and b/migration-moc-to-nerc/images/S3_VolumeSelect.png differ diff --git a/migration-moc-to-nerc/images/S4_VolumeStorageMOC.png b/migration-moc-to-nerc/images/S4_VolumeStorageMOC.png new file mode 100644 index 00000000..92561fe0 Binary files /dev/null and b/migration-moc-to-nerc/images/S4_VolumeStorageMOC.png differ diff --git a/migration-moc-to-nerc/images/S4_VolumeStorageNERC.png b/migration-moc-to-nerc/images/S4_VolumeStorageNERC.png new file mode 100644 index 00000000..a43f36fb Binary files /dev/null and b/migration-moc-to-nerc/images/S4_VolumeStorageNERC.png differ diff --git a/openshift-ai/data-science-project/explore-the-jupyterlab-environment/index.html b/openshift-ai/data-science-project/explore-the-jupyterlab-environment/index.html new file mode 100644 index 00000000..2a83ce84 --- /dev/null +++ b/openshift-ai/data-science-project/explore-the-jupyterlab-environment/index.html @@ -0,0 +1,3545 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Explore the JupyterLab Environment

    +

    When your workbench is ready, the status will change to Running and you can select +"Open" to go to your environment:

    +

    Open JupyterLab Environment

    +
    +

    How can I start or stop a Workbench?

    +

    You can use this "toggle switch" under the "Status" section to easily start/stop +this environment later on.

    +
    +

    Make sure you are selecting "mss-keycloak" once shown:

    +

    RHOAI JupyterLab Login with KeyCloak

    +

    Authorize the requested permissions if needed:

    +

    Authorize Access to the RHOAI

    +

    This will initiate your JupyterLab +environment based on the Jupyter Image you have selected. JupyterLab offers a +shared interactive integrated development environment.

    +

    Once you successfully authenticate you should see the NERC RHOAI JupyterLab Web +Interface as shown below:

    +

    RHOAI JupyterLab Web Interface

    +

    It's pretty empty right now, though. The first thing we will do is add content +into this environment by using Git.

    +

    Clone a Git repository

    +

    You can clone a Git repository in JupyterLab through the left-hand toolbar or +the Git menu option in the main menu as shown below:

    +

    JupyterLab Toolbar and Menu

    +

    Let's clone a repository using the left-hand toolbar. Click on the Git icon, +shown in below:

    +

    JupyterLab Git

    +

    Then click on Clone a Repository as shown below:

    +

    JupyterLab Git Actions

    +

    Enter the git repository URL, which points to the end-to-end ML workflows demo +project i.e. https://github.com/nerc-project/nerc_rhoai_mlops.

    +

    Then click Clone button as shown below:

    +

    NERC RHOAI MLOps Example Project

    +
    +

    What is MLOps?

    +

    Machine learning operations (MLOps) are a set of practices that automate and +simplify machine learning (ML) workflows and deployments.

    +
    +

    Cloning takes a few seconds, after which you can double-click and navigate to the +newly-created folder that contains your cloned Git repository.

    +

    Exploring the Example NERC MLOps Project

    +

    You will be able to find the newly-created folder named nerc_rhoai_mlops based +on the Git repository name, as shown below:

    +

    Git Clone Repo Folder on NERC RHOAI

    +

    Working with notebooks

    +

    What's a notebook?

    +

    A notebook is an environment where you have cells that can display formatted text, +or code.

    +

    This is an empty cell:

    +

    Jupyter Empty Cell

    +

    And a cell where we have entered some Python code:

    +

    Jupyter Cell With Python Code

    +
    •   Code cells contain Python code that can be run interactively. This means that
        you can modify the code and then run it, but only for this cell, not for the
        whole content of the notebook! The code will not run on your computer or in
        the browser, but directly in the environment you are connected to: NERC RHOAI.

    •   To run a code cell, you simply select it (click the cell, or just to the left
        of it), and select the Run/Play button from the toolbar (you can also press
        CTRL+Enter to run a cell, or Shift+Enter to run the cell and automatically
        select the following one).

    The Run button on the toolbar:

    +

    Jupyter Cell Run Button

    +

    As you will see, you then get the result of the code that was run in that cell +(if the code produces some output), as well as information on when this particular +cell has been run.

    +

    When you save a notebook, the code as well as all the results are saved! So you +can always reopen it to look at the results without having to run all the program +again, while still having access to the code that produced this content.

    +
    +

    More about Notebook

    +

    Notebooks are so named because they are just like a physical notebook. It is
    exactly as if you were taking notes about your experiments (which you will
    do), along with the code itself, including any parameters you set. You see
    the output of the experiment inline (this is the result from a cell once it
    is run), along with all the notes you want to take (to do that, you can
    switch the cell type in the menu from Code to Markdown).

    +
    +

    Sample Jupyter Notebook files

    +

    In your Jupyter environment, you can navigate and select any Jupyter notebook +files by double-clicking them in the file explorer on the left side. Double-click +the notebook file to launch it. This action will open another tab in the content +section of the environment, on the right.

    +

    Here, you can find three primary starter notebooks for setting up the intelligent +application: 01_sandbox.ipynb, 02_model_training_basics.ipynb, and 03_remote_inference.ipynb +within the root folder path of nerc_rhoai_mlops.

    +

    You can click and run 01_sandbox.ipynb to verify the setup JupyterLab environment +can run python code properly.

    +

    Also, you can find the "samples" folder within the root folder path of nerc_rhoai_mlops. +For learning purposes, double-click on the "samples" folder under the newly-created +folder named nerc_rhoai_mlops. Within the "samples" folder, you'll find some starter +Jupyter notebook files: Intro.ipynb, Lorenz.ipynb, and gpu.ipynb. These files +can be used to test basic JupyterLab functionalities. You can explore them at +your own pace by running each of them individually. Please feel free to experiment, +run the different cells, add some more code. You can do what you want - it is your +environment, and there is no risk of breaking anything or impacting other users. +This environment isolation is also a great advantage brought by NERC RHOAI.

    +
    +

    How to get access to the NERC RHOAI Dashboard from JupyterLab Environment?

    +

    If you had closed the NERC RHOAI dashboard, you can access it from your currently +opened JupyterLab IDE by clicking on File -> Hub Control Panel as shown below:

    +

    Jupyter Hub Control Panel Menu

    +
    +

    Testing for GPU Code

    +

    As we have set up the workbench specifying the desired Number of GPUs: "1", we
    will be able to test GPU-based code by running the gpu.ipynb notebook file as
    shown below:

    +

    GPU Code Test

    +
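    If you just want a quick, minimal check of GPU visibility from any notebook cell
    (the bundled gpu.ipynb may do more than this), a TensorFlow snippet along the
    following lines can be used:

      import tensorflow as tf

      # Lists the GPUs this workbench container can see; an empty list means
      # no GPU was allocated to the workbench.
      gpus = tf.config.list_physical_devices("GPU")
      print("Visible GPUs:", gpus)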

    Training a model

    +

    Within the root folder path of nerc_rhoai_mlops, find a sample Jupyter notebook
    file 02_model_training_basics.ipynb that demonstrates how to train a model within
    the NERC RHOAI. To run it, double-click it and use the "Run" button to run all
    notebook cells at once. This notebook trains a model for "Basic classification
    of clothing images" by importing the publicly available Fashion MNIST dataset
    and using TensorFlow. This process will take some time to complete. At the end,
    it will generate and save the model my-model.keras within the root folder path
    of nerc_rhoai_mlops.

    +
    +
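    In outline, the notebook does something like the following. This is only a
    minimal sketch: the actual notebook's architecture, hyperparameters, and
    preprocessing may differ.

      import tensorflow as tf

      # Load the publicly available Fashion MNIST dataset and scale pixels to [0, 1].
      (x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
      x_train, x_test = x_train / 255.0, x_test / 255.0

      # A small fully connected classifier for the 10 clothing classes.
      model = tf.keras.Sequential([
          tf.keras.layers.Flatten(input_shape=(28, 28)),
          tf.keras.layers.Dense(128, activation="relu"),
          tf.keras.layers.Dense(10),
      ])
      model.compile(
          optimizer="adam",
          loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
          metrics=["accuracy"],
      )
      model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

      # Save the trained model in the Keras format referenced by the notebook.
      model.save("my-model.keras")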

    The Machine Learning Model File Hosted on NERC OpenStack Object Bucket.

    +

    The model we are going to use is an object detection model that is able to +isolate and recognize T-shirts, bottles, and hats in pictures. Although the +process is globally the same one as what we have seen in the +previous section, this model has already been trained as +it takes a few hours with the help of a GPU to do it. If you want to know +more about this training process, you can have a look here.

    +

    The resulting model has been saved in the ONNX format, +an open standard for machine learning interoperability, which is one we can +use with OpenVINO and RHOAI model serving. The model has been stored and is +available for download in NERC OpenStack Object Storage container as described +here.

    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift-ai/data-science-project/images/QR-code.png b/openshift-ai/data-science-project/images/QR-code.png new file mode 100644 index 00000000..25010f78 Binary files /dev/null and b/openshift-ai/data-science-project/images/QR-code.png differ diff --git a/openshift-ai/data-science-project/images/add-a-model-server.png b/openshift-ai/data-science-project/images/add-a-model-server.png new file mode 100644 index 00000000..b7d5f2fc Binary files /dev/null and b/openshift-ai/data-science-project/images/add-a-model-server.png differ diff --git a/openshift-ai/data-science-project/images/add-data-connection.png b/openshift-ai/data-science-project/images/add-data-connection.png new file mode 100644 index 00000000..751159ba Binary files /dev/null and b/openshift-ai/data-science-project/images/add-data-connection.png differ diff --git a/openshift-ai/data-science-project/images/authorize-access-to-the-rhoai.png b/openshift-ai/data-science-project/images/authorize-access-to-the-rhoai.png new file mode 100644 index 00000000..97626616 Binary files /dev/null and b/openshift-ai/data-science-project/images/authorize-access-to-the-rhoai.png differ diff --git a/openshift-ai/data-science-project/images/capture-camera-image.png b/openshift-ai/data-science-project/images/capture-camera-image.png new file mode 100644 index 00000000..e491938c Binary files /dev/null and b/openshift-ai/data-science-project/images/capture-camera-image.png differ diff --git a/openshift-ai/data-science-project/images/change-grpc-url-value.png b/openshift-ai/data-science-project/images/change-grpc-url-value.png new file mode 100644 index 00000000..85b80450 Binary files /dev/null and b/openshift-ai/data-science-project/images/change-grpc-url-value.png differ diff --git a/openshift-ai/data-science-project/images/configure-a-new-data-connection.png b/openshift-ai/data-science-project/images/configure-a-new-data-connection.png new file mode 100644 index 00000000..cb949ab4 Binary files /dev/null and b/openshift-ai/data-science-project/images/configure-a-new-data-connection.png differ diff --git a/openshift-ai/data-science-project/images/configure-a-new-model-server.png b/openshift-ai/data-science-project/images/configure-a-new-model-server.png new file mode 100644 index 00000000..047875d3 Binary files /dev/null and b/openshift-ai/data-science-project/images/configure-a-new-model-server.png differ diff --git a/openshift-ai/data-science-project/images/configure-and-deploy-model.png b/openshift-ai/data-science-project/images/configure-and-deploy-model.png new file mode 100644 index 00000000..221d5882 Binary files /dev/null and b/openshift-ai/data-science-project/images/configure-and-deploy-model.png differ diff --git a/openshift-ai/data-science-project/images/create-workbench.png b/openshift-ai/data-science-project/images/create-workbench.png new file mode 100644 index 00000000..5d0479e4 Binary files /dev/null and b/openshift-ai/data-science-project/images/create-workbench.png differ diff --git a/openshift-ai/data-science-project/images/data-connection-info.png b/openshift-ai/data-science-project/images/data-connection-info.png new file mode 100644 index 00000000..de013e79 Binary files /dev/null and b/openshift-ai/data-science-project/images/data-connection-info.png differ diff --git a/openshift-ai/data-science-project/images/data-science-project-details.png b/openshift-ai/data-science-project/images/data-science-project-details.png new file mode 100644 index 00000000..4eb76f4c 
Binary files /dev/null and b/openshift-ai/data-science-project/images/data-science-project-details.png differ diff --git a/openshift-ai/data-science-project/images/data-science-projects.png b/openshift-ai/data-science-project/images/data-science-projects.png new file mode 100644 index 00000000..1c8379d8 Binary files /dev/null and b/openshift-ai/data-science-project/images/data-science-projects.png differ diff --git a/openshift-ai/data-science-project/images/deployed-model-inference-endpoints.png b/openshift-ai/data-science-project/images/deployed-model-inference-endpoints.png new file mode 100644 index 00000000..851c0365 Binary files /dev/null and b/openshift-ai/data-science-project/images/deployed-model-inference-endpoints.png differ diff --git a/openshift-ai/data-science-project/images/gpu-code-test.png b/openshift-ai/data-science-project/images/gpu-code-test.png new file mode 100644 index 00000000..ddc1adf9 Binary files /dev/null and b/openshift-ai/data-science-project/images/gpu-code-test.png differ diff --git a/openshift-ai/data-science-project/images/intelligent-application-frontend-interface.png b/openshift-ai/data-science-project/images/intelligent-application-frontend-interface.png new file mode 100644 index 00000000..58624c8f Binary files /dev/null and b/openshift-ai/data-science-project/images/intelligent-application-frontend-interface.png differ diff --git a/openshift-ai/data-science-project/images/intelligent_application-topology.png b/openshift-ai/data-science-project/images/intelligent_application-topology.png new file mode 100644 index 00000000..c8410459 Binary files /dev/null and b/openshift-ai/data-science-project/images/intelligent_application-topology.png differ diff --git a/openshift-ai/data-science-project/images/intelligent_application_deployment-yaml-content.png b/openshift-ai/data-science-project/images/intelligent_application_deployment-yaml-content.png new file mode 100644 index 00000000..a510d633 Binary files /dev/null and b/openshift-ai/data-science-project/images/intelligent_application_deployment-yaml-content.png differ diff --git a/openshift-ai/data-science-project/images/jupyter-cell-with-code.png b/openshift-ai/data-science-project/images/jupyter-cell-with-code.png new file mode 100644 index 00000000..d0f61b78 Binary files /dev/null and b/openshift-ai/data-science-project/images/jupyter-cell-with-code.png differ diff --git a/openshift-ai/data-science-project/images/jupyter-empty-cell.png b/openshift-ai/data-science-project/images/jupyter-empty-cell.png new file mode 100644 index 00000000..2109bd62 Binary files /dev/null and b/openshift-ai/data-science-project/images/jupyter-empty-cell.png differ diff --git a/openshift-ai/data-science-project/images/jupyter-run-code-button.png b/openshift-ai/data-science-project/images/jupyter-run-code-button.png new file mode 100644 index 00000000..86710b43 Binary files /dev/null and b/openshift-ai/data-science-project/images/jupyter-run-code-button.png differ diff --git a/openshift-ai/data-science-project/images/jupyterlab-toolbar-main-menu.jpg b/openshift-ai/data-science-project/images/jupyterlab-toolbar-main-menu.jpg new file mode 100644 index 00000000..f09dce1c Binary files /dev/null and b/openshift-ai/data-science-project/images/jupyterlab-toolbar-main-menu.jpg differ diff --git a/openshift-ai/data-science-project/images/jupyterlab_git.png b/openshift-ai/data-science-project/images/jupyterlab_git.png new file mode 100644 index 00000000..e1874ba0 Binary files /dev/null and 
b/openshift-ai/data-science-project/images/jupyterlab_git.png differ diff --git a/openshift-ai/data-science-project/images/jupyterlab_git_actions.png b/openshift-ai/data-science-project/images/jupyterlab_git_actions.png new file mode 100644 index 00000000..dc958348 Binary files /dev/null and b/openshift-ai/data-science-project/images/jupyterlab_git_actions.png differ diff --git a/openshift-ai/data-science-project/images/jupyterlab_web_interface.png b/openshift-ai/data-science-project/images/jupyterlab_web_interface.png new file mode 100644 index 00000000..1c5f0a87 Binary files /dev/null and b/openshift-ai/data-science-project/images/jupyterlab_web_interface.png differ diff --git a/openshift-ai/data-science-project/images/juyter-hub-control-panel-menu.png b/openshift-ai/data-science-project/images/juyter-hub-control-panel-menu.png new file mode 100644 index 00000000..d59b9496 Binary files /dev/null and b/openshift-ai/data-science-project/images/juyter-hub-control-panel-menu.png differ diff --git a/openshift-ai/data-science-project/images/model-deployed-successful.png b/openshift-ai/data-science-project/images/model-deployed-successful.png new file mode 100644 index 00000000..97674d80 Binary files /dev/null and b/openshift-ai/data-science-project/images/model-deployed-successful.png differ diff --git a/openshift-ai/data-science-project/images/model-serving-deploy-model-option.png b/openshift-ai/data-science-project/images/model-serving-deploy-model-option.png new file mode 100644 index 00000000..0a6ffea0 Binary files /dev/null and b/openshift-ai/data-science-project/images/model-serving-deploy-model-option.png differ diff --git a/openshift-ai/data-science-project/images/model-test-object-detection.png b/openshift-ai/data-science-project/images/model-test-object-detection.png new file mode 100644 index 00000000..f61e2e3c Binary files /dev/null and b/openshift-ai/data-science-project/images/model-test-object-detection.png differ diff --git a/openshift-ai/data-science-project/images/nerc-mlops-git-repo.png b/openshift-ai/data-science-project/images/nerc-mlops-git-repo.png new file mode 100644 index 00000000..5bf97ea5 Binary files /dev/null and b/openshift-ai/data-science-project/images/nerc-mlops-git-repo.png differ diff --git a/openshift-ai/data-science-project/images/object-detection-via-phone.jpg b/openshift-ai/data-science-project/images/object-detection-via-phone.jpg new file mode 100644 index 00000000..e23673b0 Binary files /dev/null and b/openshift-ai/data-science-project/images/object-detection-via-phone.jpg differ diff --git a/openshift-ai/data-science-project/images/open-tensorflow-jupyter-lab.png b/openshift-ai/data-science-project/images/open-tensorflow-jupyter-lab.png new file mode 100644 index 00000000..6b68127c Binary files /dev/null and b/openshift-ai/data-science-project/images/open-tensorflow-jupyter-lab.png differ diff --git a/openshift-ai/data-science-project/images/openstack-bucket-storing-model-file.png b/openshift-ai/data-science-project/images/openstack-bucket-storing-model-file.png new file mode 100644 index 00000000..8f5f373b Binary files /dev/null and b/openshift-ai/data-science-project/images/openstack-bucket-storing-model-file.png differ diff --git a/openshift-ai/data-science-project/images/pre_post_processor_deployment-yaml-content.png b/openshift-ai/data-science-project/images/pre_post_processor_deployment-yaml-content.png new file mode 100644 index 00000000..a0afa8c6 Binary files /dev/null and 
b/openshift-ai/data-science-project/images/pre_post_processor_deployment-yaml-content.png differ diff --git a/openshift-ai/data-science-project/images/project-verify-yaml-editor.png b/openshift-ai/data-science-project/images/project-verify-yaml-editor.png new file mode 100644 index 00000000..82e60e6c Binary files /dev/null and b/openshift-ai/data-science-project/images/project-verify-yaml-editor.png differ diff --git a/openshift-ai/data-science-project/images/rhoai-git-cloned-repo.png b/openshift-ai/data-science-project/images/rhoai-git-cloned-repo.png new file mode 100644 index 00000000..ff812e2c Binary files /dev/null and b/openshift-ai/data-science-project/images/rhoai-git-cloned-repo.png differ diff --git a/openshift-ai/data-science-project/images/rhoai-jupyterlab-login.png b/openshift-ai/data-science-project/images/rhoai-jupyterlab-login.png new file mode 100644 index 00000000..47393117 Binary files /dev/null and b/openshift-ai/data-science-project/images/rhoai-jupyterlab-login.png differ diff --git a/openshift-ai/data-science-project/images/running-model-server.png b/openshift-ai/data-science-project/images/running-model-server.png new file mode 100644 index 00000000..2f2134c9 Binary files /dev/null and b/openshift-ai/data-science-project/images/running-model-server.png differ diff --git a/openshift-ai/data-science-project/images/switch-camera-view.png b/openshift-ai/data-science-project/images/switch-camera-view.png new file mode 100644 index 00000000..12b02268 Binary files /dev/null and b/openshift-ai/data-science-project/images/switch-camera-view.png differ diff --git a/openshift-ai/data-science-project/images/tensor-flow-workbench.png b/openshift-ai/data-science-project/images/tensor-flow-workbench.png new file mode 100644 index 00000000..122022ba Binary files /dev/null and b/openshift-ai/data-science-project/images/tensor-flow-workbench.png differ diff --git a/openshift-ai/data-science-project/images/workbench-cluster-storage.png b/openshift-ai/data-science-project/images/workbench-cluster-storage.png new file mode 100644 index 00000000..13399242 Binary files /dev/null and b/openshift-ai/data-science-project/images/workbench-cluster-storage.png differ diff --git a/openshift-ai/data-science-project/images/workbench-error-status.png b/openshift-ai/data-science-project/images/workbench-error-status.png new file mode 100644 index 00000000..59e04c66 Binary files /dev/null and b/openshift-ai/data-science-project/images/workbench-error-status.png differ diff --git a/openshift-ai/data-science-project/images/workbench-information.png b/openshift-ai/data-science-project/images/workbench-information.png new file mode 100644 index 00000000..578ff78f Binary files /dev/null and b/openshift-ai/data-science-project/images/workbench-information.png differ diff --git a/openshift-ai/data-science-project/images/yaml-import-new-content.png b/openshift-ai/data-science-project/images/yaml-import-new-content.png new file mode 100644 index 00000000..f3e4a58e Binary files /dev/null and b/openshift-ai/data-science-project/images/yaml-import-new-content.png differ diff --git a/openshift-ai/data-science-project/images/yaml-upload-plus-icon.png b/openshift-ai/data-science-project/images/yaml-upload-plus-icon.png new file mode 100644 index 00000000..498602ed Binary files /dev/null and b/openshift-ai/data-science-project/images/yaml-upload-plus-icon.png differ diff --git a/openshift-ai/data-science-project/model-serving-in-the-rhoai/index.html 
b/openshift-ai/data-science-project/model-serving-in-the-rhoai/index.html new file mode 100644 index 00000000..7a3c585f --- /dev/null +++ b/openshift-ai/data-science-project/model-serving-in-the-rhoai/index.html @@ -0,0 +1,3516 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Model Serving in the NERC RHOAI

    +

    Prerequisites:

    +

    To run a model server and deploy a model on it, you need to have:

    + +

    Create a data connection

    +

    Once we have our workbench and cluster storage set up, we can add data connections. +Click the "Add data connection" button to open the data connection configuration +window as shown below:

    +

    Add Data Connection

    +

    Data connections are configurations for remote data location. Within this window, +enter the information about the S3-compatible object bucket where the model is stored. +Enter the following information:

    +
    •   Name: The name you want to give to the data connection.

    •   Access Key: The access key to the bucket.

    •   Secret Key: The secret for the access key.

    •   Endpoint: The endpoint to connect to the storage.

    •   Region: The region to connect to the storage.

    •   Bucket: The name of the bucket.

    NOTE: However, you are not required to use the S3 service from Amazon Web +Services (AWS). Any S3-compatible storage i.e. NERC OpenStack Container (Ceph), +Minio, AWS S3, etc. is supported.

    +

    Configure and Add A New Data Connection

    +

    For our example project, let's name it "ocp-nerc-container-connect", we'll select +the "us-east-1" as Region, choose "ocp-container" as Bucket.

    +

    The API Access EC2 credentials can be downloaded and accessed from the NERC OpenStack +Project as described here. +This credential file contains information regarding Access Key, +Secret Key, and Endpoint.

    +

    Very Important Note: If you are using an AWS S3 bucket, the Endpoint +needs to be set as https://s3.amazonaws.com/. However, for the NERC Object Storage +container, which is based on the Ceph backend, the Endpoint needs to be set +as https://stack.nerc.mghpcc.org:13808, and the Region should be set as us-east-1.

    +
    +

    How to store & connect to the model file in the object storage bucket?

    +

    The model file(s) should have been saved into an S3-compatible object storage +bucket (NERC OpenStack Container [Ceph], Minio, or AWS S3) for which you must +have the connection information, such as location and credentials. You can +create a bucket on your active project at the NERC OpenStack Project by following +the instructions in this guide.

    +

    The API Access EC2 credentials can be downloaded and accessed from the NERC +OpenStack Project as described here.

    +

    For our example project, we are creating a bucket named "ocp-container" in +one of our NERC OpenStack project's object storage. Inside this bucket, we +have added a folder or directory called "coolstore-model", where we will +store the model file in ONNX format, as shown here:

    +

    NERC OpenStack Container Storing Model File

    +

    ONNX: An open standard for machine learning interoperability.

    +
    +
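    As a quick sanity check before filling in the data connection form, you can
    verify the EC2 credentials and endpoint (and upload the ONNX model file) with
    any S3-compatible client. Below is a minimal, hedged sketch using boto3; the
    bucket name, folder, endpoint, and region match the example values above, while
    the key values and the model file name are placeholders you must replace from
    your downloaded EC2 credentials file and your own model:

      import boto3

      s3 = boto3.client(
          "s3",
          endpoint_url="https://stack.nerc.mghpcc.org:13808",  # NERC Object Storage (Ceph)
          region_name="us-east-1",
          aws_access_key_id="<EC2_ACCESS_KEY>",      # from your EC2 credentials file
          aws_secret_access_key="<EC2_SECRET_KEY>",  # from your EC2 credentials file
      )

      # Upload the ONNX model into the "coolstore-model" folder of the example bucket.
      # Both the local path and the object name are placeholders.
      s3.upload_file("<path/to/model.onnx>", "ocp-container", "coolstore-model/model.onnx")

      # Confirm the object is visible under the folder prefix.
      for obj in s3.list_objects_v2(Bucket="ocp-container", Prefix="coolstore-model/")["Contents"]:
          print(obj["Key"], obj["Size"])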

    After completing the required fields, click Add data connection. You should +now see the data connection displayed in the main project window as shown below:

    +

    New Data Connection Info

    +

    Create a model server

    +

    After creating the data connection, you can add your model server. Select +Add server as shown below:

    +

    Add A Model Server

    +

    In the pop-up window that appears, depicted as shown below, you can specify the +following details:

    +

    Configure A New Model Server

    +
    •   Model server name

    •   Serving runtime: either "OpenVINO Model Server" or "OpenVINO Model Server
        (Supports GPUs)".

    •   Number of model server replicas: This is the number of instances of the
        model server engine that you want to deploy. You can scale it up as needed,
        depending on the number of requests you will receive.

    •   Model server size: This is the amount of resources, CPU, and RAM that will
        be allocated to your server. Select the appropriate configuration for the
        size and complexity of your model.

    •   Model route: Check this box if you want the serving endpoint (the model
        serving API) to be accessible outside of the OpenShift cluster through an
        external route.

    •   Token authorization: Check this box if you want to secure or restrict access
        to the model by forcing requests to provide an authorization token.

    After adding and selecting options within the Add model server pop-up +window, click Add to create the model server.

    +

    For our example project, let's name the Model server as "coolstore-modelserver". +We'll select the OpenVINO Model Server in Serving runtime. Leave replicas +to "1", size to "Small". At this point, don't check +Make model available via an external route as shown below:

    +

    Running Model Server

    +
    +

    NERC RHOAI supported Model Server Runtimes

    +

    NERC RHOAI integrates Intel's OpenVINO Model Server runtime, a high-performance
    system for serving models, optimized for deployment on Intel architectures.
    NERC RHOAI also offers an OpenVINO Model Server serving runtime that supports GPUs.

    +
    +

    Once you've configured your model server, you can deploy your model by clicking +on "Deploy model" located on the right side of the running model server. Alternatively, +you can also do this from the main RHOAI dashboard's "Model Serving" menu item as +shown below:

    +

    Model Serving Deploy Model Option

    +

    If you wish to view details for the model server, click on the link corresponding
    to the Model Server's Name. You can also modify a model server configuration by
    clicking on the three dots on the right side and selecting Edit model server.
    This will bring back the same configuration page we used earlier. This menu also
    has an option to delete the model server.

    +

    Deploy the model

    +

    To add a model to be served, click the Deploy model button. Doing so will +initiate the Deploy model pop-up window as shown below:

    +

    Configure and Deploy Model Info

    +

    Enter the following information for your new model:

    +
    •   Model Name: The name you want to give to your model (e.g., "coolstore").

    •   Model framework (name - version): The framework used to save this model.
        At this time, OpenVINO IR, ONNX, and TensorFlow are supported.

    •   Model location: Select the data connection that you created to store the
        model. Alternatively, you can create another data connection directly from
        this menu.

    •   Folder path: If your model is not located at the root of the bucket of your
        data connection, you must enter the path to the folder it is in.

    For our example project, let's name the Model as "coolstore", select +"onnx - 1" for the framework, select the Data location you created before for the +Model location, and enter "coolstore-model" as the folder path for the model +(without leading /).

    +

    When you are ready to deploy your model, select the Deploy button.

    +

    When you return to the Deployed models page, you will see your newly deployed model. +You should click on the 1 on the Deployed models tab to see details. When the +model has finished deploying, the status icon will be a green checkmark indicating +the model deployment is complete as shown below:

    +

    Model Deployed Successfully

    +

    The model is now accessible through the API endpoint of the model server. The +information about the endpoint is different, depending on how you configured the +model server.

    +

    If you did not expose the model externally through a route, click on the Internal +Service link in the Inference endpoint section. A popup will display the address +for the gRPC and the REST URLs for the inference endpoints as shown below:

    +

    Successfully Deployed Model Inference endpoints Info

    +

    Notes:

    +
    •   The REST URL displayed is only the base address of the endpoint. You must
        append /v2/models/name-of-your-model/infer to it to have the full address.
        Example: http://modelmesh-serving.model-serving:8008/v2/models/coolstore/infer

    •   The full documentation of the API (REST and gRPC) is available here.

    •   The gRPC proto file for the Model Server is available here.

    •   If you have exposed the model through an external route, the Inference endpoint
        displays the full URL that you can copy.
    +

    Important Note

    +

    Even when you expose the model through an external route, the internal ones +are still available. They use this format:

    +
    •   REST: http://modelmesh-serving.name-of-your-project:8008/v2/models/name-of-your-model/infer

    •   gRPC: grpc://modelmesh-serving.name-of-your-project:8033. Please make
        note of the gRPC URL value; we will need it later.
    +

    Your model is now deployed and ready to use!

    +
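    To sanity-check the deployment from a notebook or terminal inside the cluster,
    you can query the model server's v2 REST endpoint. The sketch below is only an
    illustration: the base URL follows the internal REST format shown in the note
    above (replace name-of-your-project with your project namespace), and the input
    tensor name, shape, and dummy data are placeholders that you should take from
    your model's metadata instead.

      import requests

      base_url = "http://modelmesh-serving.name-of-your-project:8008"  # placeholder namespace

      # Ask the server for the model's input/output metadata (names, shapes, datatypes).
      metadata = requests.get(f"{base_url}/v2/models/coolstore").json()
      print(metadata)

      # Build an inference request following the v2 protocol. Use the tensor name,
      # shape, and datatype reported in the metadata above for your model.
      payload = {
          "inputs": [{
              "name": "<input-tensor-name>",       # placeholder
              "shape": [1, 3, 416, 416],           # placeholder
              "datatype": "FP32",
              "data": [0.0] * (3 * 416 * 416),     # placeholder dummy image data
          }]
      }
      response = requests.post(f"{base_url}/v2/models/coolstore/infer", json=payload)
      print(response.json())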
diff --git a/openshift-ai/data-science-project/testing-model-in-the-rhoai/index.html b/openshift-ai/data-science-project/testing-model-in-the-rhoai/index.html

    Test the Model in the NERC RHOAI

    +

    Now that the model server is ready to receive requests, +we can test it.

    +
    +

    How to get access to the NERC RHOAI Dashboard from JupyterLab Environment?

    +

    If you had closed the NERC RHOAI dashboard, you can access it from your currently +opened JupyterLab IDE by clicking on File -> Hub Control Panel as shown below:

    +

    Jupyter Hub Control Panel Menu

    +
    +
    •   In your project in JupyterLab, open the notebook 03_remote_inference.ipynb and
        follow the instructions to see how the model can be queried.

    •   Update the grpc_url, as noted before, with the gRPC URL value from the deployed
        model on the NERC RHOAI Model server (see the sketch after this list).

        Change grpc URL Value

    •   Once you've completed the notebook's instructions, the object detection model
        can isolate and recognize T-shirts, bottles, and hats in pictures, as shown
        below:

        Model Test to Detect Objects In An Image
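    The value you paste in follows the internal gRPC endpoint format shown on the
    previous page. The variable name below is illustrative; use whatever name
    03_remote_inference.ipynb actually defines:

      # Internal gRPC endpoint of the ModelMesh serving service for your project.
      grpc_url = "grpc://modelmesh-serving.name-of-your-project:8033"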

    Building and deploying an intelligent application

    +

    The application we are going to deploy is a simple example of how you can add an +intelligent feature powered by AI/ML to an application. It is a webapp that you +can use on your phone to discover coupons on various items you can see in a store, +in an augmented reality way.

    +

    Architecture

    +

    The different components of this intelligent application are:

    +

    The Frontend: a React application, typically running on the browser of your +phone,

    +

    The Backend: a NodeJS server, serving the application and relaying API calls,

    +

    The Pre-Post Processing Service: a Python FastAPI service, doing the image +pre-processing, calling the model server API, and doing the post-processing before +sending the results back.

    +

    The Model Server: the RHOAI component serving the model as an API to do +the inference.

    +

    Application Workflow Steps

    +
    1. Pass the image to the pre-post processing service.

    2. Pre-process the image and call the model server.

    3. Send back the inference result.

    4. Post-process the inference and send back the result.

    5. Pass the result to the frontend for display.

    Deploy the application

    +

    The deployment of the application is really easy, as we already created for you +the necessary YAML files. They are included in the Git project we used for this +example project. You can find them in the deployment folder inside your JupyterLab +environment, or directly here.

    +

    To deploy the Pre-Post Processing Service and the Application:

    +
    •   From your NERC's OpenShift Web Console, navigate to your project corresponding
        to the NERC RHOAI Data Science Project and select the "Import YAML" button,
        represented by the "+" icon in the top navigation bar as shown below:

        YAML Add Icon

    •   Verify that you selected the correct project.

        Correct Project Selected for YAML Editor

    •   Copy/Paste the content of the file pre_post_processor_deployment.yaml inside
        the opened YAML editor. If you have named your model coolstore as instructed,
        you're good to go. If not, modify the value on line # 35 with the name you
        set. You can then click the Create button as shown below:

        YAML Editor Add Pre-Post Processing Service Content

    •   Once the Resource is successfully created, you will see the following screen:

        Resources successfully created Importing More YAML

    •   Click on "Import more YAML" and Copy/Paste the content of the file
        intelligent_application_deployment.yaml inside the opened YAML editor.
        Nothing to change here; you can then click the Create button as shown below:

        YAML Editor Pre-Post Processing Service Content

    •   If both deployments are successful, you will be able to see both of them
        grouped under "intelligent-application" on the Topology View menu, as shown
        below:

        Intelligent Application Under Topology

    Use the application

    +

    The application is relatively straightforward to use. Click on the URL for the +Route ia-frontend that was created.

    +

    You first have to allow it to use your camera. This is the interface you get:

    +

    Intelligent Application Frontend Interface

    +

    You have:

    +
    •   The current view of your camera.

    •   A button to take a picture as shown here:

        Capture Camera Image

    •   A button to switch from front to rear camera if you are using a phone:

        Switch Camera View

    •   A QR code that you can use to quickly open the application on a phone
        (much easier than typing the URL!):

        QR code

    When you take a picture, it will be sent to the inference service, and you will +see which items have been detected, and if there is a promotion available as shown +below:

    +

    Object Detection Via Phone Camera

    +

    Tweak the application

    +

    There are two parameters you can change on this application:

    +
    •   On the ia-frontend Deployment, you can modify the DISPLAY_BOX environment
        variable from true to false. It will hide the bounding box and the inference
        score, so that you get only the coupon flying over the item.

    •   On the ia-inference Deployment, the one used for pre-post processing, you can
        modify the COUPON_VALUE environment variable. The format is simply an Array
        with the value of the coupon for the 3 classes: bottle, hat, shirt. As you
        see, these values could be adjusted in real time, and this could even be
        based on another ML model!
diff --git a/openshift-ai/data-science-project/using-projects-the-rhoai/index.html b/openshift-ai/data-science-project/using-projects-the-rhoai/index.html

    Using Your Data Science Project (DSP)

    +

    You can access your current projects by navigating to the "Data Science Projects" +menu item on the left-hand side, as highlighted in the figure below:

    +

    Data Science Projects

    +

    If you have any existing projects, they will be displayed here. These projects +correspond to your NERC-OCP (OpenShift) resource allocations.

    +
    +

Why do we need a Data Science Project (DSP)?

    +

To implement a data science workflow, you must use a data science project. Projects allow you and your team to organize and collaborate on resources within separate namespaces. From a project you can create multiple workbenches, each with its own Jupyter notebook environment and its own data connections and cluster storage. In addition, the workbenches can share models and data with pipelines and model servers.

    +
    +
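Because each data science project corresponds to an OpenShift project (namespace), you can cross-check the same list from the oc CLI; a minimal sketch, assuming the CLI is already configured for NERC OpenShift:

# List the projects (namespaces) your account can access
oc get projects

# Switch to the one that matches your resource allocation
oc project <your-project-namespace>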

    Selecting your data science project

    +

    Here, you can click on specific projects corresponding to the appropriate allocation +where you want to work. This brings you to your selected data science project's +details page, as shown below:

    +

    Data Science Project's Details

    +

    Within the data science project, you can add the following configuration options:

    +
      +
    • +

      Workbenches: Development environments within your project where you can access +notebooks and generate models.

      +
    • +
    • +

      Cluster storage: Storage for your project in your OpenShift cluster.

      +
    • +
    • +

      Data connections: A list of data sources that your project uses.

      +
    • +
    • +

      Pipelines: A list of created and configured pipeline servers.

      +
    • +
    • +

      Models and model servers: A list of models and model servers that your project +uses.

      +
    • +
    +

    As you can see in the project's details figure, our selected data science project +currently has no workbenches, storage, data connections, pipelines, or model servers.

    +

    Populate the data science project with a Workbench

    +

    Add a workbench by clicking the Create workbench button as shown below:

    +

    Create Workbench

    +
    +

    What are Workbenches?

    +

    Workbenches are development environments. They can be based on JupyterLab, but +also on other types of IDEs, like VS Code or RStudio. You can create as many +workbenches as you want, and they can run concurrently.

    +
    +

    On the Create workbench page, complete the following information.

    +

    Note: Not all fields are required.

    +
      +
    • +

      Name

      +
    • +
    • +

      Description

      +
    • +
    • +

      Notebook image (Image selection)

      +
    • +
    • +

      Deployment size (Container size and Number of GPUs)

      +
    • +
    • +

      Environment variables

      +
    • +
    • +

      Cluster storage name

      +
    • +
    • +

      Cluster storage description

      +
    • +
    • +

      Persistent storage size

      +
    • +
    • +

      Data connections

      +
    • +
    +
    +

    How to specify CPUs, Memory, and GPUs for your JupyterLab workbench?

    +

    You have the option to select different container sizes to define compute +resources, including CPUs and memory. Each container size comes with pre-configured +CPU and memory resources.

    +

    Optionally, you can specify the desired Number of GPUs depending on the +nature of your data analysis and machine learning code requirements. However, +this number should not exceed the GPU quota specified by the value of the +"OpenShift Request on GPU Quota" attribute that has been approved for +this "NERC-OCP (OpenShift)" resource allocation on NERC's ColdFront, as +described here.

    +

    If you need to increase this quota value, you can request a change as +explained here.

    +
    +

    Once you have entered the information for your workbench, click Create.

    +

    Fill Workbench Information

    +

    For our example project, let's name it "Tensorflow Workbench". We'll select the +TensorFlow image, choose a Deployment size of Small, Number of GPUs +as 1 and allocate a Cluster storage space of 1GB.

    +
    +

    More About Cluster Storage

    +

    Cluster storage consists of Persistent Volume Claims (PVCs), which are +persistent storage spaces available for storing your notebooks and data. You +can create PVCs directly from here and mount them in your workbenches as +needed. It's worth noting that a default cluster storage (PVC) is automatically +created with the same name as your workbench to save your work.

    +
    +
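If you want to confirm this from the command line, a minimal sketch using the oc CLI (assuming it is configured for your NERC-OCP allocation; the namespace below is a placeholder):

# PVCs created for your workbenches, including the default one
# that shares the workbench's name
oc get pvc -n <your-project-namespace>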

    After creating the workbench, you will return to your project page. It shows the +status of the workbench as shown below:

    +

    Workbench and Cluster Storage

    +

    Notice that under the status indicator the workbench is Running. However, if any +issues arise, such as an "exceeded quota" error, a red exclamation mark will appear +under the Status indicator, as shown in the example below:

    +

    Workbench Error Status

    +

    You can hover over that icon to view details. Upon closer inspection of the error +message and the "Event log", you will receive details about the issue, enabling +you to resolve it accordingly.

    +
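The same details are available from the oc CLI if you prefer to investigate there; a minimal sketch (the pod name and namespace below are placeholders):

# Find the pod that backs your workbench
oc get pods -n <your-project-namespace>

# Inspect its events, including any quota-related failures
oc describe pod <workbench-pod-name> -n <your-project-namespace>

# Or list recent events for the whole namespace
oc get events -n <your-project-namespace> --sort-by=.lastTimestamp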

    When your workbench is ready and the status changes to Running, you can select +"Open" to access your environment:

    +

    Open JupyterLab Environment

    +
    +

    How can I start or stop a Workbench?

    +

    You can use this "toggle switch" under the "Status" section to easily start/stop +this environment later on.

    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift-ai/get-started/rhoai-overview/index.html b/openshift-ai/get-started/rhoai-overview/index.html new file mode 100644 index 00000000..3e376bf1 --- /dev/null +++ b/openshift-ai/get-started/rhoai-overview/index.html @@ -0,0 +1,3372 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Red Hat OpenShift AI (RHOAI) Overview

    +

RHOAI offers a versatile and scalable MLOps solution equipped with tools for rapidly constructing, deploying, and overseeing AI-driven applications. Integrating the proven features of both Red Hat OpenShift AI and Red Hat OpenShift creates a comprehensive enterprise-grade artificial intelligence and machine learning (AI/ML) application platform, facilitating collaboration among data scientists, engineers, and app developers. This consolidated platform promotes consistency, security, and scalability, fostering seamless teamwork across disciplines and empowering teams to quickly explore, build, train, deploy, and test machine learning models, and to scale AI-enabled intelligent applications.

    +

    Formerly known as Red Hat OpenShift Data Science, OpenShift AI facilitates the +complete journey of AI/ML experiments and models. OpenShift AI enables data +acquisition and preparation, model training and fine-tuning, model serving and +model monitoring, hardware acceleration, and distributed workloads using +graphics processing unit (GPU) resources.

    +

    AI for All

    +

    Recent enhancements to Red Hat OpenShift AI include:

    +
      +
    • +

Implementation of deployment pipelines for monitoring AI/ML experiments and automating ML workflows accelerates the iteration process for data scientists and developers of intelligent applications. This integration facilitates swift iteration on machine learning projects and embeds automation into application deployment and updates.

      +
    • +
    • +

      Model serving now incorporates GPU assistance for inference tasks and custom +model serving runtimes, enhancing inference performance and streamlining the +deployment of foundational models.

      +
    • +
    • +

      With Model monitoring, organizations can oversee performance and operational +metrics through a centralized dashboard, enhancing management capabilities.

      +
    • +
    +

    Red Hat OpenShift AI ecosystem

AI/ML modeling and visualization tools: JupyterLab UI with prebuilt notebook images and common Python libraries and packages; TensorFlow; PyTorch, CUDA; and also support for custom notebook images
Data engineering: Support for different Data Engineering third party tools (optional)
Data ingestion and storage: Supports Amazon Simple Storage Service (S3) and NERC OpenStack Object Storage
GPU support: Available NVIDIA GPU Devices (with GPU operator): NVIDIA A100-SXM4-40GB and V100-PCIE-32GB
Model serving and monitoring: Model serving (KServe with user interface), model monitoring, OpenShift Source-to-Image (S2I), Red Hat OpenShift API Management (optional add-on), Intel Distribution of the OpenVINO toolkit
Data science pipelines: Data science pipelines (Kubeflow Pipelines) chain together processes like data preparation, model building, and model serving
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift-ai/index.html b/openshift-ai/index.html new file mode 100644 index 00000000..5175ef72 --- /dev/null +++ b/openshift-ai/index.html @@ -0,0 +1,3368 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Red Hat OpenShift AI (RHOAI) Tutorial Index

    +

    If you're just starting out, we recommend starting from Red Hat OpenShift AI +(RHOAI) Overview and going through the tutorial +in order.

    +

    If you just need to review a specific step, you can find the page you need in +the list below.

    +

    NERC OpenShift AI Getting Started

    + +

    NERC OpenShift AI dashboard

    + +

    Using Data Science Project in the NERC RHOAI

    + +

    Other Example Projects

    + +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift-ai/logging-in/access-the-rhoai-dashboard/index.html b/openshift-ai/logging-in/access-the-rhoai-dashboard/index.html new file mode 100644 index 00000000..62c7a856 --- /dev/null +++ b/openshift-ai/logging-in/access-the-rhoai-dashboard/index.html @@ -0,0 +1,3266 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Access the NERC's OpenShift AI dashboard

    +

    Access the NERC's OpenShift Web Console +via the web browser as described here.

    +

    Make sure you are selecting "mss-keycloak" as shown here:

    +

    OpenShift Login with KeyCloak

    +

    Once you successfully authenticate you should see the NERC OpenShift Web Console +as shown below:

    +

    OpenShift Web Console

    +

    After logging in to the NERC OpenShift console, access the NERC's Red Hat OpenShift +AI dashboard by clicking the application launcher icon (the black-and-white +icon that looks like a grid), located on the header as shown below:

    +

    The NERC RHOAI Link

    +

OpenShift AI uses the same credentials as OpenShift for the dashboard, notebooks, and all other components. When prompted, log in to the OpenShift AI dashboard with your OpenShift credentials by clicking the "Log In With OpenShift" button as shown below:

    +

    Log In With OpenShift

    +

    After the NERC OpenShift AI dashboard launches, it displays all currently enabled +applications.

    +

    The NERC RHOAI Dashboard

    +

    You can return to OpenShift Web Console by using the application launcher icon +(the black-and-white icon that looks like a grid), and choosing the "OpenShift +Console" as shown below:

    +

    The NERC OpenShift Web Console Link

    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift-ai/logging-in/images/CILogon_interface.png b/openshift-ai/logging-in/images/CILogon_interface.png new file mode 100644 index 00000000..fd1c073f Binary files /dev/null and b/openshift-ai/logging-in/images/CILogon_interface.png differ diff --git a/openshift-ai/logging-in/images/authorize-access-to-the-rhoai.png b/openshift-ai/logging-in/images/authorize-access-to-the-rhoai.png new file mode 100644 index 00000000..97626616 Binary files /dev/null and b/openshift-ai/logging-in/images/authorize-access-to-the-rhoai.png differ diff --git a/openshift-ai/logging-in/images/log_in_with_openshift.png b/openshift-ai/logging-in/images/log_in_with_openshift.png new file mode 100644 index 00000000..9a2b73e6 Binary files /dev/null and b/openshift-ai/logging-in/images/log_in_with_openshift.png differ diff --git a/openshift-ai/logging-in/images/openshift-web-console.png b/openshift-ai/logging-in/images/openshift-web-console.png new file mode 100644 index 00000000..3a35b98a Binary files /dev/null and b/openshift-ai/logging-in/images/openshift-web-console.png differ diff --git a/openshift-ai/logging-in/images/openshift_login.png b/openshift-ai/logging-in/images/openshift_login.png new file mode 100644 index 00000000..025ab7d0 Binary files /dev/null and b/openshift-ai/logging-in/images/openshift_login.png differ diff --git a/openshift-ai/logging-in/images/the-nerc-openshift-web-console-link.png b/openshift-ai/logging-in/images/the-nerc-openshift-web-console-link.png new file mode 100644 index 00000000..1debfb1a Binary files /dev/null and b/openshift-ai/logging-in/images/the-nerc-openshift-web-console-link.png differ diff --git a/openshift-ai/logging-in/images/the-rhoai-dashboard.png b/openshift-ai/logging-in/images/the-rhoai-dashboard.png new file mode 100644 index 00000000..418c0a37 Binary files /dev/null and b/openshift-ai/logging-in/images/the-rhoai-dashboard.png differ diff --git a/openshift-ai/logging-in/images/the-rhoai-link.png b/openshift-ai/logging-in/images/the-rhoai-link.png new file mode 100644 index 00000000..c1603588 Binary files /dev/null and b/openshift-ai/logging-in/images/the-rhoai-link.png differ diff --git a/openshift-ai/logging-in/the-rhoai-dashboard-overview/index.html b/openshift-ai/logging-in/the-rhoai-dashboard-overview/index.html new file mode 100644 index 00000000..db9645ff --- /dev/null +++ b/openshift-ai/logging-in/the-rhoai-dashboard-overview/index.html @@ -0,0 +1,3296 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    The NERC's OpenShift AI dashboard Overview

    +

In the NERC's RHOAI dashboard, you can see multiple links on the left-hand side.

    +
      +
    1. +

      Applications:

      +
        +
      • +

        Enabled: Launch your enabled applications, view documentation, or get +started with quick start instructions and tasks.

        +
      • +
      • +

        Explore: View optional applications for your RHOAI instance.

        +

        NOTE: Most of them are disabled by default on NERC RHOAI right now.

        +
      • +
      +
    2. +
    3. +

      Data Science Projects: View your existing projects. This will show different +projects corresponding to your NERC-OCP (OpenShift) resource allocations. Here, +you can choose specific projects corresponding to the appropriate allocation where +you want to work. Within these projects, you can create workbenches, deploy various +development environments (such as Jupyter Notebooks, VS Code, RStudio, etc.), add +data connections, or serve models.

      +
      +

      What are Workbenches?

      +

      Workbenches are development environments. They can be based on JupyterLab, +but also on other types of IDEs, like VS Code or RStudio. You can create +as many workbenches as you want, and they can run concurrently.

      +
      +
    4. +
    5. +

      Data Science Pipelines:

      +
        +
      • +

        Pipelines: Manage your pipelines for a specific project selected from the +dropdown menu.

        +
      • +
      • +

        Runs: Manage and view your runs for a specific project selected from the +dropdown menu.

        +
      • +
      +
    6. +
    7. +

      Model Serving: Manage and view the health and performance of your deployed +models across different projects corresponding to your NERC-OCP (OpenShift) resource +allocations. Also, you can "Deploy Model" to a specific project selected from the +dropdown menu here.

      +
    8. +
    9. +

Resources: Access learning resources, such as various tutorials and demos, that help you onboard to the RHOAI platform.

      +
    10. +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift-ai/other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling/index.html b/openshift-ai/other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling/index.html new file mode 100644 index 00000000..b6a1daaf --- /dev/null +++ b/openshift-ai/other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling/index.html @@ -0,0 +1,3612 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Configure a Jupyter notebook to use GPUs for AI/ML modeling

    +

    Prerequisites:

    +

To prepare your Jupyter notebook server for using a GPU, you need to have:

    + +

    Please ensure that you start your Jupyter notebook server with options as depicted +in the following configuration screen. This screen provides you with the opportunity +to select a notebook image and configure its options, including the number of GPUs.

    +

    PyTorch Workbench Information

    +

    For our example project, let's name it "PyTorch Workbench". We'll select the +PyTorch image, choose a Deployment size of Small, Number of GPUs +as 1 and allocate a Cluster storage space of 1GB.

    +

    If this procedure is successful, you have started your Jupyter notebook server. +When your workbench is ready, the status will change to Running and you can select +"Open" to go to your environment:

    +

    Open JupyterLab Environment

    +

    Once you successfully authenticate you should see the NERC RHOAI JupyterLab Web +Interface as shown below:

    +

    RHOAI JupyterLab Web Interface

    +

    It's pretty empty right now, though. On the left side of the navigation pane, +locate the Name explorer panel. This panel is where you can create and manage +your project directories.

    +

    Clone a GitHub Repository

    +

    You can clone a Git repository in JupyterLab through the left-hand toolbar or +the Git menu option in the main menu as shown below:

    +

    JupyterLab Toolbar and Menu

    +

    Let's clone a repository using the left-hand toolbar. Click on the Git icon, +shown in below:

    +

    JupyterLab Git

    +

    Then click on Clone a Repository as shown below:

    +

    JupyterLab Git Actions

    +

    Enter the git repository URL, which points to the end-to-end ML workflows demo +project i.e. https://github.com/rh-aiservices-bu/getting-started-with-gpus.

    +

    Then click Clone button as shown below:

    +

    Getting Started With GPUs Example Project

    +

    Cloning takes a few seconds, after which you can double-click and navigate to the +newly-created folder i.e. getting-started-with-gpus that contains your cloned +Git repository.

    +

    You will be able to find the newly-created folder named getting-started-with-gpus +based on the Git repository name, as shown below:

    +

    Git Clone Repo Folder on NERC RHOAI

    +
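If you prefer a terminal over the Git extension, the same clone can be done from a JupyterLab terminal (File > New > Terminal); a minimal sketch:

git clone https://github.com/rh-aiservices-bu/getting-started-with-gpus.git
cd getting-started-with-gpus
ls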

    Exploring the getting-started-with-gpus repository contents

    +

After you've cloned your repository, the getting-started-with-gpus repository contents appear in a directory under the Name pane. The directory contains several notebooks as .ipynb files, along with a standard license and README file as shown below:

    +

    Content of The Repository

    +

    Double-click the torch-use-gpu.ipynb file to open this notebook.

    +

    This notebook handles the following tasks:

    +
      +
    1. +

      Importing torch libraries (utilities).

      +
    2. +
    3. +

      Listing available GPUs.

      +
    4. +
    5. +

      Checking that GPUs are enabled.

      +
    6. +
    7. +

Assigning a GPU device and retrieving the GPU name.

      +
    8. +
    9. +

      Loading vectors, matrices, and data onto a GPU.

      +
    10. +
    11. +

      Loading a neural network model onto a GPU.

      +
    12. +
    13. +

      Training the neural network model.

      +
    14. +
    +

    Start by importing the various torch and torchvision utilities:

    +
    import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset
import torch.optim as optim
import torchvision
from torchvision import datasets
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
from tqdm import tqdm
    +
    +

    Once the utilities are loaded, determine how many GPUs are available:

    +
    torch.cuda.is_available() # Do we have a GPU? Should return True.
    +
    +
    torch.cuda.device_count()  # How many GPUs do we have access to?
    +
    +

    When you have confirmed that a GPU device is available for use, assign a GPU device +and retrieve the GPU name:

    +
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)  # Check which device we got
    +
    +
    torch.cuda.get_device_name(0)
    +
    +

    Once you have assigned the first GPU device to your device variable, you are ready +to work with the GPU. Let's start working with the GPU by loading vectors, matrices, +and data:

    +
    X_train = torch.IntTensor([0, 30, 50, 75, 70])  # Initialize a Tensor of Integers with no device specified
print(X_train.is_cuda, ",", X_train.device)  # Check which device Tensor is created on
    +
    +
    # Move the Tensor to the device we want to use
X_train = X_train.cuda()
# Alternative method: specify the device using the variable
# X_train = X_train.to(device)
# Confirm that the Tensor is on the GPU now
print(X_train.is_cuda, ",", X_train.device)
    +
    +
    # Alternative method: Initialize the Tensor directly on a specific device.
X_test = torch.cuda.IntTensor([30, 40, 50], device=device)
print(X_test.is_cuda, ",", X_test.device)
    +
    +

    After you have loaded vectors, matrices, and data onto a GPU, load a neural network +model:

    +
    # Here is a basic fully connected neural network built in Torch.
# If we want to load it / train it on our GPU, we must first put it on the GPU
# Otherwise it will remain on CPU by default.

batch_size = 100

class SimpleNet(nn.Module):
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.fc1 = nn.Linear(784, 784)
        self.fc2 = nn.Linear(784, 10)

    def forward(self, x):
        x = x.view(batch_size, -1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        output = F.softmax(x, dim=1)
        return output
    +
    +
    model = SimpleNet().to(device)  # Load the neural network model onto the GPU
    +
    +

    After the model has been loaded onto the GPU, train it on a data set. For this +example, we will use the FashionMNIST data set:

    +
    """
    Data loading, train and test set via the PyTorch dataloader.
"""
# Transform our data into Tensors to normalize the data
train_transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
        ])

test_transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,)),
        ])

# Set up a training data set
trainset = datasets.FashionMNIST('./data', train=True, download=True,
                  transform=train_transform)
train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=False, num_workers=2)

# Set up a test data set
testset = datasets.FashionMNIST('./data', train=False,
                  transform=test_transform)
test_loader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
                                        shuffle=False, num_workers=2)
    +
    +

    Once the FashionMNIST data set has been downloaded, you can take a look at the +dictionary and sample its content.

    +
    # A dictionary to map our class numbers to their items.
labels_map = {
    0: "T-Shirt",
    1: "Trouser",
    2: "Pullover",
    3: "Dress",
    4: "Coat",
    5: "Sandal",
    6: "Shirt",
    7: "Sneaker",
    8: "Bag",
    9: "Ankle Boot",
}

# Plotting 9 random different items from the training data set, trainset.
figure = plt.figure(figsize=(8, 8))
for i in range(1, 3 * 3 + 1):
    sample_idx = torch.randint(len(trainset), size=(1,)).item()
    img, label = trainset[sample_idx]
    figure.add_subplot(3, 3, i)
    plt.title(labels_map[label])
    plt.axis("off")
    plt.imshow(img.view(28,28), cmap="gray")
plt.show()
    +
    +

    The following figure shows a few of the data set's pictures:

    +

    Downloaded FashionMNIST Data Set

    +

    There are ten classes of fashion items (e.g. shirt, shoes, and so on). Our goal +is to identify which class each picture falls into. Now you can train the model +and determine how well it classifies the items:

    +
    def train(model, device, train_loader, optimizer, epoch):
    +    """Model training function"""
    +    model.train()
    +    print(device)
    +    for batch_idx, (data, target) in tqdm(enumerate(train_loader)):
    +        data, target = data.to(device), target.to(device)
    +        optimizer.zero_grad()
    +        output = model(data)
    +        loss = F.nll_loss(output, target)
    +        loss.backward()
    +        optimizer.step()
    +
    +
    def test(model, device, test_loader):
    +    """Model evaluating function"""
    +    model.eval()
    +    test_loss = 0
    +    correct = 0
    +    # Use the no_grad method to increase computation speed
    +    # since computing the gradient is not necessary in this step.
    +    with torch.no_grad():
    +        for data, target in test_loader:
    +            data, target = data.to(device), target.to(device)
    +            output = model(data)
    +            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
    +            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
    +            correct += pred.eq(target.view_as(pred)).sum().item()
    +
    +    test_loss /= len(test_loader.dataset)
    +
    +    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
    +        test_loss, correct, len(test_loader.dataset),
    +        100. * correct / len(test_loader.dataset)))
    +
    +
    # number of  training 'epochs'
EPOCHS = 5
# our optimization strategy used in training.
optimizer = optim.Adadelta(model.parameters(), lr=0.01)
    +
    +
    for epoch in range(1, EPOCHS + 1):
    print(f"EPOCH: {epoch}")
    train(model, device, train_loader, optimizer, epoch)
    test(model, device, test_loader)
    +
    +

    As the model is trained, you can follow along as its accuracy increases from 63 +to 72 percent. (Your accuracies might differ, because accuracy can depend on the +random initialization of weights.)

    +

    Once the model is trained, save it locally:

    +
    # Saving the model's weights!
torch.save(model.state_dict(), "mnist_fashion_SimpleNet.pt")
    +
    +

    Load and run a PyTorch model

    +

    Let's now determine how our simple torch model performs using GPU resources.

    +

    In the getting-started-with-gpus directory, double click on the +torch-test-model.ipynb file (highlighted as shown below) to open the notebook.

    +

    Content of Torch Test Model Notebook

    +

    After importing the torch and torchvision utilities, assign the first GPU to +your device variable. Prepare to import your trained model, then place the model +on your GPU and load in its trained weights:

    +
    import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
    +
    +
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)  # let's see what device we got
    +
    +
    # Getting set to import our trained model.
    +
# batch size of 1 so we can look at one image at time.
batch_size = 1


class SimpleNet(nn.Module):
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.fc1 = nn.Linear(784, 784)
        self.fc2 = nn.Linear(784, 10)

    def forward(self, x):
        x = x.view(batch_size, -1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        output = F.softmax(x, dim=1)
        return output
    +
    +
model = SimpleNet().to(device)
model.load_state_dict(torch.load("mnist_fashion_SimpleNet.pt"))
    +
    +

You are now ready to examine some data and determine how your model performs. The sample run shown below shows that the model predicted a "bag" with a confidence of about 0.9192. Despite the % sign in the output, 0.9192 is very good, because a perfect confidence would be 1.0.

    +

    Model Performance Test Result

    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift-ai/other-projects/how-access-s3-data-then-download-and-analyze-it/index.html b/openshift-ai/other-projects/how-access-s3-data-then-download-and-analyze-it/index.html new file mode 100644 index 00000000..a7db6ee0 --- /dev/null +++ b/openshift-ai/other-projects/how-access-s3-data-then-download-and-analyze-it/index.html @@ -0,0 +1,3447 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    How to access, download, and analyze data for S3 usage

    +

    Prerequisites:

    +

To prepare your Jupyter notebook server, you need to have:

    + +

    Please ensure that you start your Jupyter notebook server with options as depicted +in the following configuration screen. This screen provides you with the opportunity +to select a notebook image and configure its options, including the number of GPUs.

    +

Standard Data Science Workbench Information

    +

    For our example project, let's name it "Standard Data Science Workbench". We'll +select the Standard Data Science image, choose a Deployment size of Small, +Number of GPUs as 0 and allocate a Cluster storage space of 1GB.

    +

    If this procedure is successful, you have started your Jupyter notebook server. +When your workbench is ready, the status will change to Running and you can select +"Open" to go to your environment:

    +

    Open JupyterLab Environment

    +

    Once you successfully authenticate you should see the NERC RHOAI JupyterLab Web +Interface as shown below:

    +

    RHOAI JupyterLab Web Interface

    +

    It's pretty empty right now, though. On the left side of the navigation pane, +locate the Name explorer panel. This panel is where you can create and manage +your project directories.

    +

    Clone a GitHub Repository

    +

    You can clone a Git repository in JupyterLab through the left-hand toolbar or +the Git menu option in the main menu as shown below:

    +

    JupyterLab Toolbar and Menu

    +

    Let's clone a repository using the left-hand toolbar. Click on the Git icon, +shown in below:

    +

    JupyterLab Git

    +

    Then click on Clone a Repository as shown below:

    +

    JupyterLab Git Actions

    +

    Enter the git repository URL, which points to the end-to-end ML workflows demo +project i.e. https://github.com/rh-aiservices-bu/access-s3-data.

    +

    Then click Clone button as shown below:

    +

    Access, Download and Analysis Example Project

    +

    Cloning takes a few seconds, after which you can double-click and navigate to the +newly-created folder i.e. access-s3-data that contains your cloned Git repository.

    +

    You will be able to find the newly-created folder named access-s3-data based on +the Git repository name, as shown below:

    +

    Git Clone Repo Folder on NERC RHOAI

    +

    Access and download S3 data

    +

    In the Name menu, double-click the downloadData.ipynb notebook in the file +explorer on the left side to launch it. This action will open another tab in the +content section of the environment, on the right.

    +

    Run each cell in the notebook, using the Shift-Enter key combination, and pay +attention to the execution results. Using this notebook, we will:

    +
      +
    • +

      Make a connection to an AWS S3 storage bucket

      +
    • +
    • +

      Download a CSV file into the "datasets" folder

      +
    • +
    • +

Rename the downloaded CSV file to "newtruckdata.csv" (a sketch of these steps follows this list)

      +
    • +
    +
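As a rough idea of what those cells do, here is a minimal sketch using boto3; the bucket name and object key are placeholders, not the values used in downloadData.ipynb, and the sketch assumes the demo data is publicly readable:

import os

import boto3
from botocore import UNSIGNED
from botocore.client import Config

# Anonymous (unsigned) connection to a public S3 bucket
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Download the CSV file into the "datasets" folder
os.makedirs("datasets", exist_ok=True)
s3.download_file("example-public-bucket", "truckdata.csv", "datasets/truckdata.csv")

# Rename the downloaded file to the name the next notebook expects
os.rename("datasets/truckdata.csv", "datasets/newtruckdata.csv")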

    View your new CSV file

    +

    Inside the "datasets" directory, double-click the "newtruckdata.csv" file. File +contents should appear as shown below:

    +

    New Truck Data CSV File Content

    +

    The file contains the data you will analyze and perform some analytics.

    +

    Getting ready to run analysis on your new CSV file

    +

    Since you now have data, you can open the next Jupyter notebook, simpleCalc.ipynb, +and perform the following operations:

    +
      +
    • +

      Create a dataframe.

      +
    • +
    • +

      Perform simple total and average calculations.

      +
    • +
    • +

Print the calculation results (see the sketch after this list).

      +
    • +
    +
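A minimal sketch of those operations with pandas; the column name "mileage" is an assumption about newtruckdata.csv, so adjust it to the actual header:

import pandas as pd

# Create a dataframe from the downloaded CSV file
df = pd.read_csv("datasets/newtruckdata.csv")

# Simple total and average calculations
total_mileage = df["mileage"].sum()
vehicle_count = len(df)
average_mileage = df["mileage"].mean()

# Print the calculation results
print("Total mileage:", total_mileage)
print("Number of vehicles:", vehicle_count)
print("Average mileage:", average_mileage)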

    Analyzing your S3 data access run results

    +

    Double-click the simpleCalc.ipynb Python file. When you execute the cells in the +notebook, results appear like the ones shown below:

    +

    Simple Calculation Results

    +

    The cells in the above figure show the mileage of four vehicles. In the next cell, +we calculate total mileage, total rows (number of vehicles) and the average mileage +for all vehicles. Execute the "Perform Calculations" cell to see basic calculations +performed on the data as shown below:

    +

    Perform Calculation Results

    +

    Calculations show the total mileage as 742, for four vehicles, and an average +mileage of 185.5.

    +

Success! You have analyzed your run results using the NERC RHOAI.

    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift-ai/other-projects/images/access-download-and-analysis-s3-data-git-repo.png b/openshift-ai/other-projects/images/access-download-and-analysis-s3-data-git-repo.png new file mode 100644 index 00000000..8d4b3f36 Binary files /dev/null and b/openshift-ai/other-projects/images/access-download-and-analysis-s3-data-git-repo.png differ diff --git a/openshift-ai/other-projects/images/authorize-access-to-the-rhoai.png b/openshift-ai/other-projects/images/authorize-access-to-the-rhoai.png new file mode 100644 index 00000000..97626616 Binary files /dev/null and b/openshift-ai/other-projects/images/authorize-access-to-the-rhoai.png differ diff --git a/openshift-ai/other-projects/images/downloaded-FashionMNIST-data-set.png b/openshift-ai/other-projects/images/downloaded-FashionMNIST-data-set.png new file mode 100644 index 00000000..5c1aef5c Binary files /dev/null and b/openshift-ai/other-projects/images/downloaded-FashionMNIST-data-set.png differ diff --git a/openshift-ai/other-projects/images/getting-started-with-gpus-git-repo.png b/openshift-ai/other-projects/images/getting-started-with-gpus-git-repo.png new file mode 100644 index 00000000..570a15d3 Binary files /dev/null and b/openshift-ai/other-projects/images/getting-started-with-gpus-git-repo.png differ diff --git a/openshift-ai/other-projects/images/git-repo-content.png b/openshift-ai/other-projects/images/git-repo-content.png new file mode 100644 index 00000000..383741fd Binary files /dev/null and b/openshift-ai/other-projects/images/git-repo-content.png differ diff --git a/openshift-ai/other-projects/images/gpu_rhoai.png b/openshift-ai/other-projects/images/gpu_rhoai.png new file mode 100644 index 00000000..103af6fc Binary files /dev/null and b/openshift-ai/other-projects/images/gpu_rhoai.png differ diff --git a/openshift-ai/other-projects/images/jupyterlab-toolbar-main-menu.jpg b/openshift-ai/other-projects/images/jupyterlab-toolbar-main-menu.jpg new file mode 100644 index 00000000..f09dce1c Binary files /dev/null and b/openshift-ai/other-projects/images/jupyterlab-toolbar-main-menu.jpg differ diff --git a/openshift-ai/other-projects/images/jupyterlab_git.png b/openshift-ai/other-projects/images/jupyterlab_git.png new file mode 100644 index 00000000..e1874ba0 Binary files /dev/null and b/openshift-ai/other-projects/images/jupyterlab_git.png differ diff --git a/openshift-ai/other-projects/images/jupyterlab_git_actions.png b/openshift-ai/other-projects/images/jupyterlab_git_actions.png new file mode 100644 index 00000000..dc958348 Binary files /dev/null and b/openshift-ai/other-projects/images/jupyterlab_git_actions.png differ diff --git a/openshift-ai/other-projects/images/jupyterlab_web_interface.png b/openshift-ai/other-projects/images/jupyterlab_web_interface.png new file mode 100644 index 00000000..1c5f0a87 Binary files /dev/null and b/openshift-ai/other-projects/images/jupyterlab_web_interface.png differ diff --git a/openshift-ai/other-projects/images/model-performance-result.png b/openshift-ai/other-projects/images/model-performance-result.png new file mode 100644 index 00000000..a7ba2aae Binary files /dev/null and b/openshift-ai/other-projects/images/model-performance-result.png differ diff --git a/openshift-ai/other-projects/images/newtruckdata.jpg b/openshift-ai/other-projects/images/newtruckdata.jpg new file mode 100644 index 00000000..0ba24289 Binary files /dev/null and b/openshift-ai/other-projects/images/newtruckdata.jpg differ diff --git 
a/openshift-ai/other-projects/images/open-pytorch-jupyter-lab.png b/openshift-ai/other-projects/images/open-pytorch-jupyter-lab.png new file mode 100644 index 00000000..fd5ea21f Binary files /dev/null and b/openshift-ai/other-projects/images/open-pytorch-jupyter-lab.png differ diff --git a/openshift-ai/other-projects/images/open-standard-ds-workbench-jupyter-lab.png b/openshift-ai/other-projects/images/open-standard-ds-workbench-jupyter-lab.png new file mode 100644 index 00000000..b1d1e2cc Binary files /dev/null and b/openshift-ai/other-projects/images/open-standard-ds-workbench-jupyter-lab.png differ diff --git a/openshift-ai/other-projects/images/perform_calculation_results.jpg b/openshift-ai/other-projects/images/perform_calculation_results.jpg new file mode 100644 index 00000000..9b2332cf Binary files /dev/null and b/openshift-ai/other-projects/images/perform_calculation_results.jpg differ diff --git a/openshift-ai/other-projects/images/pytorch-workbench.png b/openshift-ai/other-projects/images/pytorch-workbench.png new file mode 100644 index 00000000..7c9d0af6 Binary files /dev/null and b/openshift-ai/other-projects/images/pytorch-workbench.png differ diff --git a/openshift-ai/other-projects/images/rhoai-git-cloned-repo.jpg b/openshift-ai/other-projects/images/rhoai-git-cloned-repo.jpg new file mode 100644 index 00000000..9f668967 Binary files /dev/null and b/openshift-ai/other-projects/images/rhoai-git-cloned-repo.jpg differ diff --git a/openshift-ai/other-projects/images/rhoai-git-cloned-repo.png b/openshift-ai/other-projects/images/rhoai-git-cloned-repo.png new file mode 100644 index 00000000..5bb77c9a Binary files /dev/null and b/openshift-ai/other-projects/images/rhoai-git-cloned-repo.png differ diff --git a/openshift-ai/other-projects/images/running-simple-calculation.jpg b/openshift-ai/other-projects/images/running-simple-calculation.jpg new file mode 100644 index 00000000..308dcc20 Binary files /dev/null and b/openshift-ai/other-projects/images/running-simple-calculation.jpg differ diff --git a/openshift-ai/other-projects/images/simple-calculation-results.png b/openshift-ai/other-projects/images/simple-calculation-results.png new file mode 100644 index 00000000..ffdd49f1 Binary files /dev/null and b/openshift-ai/other-projects/images/simple-calculation-results.png differ diff --git a/openshift-ai/other-projects/images/standard-data-science-workbench.png b/openshift-ai/other-projects/images/standard-data-science-workbench.png new file mode 100644 index 00000000..fee199ba Binary files /dev/null and b/openshift-ai/other-projects/images/standard-data-science-workbench.png differ diff --git a/openshift-ai/other-projects/images/torch-test-model-notebook-content.png b/openshift-ai/other-projects/images/torch-test-model-notebook-content.png new file mode 100644 index 00000000..6891806c Binary files /dev/null and b/openshift-ai/other-projects/images/torch-test-model-notebook-content.png differ diff --git a/openshift/applications/creating-a-sample-application/index.html b/openshift/applications/creating-a-sample-application/index.html new file mode 100644 index 00000000..4d667329 --- /dev/null +++ b/openshift/applications/creating-a-sample-application/index.html @@ -0,0 +1,3568 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Creating A Sample Application

    +

    NERC's OpenShift service is a platform that provides a cloud-native environment +for developing and deploying applications.

    +

Here, we walk through the process of creating a simple web application and deploying it. This example uses the Node.js programming language, but the process with other programming languages will be similar. The instructions provided show the tasks using both the web console and the command-line tool.

    +

    Using the Developer perspective on NERC's OpenShift Web Console

    +
      +
    1. +

      Go to the NERC's OpenShift Web Console.

      +
    2. +
    3. +

      Click on the Perspective Switcher drop-down menu and select Developer.

      +
    4. +
    5. +

      In the Navigation Menu, click +Add.

      +
    6. +
    7. +

Creating applications using samples: Use existing code samples to get started with creating applications on the OpenShift Container Platform. Find the Create applications using samples section, click on "View all samples", and then select the type of application you want to create (e.g. Node.js, Python, Ruby, etc.); it will load the application from a Git Repo URL, and you can then review or modify the application Name for your application. Alternatively, if you want to create an application from your own source code located in a git repository, select Import from Git. In the Git Repo URL text box, enter your git repo url. For example: https://github.com/myuser/mypublicrepo.git. You may see a warning stating "URL is valid but cannot be reached". You can ignore this warning!

      +
    8. +
    9. +

      Click "Create" to create your application.

      +
    10. +
    11. +

      Once your application has been created, you can view the details by clicking +on the application name in the Project Overview page.

      +
    12. +
    13. +

      On the Topology View menu, click on your application, or the application +circle if you are in graphical topology view. In the details panel that displays, +scroll to the Routes section on the Resources tab and click on the link to +go to the sample application. This will open your application in a new browser +window. The link will look similar to http://<appname>-<mynamespace>.apps.shift.nerc.mghpcc.org.

      +
    14. +
    +
    +

    Example: Deploying a Python application

    +

    For a quick example on how to use the "Import from Git" option to deploy a +sample Python application, please refer to this guide.

    +
    +

    Additional resources

    +

    For more options and customization please read this.

    +

    Using the CLI (oc command) on your local terminal

    +

    Alternatively, you can create an application on the NERC's OpenShift cluster by +using the oc new-app command from the command line terminal.

    +

    i. Make sure you have the oc CLI tool installed and configured on your local +machine following these steps.

    +
    +

    Information

    +

    Some users may have access to multiple projects. Run the following command to +switch to a specific project space: oc project <your-project-namespace>.

    +
    +

    ii. To create an application, you will need to specify the language and runtime +for your application. You can do this by using the oc new-app command and specifying +a language and runtime. For example, to create a Node.js application, you can run +the following command: +oc new-app nodejs

    +

iii. If you want to create an application from an existing Git repository, you can use the --code flag to specify the URL of the repository. For example: oc new-app --code https://github.com/myuser/mypublicrepo. If you want to use a different name, you can add the --name=<newname> argument to the oc new-app command. For example: oc new-app --name=mytestapp https://github.com/myuser/mypublicrepo. The platform will try to automatically detect the programming language of the application code and select the latest version of the base language image available. If oc new-app can't find any suitable Source-To-Image (S2I) builder images based on your source code in your Git repository, or is unable to detect the programming language or detects the wrong one, you can always specify the image you want to use as part of the new-app argument, with oc new-app <image url>~<git url>. If we are using a test application based on Node.js, we could use the same command as before but add nodejs~ before the URL of the Git repository. For example: oc new-app nodejs~https://github.com/myuser/mypublicrepo.

    +
    +

    Important Note

    +

    If you are using a private remote Git repository, you can use the +--source-secret flag to specify an existing source clone secret that +will get injected into your BuildConfig to access the repository. +For example: oc new-app https://github.com/myuser/yourprivaterepo --source-secret=yoursecret.

    +
    +
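One way to create such a secret beforehand, sketched here under the assumption that the repository is accessed over HTTPS with a username and token (for SSH access you would create an ssh-auth secret instead):

oc create secret generic yoursecret \
    --from-literal=username=<your-git-username> \
    --from-literal=password=<your-git-token> \
    --type=kubernetes.io/basic-auth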

iv. Once your application has been created, you can run oc status to see if your application was successfully built and deployed. Builds and deployments can sometimes take several minutes to complete, so you may run this several times. You can view the details by running the oc get pods command. This will show you a list of all the pods running in your project, including the pod for your new application.

    +

    v. When using the oc command-line tool to create an application, a route is not +automatically set up to make your application web accessible. Run the following +to make the test application web accessible: +oc create route edge --service=mytestapp --insecure-policy=Redirect. +Once the application is deployed and the route is set up, it can be accessed at +a web URL similar to http://mytestapp-<mynamespace>.apps.shift.nerc.mghpcc.org.

    +
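Putting steps ii through v together, a consolidated sketch using the example repository from above (the project namespace is a placeholder):

oc project <your-project-namespace>

# Build and deploy the sample Node.js application
oc new-app nodejs~https://github.com/myuser/mypublicrepo --name=mytestapp

# Watch the build and deployment progress
oc status
oc get pods

# Expose the application over HTTPS
oc create route edge --service=mytestapp --insecure-policy=Redirect

# Print the hostname assigned to the route
oc get route mytestapp -o jsonpath='{.spec.host}'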

Additional resources

    +

    For more options and customization please read this.

    +

    Using the Developer Catalog on NERC's OpenShift Web Console

    +

    The Developer Catalog offers a streamlined process for deploying applications +and services supported by Operator-backed services like CI/CD, Databases, Builder +Images, and Helm Charts. It comprises a diverse array of application components, +services, event sources, and source-to-image builders ready for integration into +your project.

    +
    +

    About Quick Start Templates

    +

    By default, the templates build using a public source repository on GitHub that +contains the necessary application code. For more options and customization +please read this.

    +
    +

    Steps

    +
      +
    1. +

      Go to the NERC's OpenShift Web Console.

      +
    2. +
    3. +

      Click on the Perspective Switcher drop-down menu and select Developer.

      +
    4. +
    5. +

      In the Navigation Menu, click +Add.

      +
    6. +
    7. +

      You need to find the Developer Catalog section and then select All services +option as shown below:

      +

      Select All Services

      +
    8. +
    9. +

      Then, you will be able search any available services from the Developer Catalog +templates by searching for it on catalog and choose the desired type of service +or component that you wish to include in your project. For this example, select +Databases to list all the database services and then click MariaDB to see +the details for the service.

      +

      Search for MariaDB

      +
      +

      To Create Your Own Developer Catalog Service

      +

      You also have the option to create and integrate custom services into the +Developer Catalog using a template, as described here.

      +
      +
    10. +
    11. +

      Once selected by clicking the template, you will see Instantiate Template web +interface as shown below:

      +

      Initiate MariaDB Template

      +
    12. +
    13. +

      Clicking "Instantiate Template" will display an automatically populated +template containing details for the MariaDB service. Click "Create" to begin the +creation process and enter any custom information required.

      +
    14. +
    15. +

      View the MariaDB service in the Topology view as shown below:

      +
    16. +
    +

    MariaDB in Topology

    +

    For Additional resources

    +

    For more options and customization please read this.

    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift/applications/creating-your-own-developer-catalog-service/index.html b/openshift/applications/creating-your-own-developer-catalog-service/index.html new file mode 100644 index 00000000..3456d91b --- /dev/null +++ b/openshift/applications/creating-your-own-developer-catalog-service/index.html @@ -0,0 +1,3309 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Creating Your Own Developer Catalog Service

    +

Here, we walk through the process of creating a simple RStudio web server template that bundles all resources required to run the server, i.e. ConfigMap, Pod, Route, Service, etc., and then initiating and deploying an application from that template.

    +

    This example template file is readily accessible from the +Git Repository.

    +
    +

    More about Writing Templates

    +

    For more options and customization please read this.

    +
    +
      +
    1. +

      Find the From Local Machine section and click on Import YAML as shown +below:

      +

      Import YAML

      +
    2. +
    3. +

In the opened YAML editor, paste the contents of the template copied from the rstudio-server-template.yaml file located at the provided Git Repo.

      +

      YAML Editor

      +
    4. +
    5. +

      You need to find the Developer Catalog section and then select All services +option as shown below:

      +

      Select All Services

      +
    6. +
    7. +

      Then, you will be able to use the created Developer Catalog template by searching +for "RStudio" on catalog as shown below:

      +

      Search for RStudio Template

      +
    8. +
    9. +

      Once selected by clicking the template, you will see Instantiate Template web +interface as shown below:

      +

      Initiate Template

      +
    10. +
    11. +

Based on our template definition, we ask users to input a preferred password for the RStudio server, so the following interface will prompt you for the password that will be used to log in to the RStudio server.

      +

      Provide the RStudio Password

      +
    12. +
    13. +

Once successfully initiated, you can either open the application URL using the Open URL icon as shown below, or you can navigate to the Routes section and click on the Location path as shown below:

      +

      How to get the RStudio Application URL

      +
    14. +
    15. +

      To get the Username to be used for login on RStudio server, you need to click +on running pod i.e. rstudio-server as shown below:

      +

      Detail Information for RStudio Pod

      +
    16. +
    17. +

Then select the YAML section to find the value of the runAsUser attribute, which is used as the Username when signing in to the RStudio server, as shown below:

      +

      Username for RStudio Server from Pod runAsUser

      +
    18. +
    19. +

      Finally, you will be able to see the RStudio web interface!

      +
    20. +
    +
    +

    Modifying uploaded templates

    +

    You can edit a template that has already been uploaded to your project: +oc edit template <template>
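If you prefer to manage the template entirely from the CLI, a minimal sketch, assuming you have saved rstudio-server-template.yaml locally; the template and parameter names passed to oc process are assumptions, so list the real ones first with --parameters:

# Upload the template into your project
oc create -f rstudio-server-template.yaml

# Inspect the parameters the template actually defines
oc process --parameters -f rstudio-server-template.yaml

# Instantiate it, supplying the password parameter (name assumed here)
oc process rstudio-server -p PASSWORD=<your-password> | oc create -f -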

    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift/applications/deleting-applications/index.html b/openshift/applications/deleting-applications/index.html new file mode 100644 index 00000000..e33590e6 --- /dev/null +++ b/openshift/applications/deleting-applications/index.html @@ -0,0 +1,3385 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Deleting your applications

    +

    Deleting applications using the Developer perspective on NERC's OpenShift Web Console

    +

    You can delete applications created in your project by using the +Developer perspective as following:

    +

    To delete an application and all of its associated components using the +Topology view menu in the Developer perspective:

    +
      +
    1. +

      Go to the NERC's OpenShift Web Console.

      +
    2. +
    3. +

      Click on the Perspective Switcher drop-down menu and select Developer.

      +
    4. +
    5. +

      Click the application you want to delete to see the side panel with +the resource details of the application.

      +
    6. +
    7. +

      Click the Actions drop-down menu displayed on the upper right of the panel, +and select Delete Application to see a confirmation dialog box as shown below:

      +

      Delete an application using Actions

      +
    8. +
    9. +

      Enter the name of the application and click Delete to delete it.

      +
    10. +
    +

Or, if you are using the Graph view, you can also right-click the application you want to delete and click Delete Application to delete it, as shown below:

    +

    Delete an application using Context menu

    +

    Deleting applications using the oc command on your local terminal

    +

    Alternatively, you can delete the resource objects by using the +oc delete command from the command line terminal. Make sure you have the oc +CLI tool installed and configured on your local machine following these steps.

    +
    +

How to select resource objects?

    +

You can delete a single resource object by name, or delete a set of resource objects by specifying a label selector; examples of both follow below.

    +
    +
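For example, to delete individual resource objects by name (a sketch using the rstudio-server object names that appear in the example later on this page):

oc delete route/rstudio-server service/rstudio-server pod/rstudio-server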

    When an application is deployed, resource objects for that application will +typically have an app label applied to them with value corresponding to the name +of the application. This can be used with the label selector to delete all +resource objects for an application.

    +

    To test what resource objects would be deleted when using a label selector, use +the oc get command to query the set of objects which would be matched.

    +

    oc get all --selector app=<application-name> -o name

    +

    For example:

    +
    oc get all --selector app=rstudio-server -o name
    +pod/rstudio-server
    +service/rstudio-server
    +route.route.openshift.io/rstudio-server
    +
    +

    If you are satisfied that what is shown are the resource objects for your +application, then run oc delete.

    +

    oc delete all --selector app=<application-name>

    +
    +

    Important Note

    +

    Selector all matches on a subset of all resource object types that exist. +It targets the core resource objects that would be created for a build and deployment. +It will not include resource objects such as persistent volume claims (pvc), +config maps (configmap), secrets (secret), and others.

    +
    +

    You will either need to delete these resource objects separately, or if they also +have been labelled with the app tag, list the resource object types along with all.

    +

    oc delete all,configmap,pvc,serviceaccount,rolebinding --selector app=<application-name>

    +

    If you are not sure what labels have been applied to resource objects for your +application, you can run oc describe on the resource object to see the labels +applied to it. For example:

    +
    oc describe pod/rstudio-server
    +Name:         rstudio-server
    +Namespace:    64b664c37f2a47c39c3cf3942ff4d0be
    +Priority:     0
    +Node:         wrk-11/10.30.6.21
    +Start Time:   Fri, 16 Dec 2022 10:59:23 -0500
    +Labels:       app=rstudio-server
    +            template.openshift.io/template-instance-owner=44a3fae8-4e8e-4058-a4a8-0af7bbb41f6
    +...
    +
    +
    +

    Important Note

    +

    It is important to check what labels have been used with your application if +you have created it using a template, as templates may not follow the convention +of using the app label.

    +
    +
diff --git a/openshift/applications/editing-applications/index.html b/openshift/applications/editing-applications/index.html new file mode 100644 index 00000000..bc5e200c --- /dev/null +++ b/openshift/applications/editing-applications/index.html

    Editing applications

    +

    You can edit the configuration and the source code of the application you create +using the Topology view.

    +

    Editing the source code of an application using the Developer perspective

    +

    You can click the "Edit Source Code" icon, displayed at the bottom-right of the +deployed application, to access your source code and modify it as shown below:

    +

    Edit the source code of an application

    +
    +

    Information

    +

    This feature is available only when you create applications using the +From Git, Container Image, From Catalog, and From Dockerfile +options.

    +
    +

    Editing the application configuration using the Developer perspective

    +
      +
    1. +

      In the Topology view, right-click the application to see the edit options +available as shown below:

      +

      Edit an application

      +

Or, in the Topology view, click the deployed application to reveal the right-side Overview panel. From the Actions drop-down list, you can see similar edit options available, as shown below:

      +

      Edit an application using Action

      +
    2. +
    3. +

Click on any of the available options to edit a resource used by your application; the pop-up form will be pre-populated with the values you added while creating the application.

      +
    4. +
    5. +

      Click Save to restart the build and deploy a new image.

      +
    6. +
    +
diff --git a/openshift/applications/scaling-and-performance-guide/index.html b/openshift/applications/scaling-and-performance-guide/index.html new file mode 100644 index 00000000..dcc3260b --- /dev/null +++ b/openshift/applications/scaling-and-performance-guide/index.html

    Scaling and Performance Guide

    +

    Understanding Pod

    +

    Pods serve as the smallest unit of compute that can be defined, deployed, and +managed within the OpenShift Container Platform (OCP). The OCP utilizes the +Kubernetes concept of a pod, +which consists of one or more containers deployed together on a single host.

    +

    Pods are essentially the building blocks of a Kubernetes cluster, analogous to a +machine instance (either physical or virtual) for a container. Each pod is assigned +its own internal IP address, granting it complete ownership over its port space. +Additionally, containers within a pod can share local storage and network resources.

    +

    The lifecycle of a pod typically involves several stages: first, the pod is defined; +then, it is scheduled to run on a node within the cluster; finally, it runs until +its container(s) exit or until it is removed due to some other circumstance. Depending +on the cluster's policy and the exit code of its containers, pods may be removed +after exiting, or they may be retained to allow access to their container logs.

    +

    Example pod configurations

    +

The following is an example definition of a pod from a Rails application. It demonstrates many features of pods, most of which are discussed in other topics and thus only briefly mentioned here (a simplified textual sketch is included after the annotated list below):

    +

    Pod object definition (YAML)

    +
      +
    1. +

      Pods can be "tagged" with one or more labels, which can then be used to select +and manage groups of pods in a single operation. The labels are stored in key/value +format in the metadata hash.

      +
    2. +
    3. +

      The pod restart policy with possible values Always, OnFailure, and Never. +The default value is Always. Read this +to learn about "Configuring how pods behave after restart".

      +
    4. +
    5. +

      OpenShift Container Platform defines a security context for containers which +specifies whether they are allowed to run as privileged containers, run as a user +of their choice, and more. The default context is very restrictive but administrators +can modify this as needed.

      +
    6. +
    7. +

      containers specifies an array of one or more container definitions.

      +
    8. +
    9. +

This specifies where external storage volumes are mounted within the container. In this case, there is a volume for storing the access credentials that the registry needs for making requests against the OpenShift Container Platform API.

      +
    10. +
    11. +

      Specify the volumes to provide for the pod. Volumes mount at the specified path. +Do not mount to the container root, /, or any path that is the same in the host +and the container. This can corrupt your host system if the container is sufficiently +privileged, such as the host /dev/pts files. It is safe to mount the host by using +/host.

      +
    12. +
    13. +

      Each container in the pod is instantiated from its own container image.

      +
    14. +
    15. +

      Pods making requests against the OpenShift Container Platform API is a common +enough pattern that there is a serviceAccount field for specifying which service +account user the pod should authenticate as when making the requests. This enables +fine-grained access control for custom infrastructure components.

      +
    16. +
    17. +

      The pod defines storage volumes that are available to its container(s) to use. +In this case, it provides an ephemeral volume for a secret volume containing the +default service account tokens. If you attach persistent volumes that have high +file counts to pods, those pods can fail or can take a long time to start.

      +
    18. +
    +
    +
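Since the annotated pod definition above is shown as a screenshot, the following is a simplified textual sketch (not the exact Rails example) illustrating a few of the fields described in the callouts, such as labels, the restart policy, container definitions, volume mounts, the service account, and volumes:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod                    # hypothetical name
  labels:
    app: example-app                   # labels used to select and manage groups of pods
spec:
  restartPolicy: Always                # Always, OnFailure, or Never
  serviceAccountName: default          # service account the pod authenticates as
  containers:                          # one or more container definitions
    - name: app
      image: registry.example.com/example/app:latest   # placeholder image reference
      securityContext: {}              # container security context (restricted by default)
      volumeMounts:                    # where volumes are mounted inside the container
        - name: data
          mountPath: /data
  volumes:                             # volumes available to the pod's containers
    - name: data
      emptyDir: {}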

    Viewing pods

    +

You can refer to this user guide on how to view all pods, their usage statistics (i.e. CPU, memory, and storage consumption), and logs in your project using the OpenShift CLI (oc) commands; a few of the most common commands are sketched below.

    +
    +
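As a quick reference (a sketch; see the linked guide above for details), the following commands cover the most common cases:

oc get pods          # list the pods in your current project
oc adm top pods      # show current CPU and memory usage per pod
oc logs <pod_name>   # view the logs of a specific pod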

    Compute Resources

    +

    Each container running on a node consumes compute resources, which are measurable +quantities that can be requested, allocated, and consumed.

    +

    When authoring a pod configuration YAML file, you can optionally specify how much +CPU, memory (RAM), and local ephemeral storage each container needs in order to +better schedule pods in the cluster and ensure satisfactory performance as shown +below:

    +

    Pod Compute Resources (YAML)

    +

    CPU and memory can be specified in a couple of ways:

    +
      +
    • +

      Resource requests and limits are optional parameters specified at the container +level. OpenShift computes a Pod's request and limit as the sum of requests and limits +across all of its containers. OpenShift then uses these parameters for scheduling +and resource allocation decisions.

      +

      The request value specifies the min value you will be guaranteed. The request +value is also used by the scheduler to assign pods to nodes.

      +

Pods will get the amount of memory they request. If they exceed their memory request, they could be killed if another pod happens to need this memory. Pods are only ever killed while using less memory than requested if critical system or high-priority workloads need the memory.

      +

      Likewise, each container within a Pod is granted the CPU resources it requests, +subject to availability. Additional CPU cycles may be allocated if resources +are available and not required by other active Pods/Jobs.

      +
      +

      Important Information

      +

      If a Pod's total requests are not available on a single node, then the Pod +will remain in a Pending state (i.e. not running) until these resources +become available.

      +
      +
    • +
    • +

The limit value specifies the max value you can consume. Limit is the value applications should be tuned to use. Pods are CPU throttled when they exceed their CPU limit and can be terminated when they exceed their memory limit.

      +
    • +
    +

    CPU is measured in units called millicores, where 1000 millicores ("m") = 1 vCPU +or 1 Core. Each node in a cluster inspects the operating system to determine the +amount of CPU cores on the node, then multiplies that value by 1000 to express its +total capacity. For example, if a node has 2 cores, the node's CPU capacity would +be represented as 2000m. If you wanted to use 1/10 of a single core, it would +be represented as 100m.

    +

Memory and ephemeral storage are measured in bytes. These values may be specified with SI suffixes (E, P, T, G, M, K) or their power-of-two equivalents (Ei, Pi, Ti, Gi, Mi, Ki).
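For example, a single-container pod that combines these units, requesting a quarter of a core, 64 MiB of memory, and 1 GiB of ephemeral storage while capping usage at twice those values, might look like the following sketch (the pod name and image are assumptions; the values are purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo                          # hypothetical name
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal:latest   # assumed base image
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: "250m"                          # 1/4 of a core
          memory: "64Mi"                       # 64 mebibytes
          ephemeral-storage: "1Gi"
        limits:
          cpu: "500m"
          memory: "128Mi"
          ephemeral-storage: "2Gi"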

    +
    +

    What happens if I did not specify the Compute Resources on Pod YAML?

    +

If you don't specify the compute resources for your objects (i.e. containers), the objects will use the limit ranges specified for your project namespace, which restrict them from running with unbounded compute resources on our cluster. With limit ranges, we restrict resource consumption for specific objects in a project. You can also view the current limit range for your project by switching to the Administrator perspective and navigating to the "LimitRange details" as shown below (a CLI alternative is sketched after the figure):

    +

    Limit Ranges

    +
    +
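You can also inspect the limit ranges applied to your project from the command line (a sketch; the actual limit range name in your project will vary):

oc get limitrange
oc describe limitrange <limit_range_name>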

    How to specify pod to use GPU?

    +

    So from a Developer perspective, the only thing you have to worry about is +asking for GPU resources when defining your pods, with something like:

    +
    spec:
    +  containers:
    +  - name: app
    +    image: ...
    +    resources:
    +      requests:
    +        memory: "64Mi"
    +        cpu: "250m"
    +        nvidia.com/gpu: 1
    +      limits:
    +        memory: "128Mi"
    +        cpu: "500m"
    +
    +

    In the sample Pod Spec above, you can allocate GPUs to pods by specifying the GPU +resource nvidia.com/gpu and indicating the desired number of GPUs. This number +should not exceed the GPU quota specified by the value of the +"OpenShift Request on GPU Quota" attribute that has been approved for your +"NERC-OCP (OpenShift)" resource allocation on NERC's ColdFront as +described here.

    +

    If you need to increase this quota value, you can request a change as +explained here.

    +

    The "resources" section under "containers" with the nvidia.com/gpu specification +indicates the number of GPUs you want in this container. Below is an example of +a running pod YAML that requests the GPU device with a count of 2:

    +
    apiVersion: v1
    +kind: Pod
    +metadata:
    +  name: gpu-pod
    +spec:
    +  restartPolicy: Never
    +  containers:
    +    - name: cuda-container
    +      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
    +      command: ["sleep"]
    +      args: ["infinity"]
    +      resources:
    +        limits:
    +          nvidia.com/gpu: 2
    +  nodeSelector:
    +    nvidia.com/gpu.product: NVIDIA-A100-SXM4-40GB
    +
    +

In the opened YAML editor, paste the contents of the pod YAML given above, as shown below:

    +

    YAML Editor GPU Pod

    +

    After the pod is running, navigate to the pod details and execute the following +command in the Terminal to view the currently available NVIDIA GPU devices:

    +

    NVIDIA SMI A100 command

    +

    Additionally, you can execute the following command to narrow down and retrieve +the name of the GPU device:

    +
    nvidia-smi --query-gpu=gpu_name --format=csv,noheader --id=0 | sed -e 's/ /-/g'
    +
    +NVIDIA-A100-SXM4-40GB
    +
    +

    How to select a different GPU device?

    +

We can specify information about the GPU product type, family, count, and so on, as shown in the Pod Spec above. These node labels can also be used in the Pod Spec to schedule workloads based on criteria such as the GPU device name, using nodeSelector as shown below:

    +
    apiVersion: v1
    +kind: Pod
    +metadata:
    +  name: gpu-pod2
    +spec:
    +  restartPolicy: Never
    +  containers:
    +    - name: cuda-container
    +      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
    +      command: ["sleep"]
    +      args: ["infinity"]
    +      resources:
    +        limits:
    +          nvidia.com/gpu: 1
    +  nodeSelector:
    +    nvidia.com/gpu.product: Tesla-V100-PCIE-32GB
    +
    +

    When you run the nvidia-smi command in the terminal, you can observe the +availability of the different V100 NVIDIA GPU device, as shown below:

    +

    NVIDIA SMI V100 command

    +

    Scaling

    +

    Scaling defines the number of pods or instances of the application you want to +deploy. Bare pods not managed by a replication controller will not be rescheduled +in the event of a node disruption. You can deploy your application using Deployment +or Deployment Config objects to maintain the desired number of healthy pods and +manage them from the web console. You can create deployment strategies +that help reduce downtime during a change or an upgrade to the application. For +more information about deployment, please read this.

    +
    +

    Benefits of Scaling

    +

    This will allow for a quicker response to peaks in demand, and reduce costs by +automatically scaling down when resources are no longer needed.

    +
    +

    Scaling application pods, resources and observability

    +

    The Topology view provides the details of the deployed components in the +Overview panel. You can use the Details, Resources and Observe +tabs to scale the application pods, check build status, services, routes, metrics, +and events as follows:

    +

    Click on the component node to see the Overview panel to the right.

    +

    Use the Details tab to:

    +
      +
    • +

      Scale your pods using the up and down arrows to increase or decrease the number +of pods or instances of the application manually as shown below:

      +

      Scale the Pod Count

      +

Alternatively, you can easily configure and modify the pod count by right-clicking the application to see the available edit options and selecting Edit Pod Count, as shown below:

      +

      Edit the Pod Count

      +
    • +
    • +

      Check the Labels, Annotations, and Status of the application.

      +
    • +
    +

    Click the Resources tab to:

    +
      +
    • +

      See the list of all the pods, view their status, access logs, and click on the +pod to see the pod details.

      +
    • +
    • +

      See the builds, their status, access logs, and start a new build if needed.

      +
    • +
    • +

      See the services and routes used by the component.

      +
    • +
    +

    Click the Observe tab to:

    +
      +
    • +

      See the metrics to see CPU usage, Memory usage and Bandwidth consumption.

      +
    • +
    • +

      See the Events.

      +
      +

      Detailed Monitoring your project and application metrics

      +

      On the left navigation panel of the Developer perspective, click +Observe to see the Dashboard, Metrics, Alerts, and Events for your project. +For more information about Monitoring project and application metrics +using the Developer perspective, please +read this.

      +
      +
    • +
    +

    Scaling manually

    +

    To manually scale a DeploymentConfig object, use the oc scale command.

    +
    oc scale dc <dc_name> --replicas=<replica_count>
    +
    +

    For example, the following command sets the replicas in the frontend DeploymentConfig +object to 3.

    +
    oc scale dc frontend --replicas=3
    +
    +

    The number of replicas eventually propagates to the desired and current state of +the deployment configured by the DeploymentConfig object frontend.

    +
    +

    Scaling applications based on a schedule (Cron)

    +

You can also integrate schedule-based scaling using the OpenShift/Kubernetes native resource called CronJob, which executes a task periodically (date + time) written in Cron format: for example, scaling an app to 5 replicas at 09:00 and then scaling it down to 1 pod at 23:59. To learn more about this, please refer to this blog post. A rough sketch of such a CronJob is shown below.

    +
    +
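As a rough sketch of this pattern (the CronJob name, service account, and image are assumptions, and the service account must be granted permission to scale the target, for example via the edit role), a CronJob that scales the frontend DeploymentConfig up at 09:00 could look like:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-up-frontend                  # hypothetical name
spec:
  schedule: "0 9 * * *"                    # every day at 09:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: scaler       # assumed service account allowed to scale the target
          restartPolicy: OnFailure
          containers:
            - name: scale
              image: registry.redhat.io/openshift4/ose-cli:latest   # any image that provides the oc client
              command:
                - /bin/sh
                - -c
                - oc scale dc/frontend --replicas=5

A second CronJob with schedule "59 23 * * *" and --replicas=1 would handle the corresponding scale-down.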

    AutoScaling

    +

    We can configure automatic scaling, or autoscaling, for applications to match +incoming demand. This feature automatically adjusts the scale of a replication +controller or deployment configuration based on metrics collected from the pods +belonging to that replication controller or deployment configuration. You can +create a Horizontal Pod Autoscaler (HPA) for any deployment, deployment config, +replica set, replication controller, or stateful set.

    +

    For instance, if an application receives no traffic, it is scaled down to the +minimum number of replicas configured for the application. Conversely, replicas +can be scaled up to meet demand if traffic to the application increases.

    +

    Understanding Horizontal Pod Autoscalers (HPA)

    +

    You can create a horizontal pod autoscaler to specify the minimum and maximum +number of pods you want to run, as well as the CPU utilization or memory utilization +your pods should target.

    + + + + + + + + + + + + + + + + + +
Metric | Description
CPU Utilization | Number of CPU cores used. Can be used to calculate a percentage of the pod's requested CPU.
Memory Utilization | Amount of memory used. Can be used to calculate a percentage of the pod's requested memory.
    +

After you create a horizontal pod autoscaler, OCP begins to query the CPU and/or memory resource metrics on the pods. When these metrics are available, the HPA computes the ratio of the current metric utilization to the desired metric utilization, and scales up or down accordingly. The query and scaling occur at a regular interval, but it can take one to two minutes before metrics become available.

    +

    For replication controllers, this scaling corresponds directly to the replicas +of the replication controller. For deployment configurations, scaling corresponds +directly to the replica count of the deployment configuration. Note that autoscaling +applies only to the latest deployment in the Complete phase.

    +

    For more information on how the HPA works, read this documentation.
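If you prefer the command line, an HPA can also be created with oc autoscale (a sketch, reusing the frontend DeploymentConfig from the manual scaling example above):

oc autoscale dc/frontend --min=1 --max=5 --cpu-percent=75

This creates an HPA that keeps between 1 and 5 replicas, targeting 75% CPU utilization.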

    +
    +

    Very Important Note

    +

    To implement the HPA, all targeted pods must have a Resource limits +set on their containers. The HPA will not have CPU and Memory metrics until +Resource limits are set. CPU request and limit must be set before CPU utilization +can be set. Memory request and limit must be set before Memory utilization +can be set.

    +
    +

    Resource Limit

    +

Resource limits control how much CPU and memory a container will consume on a node. You can specify how much memory and CPU a container can consume in both request and limit values. You can also specify the min request and max limit of a given container, as well as the max ratio between request and limit. You can easily configure and modify the Resource Limit by right-clicking the application to see the available edit options, as shown below:

    +

    Resource Limits Popup

    +

    Then selecting the Edit resource limits link to set the amount of CPU and Memory +resources a container is guaranteed or allowed to use when running. In the pod +specifications, you must specify the resource requests, such as CPU and memory as +described here.

    +

    The HPA uses this specification to determine the resource utilization and then +scales the target up or down. Utilization values are calculated as a percentage +of the resource requests of each pod. Missing resource request values can affect +the optimal performance of the HPA.

    +

    Resource Limits Form

    +

    Creating a horizontal pod autoscaler by using the web console

    +

    From the web console, you can create a HPA that specifies the minimum and maximum +number of pods you want to run on a Deployment or DeploymentConfig object. You +can also define the amount of CPU or memory usage that your pods should target. +The HPA increases and decreases the number of replicas between the minimum and +maximum numbers to maintain the specified CPU utilization across all pods.

    +

    To create an HPA in the web console

    +
      +
    • +

      In the Topology view, click the node to reveal the side pane.

      +
    • +
    • +

      From the Actions drop-down list, select Add HorizontalPodAutoscaler as +shown below:

      +

      Horizontal Pod Autoscaler Popup

      +
    • +
    • +

      This will open the Add HorizontalPodAutoscaler form as shown below:

      +

      Horizontal Pod Autoscaler Form

      +
      +

      Configure via: Form or YAML View

      +

      While creating or editing the horizontal pod autoscaler in the web console, +you can switch from Form view to YAML view.

      +
      +
    • +
    • +

      From the Add HorizontalPodAutoscaler form, define the name, minimum and maximum +pod limits, the CPU and memory usage, and click Save.

      +
    • +
    +

    To edit an HPA in the web console

    +
      +
    • +

      In the Topology view, click the node to reveal the side pane.

      +
    • +
    • +

      From the Actions drop-down list, select Edit HorizontalPodAutoscaler to +open the Edit Horizontal Pod Autoscaler form.

      +
    • +
    • +

      From the Edit Horizontal Pod Autoscaler form, edit the minimum and maximum +pod limits and the CPU and memory usage, and click Save.

      +
    • +
    +

    To remove an HPA in the web console

    +
      +
    • +

      In the Topology view, click the node to reveal the side panel.

      +
    • +
    • +

      From the Actions drop-down list, select Remove HorizontalPodAutoscaler.

      +
    • +
    • +

      In the confirmation pop-up window, click Remove to remove the HPA.

      +
    • +
    +
    +

    Best Practices

    +

    Read this document +to learn more about best practices regarding Horizontal Pod Autoscaler (HPA) +autoscaling.

    +
    +
diff --git a/openshift/decommission/decommission-openshift-resources/index.html b/openshift/decommission/decommission-openshift-resources/index.html new file mode 100644 index 00000000..077bdd38 --- /dev/null +++ b/openshift/decommission/decommission-openshift-resources/index.html

    Decommission OpenShift Resources

    +

    You can decommission all of your NERC OpenShift resources sequentially as outlined +below.

    +

    Prerequisite

    +
      +
    • +

Backup: Back up any critical data or configurations stored on the resources that are going to be decommissioned. This ensures that important information is not lost during the process.

      +
    • +
    • +

      Kubernetes Objects (Resources): Please review all OpenShift Kubernetes Objects +(Resources) to ensure they are not actively used and ready to be decommissioned.

      +
    • +
    • +

      Install and configure the OpenShift CLI (oc), see How to Setup the +OpenShift CLI Tools +for more information.

      +
    • +
    +

    Delete all Data Science Project resources from the NERC's Red Hat OpenShift AI

    +

    Navigate to the NERC's Red Hat OpenShift AI (RHOAI) dashboard from the NERC's +OpenShift Web Console +via the web browser as described here.

    +

    Once you gain access to the NERC's RHOAI dashboard, you can click on specific Data +Science Project (DSP) corresponding to the appropriate allocation of resources you +want to clean up, as described here.

    +

    The NERC RHOAI dashboard will look like the one shown below, displaying all consumed +resources:

    +

    RHOAI Dashboard Before

    +

    Delete all Workbenches

    +

    Delete all workbenches by clicking on the three dots on the right side of the +individual workbench and selecting Delete workbench, as shown below:

    +

    Delete Workbench

    +

When prompted, please confirm your workbench name and then click the "Delete workbench" button as shown below:

    +

    Delete Workbench Confirmation

    +

    Delete all Cluster Storage

    +

    Delete all cluster storage by clicking on the three dots on the right side of the +individual cluster storage and selecting Delete storage, as shown below:

    +

Delete Cluster Storage

    +

When prompted, please confirm your cluster storage name and then click the "Delete storage" button as shown below:

    +

    Delete Cluster Storage Confirmation

    +

    Delete all Data connections

    +

    Delete all data connections by clicking on the three dots on the right side of the +individual data connection and selecting Delete data connection, as shown below:

    +

    Delete Data Connection

    +

When prompted, please confirm your data connection name and then click the "Delete data connection" button as shown below:

    +

    Delete Data Connection Confirmation

    +

    Delete all Pipelines

    +

    Delete all pipelines by clicking on the three dots on the right side of the +individual pipeline and selecting Delete pipeline, as shown below:

    +

    Delete Pipeline

    +

When prompted, please confirm your pipeline name and then click the "Delete pipeline" button as shown below:

    +

    Delete Pipeline Confirmation

    +

    Delete all Models and Model Servers

    +

Delete all model servers by clicking on the three dots on the right side of the individual model server and selecting Delete model server, as shown below:

    +

    Delete Model Server

    +

When prompted, please confirm your model server name and then click the "Delete model server" button as shown below:

    +

    Delete Model Server Confirmation

    +
    +

    Important Note

    +

    Deleting Model Server will automatically delete ALL Models deployed on the +model server.

    +
    +

    Finally, the NERC RHOAI dashboard will look clean and empty without any resources, +as shown below:

    +

    RHOAI Dashboard After

    +

    Now, you can return to "OpenShift Web Console" by using the application launcher +icon (the black-and-white icon that looks like a grid), and choosing the "OpenShift +Console" as shown below:

    +

    The NERC OpenShift Web Console Link

    +

    Delete all resources from the NERC OpenShift

    +

    Run oc login in your local machine's terminal using your own token to authenticate +and access all your projects on the NERC OpenShift as +described here. +Please ensure you have already selected the correct project that needs to be +decommissioned, as shown below:

    +
    oc login --token=<your_token> --server=https://api.shift.nerc.mghpcc.org:6443
    +Logged into "https://api.shift.nerc.mghpcc.org:6443" as "test1_user@fas.harvard.edu" using the token provided.
    +
    +You have access to the following projects and can switch between them with 'oc project <projectname>':
    +
    +    test-project-1
    +* test-project-2
    +    test-project-3
    +
    +Using project "test-project-2".
    +
    +

Switch to the project that needs to be decommissioned by running the oc project <projectname> command:

    +
    oc project <your_openshift_project_to_decommission>
    +Using project "<your_openshift_project_to_decommission>" on server "https://api.shift.nerc.mghpcc.org:6443".
    +
    +

Please confirm that the correct project is selected by running oc project, as shown below:

    +
    oc project
    +Using project "<your_openshift_project_to_decommission>" on server "https://api.shift.nerc.mghpcc.org:6443".
    +
    +

    Please review all resources currently being used by your project by running +oc get all, as shown below:

    +
    oc get all
    +
    +NAME                                                                  READY   STATUS             RESTARTS       AGE
    +pod/ds-pipeline-persistenceagent-pipelines-definition-868665f7z9lpm   1/1     Running            0              141m
    +...
    +
    +NAME                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
    +service/ds-pipeline-pipelines-definition   ClusterIP   172.30.133.168   <none>        8443/TCP,8888/TCP,8887/TCP            141m
    +...
    +
    +NAME                                                                 READY   UP-TO-DATE   AVAILABLE   AGE
    +deployment.apps/ds-pipeline-persistenceagent-pipelines-definition    1/1     1            1           141m
    +...
    +
    +NAME                                                                            DESIRED   CURRENT   READY   AGE
    +replicaset.apps/ds-pipeline-persistenceagent-pipelines-definition-868665f748    1         1         1       141m
    +...
    +
    +NAME                                                 IMAGE REPOSITORY
    +                                                TAGS   UPDATED
    +imagestream.image.openshift.io/simple-node-app-git   image-registry.openshift-image-registry.svc:5000/test-project-gpu-dc1e23/simple-node-app-git
    +
    +NAME                                                        HOST/PORT
    +                                                PATH   SERVICES                           PORT            TERMINATION          WILDCARD
    +route.route.openshift.io/ds-pipeline-pipelines-definition   ds-pipeline-pipelines-definition-test-project-gpu-dc1e23.apps.shift.nerc.mghpcc.org          ds-pipeline-pipelines-definition   oauth           reencrypt/Redirect   None
    +...
    +
    +
    +

    To list all Resources with their Names only.

    +

    To list all resources with their names only, you can run this command: +oc get all -oname.

    +

Here, the -oname flag specifies the output format. In this case, it instructs the command to output only the names of the resources.

    +
    +

    Run the oc delete command to delete all resource objects specified as +parameters after --all within your selected project (namespace).

    +
    oc delete pod,deployment,pvc,route,service,build,buildconfig,statefulset,replicaset,cronjob,imagestream,revision,configuration,notebook --all
    +
    +
    +

    Danger

    +

The oc delete operation will delete all of the specified resources. This command can be very powerful and should be used with caution, as it will delete all resources in the specified project.

    +

    Always ensure that you are targeting the correct project (namespace) when using +this command to avoid unintentional deletion of resources.

    +

    Make sure to backup any important data or configurations before executing this +command to prevent accidental data loss.

    +
    +

    Please check all the resources currently being used by your project by running +oc get all, as shown below:

    +
    oc get all
    +NAME                        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                               AGE
    +service/modelmesh-serving   ClusterIP   None         <none>        8033/TCP,8008/TCP,8443/TCP,2112/TCP   7m4s
    +
    +
    +

    Important Note

    +

    The last remaining service, i.e., service/modelmesh-serving, shown when running +the oc get all command, is a REQUIRED resource, and so you don't need +to clean it up.

    +
    +

    Use ColdFront to reduce the Storage Quota to Zero

    +

    Each allocation, whether requested or approved, will be billed based on the +pay-as-you-go model. The exception is for Storage quotas, where the cost +is determined by your requested and approved allocation values +to reserve storage from the total NESE storage pool. For NERC-OCP (OpenShift) +Resource Allocations, storage quotas are specified by the "OpenShift Request +on Storage Quota (GiB)" and "OpenShift Limit on Ephemeral Storage Quota (GiB)" +allocation attributes.

    +

Even if you have deleted all Persistent Volume Claims (PVCs) in your OpenShift project, it is essential to adjust the approved values for your NERC-OCP (OpenShift) resource allocations to zero (0); otherwise, you will still be incurring a charge for the approved storage as explained in Billing FAQs.

    +

    To achieve this, you must submit a final change request to reduce the +Storage Quotas for "OpenShift Request on Storage Quota (GiB)" and "OpenShift +Limit on Ephemeral Storage Quota (GiB)" to zero (0) for your NERC-OCP (OpenShift) +resource type. You can review and manage these resource allocations by visiting +the resource allocations. Here, +you can filter the allocation of your interest and then proceed to request a +change request.

    +
    +

    Very Important Note

    +

Although other allocated resources, i.e. CPU, RAM, GPU, etc., operate on a pay-as-you-go model, wherein charges are incurred solely based on usage, expired allocations will remain accessible to the users assigned under the allocation. It is advisable to set all other allocation quota attributes to zero (0) during the change request. This measure ensures that existing users will not accidentally use resources from the project.

    +

    Alternatively, PIs can control access to the allocation by removing users +assigned to their NERC-OCP (OpenShift) allocation. This ensures that even if +the allocation expires, users will not have access to the unused resources.

    +
    +

    Please make sure your change request looks like this:

    +

    Change Request to Set All Quotas Zero

    +

    Wait until the requested resource allocation gets approved by the NERC's admin.

    +

    After approval, kindly review and verify that the quotas are accurately +reflected in your resource allocation +and OpenShift project. Please ensure +that the approved quota values are accurately displayed as explained here.

    +

    Review your Project Usage

    +

Run the oc get quota command to list the resource quotas defined within your selected project (namespace). Please note the name of the resource quota in the output of this command, i.e., <your_openshift_project_resource_quota_name>.

    +
    oc get quota
    +
    +NAME                              AGE   REQUEST                                                                               LIMIT
    +<your_openshift_project_resource_quota_name>   105s   persistentvolumeclaims: 0/0, requests.nvidia.com/gpu: 0/0, requests.storage: 0/0   limits.cpu: 0/0, limits.ephemeral-storage: 0/0, limits.memory: 0/0
    +
    +
    +

    Very Important: Ensure No Resources that will be Billed are Used

    +

    Most importantly, ensure that there is no active usage for any of your +currently allocated project resources.

    +
    +

    To review the resource quota usage for your project, you can run +oc describe quota <your_openshift_project_resource_quota_name>.

    +

    Please ensure the output appears as follows, with all Used and Hard resources +having a value of zero (0) as shown below:

    +
    oc describe quota <your_openshift_project_resource_quota_name>
    +
    +Name:                     <your_openshift_project_resource_quota_name>
    +Namespace:                <your_openshift_project_to_decommission>
    +Resource                  Used  Hard
    +--------                  ----  ----
    +limits.cpu                0     0
    +limits.ephemeral-storage  0     0
    +limits.memory             0     0
    +persistentvolumeclaims    0     0
    +requests.nvidia.com/gpu   0     0
    +requests.storage          0     0
    +
    +
    +

    Important Information

    +

    Make sure to replace <your_openshift_project_resource_quota_name> with the +actual name you find in the output, which is typically in this format: <your_openshift_project_to_decommission>-project.

    +
    +

    Review your Project's Resource Quota from the OpenShift Web Console

    +

    After removing all OpenShift resources and updating all resource quotas to set +them to zero (0), you can review and verify that these changes are reflected in +your OpenShift Web Console as well.

    +

    When you are logged-in to the NERC's OpenShift Web Console, you will be redirected +to the Developer perspective which is shown selected on the perspective switcher +located at the Left side. You need to switch to the Administrator perspective +to view your Project's Resource Quota as shown below:

    +

    Perspective Switcher

    +

    On the left sidebar, navigate to Administration -> ResourceQuotas.

    +

    Click on your appropriate project name, i.e., <your_openshift_project_to_decommission>, +to view the Resource Quota details.

    +

    Resource Quota Details

    +
    +

    Very Important Note

    +

    It should also indicate that all resources have NO usage, i.e., zero (0), +and also NO maximum set, i.e., zero (0), as shown below:

    +

    Resource Quota Detail Info

    +
    +

    Finally, Archive your ColdFront Project

    +

As a PI, you will now be able to Archive your ColdFront Project by accessing NERC's ColdFront interface. Please refer to these instructions on how to archive your projects that need to be decommissioned.

    +
diff --git a/openshift/get-started/openshift-overview/index.html b/openshift/get-started/openshift-overview/index.html new file mode 100644 index 00000000..6e44814f --- /dev/null +++ b/openshift/get-started/openshift-overview/index.html

    OpenShift Overview

    +

    OpenShift is a multifaceted, container orchestration platform from Red Hat. +OpenShift Container Platform is a cloud-based Kubernetes container platform. +NERC offers a cloud development Platform-as-a-Service (PaaS) solution based +on Red Hat's OpenShift Container Platform that provides isolated, multi-tenant +containers for application development and deployment. This is optimized for +continuous containerized application development and multi-tenant deployment +which allows you and your team to focus on solving your research problems and +not infrastructure management.

    +

    Basic Components and Glossary of common terms

    +

    OpenShift is a container orchestration platform that provides a number of components +and tools to help you build, deploy, and manage applications. Here are some of the +basic components of OpenShift:

    +
      +
    • +

Project: A project is a logical grouping of resources in the NERC's OpenShift platform that provides isolation from other resources.

      +
    • +
    • +

      Nodes: Nodes are the physical or virtual machines that run the applications +and services in your OpenShift cluster.

      +
    • +
    • +

Image: An image is a non-changing definition of the file structures and programs needed to run an application.

      +
    • +
    • +

      Container: A container is an instance of an image with the addition of other +operating system components such as networking and running programs. Containers are +used to run applications and services in OpenShift.

      +
    • +
    • +

Pods: Pods are the smallest deployable units defined, deployed, and managed in OpenShift. A pod groups one or more related containers that need to share resources.

      +
    • +
    • +

      Services: Services are logical representations of a set of pods that provide +a network endpoint for access to the application or service. Services can be used +to load balance traffic across multiple pods, and they can be accessed using a +stable DNS name. Services are assigned an IP address and port and proxy connections +to backend pods. This allows the pods to change while the connection details of the +service remain consistent.

      +
    • +
    • +

      Volume: A volume is a persistent file space available to pods and containers +for storing data. Containers are immutable and therefore upon a restart any +contents are cleared and reset to the original state of the image used to create +the container. Volumes provide storage space for files that need to persist +through container restarts.

      +
    • +
    • +

Routes: Routes expose services to external clients for connections coming from outside the platform. A route is assigned a DNS name when it is set up so that it is easily accessible. Routes can be configured with custom hostnames and TLS certificates.

      +
    • +
    • +

Replication Controllers: A replication controller (rc) is a built-in mechanism that ensures a defined number of pod replicas is running at all times. If a pod unexpectedly quits or is deleted, a new copy of the pod is created and started. Conversely, if more pods are running than the defined number, the replication controller deletes the extra pods to get back down to that number.

      +
    • +
    • +

Namespace: A namespace is a way to logically isolate resources within the cluster. In our case, every project gets a unique namespace.

      +
    • +
    • +

Role-based access control (RBAC): A key security control that ensures cluster users and workloads have access only to the resources required to perform their roles.

      +
    • +
    • +

      Deployment Configurations: A deployment configuration (dc) is an extension +of a replication controller that is used to push out a new version of application +code. Deployment configurations are used to define the process of deploying +applications and services to OpenShift. Deployment configurations +can be used to specify the number of replicas, the resources required by the +application, and the deployment strategy to use.

      +
    • +
    • +

      Application URL Components: When an application developer adds an application +to a project, a unique DNS name is created for the application via a Route. All +application DNS names will have a hyphen separator between your application name +and your unique project namespace. If the application is a web application, this +DNS name is also used for the URL to access the application. All names are in +the form of <appname>-<mynamespace>.apps.shift.nerc.mghpcc.org. +For example: mytestapp-mynamespace.apps.shift.nerc.mghpcc.org.

      +
    • +
    +
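As a quick illustration of several of these terms, once you have the OpenShift CLI (oc) set up and logged in (covered later in this tutorial), you can list some of these resources in your currently selected project; a minimal sketch:

oc project         # show the project (namespace) you are currently working in
oc get pods        # list the pods running in that project
oc get services    # list the services that expose those pods inside the cluster
oc get routes      # list the routes that expose services outside the platform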
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift/index.html b/openshift/index.html new file mode 100644 index 00000000..b5d11131 --- /dev/null +++ b/openshift/index.html @@ -0,0 +1,3434 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    OpenShift Tutorial Index

    +

    If you're just starting out, we recommend starting from OpenShift Overview +and going through the tutorial in order.

    +

    If you just need to review a specific step, you can find the page you need in +the list below.

    +

    OpenShift Getting Started

    + +

    OpenShift Web Console

    + +

    OpenShift command-line interface (CLI) Tools

    + +

    Creating Your First Application on OpenShift

    + +

    Editing Applications

    + +

    Storage

    + +

    Deleting Applications

    + +

    Decommission OpenShift Resources

    + +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift/logging-in/access-the-openshift-web-console/index.html b/openshift/logging-in/access-the-openshift-web-console/index.html new file mode 100644 index 00000000..05a40e32 --- /dev/null +++ b/openshift/logging-in/access-the-openshift-web-console/index.html @@ -0,0 +1,3294 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Access the NERC's OpenShift Web Console

    +

    The NERC's OpenShift Container Platform web console is a user interface that +can be accessed via the web.

    +

    You can find it at https://console.apps.shift.nerc.mghpcc.org.

    +

The NERC Authentication supports CILogon with Keycloak for gateway authentication and authorization, which provides federated login via your institutional account and is the recommended authentication method.

    +

    Make sure you are selecting "mss-keycloak" as shown here:

    +

    OpenShift Login with KeyCloak

    +

    Next, you will be redirected to CILogon welcome page as shown below:

    +

    CILogon Welcome Page

    +

    MGHPCC Shared Services (MSS) Keycloak will request approval of access to the +following information from the user:

    +
      +
    • Your CILogon user identifier
    • +
    • Your name
    • +
    • Your email address
    • +
    • Your username and affiliation from your identity provider
    • +
    +

which are required in order to allow access to your account on the NERC's OpenShift web console.

    +

    From the "Selected Identity Provider" dropdown option, please select your institution's +name. If you would like to remember your selected institution name for future +logins please check the "Remember this selection" checkbox this will bypass the +CILogon welcome page on subsequent visits and proceed directly to the selected insitution's +identity provider(IdP). Click "Log On". This will redirect to your respective institutional +login page where you need to enter your institutional credentials.

    +
    +

    Important Note

    +

The NERC does not see or have access to your institutional account credentials; it points to your selected institution's identity provider and redirects back once you are authenticated.

    +
    +

    Once you successfully authenticate you should see a graphical user interface to +visualize your project data and perform administrative, management, and troubleshooting +tasks.

    +

    OpenShift Web Console

    +
    +

    I can't find my project

    +

If you are a member of several projects, i.e. ColdFront NERC-OCP (OpenShift) allocations, you may need to switch projects before you can see and use the OpenShift resources you or your team has created. Clicking on the project dropdown near the top left will pop up the list of projects you are in. You can search for and select a new project by hovering over and clicking on the project name in that list, as shown below:

    +

OpenShift Project List

    +
    +
    +

    Important Note

    +

    The default view for the OpenShift Container Platform web console is the Developer +perspective.

    +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift/logging-in/images/CILogon_interface.png b/openshift/logging-in/images/CILogon_interface.png new file mode 100644 index 00000000..fd1c073f Binary files /dev/null and b/openshift/logging-in/images/CILogon_interface.png differ diff --git a/openshift/logging-in/images/CLI-login-tools.png b/openshift/logging-in/images/CLI-login-tools.png new file mode 100644 index 00000000..70438c32 Binary files /dev/null and b/openshift/logging-in/images/CLI-login-tools.png differ diff --git a/openshift/logging-in/images/copy-oc-cli-login-command.png b/openshift/logging-in/images/copy-oc-cli-login-command.png new file mode 100644 index 00000000..b1d7b8a6 Binary files /dev/null and b/openshift/logging-in/images/copy-oc-cli-login-command.png differ diff --git a/openshift/logging-in/images/display-token.png b/openshift/logging-in/images/display-token.png new file mode 100644 index 00000000..6a7a5b37 Binary files /dev/null and b/openshift/logging-in/images/display-token.png differ diff --git a/openshift/logging-in/images/nerc_openshift_web_console.png b/openshift/logging-in/images/nerc_openshift_web_console.png new file mode 100644 index 00000000..498823e5 Binary files /dev/null and b/openshift/logging-in/images/nerc_openshift_web_console.png differ diff --git a/openshift/logging-in/images/oc-login-command.png b/openshift/logging-in/images/oc-login-command.png new file mode 100644 index 00000000..49e1a7dc Binary files /dev/null and b/openshift/logging-in/images/oc-login-command.png differ diff --git a/openshift/logging-in/images/openshift-web-console.png b/openshift/logging-in/images/openshift-web-console.png new file mode 100644 index 00000000..3a35b98a Binary files /dev/null and b/openshift/logging-in/images/openshift-web-console.png differ diff --git a/openshift/logging-in/images/openshift_login.png b/openshift/logging-in/images/openshift_login.png new file mode 100644 index 00000000..6e69a3cf Binary files /dev/null and b/openshift/logging-in/images/openshift_login.png differ diff --git a/openshift/logging-in/images/openshift_project_list.png b/openshift/logging-in/images/openshift_project_list.png new file mode 100644 index 00000000..b2d42ee5 Binary files /dev/null and b/openshift/logging-in/images/openshift_project_list.png differ diff --git a/openshift/logging-in/images/perspective-switcher.png b/openshift/logging-in/images/perspective-switcher.png new file mode 100644 index 00000000..6f7956e9 Binary files /dev/null and b/openshift/logging-in/images/perspective-switcher.png differ diff --git a/openshift/logging-in/images/project-list.png b/openshift/logging-in/images/project-list.png new file mode 100644 index 00000000..9c9c8f54 Binary files /dev/null and b/openshift/logging-in/images/project-list.png differ diff --git a/openshift/logging-in/setup-the-openshift-cli/index.html b/openshift/logging-in/setup-the-openshift-cli/index.html new file mode 100644 index 00000000..6b3235d7 --- /dev/null +++ b/openshift/logging-in/setup-the-openshift-cli/index.html @@ -0,0 +1,3379 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    How to Setup the OpenShift CLI Tools

    +

    The most commonly used command-line client tool for the NERC's OpenShift is +OpenShift CLI (oc). +It is available for Linux, Windows, or macOS and allows you to create +applications and manage OpenShift Container Platform projects from a terminal.

    +

    Installing the OpenShift CLI

    +

Installation options for the CLI vary depending on your operating system (OS). You can install the OpenShift CLI (oc) either by downloading the binary or by using a package manager (RPM).

    +

Unlike the web console, it allows the user to work directly with the project source code using command scripts once they are authenticated with a token.

    +

    You can download the latest oc CLI client tool binary from web console as shown +below:

    +

    oc - OpenShift Command Line Interface (CLI) Binary Download

    +

    Then add it to your path environment based on your OS choice by following this documentation.

    +

    Configuring the OpenShift CLI

    +

You can configure the oc command-line tool to enable tab completion, which automatically completes oc commands or suggests options when you press Tab in the Bash or Zsh shells, by following these steps.

    +

    First Time Usage

    +

Before you can use the oc command-line tool, you will need to authenticate to the NERC's OpenShift platform by running the built-in login command obtained from the NERC's OpenShift Web Console. This authenticates you and enables you to work with your NERC's OpenShift Container Platform projects. It creates a session that lasts approximately 24 hours.

    +

    To get the oc login command with your own unique token, please login to the NERC's +OpenShift Web Console and then under your user profile link located at the top right +corner, click on Copy login command as shown below:

    +

    Copy oc CLI Login Command

    +

It will once again ask you to provide your KeyCloak login, and once that is successful it will redirect you to a static page with a link to Display Token, as shown below:

    +

    Display Token

    +

    Clicking on that "Display Token" link it will show a static page with Login command +with token as shown below: +oc Login Command with Token

    +

Copy and run the generated command in your terminal to authenticate yourself and access your project from the command line, i.e. oc login --token=<Your-Token> --server=https://<NERC-OpenShift-Server>

    +

If you try to run an oc command and get a permission denied message, your login session has likely expired. You will need to re-generate the oc login command from your NERC's OpenShift Web Console and then run the new oc login command with the new token in your terminal.

    +
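A quick way to check whether your session is still valid is to ask the cluster who you are logged in as; a minimal sketch (the placeholders are the same values shown in the web console):

oc whoami                 # prints your username while the session is valid
oc whoami --show-server   # prints the API server URL the CLI is talking to

# if the session has expired, re-run the login command copied from the web console:
# oc login --token=<Your-Token> --server=https://<NERC-OpenShift-Server>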

    Other Useful oc Commands

    +

    This reference document +provides descriptions and example commands for OpenShift CLI (oc) developer commands.

    +
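For example, a few everyday developer commands look like the following (the resource names are placeholders for your own workloads):

oc get pods                   # list pods in the current project
oc describe pod <pod-name>    # show detailed state and recent events for one pod
oc logs <pod-name>            # print the logs of a pod's main container
oc project <your-namespace>   # switch to another project you are a member of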
    +

    Important Note

    +

    Run oc help to list all commands or run oc <command> --help to get additional +details for a specific command.

    +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift/logging-in/the-openshift-cli/index.html b/openshift/logging-in/the-openshift-cli/index.html new file mode 100644 index 00000000..2daa38f3 --- /dev/null +++ b/openshift/logging-in/the-openshift-cli/index.html @@ -0,0 +1,3263 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    OpenShift command-line interface (CLI) Tools Overview

    +

    With the OpenShift CLI, the oc command, you can create applications and manage +OpenShift Container Platform projects from a terminal.

    +

    The web console provides a comprehensive set of tools for managing your projects +and applications. There are, however, some tasks that can only be performed using +a command-line tool called oc.

    +

    The OpenShift CLI is ideal in the following situations:

    +
      +
    • +

      Working directly with project source code

      +
    • +
    • +

      Scripting OpenShift Container Platform operations

      +
    • +
    • +

      Managing projects while restricted by bandwidth resources and the web console is +unavailable

      +
    • +
    +

It is recommended that developers be comfortable with simple command-line tasks and with the NERC's OpenShift command-line tool.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift/logging-in/web-console-overview/index.html b/openshift/logging-in/web-console-overview/index.html new file mode 100644 index 00000000..fae840a2 --- /dev/null +++ b/openshift/logging-in/web-console-overview/index.html @@ -0,0 +1,3591 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Web Console Overview

    +

    The NERC's OpenShift Container Platform (OCP) has a web-based console that can be +used to perform common management tasks such as building and deploying applications.

    +

    You can find it at https://console.apps.shift.nerc.mghpcc.org.

    +

    The web console provides tools to access and manage your application code and data.

    +

    Below is a sample screenshot of the web interface with labels describing different +sections of the NERC's OpenShift Web Console:

    +

    NERC's OpenShift Web Console Screenshot

    +
      +
    1. +

      Perspective Switcher - Drop-down to select a different perspective. The available +perspectives are a Developer view and an Administrator view.

      +
    2. +
    3. +

      Project List - Drop-down to select a different project. Based on user's active +and approved resource allocations this projects list will be updated.

      +
    4. +
    5. +

      Navigation Menu - Menu options to access different tools and settings for a project. +The list will change depending on which Perspective view you are in.

      +
    6. +
    7. +

User Preferences - Shows the option to get and copy the OpenShift command-line oc login command and to set your individual console preferences, including default views, language, import settings, and more.

      +
    8. +
    9. +

      View Switcher - This three dot menu is used to switch between List View +and Graph view of all your applications.

      +
    10. +
    11. +

      Main Panel - Displays basic application information. Clicking on the application +names in the main panel expands the Details Panel (7).

      +
    12. +
    13. +

      Details Panel - Displays additional information about the application selected +from the Main Panel. This includes detailed information about the running application, +applications builds, routes, and more. Tabs at the top of this panel will change +the view to show additional information such as Details and Resources.

      +
    14. +
    +
    +

    Perspective Switcher

    +

When you are logged in, you will be redirected to the Developer perspective, which is shown selected on the perspective switcher located on the left side. You can switch between the Administrator perspective and the Developer perspective according to your roles and permissions in a project.

    +

    Perspective Switcher

    +

    About the Administrator perspective in the web console

    +

    The Administrator perspective enables you to view the cluster inventory, capacity, +general and specific utilization information, and the stream of important events, +all of which help you to simplify planning and troubleshooting tasks. Both project +administrators and cluster administrators can view the Administrator perspective.

    +
    +

    Important Note

    +

    The default web console perspective that is shown depends on the role of the +user. The Administrator perspective is displayed by default if the user is +recognized as an administrator.

    +
    +

    About the Developer perspective in the web console

    +

    The Developer perspective offers several built-in ways to deploy applications, +services, and databases.

    +
    +

    Important Note

    +

    The default view for the OpenShift Container Platform web console is the Developer +perspective.

    +
    +

    The web console provides a comprehensive set of tools for managing your projects +and applications.

    +

    Project List

    +

    You can select or switch your projects from the available project drop-down list +located on top navigation as shown below:

    +

    Project List

    +
    +

    Important Note

    +

You can identify the currently selected project by its tick mark, and you can click on the star icon to keep the project in your Favorites list.

    +
    + +

    Topology

    +

The Topology view in the Developer perspective of the web console provides a visual representation of all the applications within a project, their build status, and the components and services associated with them. If you have no workloads or applications in the project, the Topology view displays the available options to create applications. If you have existing workloads, the Topology view graphically displays your workload nodes. To read more about how to view the topology of your application, please read this official documentation from Red Hat.

    +

    Observe

    +

This provides you with a Dashboard to view resource usage as well as other metrics and events that occurred in your project. Here you can identify, monitor, and inspect the usage of Memory, CPU, Network, and Storage in your project.

+

Search

+

This allows you to search for any resource in your project based on criteria like Label or Name.

    +

    Builds

    +

    This menu provides tools for building and deploying applications. You can use it +to create and manage build configurations using YAML syntax, as well as view the +status and logs of your builds.

    +

    Helm

    +

You can enable Helm Charts here. Helm is the package manager that helps you easily manage the definition, installation, and upgrade of your complex applications. This section also shows a catalog of all available Helm charts that you can install and use.

    +

    Project

    +

This allows you to view an overview of the project currently selected from the drop-down list, along with details about it, including resource utilization and resource quotas.

    +

    ConfigMaps

    +

This menu allows you to view or create a new ConfigMap by manually entering YAML or JSON definitions, or by dragging and dropping a file into the editor.

    +

    Secrets

    +

This allows you to view or create Secrets, which let you inject sensitive data into your application as files or environment variables.

    +
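Both kinds of objects can also be created from the command line with the oc CLI; a minimal sketch using placeholder names and values:

oc create configmap app-config --from-literal=LOG_LEVEL=debug
oc create secret generic app-credentials --from-literal=DB_PASSWORD='change-me'
oc get configmaps,secrets   # confirm that both objects now exist in the project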
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openshift/storage/storage-overview/index.html b/openshift/storage/storage-overview/index.html new file mode 100644 index 00000000..ac807f6a --- /dev/null +++ b/openshift/storage/storage-overview/index.html @@ -0,0 +1,3501 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Storage Overview

    +

    The NERC OCP supports multiple types of storage.

    +

    Glossary of common terms for OCP storage

    +

    This glossary defines common terms that are used in the storage content.

    +

    Storage

    +

    OCP supports many types of storage, both for on-premise and cloud providers. You +can manage container storage for persistent and non-persistent data in an OCP cluster.

    +

    Storage class

    +

A storage class provides a way for administrators to describe the classes of storage they offer. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators.

    +
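You can list the storage classes available on the cluster with the oc CLI; note that this is a cluster-scoped resource, so your account may or may not have permission to view it:

oc get storageclass   # lists the storage classes a persistent volume claim can reference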

    Storage types

    +

    OCP storage is broadly classified into two categories, namely ephemeral storage +and persistent storage.

    +

    Ephemeral storage

    +

    Pods and containers are ephemeral or transient in nature and designed for stateless +applications. Ephemeral storage allows administrators and developers to better manage +the local storage for some of their operations. For more information about ephemeral +storage overview, types, and management, see Understanding ephemeral storage.

    +

    Pods and containers can require temporary or transient local storage for their +operation. The lifetime of this ephemeral storage does not extend beyond the life +of the individual pod, and this ephemeral storage cannot be shared across pods.

    +

    Persistent storage

    +

    Stateful applications deployed in containers require persistent storage. OCP uses +a pre-provisioned storage framework called persistent volumes (PV) to allow cluster +administrators to provision persistent storage. The data inside these volumes can +exist beyond the lifecycle of an individual pod. Developers can use persistent +volume claims (PVCs) to request storage requirements. For more information about +persistent storage overview, configuration, and lifecycle, see Understanding +persistent storage.

    +

    Pods and containers can require permanent storage for their operation. OpenShift +Container Platform uses the Kubernetes persistent volume (PV) framework to allow +cluster administrators to provision persistent storage for a cluster. Developers +can use PVC to request PV resources without having specific knowledge of the +underlying storage infrastructure.

    +

    Persistent volumes (PV)

    +

    OCP uses the Kubernetes persistent volume (PV) framework to allow cluster +administrators to provision persistent storage for a cluster. Developers can use +PVC to request PV resources without having specific knowledge of the underlying +storage infrastructure.

    +

    Persistent volume claims (PVCs)

    +

    You can use a PVC to mount a PersistentVolume into a Pod. You can access the +storage without knowing the details of the cloud environment.

    +
    +

    Important Note

    +

    A PVC is in active use by a pod when a Pod object exists that uses the PVC.

    +
    +

    Access modes

    +

Volume access modes describe volume capabilities. You can use access modes to match a persistent volume claim (PVC) to a persistent volume (PV). The following are examples of access modes:

ReadWriteOnce (RWO): Allows read-write access to the volume by a single node at a time.

ReadOnlyMany (ROX): Allows the volume to be mounted read-only by many nodes simultaneously.

ReadWriteMany (RWX): Allows multiple nodes to read from and write to the volume simultaneously.

ReadWriteOncePod (RWOP): Allows read-write access to the volume by only a single pod across the whole cluster.
    +
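As an illustration, a persistent volume claim requesting 1 GiB of ReadWriteOnce storage can be created from the command line. This is a minimal sketch: the claim name is a placeholder, and omitting storageClassName typically falls back to the cluster's default storage class:

oc apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

oc get pvc my-data   # check that the claim reaches the Bound status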
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/access-and-security/create-a-key-pair/index.html b/openstack/access-and-security/create-a-key-pair/index.html new file mode 100644 index 00000000..e09cadba --- /dev/null +++ b/openstack/access-and-security/create-a-key-pair/index.html @@ -0,0 +1,3600 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    Create a Key-pair

    +
    +

    NOTE

    +

    If you will be using PuTTY on Windows, please read this first.

    +
    +

    Add a Key Pair

    +

    For security, the VM images have password authentication disabled by default, +so you will need to use an SSH key pair to log in.

    +

You can view key pairs by clicking Project, then clicking the Compute panel, and choosing Key Pairs from the tabs that appear. This shows the key pairs that are available for this project.

    +

    Key Pairs

    +

    Generate a Key Pair

    +
    +

    Prerequisite

    +

    You need ssh installed in your system

    +
    +

    You can create a key pair on your local machine, then upload the public key to +the cloud. This is the recommended method.

    +

    Open a terminal and type the following commands (in this example, we have named +the key cloud.key, but you can name it anything you want):

    +
      cd ~/.ssh
    +  ssh-keygen -t rsa -f ~/.ssh/cloud.key -C "label_your_key"
    +
    +

    Example:

    +

    Generate Key Pair

    +

    You will be prompted to create a passphrase for the key. +IMPORTANT: Do not forget the passphrase! If you do, you will be unable to use +your key.

    +

    This process creates two files in your .ssh folder:

    +
      cloud.key      # private key - don’t share this with anyone, and never upload
    +  # it anywhere ever
    +  cloud.key.pub  # this is your public key
    +
    +
    +

    Pro Tip

    +

    The -C "label" field is not required, but it is useful to quickly identify +different public keys later.

    +
    +

    You could use your email address as the label, or a user@host tag that +identifies the computer the key is for.

    +

For example, if Bob has both a laptop and a desktop computer that he will use, he might use -C "Bob@laptop" to label the key he generates on the laptop, and -C "Bob@desktop" for the desktop.

    +

    On your terminal:

    +
      pbcopy < ~/.ssh/cloud.key.pub  #copies the contents of public key to your clipboard
    +
    +
    +

    Pro Tip

    +

If pbcopy isn't working, you can locate the hidden .ssh folder, open the file in your favorite text editor, and copy its contents to your clipboard.

    +
    +

    Import the generated Key Pair

    +

    Now that you have created your keypair in ~/.ssh/cloud.key.pub, you can upload +it to OpenStack by either using Horizon dashboard or +OpenStack CLI as +described below:

    +

    1. Using NERC's Horizon dashboard

    +

    Go back to the Openstack Dashboard, where you should still be on the Key Pairs tab

    +

    (If not, find it under Project -> Compute -> Key Pairs)

    +

    Choose "Import Public Key". Give the key a name in the "Key Pair Name" Box, +choose "SSH Key" as the Key Type dropdown option and paste the public key that +you just copied in the "Public Key" text box.

    +

    Import Key Pair

    +

    Click "Import Public Key". You will see your key pair appear in the list.

    +

    New Key Pair

    +

    You can now skip ahead to Adding the key to an ssh-agent.

    +

    2. Using the OpenStack CLI

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +

    To create OpenStack keypair using the CLI, do this:

    +

    Using the openstack client commands

    +

    Now that you have created your keypair in ~/.ssh/cloud.key.pub, you can upload +it to OpenStack with name "my-key" as follows:

    +
      openstack keypair create --public-key ~/.ssh/cloud.key.pub my-key
    +  +-------------+-------------------------------------------------+
    +  | Field       | Value                                           |
    +  +-------------+-------------------------------------------------+
    +  | created_at  | None                                            |
    +  | fingerprint | 1c:40:db:ea:82:c2:c3:05:58:81:84:4b:e3:4f:c2:a1 |
    +  | id          | my-key                                          |
    +  | is_deleted  | None                                            |
    +  | name        | my-key                                          |
    +  | type        | ssh                                             |
    +  | user_id     | 938eb8bfc72e4ca3ad2c94e2eb4059f7                |
    +  +-------------+-------------------------------------------------+
    +
    +

    Create a Key Pair using Horizon dashboard

    +

    Alternatively, if you are having trouble creating and importing a key pair with +the instructions above, the Openstack Horizon dashboard can make one for you.

    +

    Click "Create a Key Pair", and enter a name for the key pair.

    +

    Create Key Pair

    +

    Click on "Create a Key Pair" button. You will be prompted to download a .pem +file containing your private key.

    +

In the example, we have named the key 'cloud.pem', but you can name it anything.

    +

    Save this file to your hard drive, for example in your Downloads folder.

    +

    Copy this key inside the .ssh folder on your local machine/laptop, using the +following steps:

    +
      cd ~/Downloads          # Navigate to the folder where you saved the .pem file
    +  mv cloud.pem ~/.ssh/    # This command will copy the key you downloaded to
    +  # your .ssh folder.
    +  cd ~/.ssh               # Navigate to your .ssh folder
    +  chmod 400 cloud.pem     # Change the permissions of the file
    +
    +

    To see your public key, navigate to Project -> Compute -> Key Pairs

    +

    You should see your key in the list.

    +

    Key Pairs List

    +

    If you click on the name of the newly added key, you will see a screen of +information that includes details about your public key:

    +

    View Key Pair Detail

    +

    The public key is the part of the key you distribute to VMs and remote servers.

    +

    You may find it convenient to paste it into a file inside your .ssh folder, +so you don't always need to log into the website to see it.

    +

    Call the file something like cloud_key.pub to distinguish it from your +private key.

    +
    +

    Very Important: Security Best Practice

    +

    Never share your private key with anyone, or upload it to a server!

    +
    +

    Adding your SSH key to the ssh-agent

    +

    If you have many VMs, you will most likely be using one or two VMs with public +IPs as a gateway to others which are not reachable from the internet.

    +

    In order to be able to use your key for multiple SSH hops, do NOT copy your +private key to the gateway VM!

    +

The correct method is to use Agent Forwarding, which adds the key to an ssh-agent on your local machine and 'forwards' it over the SSH connection.

    +

If ssh-agent is not already running, you need to start the ssh-agent in the background.

    +
      eval "$(ssh-agent -s)"
    +  > Agent pid 59566
    +
    +

    Then, add the key to your ssh agent:

    +
      cd ~/.ssh
    +  ssh-add cloud.key
    +  Identity added: cloud.key (test_user@laptop)
    +
    +

    Check that it is added with the command

    +
      ssh-add -l
    +  2048 SHA256:D0DLuODzs15j2OaZnA8I52aEeY3exRT2PCsUyAXgI24 test_user@laptop (RSA)
    +
    +

    Depending on your system, you might have to repeat these steps after you reboot +or log out of your computer.

    +

    You can always check if your ssh key is added by running the ssh-add -l command.

    +

    A key with the default name id_rsa will be added by default at login, although +you will still need to unlock it with your passphrase the first time you use it.

    +

    Once the key is added, you will be able to forward it over an SSH connection, +like this:

    +
      ssh -A -i cloud.key <username>@<remote-host-IP>
    +
    +
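If you routinely need to reach a private VM through the gateway, recent OpenSSH versions also let you do the hop in one command with the ProxyJump (-J) option; a sketch with placeholder addresses:

# jump through the gateway VM to an internal VM that has no public IP
ssh -J <username>@<gateway-floating-IP> <username>@<private-VM-IP>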

    Connecting via SSH is discussed in more detail later in the tutorial (SSH to +Cloud VM); for now, just +proceed to the next step below.

    +

    SSH keys with PuTTY on Windows

    +

    PuTTY requires SSH keys to be in its own ppk format. To convert between +OpenSSH keys used by OpenStack and PuTTY's format, you need a utility called PuTTYgen.

    +

    If it was not installed when you originally installed PuTTY, you can get it +here: Download PuTTY.

    +

    You have 2 options for generating keys that will work with PuTTY:

    +
      +
    1. +

      Generate an OpenSSH key with ssh-keygen or from the Horizon GUI using the + instructions above, then use PuTTYgen to convert the private key to .ppk

      +
    2. +
    3. +

      Generate a .ppk key with PuTTYgen, and import the provided OpenSSH public + key to OpenStack using the 'Import the generated Key Pair' instructions + above.

      +
    4. +
    +

    There is a detailed walkthrough of how to use PuTTYgen here: Use SSH Keys with +PuTTY on Windows.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/access-and-security/images/added_rdp_security_rule.png b/openstack/access-and-security/images/added_rdp_security_rule.png new file mode 100644 index 00000000..ca8b9479 Binary files /dev/null and b/openstack/access-and-security/images/added_rdp_security_rule.png differ diff --git a/openstack/access-and-security/images/added_ssh_security_rule.png b/openstack/access-and-security/images/added_ssh_security_rule.png new file mode 100644 index 00000000..e7a0ad90 Binary files /dev/null and b/openstack/access-and-security/images/added_ssh_security_rule.png differ diff --git a/openstack/access-and-security/images/adding_new_security_groups.png b/openstack/access-and-security/images/adding_new_security_groups.png new file mode 100644 index 00000000..41105765 Binary files /dev/null and b/openstack/access-and-security/images/adding_new_security_groups.png differ diff --git a/openstack/access-and-security/images/create_key.png b/openstack/access-and-security/images/create_key.png new file mode 100644 index 00000000..a856cf15 Binary files /dev/null and b/openstack/access-and-security/images/create_key.png differ diff --git a/openstack/access-and-security/images/create_rdp_security_group.png b/openstack/access-and-security/images/create_rdp_security_group.png new file mode 100644 index 00000000..866f612d Binary files /dev/null and b/openstack/access-and-security/images/create_rdp_security_group.png differ diff --git a/openstack/access-and-security/images/create_security_group.png b/openstack/access-and-security/images/create_security_group.png new file mode 100644 index 00000000..b25b8c7c Binary files /dev/null and b/openstack/access-and-security/images/create_security_group.png differ diff --git a/openstack/access-and-security/images/default_security_group_rules.png b/openstack/access-and-security/images/default_security_group_rules.png new file mode 100644 index 00000000..cfb62096 Binary files /dev/null and b/openstack/access-and-security/images/default_security_group_rules.png differ diff --git a/openstack/access-and-security/images/edit_security_group.png b/openstack/access-and-security/images/edit_security_group.png new file mode 100644 index 00000000..9405361a Binary files /dev/null and b/openstack/access-and-security/images/edit_security_group.png differ diff --git a/openstack/access-and-security/images/generate_key.png b/openstack/access-and-security/images/generate_key.png new file mode 100644 index 00000000..36675b33 Binary files /dev/null and b/openstack/access-and-security/images/generate_key.png differ diff --git a/openstack/access-and-security/images/import-key-pair.png b/openstack/access-and-security/images/import-key-pair.png new file mode 100644 index 00000000..6c71ca15 Binary files /dev/null and b/openstack/access-and-security/images/import-key-pair.png differ diff --git a/openstack/access-and-security/images/key-pairs.png b/openstack/access-and-security/images/key-pairs.png new file mode 100644 index 00000000..0943c922 Binary files /dev/null and b/openstack/access-and-security/images/key-pairs.png differ diff --git a/openstack/access-and-security/images/key_pairs_list.png b/openstack/access-and-security/images/key_pairs_list.png new file mode 100644 index 00000000..623630b0 Binary files /dev/null and b/openstack/access-and-security/images/key_pairs_list.png differ diff --git a/openstack/access-and-security/images/new_key_pair.png b/openstack/access-and-security/images/new_key_pair.png new file mode 100644 
index 00000000..45c126c6 Binary files /dev/null and b/openstack/access-and-security/images/new_key_pair.png differ diff --git a/openstack/access-and-security/images/ping_icmp_security_rule.png b/openstack/access-and-security/images/ping_icmp_security_rule.png new file mode 100644 index 00000000..cca96a57 Binary files /dev/null and b/openstack/access-and-security/images/ping_icmp_security_rule.png differ diff --git a/openstack/access-and-security/images/rdp_security_group_rules_options.png b/openstack/access-and-security/images/rdp_security_group_rules_options.png new file mode 100644 index 00000000..25f5f5ab Binary files /dev/null and b/openstack/access-and-security/images/rdp_security_group_rules_options.png differ diff --git a/openstack/access-and-security/images/security_group_add_rule.png b/openstack/access-and-security/images/security_group_add_rule.png new file mode 100644 index 00000000..3bdc2e1d Binary files /dev/null and b/openstack/access-and-security/images/security_group_add_rule.png differ diff --git a/openstack/access-and-security/images/security_group_rules.png b/openstack/access-and-security/images/security_group_rules.png new file mode 100644 index 00000000..b8e26689 Binary files /dev/null and b/openstack/access-and-security/images/security_group_rules.png differ diff --git a/openstack/access-and-security/images/security_group_rules_options.png b/openstack/access-and-security/images/security_group_rules_options.png new file mode 100644 index 00000000..80522adb Binary files /dev/null and b/openstack/access-and-security/images/security_group_rules_options.png differ diff --git a/openstack/access-and-security/images/security_groups.png b/openstack/access-and-security/images/security_groups.png new file mode 100644 index 00000000..741d48df Binary files /dev/null and b/openstack/access-and-security/images/security_groups.png differ diff --git a/openstack/access-and-security/images/sg_new_rule.png b/openstack/access-and-security/images/sg_new_rule.png new file mode 100644 index 00000000..fd582fe9 Binary files /dev/null and b/openstack/access-and-security/images/sg_new_rule.png differ diff --git a/openstack/access-and-security/images/sg_view.png b/openstack/access-and-security/images/sg_view.png new file mode 100644 index 00000000..4f829544 Binary files /dev/null and b/openstack/access-and-security/images/sg_view.png differ diff --git a/openstack/access-and-security/images/view_public_key.png b/openstack/access-and-security/images/view_public_key.png new file mode 100644 index 00000000..3a36e7be Binary files /dev/null and b/openstack/access-and-security/images/view_public_key.png differ diff --git a/openstack/access-and-security/security-groups/index.html b/openstack/access-and-security/security-groups/index.html new file mode 100644 index 00000000..55364ae2 --- /dev/null +++ b/openstack/access-and-security/security-groups/index.html @@ -0,0 +1,3520 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    Security Groups

    +

Security groups can be thought of as firewalls. They ultimately control inbound and outbound traffic to your virtual machines.

    +

Before you launch an instance, you should add security group rules to enable users to ping and use SSH to connect to the instance. Security groups are sets of IP filter rules that define networking access and are applied to all instances within a project. To do so, you can either add a rule to the default security group or add a new security group with its own rules.

    +

You can view security groups by clicking Project, then clicking the Network panel, and choosing Security Groups from the tabs that appear.

    +

    Navigate to Project -> Network -> Security Groups.

    +

You should see a ‘default’ security group. The default security group allows traffic only between members of the security group, so by default you can always connect between VMs in this group. However, it blocks all traffic from outside, including incoming SSH connections. In order to access instances via a public IP, an additional security group is needed. On the other hand, for a VM that hosts a web server, you need a security group that allows access to ports 80 (for HTTP) and 443 (for HTTPS).

    +

    Security Groups

    +
    +

    Important Note

    +

We strongly advise against altering the default security group and suggest refraining from adding extra security rules to it. This is because the default security group is automatically assigned to any newly created VM. It is considered a best practice to create separate security groups for related services, as these groups can be reused multiple times. Security groups are highly configurable; for instance, you might create a basic/generic group for SSH (port 22) and ICMP (which is what we will show as an example here), and then a separate security group for HTTP (port 80) and HTTPS (port 443) access if you're running a web service on your instance.

    +
    +

    You can also limit access based on where the traffic originates, using either +IP addresses or security groups to define the allowed sources.

    +

    Create a new Security Group

    +

    Allowing SSH

    +

    To allow access to your VM for things like SSH, you will need to create a +security group and add rules to it.

    +

    Click on "Create Security Group". Give your new group a name, and a brief description.

    +

    Create a Security Group

    +

    You will see some existing rules:

    +

    Existing Security Group Rules

    +

    Let's create the new rule to allow SSH. Click on "Add Rule".

    +

    You will see there are a lot of options you can configure on the Add Rule +dialog box.

    +
    +

    To check all available Rule

    +

    You can choose the desired rule template as shown under Rule dropdown options. +This will automatically select the Port required for the selected custom rule.

    +

    Security Group Rules Option

    +
    +

    Adding SSH in Security Group Rules

    +

    Enter the following values:

    +
      +
    • Rule: SSH
    • +
    • Remote: CIDR
    • +
    • CIDR: 0.0.0.0/0
    • +
    +
    +

    Note

    +

    To accept requests from a particular range of IP addresses, specify the IP +address block in the CIDR box.

    +
    +

    The new rule now appears in the list. This signifies that any instances using +this newly added Security Group will now have SSH port 22 open for requests +from any IP address.

    +

    Adding SSH in Security Group Rules

    +
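If you prefer the command line, an equivalent group and SSH rule can be created with the OpenStack CLI (assuming the CLI is configured for your project; the group name here is only an example):

openstack security group create ssh-only --description "Allow SSH from anywhere"
openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 ssh-only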

    Allowing Ping

    +

    The default configuration blocks ping responses, so you will need to add an +additional group and/or rule +if you want your public IPs to respond to ping requests.

    +

    Ping is ICMP traffic, so the easiest way to allow it is to add a new rule and +choose "ALL ICMP" from the dropdown.

    +

    In the Add Rule dialog box, enter the following values:

    +
      +
    • Rule: All ICMP
    • +
    • Direction: Ingress
    • +
    • Remote: CIDR
    • +
    • CIDR: 0.0.0.0/0
    • +
    +

    Adding ICMP - ping in Security Group Rules

    +

    Instances will now accept all incoming ICMP packets.

    +

    Allowing RDP

    +

    To allow access to your VM for things like Remote Desktop Protocol (RDP), you will +need to create a security group and add rules to it.

    +

    Click on "Create Security Group". Give your new group a name, and a brief description.

    +

    Create a RDP Security Group

    +

    You will see some existing rules:

    +

    Existing Security Group Rules

    +

Let's create the new rule to allow RDP. Click on "Add Rule".

    +

    You will see there are a lot of options you can configure on the Add Rule +dialog box.

    +

    Choose "RDP" from the Rule dropdown option as shown below:

    +

    Adding RDP in Security Group Rules

    +

    Enter the following values:

    +
      +
    • Rule: RDP
    • +
    • Remote: CIDR
    • +
    • CIDR: 0.0.0.0/0
    • +
    +
    +

    Note

    +

    To accept requests from a particular range of IP addresses, specify the IP +address block in the CIDR box.

    +
    +

    The new rule now appears in the list. This signifies that any instances using +this newly added Security Group will now have RDP port 3389 open for requests +from any IP address.

    +

    Adding RDP in Security Group Rules

    +

    Editing Existing Security Group and Adding New Security Rules

    +
      +
    • +

      Navigate to Security Groups:

      +

      Navigate to Project -> Network -> Security Groups.

      +
    • +
    • +

      Select the Security Group:

      +

      Choose the security group to which you want to add new rules.

      +
    • +
    • +

      Add New Rule:

      +

      Look for an option to add a new rule within the selected security group.

      +

      View the security group

      +

      Specify the protocol, port range, and source/destination details for the new +rule.

      +

      Add New Security Rules

      +
    • +
    • +

      Save Changes:

      +

      Save the changes to apply the new security rules to the selected security group.

      +
    • +
    +
    +

    Important Note

    +

    Security group changes may take some time to propagate to the instances +associated with the modified group. Ensure that new rules align with your +network security requirements.

    +
    +

    Update Security Group(s) to a running VM

    +

If you want to attach or detach any Security Group(s) to or from a running VM after it was launched, first create all the new Security Group(s) with the required rules as described here. Note that the same Security Group can be used by multiple VMs, so don't create duplicate or redundant rule sets, as there are quotas per project. Once you have created all the Security Groups, you can easily attach them to any existing VM(s). You can select the VM from the Compute -> Instances tab and then select "Edit Security Groups" as shown below:

    +

    Edit Security Groups

    +

    Then select all Security Group(s) that you want to attach to this VM by clicking +on "+" icon and then click "Save" as shown here:

    +

    Select Security Groups

    +
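The same attachment can also be done with the OpenStack CLI; a sketch with placeholder names:

openstack server add security group <your-server-name> <security-group-name>
openstack server show <your-server-name> -c security_groups   # confirm the change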
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/advanced-openstack-topics/domain-name-system/domain-names-for-your-vms/index.html b/openstack/advanced-openstack-topics/domain-name-system/domain-names-for-your-vms/index.html new file mode 100644 index 00000000..7af2fc6d --- /dev/null +++ b/openstack/advanced-openstack-topics/domain-name-system/domain-names-for-your-vms/index.html @@ -0,0 +1,3554 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    DNS services in NERC OpenStack

    +

    What is DNS?

    +

The Domain Name System (DNS) is a hierarchical and distributed system for naming resources connected to a network; it works by storing various types of records, such as the IP address associated with a domain name.

    +

    DNS simplifies the communication between computers and servers through a network +and provides a user-friendly method for users to interact with and get the desired +information.

    +

    How to get user-friendly domain names for your NERC VMs?

    +

    NERC does not currently offer integrated domain name service management.

    +

    You can use one of the following methods to configure name resolution (DNS) for +your NERC's virtual instances.

    +

1. Using freely available Dynamic DNS services

    +

Get a free domain or host name from no-ip.com or other free Dynamic DNS services.

    +

    Here we will describe how to use No-IP to configure dynamic DNS.

    +

    Step 1: Create your No-IP Account.

    +

    No-IP Account Signup

    +

During this process you can add your desired unique hostname to a pre-existing domain name, or you can choose to create your hostname later on.

    +

    Create No-IP Account

    +

Step 2: Confirm Your Account by verifying your email address.

    +

    Activate Your Account

    +

    Step 3: Log In to Your Account to view your dashboard.

    +

    Dashboard

    +

    Step 4: Add Floating IP of your instance to the Hostname.

    +

    Click on "Modify" to add your own Floating IP attached to your NERC virtual instance.

    +

    Update Floating IP on Hostname

    +

Then, browse to your host or domain name as set up during registration or later, i.e. http://nerc.hopto.org in the above example.

    +

An easy video tutorial can be found here.

    +

A free option is great for quickly demonstrating your project, but it comes with the following restrictions:

    +

    no-ip Free vs Paid Version

    +

    2. Using Nginx Proxy Manager

    +

You can set up Nginx Proxy Manager on one of your NERC VMs and then use this Nginx Proxy Manager as your gateway to forward traffic to your other web-based services.

    +

    Quick Setup

    +

i. Launch a VM with a security group that has open rules for ports 80, 443, and 22 to enable SSH Port Forwarding, aka SSH Tunneling, i.e. Local Port Forwarding, into the VM.

    +

    ii. SSH into your VM +using your private key after attaching a Floating IP.

    +

    iii. Install Docker and Docker-Compose +based on your OS choice for your VM.

    +

    iv. Create a docker-compose.yml file similar to this:

    +
    version: '3'
    +services:
    +  app:
    +    image: 'jc21/nginx-proxy-manager:latest'
    +    restart: unless-stopped
    +    ports:
    +      - '80:80'
    +      - '81:81'
    +      - '443:443'
    +    volumes:
    +      - ./data:/data
    +      - ./letsencrypt:/etc/letsencrypt
    +
    +

    v. Bring up your stack by running:

    +
    docker-compose up -d
    +
    +# If using docker-compose-plugin
    +docker compose up -d
    +
    +
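Before setting up the SSH tunnel, you can confirm that the container started correctly; a quick check, assuming the service is named app as in the docker-compose.yml above:

docker compose ps         # the app service should be listed as running
docker compose logs app   # inspect the container logs if something looks wrong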

vi. Once the Docker container runs successfully, connect to its Admin Web Port, i.e. 81, which is opened for the admin interface, via SSH Tunneling, i.e. Local Port Forwarding, from your local machine's terminal by running:

    +

    ssh -N -L <Your_Preferred_Port>:localhost:81 <User>@<Floating-IP> -i <Path_To_Your_Private_Key>

    +

Here, you can choose any port available on your machine as <Your_Preferred_Port>, use the VM's assigned Floating IP as <Floating-IP>, and use the private key associated with the VM as <Path_To_Your_Private_Key>.

    +

    For e.g. ssh -N -L 8081:localhost:81 ubuntu@199.94.60.24 -i ~/.ssh/cloud.key

    +

    vii. Once the SSH Tunneling is successful, log in to the Nginx Proxy Manager +Admin UI on your web browser: +http://localhost:<Your_Preferred_Port> i.e. http://localhost:8081

    +
    +

    Information

    +

It may take some time to spin up the Admin UI. The terminal running the SSH Tunneling, i.e. Local Port Forwarding, will not show any logs or output when it is working correctly. Also, you should not close or terminate the terminal while the tunneling session is running and you are using the Admin UI.

    +
    +

    Default Admin User:

    +
    Email:    admin@example.com
    +Password: changeme
    +
    +

    Immediately after logging in with this default user you will be asked to modify +your admin details and change your password.

    +

    How to create a Proxy Host with Let's Encrypt SSL Certificate attached to it

    +

    i. Click on Hosts >> Proxy Hosts, then click on "Add Proxy Host" button as shown +below:

    +

    Add Proxy Hosts

    +

ii. In the popup box, enter your Domain Names (these need to be registered through your research institution or purchased from a third-party vendor service, and you must have administrative access to them).

    +
    +

    Important Note

    +

The Domain Name needs to have an A Record pointing to the public Floating IP of the NERC VM where you are hosting the Nginx Proxy Manager!

    +
    +

    Please fill out the following information on this popup box:

    +
      +
    • +

      Scheme: http

      +
    • +
    • +

      Forward Hostname/IP: <The Private-IP of your NERC VM where you are hosting the +web services>

      +
    • +
    • +

      Forward Port: <Port exposed on your VM to the public>

      +
    • +
    • +

      Enable all toggles i.e. Cache Assets, Block Common Exploits, Websockets Support

      +
    • +
    • +

      Access List: Publicly Accessible

      +
    • +
    +

For your reference, your selection should look like the example below, with your own Domain Name and other settings:

    +

    Add Proxy Hosts Settings

    +

    Also, select the SSL tab and then "Request a new SSL Certificate" with settings +as shown below:

    +

    Add Proxy Hosts SSL Settings

    +

iii. Save it by clicking the "Save" button. It should then show the Status "Online", and when you click on the created Proxy Host link it will load the web service over https with the domain name you defined, i.e. https://<Your-Domain-Name>.

    +

    3. Using your local Research Computing (RC) department or academic institution's Central IT services

    +

You need to contact and work with your Research Computing department or your academic institution's Central IT services to create an A record for your hostname that maps to the Floating IP address of your NERC virtual instance.

    +

    A record: The primary DNS record used to connect your domain to an IP address +that directs visitors to your website.
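Once the A record is in place, you can verify from any terminal that it resolves to your Floating IP; the hostname below is only a placeholder:

    # Both commands should print the Floating IP of your instance
    dig +short yourhostname.example.edu A
    nslookup yourhostname.example.edu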

    +

    4. Using commercial DNS providers

    +

Alternatively, you can purchase a fully registered domain name or hostname from a commercial hosting provider and then register DNS records for your virtual instance with commercial cloud services, e.g. AWS Route53, Azure DNS, CloudFlare, Google Cloud Platform, GoDaddy, etc.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/advanced-openstack-topics/domain-name-system/images/activate-your-account.png b/openstack/advanced-openstack-topics/domain-name-system/images/activate-your-account.png new file mode 100644 index 00000000..8ee1cc55 Binary files /dev/null and b/openstack/advanced-openstack-topics/domain-name-system/images/activate-your-account.png differ diff --git a/openstack/advanced-openstack-topics/domain-name-system/images/create-no-ip-account.png b/openstack/advanced-openstack-topics/domain-name-system/images/create-no-ip-account.png new file mode 100644 index 00000000..3fd2830e Binary files /dev/null and b/openstack/advanced-openstack-topics/domain-name-system/images/create-no-ip-account.png differ diff --git a/openstack/advanced-openstack-topics/domain-name-system/images/dashboard.png b/openstack/advanced-openstack-topics/domain-name-system/images/dashboard.png new file mode 100644 index 00000000..36b435fd Binary files /dev/null and b/openstack/advanced-openstack-topics/domain-name-system/images/dashboard.png differ diff --git a/openstack/advanced-openstack-topics/domain-name-system/images/floating-ip-to-hostname.png b/openstack/advanced-openstack-topics/domain-name-system/images/floating-ip-to-hostname.png new file mode 100644 index 00000000..df702eb4 Binary files /dev/null and b/openstack/advanced-openstack-topics/domain-name-system/images/floating-ip-to-hostname.png differ diff --git a/openstack/advanced-openstack-topics/domain-name-system/images/nginx-proxy-manager-add-proxy-host.png b/openstack/advanced-openstack-topics/domain-name-system/images/nginx-proxy-manager-add-proxy-host.png new file mode 100644 index 00000000..8754007e Binary files /dev/null and b/openstack/advanced-openstack-topics/domain-name-system/images/nginx-proxy-manager-add-proxy-host.png differ diff --git a/openstack/advanced-openstack-topics/domain-name-system/images/nginx-proxy-manager-proxy-host.png b/openstack/advanced-openstack-topics/domain-name-system/images/nginx-proxy-manager-proxy-host.png new file mode 100644 index 00000000..6bd88219 Binary files /dev/null and b/openstack/advanced-openstack-topics/domain-name-system/images/nginx-proxy-manager-proxy-host.png differ diff --git a/openstack/advanced-openstack-topics/domain-name-system/images/nginx-proxy-manager-ssl-setting.png b/openstack/advanced-openstack-topics/domain-name-system/images/nginx-proxy-manager-ssl-setting.png new file mode 100644 index 00000000..8c040b7d Binary files /dev/null and b/openstack/advanced-openstack-topics/domain-name-system/images/nginx-proxy-manager-ssl-setting.png differ diff --git a/openstack/advanced-openstack-topics/domain-name-system/images/no-ip-free-vs-paid.png b/openstack/advanced-openstack-topics/domain-name-system/images/no-ip-free-vs-paid.png new file mode 100644 index 00000000..6eef0ed3 Binary files /dev/null and b/openstack/advanced-openstack-topics/domain-name-system/images/no-ip-free-vs-paid.png differ diff --git a/openstack/advanced-openstack-topics/domain-name-system/images/signup.png b/openstack/advanced-openstack-topics/domain-name-system/images/signup.png new file mode 100644 index 00000000..9bddb9f8 Binary files /dev/null and b/openstack/advanced-openstack-topics/domain-name-system/images/signup.png differ diff --git a/openstack/advanced-openstack-topics/python-sdk/python-SDK/index.html b/openstack/advanced-openstack-topics/python-sdk/python-SDK/index.html new file mode 100644 index 00000000..2a8f6d44 --- /dev/null +++ 
b/openstack/advanced-openstack-topics/python-sdk/python-SDK/index.html @@ -0,0 +1,3257 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    References

    +

    Python SDK page at PyPi

    +

    OpenStack Python SDK User Guide

    +

From the Python SDK page at PyPI:

    +
    +

    Definition

    +

    openstacksdk is a client library for building applications to work with +OpenStack clouds. The project aims to provide a consistent and complete set of +interactions with OpenStack's many services, along with complete documentation, +examples, and tools.

    +
    +
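As a minimal sketch of getting started (assuming a standard Python 3 environment), you would typically install the SDK from PyPI and let it pick up the same credentials used by the OpenStack CLI:

    # Install the SDK, ideally inside a virtual environment
    python3 -m venv venv && source venv/bin/activate
    pip install openstacksdk

    # The SDK reads credentials from a clouds.yaml file or from the OS_*
    # environment variables set by an openrc file, just like the OpenStack CLI.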

    If you need to plug OpenStack into existing scripts using another language, +there are a variety of other SDKs at various levels of active development.

    +

    A list of known SDKs is maintained on the official OpenStack wiki. +Known SDKs

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/create-a-router/index.html b/openstack/advanced-openstack-topics/setting-up-a-network/create-a-router/index.html new file mode 100644 index 00000000..dd0821ff --- /dev/null +++ b/openstack/advanced-openstack-topics/setting-up-a-network/create-a-router/index.html @@ -0,0 +1,3332 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Create a Router

    +

    A router acts as a gateway for external connectivity.

    +

    By connecting your private network to the public network via a router, you can +connect your instance to the Internet, +install packages, etc. without needing to associate it with a public IP address.

    +

You can view routers by clicking Project, then clicking the Network panel and choosing Routers from the tabs that appear.

    +

Click the "Create Router" button on the right side of the screen.

    +

    In the Create Router dialog box, specify a name for the router.

    +

From the External Network dropdown, select the 'provider' network, and click the "Create Router" button. This will set the gateway for the new router to the public network.

    +

    Create Router

    +

    The new router is now displayed in the Routers tab. You should now see the +router in the Network Topology view. (It also appears under Project -> Network +-> Routers).

    +

    Notice that it is now connected to the public network, but not your private network.

    +

    Router in Network

    +

    Set Internal Interface on the Router

    +

    In order to route between your private network and the outside world, you must +give the router an interface on your private network.

    +

Perform the following steps to connect a private network to the newly created router:

    +

    a. On the Routers tab, click the name of the router.

    +

    Routers

    +

    b. On the Router Details page, click the Interfaces tab, then click Add Interface.

    +

    c. In the Add Interface dialog box, select a Subnet.

    +

    Add Interface

    +

    Optionally, in the Add Interface dialog box, set an IP Address for the router +interface for the selected subnet.

    +

    If you choose not to set the IP Address value, then by default OpenStack +Networking uses the first host IP address in the subnet.

    +

    The Router Name and Router ID fields are automatically updated.

    +

    d. Click "Add Interface".

    +

    The Router will now appear connected to the private network in Network Topology tab.

    +

    Router connected to Private Network

    +

    OR,

    +

You can set the Internal Interface on the router from the Network Topology view: click on the router you just created, and click "Add Interface" on the popup that appears.

    +

    Add Interface from Network Topology

    +

This will then show the Add Interface dialog box, so you just complete steps b to c as described above.
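If you prefer the command line over the dashboard, a roughly equivalent workflow with the OpenStack CLI is sketched below. The names my-router and my-private-subnet and the external network name provider are placeholders; check your project's actual names with openstack network list and openstack subnet list.

    # Create a router and set its gateway to the external (public) network
    openstack router create my-router
    openstack router set my-router --external-gateway provider

    # Attach the router to your private subnet
    openstack router add subnet my-router my-private-subnet

    # Review the router and its attached interfaces
    openstack router show my-router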

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/images/create_network.png b/openstack/advanced-openstack-topics/setting-up-a-network/images/create_network.png new file mode 100644 index 00000000..345c63bb Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-a-network/images/create_network.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/images/create_router.png b/openstack/advanced-openstack-topics/setting-up-a-network/images/create_router.png new file mode 100644 index 00000000..1adfcba7 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-a-network/images/create_router.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/images/default-network.png b/openstack/advanced-openstack-topics/setting-up-a-network/images/default-network.png new file mode 100644 index 00000000..dc1eb4e5 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-a-network/images/default-network.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/images/network_blank.png b/openstack/advanced-openstack-topics/setting-up-a-network/images/network_blank.png new file mode 100644 index 00000000..d46e1138 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-a-network/images/network_blank.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/images/network_new.png b/openstack/advanced-openstack-topics/setting-up-a-network/images/network_new.png new file mode 100644 index 00000000..2ac1f2b1 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-a-network/images/network_new.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/images/network_router.png b/openstack/advanced-openstack-topics/setting-up-a-network/images/network_router.png new file mode 100644 index 00000000..9456a898 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-a-network/images/network_router.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/images/network_subnet.png b/openstack/advanced-openstack-topics/setting-up-a-network/images/network_subnet.png new file mode 100644 index 00000000..ec529eee Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-a-network/images/network_subnet.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/images/network_subnet_details.png b/openstack/advanced-openstack-topics/setting-up-a-network/images/network_subnet_details.png new file mode 100644 index 00000000..0ba55658 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-a-network/images/network_subnet_details.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/images/router_add_interface.png b/openstack/advanced-openstack-topics/setting-up-a-network/images/router_add_interface.png new file mode 100644 index 00000000..0d2ac9ac Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-a-network/images/router_add_interface.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/images/router_add_interface_from_topology.png b/openstack/advanced-openstack-topics/setting-up-a-network/images/router_add_interface_from_topology.png new file mode 100644 index 00000000..238779cc Binary files /dev/null and 
b/openstack/advanced-openstack-topics/setting-up-a-network/images/router_add_interface_from_topology.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/images/router_private_network_topology.png b/openstack/advanced-openstack-topics/setting-up-a-network/images/router_private_network_topology.png new file mode 100644 index 00000000..63043ef0 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-a-network/images/router_private_network_topology.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/images/routers.png b/openstack/advanced-openstack-topics/setting-up-a-network/images/routers.png new file mode 100644 index 00000000..9e30f48f Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-a-network/images/routers.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network/index.html b/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network/index.html new file mode 100644 index 00000000..54d4c2f6 --- /dev/null +++ b/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network/index.html @@ -0,0 +1,3351 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Set up a Private Network

    +
    +

    Default Network for your Project

    +

During your project setup, NERC will set up a default network, router, and interface for your project that are ready to use.

    +

Default Network Topology

    +
    +

    Create Your Own Private Network

    +

You can view or create your network topology by clicking Project, then clicking the Network panel and choosing Network Topology from the tabs that appear. This shows the public network, which is accessible to all projects.

    +

    Network Topology

    +

    Click on "Networks" tab and then click "Create Network" button on the right +side of the screen.

    +

    In the Create Network dialog box, specify the following values.

    +
      +
    • Network tab:
    • +
    +

    Network Name: Specify a name to identify the network.

    +

    Admin State: The state to start the network in.

    +

    Create Subnet: Select this check box to create a subnet

    +

    Give your network a name, and leave the two checkboxes for "Admin State" and +"Create Subnet" with the default settings.

    +

    Create a Network

    +
      +
• Subnet tab: You do not have to specify a subnet when you create a network, but note that without a subnet the network cannot be attached to an instance.
    • +
    +

    Subnet Name: Specify a name for the subnet.

    +

    Network Address: Specify the IP address for the subnet. For your private +networks, you should use IP addresses which fall within the ranges that are +specifically reserved for private networks:

    +
    10.0.0.0/8
    +172.16.0.0/12
    +192.168.0.0/16
    +
    +

In the example below, we configure a network containing the host addresses 192.168.0.1 to 192.168.0.254 using the CIDR 192.168.0.0/24. Technically, your private network will still work if you choose an IP range outside these reserved ranges, but doing so causes problems when connecting to IPs in the outside world - so don't do it!

    +

    Network Topology

    +

    IP Version: Select IPv4 or IPv6.

    +

    Gateway IP: Specify an IP address for a specific gateway. This parameter is optional.

    +

    Disable Gateway: Select this check box to disable a gateway IP address.

    +
      +
    • Subnet Details tab
    • +
    +

    Enable DHCP: Select this check box to enable DHCP so that your VM instances +will automatically be assigned an IP on the subnet.

    +

    Allocation Pools: Specify IP address pools.

    +

DNS Name Servers: Specify the IP address of a DNS server to use. For example, you can use '8.8.8.8' (you may recognize this as one of Google's public name servers).

    +

    Host Routes: Specify the IP address of host routes.

    +

For now, you can leave the Allocation Pools and Host Routes boxes empty and click on the "Create" button. In this example, however, we specify an Allocation Pool of 192.168.0.2,192.168.0.254.

    +

    Network Topology

    +

    The Network Topology should now show your virtual private network next to the +public network.

    +

    Newly Created Network Topology
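For reference, the same network and subnet can also be created with the OpenStack CLI; the sketch below mirrors the dashboard example above, and the network and subnet names are placeholders:

    # Create the private network
    openstack network create my-private-network

    # Create a subnet with a private CIDR, Google's DNS server and an
    # allocation pool, matching the values used in the dashboard example
    openstack subnet create my-private-subnet \
      --network my-private-network \
      --subnet-range 192.168.0.0/24 \
      --dns-nameserver 8.8.8.8 \
      --allocation-pool start=192.168.0.2,end=192.168.0.254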

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image/index.html b/openstack/advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image/index.html new file mode 100644 index 00000000..16fdae5c --- /dev/null +++ b/openstack/advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image/index.html @@ -0,0 +1,3781 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    Virtual Machine Image Guide

    +

    An OpenStack Compute cloud needs to have virtual machine images in order to +launch an instance. A virtual machine image is a single file which contains a +virtual disk that has a bootable operating system installed on it.

    +
    +

    Very Important

    +

The provided Windows Server 2022 R2 image is for evaluation only. This evaluation edition expires in 180 days and is intended to help you evaluate whether the product is right for you. It is at the user's discretion to update, extend, and handle licensing issues for any future usage.

    +
    +
    +

    How to extend activation grace period for another 180 days?

    +

Remote desktop to your running Windows VM. Using the search function in your taskbar, look up Command Prompt. When you see it in the results, right-click on it and choose Run as Administrator. Your VM's current activation grace period can be reset by running: slmgr -rearm. Once this command runs successfully, restart your instance for the changes to take effect. This command typically resets the activation timer to 180 days and can be performed only a limited number of times. For more about this, read here.

    +
    +

    Existing Microsoft Windows Image

    +

Cloudbase Solutions provides Microsoft Windows Server 2022 R2 Standard Evaluation for OpenStack. This includes the required support for hypervisor-specific drivers (Hyper-V / KVM). Also integrated are the guest initialization tools (Cloudbase-Init), security updates, proper performance and security configurations, as well as the final Sysprep.

    +

    How to Build and Upload your custom Microsoft Windows Image

    +
    +

    Overall Process

    +

To create a new image, you will need the installation CD or DVD ISO file for the guest operating system. You will also need access to a virtualization tool; you can use the KVM hypervisor for this, or, if you have a GUI desktop virtualization tool (such as virt-manager, VMware Fusion or VirtualBox), you can use that instead. Convert the resulting disk file to QCOW2 (KVM, Xen) once you are done.

    +
    +
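If your tooling produces a disk image in another format (for example a raw image), one common way to convert it to QCOW2 is with qemu-img; the file names below are placeholders:

    # Convert a raw disk image to QCOW2 (adjust -f to the actual source format)
    qemu-img convert -f raw -O qcow2 windows-disk.img windows-disk.qcow2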

You can customize and build the new image manually on your own system and then upload it to NERC's OpenStack Compute cloud. The following steps describe how to obtain, create, and modify virtual machine images that are compatible with NERC's OpenStack.

    +

    1. Prerequisite

    +

    Follow these steps to prepare the installation

    +

    a. Download a Windows Server 2022 installation ISO file. Evaluation images are +available on the Microsoft website +(registration required).

    +

    b. Download the signed VirtIO drivers ISO file from the Fedora website.

    +

    c. Install Virtual Machine Manager on your +local Windows 10 machine using WSL:

    +
      +
    • +

      Enable WSL on your local Windows 10 subsystem for Linux:

      +

The steps given here are straightforward; however, before following them, make sure that WSL is enabled on your Windows 10 machine and that at least Ubuntu 20.04 LTS (or a later LTS version) is running on it. If you don't know how to do that, see our tutorial on how to enable WSL and install Ubuntu over it.

      +
    • +
    • +

      Download and install MobaXterm:

      +

      MobaXterm is a free application that can be downloaded using this link. +After downloading, install it like any other normal Windows software.

      +
    • +
    • +

      Open MobaXterm and run WSL Linux:

      +

When you open this advanced terminal for Windows 10, the WSL-installed Ubuntu app will show on its left side panel. Double-click on it to start the WSL session.

      +

      MobaXterm WSL Ubuntu-20.04 LTS

      +
    • +
    • +

      Install Virt-Manager:

      +
      sudo apt update
      +sudo apt install virt-manager
      +
      +
    • +
    • +

      Run Virtual Machine Manager:

      +

Start the Virtual Machine Manager by running the command virt-manager in the opened terminal, as shown below:

      +

      MobaXterm start Virt-Manager

      +

This will open Virt-Manager as follows:

      +

      Virt-Manager interface

      +
    • +
    • +

      Connect QEMU/KVM user session on Virt-Manager:

      +

      Virt-Manager Add Connection

      +

      Virt-Manager QEMU/KVM user session

      +

      Virt-Manager Connect

      +
    • +
    +

    2. Create a virtual machine

    +

    Create a virtual machine with the storage set to a 15 GB qcow2 disk image +using Virtual Machine Manager

    +

    Virt-Manager New Virtual Machine

    +

    Virt-Manager Local install media

    +

    Virt-Manager Browse Win ISO

    +

    Virt-Manager Browse Local

    +

    Virt-Manager Select the ISO file

    +

    Virt-Manager Selected ISO

    +

    Virt-Manager default Memory and CPU

    +

Please set the disk image size to 15 GB as shown below:

    +

    Virt-Manager disk image size

    +

    Set the virtual machine name and also make sure "Customize configuration before +install" is selected as shown below:

    +

    Virt-Manager Virtual Machine Name

    +

    3. Customize the Virtual machine

    +

    Virt-Manager Customize Image

    +

    Enable the VirtIO driver. By default, the Windows installer does not +detect the disk.

    +

    Virt-Manager Disk with VirtIO driver

    +

    Virt-Manager Add Hardware

    +

Click Add Hardware, select a CDROM device, and attach the downloaded virtio-win-* ISO file:

    +

    Virt-Manager Add CDROM with virtio ISO

    +

    Virt-Manager Browse virtio ISO

    +

    Virt-Manager Select virtio ISO

    +

    Make sure the NIC is using the virtio Device model as shown below:

    +

    Virt-Manager Modify  NIC

    +

    Virt-Manager Apply Change on NIC

    +

Make sure to set the proper order of Boot Options as shown below, so that the CDROM with the Windows ISO comes first, and Apply the order change. After this, begin the Windows installation by clicking the "Begin Installation" button.

    +

    Windows Boot Options

    +

    Click "Apply" button.

    +

    4. Continue with the Windows installation

    +

    You need to continue with the Windows installation process.

    +

When prompted, you can choose the "Windows Server 2022 Standard Evaluation (Desktop Experience)" option as shown below:

    +

    Windows Desktop Installation

    +

    Windows Custom Installation

    +

    Load VirtIO SCSI drivers and network drivers by choosing an installation +target when prompted. Click Load driver and browse the file system.

    +

    Windows Custom Load Driver

    +

    Browse Local Attached Drives

    +

    Select VirtIO CDROM

    +

    Select the E:\virtio-win-*\viostor\2k22\amd64 folder. When converting an +image file with Windows, ensure the virtio driver is installed. Otherwise, +you will get a blue screen when launching the image due to lack of the virtio +driver.

    +

    Select Appropriate Win Version viostor driver

    +

    The Windows installer displays a list of drivers to install. Select the +VirtIO SCSI drivers.

    +

    Windows viostor driver Installation

    +

    Click Load driver again and browse the file system, and select the +E:\NetKVM\2k22\amd64 folder.

    +

    Select Appropriate Win Version NetKVM driver

    +

    Select the network drivers, and continue the installation.

    +

    Windows NetKVM driver Installation

    +

    Windows Ready for Installation

    +

    Windows Continue Installation

    +

    5. Restart the installed virtual machine (VM)

    +

    Once the installation is completed, the VM restarts

    +

Define a password for the Administrator when prompted and click on the "Finish" button:

    +

    Windows Administrator Login

    +

Send the "Ctrl+Alt+Delete" key using the Send Key menu; this will unlock Windows and then prompt the Administrator login - please log in using the password you set in the previous step:

    +

    Windows Send Key

    +

    Administrator Login

    +

    Administrator Profile Finalize

    +

    Windows Installation Successful

    +

    6. Go to device manager and install all unrecognized devices

    +

    Device Manager View

    +

    Device Manager Update Driver

    +

    Device Manager Browse Driver

    +

Browse To Attached virtio-win CDROM

    +

Select Attached virtio-win CDROM

    +

    Successfully Installed Driver

    +

Similarly, as shown above, repeat the process to install all missing drivers.

    +

    7. Enable Remote Desktop Protocol (RDP) login

    +

Explicitly enable RDP login and uncheck the "Require computers to use Network Level Authentication to connect" option.

    +

    Enable RDP

    +

    Disable Network Level Authentication

    +

8. Delete the recovery partition

    +

Delete the recovery partition, which will allow expanding the image as required, by running the following commands in a Command Prompt (Run as Administrator):

    +
        diskpart
    +    select disk 0
    +    list partition
    +    select partition 3
    +    delete partition override
    +    list partition
    +
    +

    Disk Partition 3 Delete using CMD

    +

and then extend the C: drive to take up the remaining space using "Disk Management".

    +

    C Drive Extended using Disk Management

    +

    C Drive Extended to Take all Unallocated Space

    +

C Drive on Disk Management

    +

    9. Install any new Windows updates. (Optional)

    +

10. Set up cloudbase-init to generate the QCOW2 image

    +

Download and install the stable version of cloudbase-init (a Windows project providing guest initialization features, similar to cloud-init) by browsing to its Download Page in the web browser on the virtual machine running Windows. You can skip registering and just click on "No. just show me the downloads" to navigate to the download page, as shown below:

    +

    Download Cloudbase-init

    +

    During Installation, set Serial port for logging to COM1 as shown below:

    +

    Download Cloudbase-init setup for Admin

    +

    When the installation is done, in the Complete the Cloudbase-Init Setup Wizard +window, select the Run Sysprep and Shutdown check boxes and click "Finish" +as shown below:

    +

    Cloudbase-init Final Setup Options

    +

    Wait for the machine to shutdown.

    +

    Sysprep Setup in Progress

    +

    11. Where is the newly generated QCOW2 image?

    +

The Sysprep will generate a QCOW2 image, i.e. win2k22.qcow2, in /home/<YourUserName>/.local/share/libvirt/images/.

    +

    Windows QCOW2 Image
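Before uploading, you can optionally sanity-check the generated file with qemu-img (available via the qemu-utils package on most Linux distributions); this is just a verification step, not required by the guide:

    # Confirm the file really is a qcow2 image and check its virtual/actual size
    qemu-img info /home/<YourUserName>/.local/share/libvirt/images/win2k22.qcow2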

    +

    12. Create OpenStack image and push to NERC's image list

    +

You can copy/download this Windows image to the folder where you configured your OpenStack CLI, as described Here, and upload it to NERC's OpenStack by running the following OpenStack Image API command:

    +
    openstack image create --disk-format qcow2 --file win2k22.qcow2 MS-Windows-2022
    +
    +

    You can verify the uploaded image is available by running:

    +
    openstack image list
    +
    ++--------------------------------------+---------------------+--------+
    +| ID                                   | Name                | Status |
    ++--------------------------------------+---------------------+--------+
    +| a9b48e65-0cf9-413a-8215-81439cd63966 | MS-Windows-2022     | active |
    +| ...                                  | ...                 | ...    |
    ++--------------------------------------+---------------------+--------+
    +
    +

    13. Launch an instance using newly uploaded MS-Windows-2022 image

    +

Log in to NERC's OpenStack and verify that the uploaded MS-Windows-2022 image is also available in NERC's OpenStack Images list for your project, as shown below:

    +

    MS-Windows-2022 OpenStack Image

    +

    Create a Volume using that Windows Image:

    +

MS-Windows-2022 Image to Volume Create

    +

    Create Volume

    +

Once the Volume is successfully created, we can use it to launch an instance as shown below:

    +

    Launch Instance from Volume
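A comparable flow is also possible from the OpenStack CLI; in the sketch below the volume size, flavor, network and security group are placeholders that should match your project's resources:

    # Create a bootable volume from the uploaded image
    openstack volume create --image MS-Windows-2022 --size 100 win2k22-volume

    # Boot an instance from that volume
    openstack server create --volume win2k22-volume \
      --flavor <Your-Flavor> --network <Your-Network> \
      --security-group <Your-Security-Group> win2k22-instance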

    +

Add other information and set up a Security Group that allows RDP (port 3389), as shown below:

    +

    Launch Instance Security Group for RDP
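If you manage security groups from the CLI instead, a rule allowing RDP can be added roughly as follows (the group name is a placeholder):

    # Allow inbound RDP (TCP 3389) on an existing security group
    openstack security group rule create <Your-Security-Group> \
      --protocol tcp --dst-port 3389 --remote-ip 0.0.0.0/0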

    +

After some time, the instance will be Active and in the Running state, as shown below:

    +

    Running Windows Instance

    +

    Attach a Floating IP to your instance:

    +

    Associate Floating IP

    +
    +

    More About Floating IP

    +

    If you don't have any available floating IPs, please refer to +this documentation +on how to allocate a new Floating IP to your project.

    +
    +
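The Floating IP can also be allocated and attached from the CLI; the external network name provider and the instance name are placeholders:

    # Allocate a Floating IP from the external network and attach it to the instance
    openstack floating ip create provider
    openstack server add floating ip win2k22-instance <Allocated-Floating-IP>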

Click on the detail view of the instance, then click on the Console tab and click the "Send CtrlAltDel" button located on the top right side of the console, as shown below:

    +

    View Console of Instance

    +

    Administrator Sign in Prompt

    +

    Administrator Prompted to Change Password

    +

    Set Administrator Password

    +

    Proceed Changed Administrator Password

    +

    Administrator Password Changed Successful

    +

    14. How to have Remote Desktop login to your Windows instance

    +

    Remote Desktop login should work with the Floating IP associated with the instance:

    +

    Search Remote Desktop Protocol locally

    +

    Connect to Remote Instance using Floating IP

    +

    Prompted Administrator Login

    +

    Prompted RDP connection

    +

    Successfully Remote Connected Instance

    +

    For more detailed information about OpenStack's image management, the +OpenStack image creation guide +provides further references and details.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/0.0.add_virtual_connection.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/0.0.add_virtual_connection.png new file mode 100644 index 00000000..bea24956 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/0.0.add_virtual_connection.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/0.1.select_qemu_kvm_user_session.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/0.1.select_qemu_kvm_user_session.png new file mode 100644 index 00000000..f9751e7d Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/0.1.select_qemu_kvm_user_session.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/0.2.qemu_kvm_user_session.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/0.2.qemu_kvm_user_session.png new file mode 100644 index 00000000..0594a8d7 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/0.2.qemu_kvm_user_session.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/0.virtual-manager.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/0.virtual-manager.png new file mode 100644 index 00000000..8619bc65 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/0.virtual-manager.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/1.new_virtual_machine.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/1.new_virtual_machine.png new file mode 100644 index 00000000..88bf0bb6 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/1.new_virtual_machine.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/10.browse_driver.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/10.browse_driver.png new file mode 100644 index 00000000..1d0af30a Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/10.browse_driver.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/11.browse_CDRom_virtio_iso.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/11.browse_CDRom_virtio_iso.png new file mode 100644 index 00000000..713e2a8a Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/11.browse_CDRom_virtio_iso.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/12.select_viostor_driver.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/12.select_viostor_driver.png new file mode 100644 index 00000000..bf7ab651 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/12.select_viostor_driver.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/13.install_viostor_driver.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/13.install_viostor_driver.png new file mode 100644 index 00000000..2e8409cc Binary files /dev/null and 
b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/13.install_viostor_driver.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/14.select_netkvm_driver.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/14.select_netkvm_driver.png new file mode 100644 index 00000000..a2b127dc Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/14.select_netkvm_driver.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/15.install_netkvm_driver.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/15.install_netkvm_driver.png new file mode 100644 index 00000000..9aed0c7b Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/15.install_netkvm_driver.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/16.install_win.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/16.install_win.png new file mode 100644 index 00000000..86be1132 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/16.install_win.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/17.wait_installation_finish.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/17.wait_installation_finish.png new file mode 100644 index 00000000..2afc7442 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/17.wait_installation_finish.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/2.select_local_ISO_image.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/2.select_local_ISO_image.png new file mode 100644 index 00000000..b2c360e0 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/2.select_local_ISO_image.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/3.0.Choose_ISO.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/3.0.Choose_ISO.png new file mode 100644 index 00000000..b8cb2bde Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/3.0.Choose_ISO.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/3.1.browse_local.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/3.1.browse_local.png new file mode 100644 index 00000000..197aa8df Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/3.1.browse_local.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/3.3.open_local_iso_file.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/3.3.open_local_iso_file.png new file mode 100644 index 00000000..4706fd57 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/3.3.open_local_iso_file.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/3.4.select_iso.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/3.4.select_iso.png new file mode 100644 index 00000000..43f3bd5e Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/3.4.select_iso.png differ diff --git 
a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/4.default_mem_cpu.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/4.default_mem_cpu.png new file mode 100644 index 00000000..4cc64ffe Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/4.default_mem_cpu.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/5.set_15_GB_disk_size.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/5.set_15_GB_disk_size.png new file mode 100644 index 00000000..575f9e7c Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/5.set_15_GB_disk_size.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/6.set_name.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/6.set_name.png new file mode 100644 index 00000000..3e8036a0 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/6.set_name.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.0.customize_iso.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.0.customize_iso.png new file mode 100644 index 00000000..e7db808a Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.0.customize_iso.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.1.customize_sata_disk_virtio.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.1.customize_sata_disk_virtio.png new file mode 100644 index 00000000..54a9ad8e Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.1.customize_sata_disk_virtio.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.2.customize_nic_virtio.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.2.customize_nic_virtio.png new file mode 100644 index 00000000..6720b7dd Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.2.customize_nic_virtio.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.3.customize_nic_virtio_apply.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.3.customize_nic_virtio_apply.png new file mode 100644 index 00000000..74674dc2 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.3.customize_nic_virtio_apply.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.4.add_virtio_iso_hardware.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.4.add_virtio_iso_hardware.png new file mode 100644 index 00000000..13210cd8 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.4.add_virtio_iso_hardware.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.5.add_virtio_iso_cdrom.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.5.add_virtio_iso_cdrom.png new file mode 100644 index 00000000..0dfec019 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.5.add_virtio_iso_cdrom.png differ diff --git 
a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.6.browse_virtio_iso.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.6.browse_virtio_iso.png new file mode 100644 index 00000000..97e3b2fb Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.6.browse_virtio_iso.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.7.select_virtion_iso.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.7.select_virtion_iso.png new file mode 100644 index 00000000..41e16519 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.7.select_virtion_iso.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.8.boot_option_win_cdrom_first.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.8.boot_option_win_cdrom_first.png new file mode 100644 index 00000000..5156292e Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.8.boot_option_win_cdrom_first.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.windows_installation_desktop.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.windows_installation_desktop.png new file mode 100644 index 00000000..0160d8a8 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/7.windows_installation_desktop.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/8.custom_setup.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/8.custom_setup.png new file mode 100644 index 00000000..2e708386 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/8.custom_setup.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/9.load_driver.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/9.load_driver.png new file mode 100644 index 00000000..7ad90fe9 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/9.load_driver.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/RDP_on_local_machine.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/RDP_on_local_machine.png new file mode 100644 index 00000000..68e9920c Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/RDP_on_local_machine.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/a.mobaxterm_ubuntu_WSL.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/a.mobaxterm_ubuntu_WSL.png new file mode 100644 index 00000000..f225fa19 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/a.mobaxterm_ubuntu_WSL.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/administrator_singin_prompt.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/administrator_singin_prompt.png new file mode 100644 index 00000000..30ddf2bd Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/administrator_singin_prompt.png differ diff --git 
a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/b.mobaxterm_init_virt-manager.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/b.mobaxterm_init_virt-manager.png new file mode 100644 index 00000000..fb38bdd3 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/b.mobaxterm_init_virt-manager.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/browse_driver.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/browse_driver.png new file mode 100644 index 00000000..1151cebf Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/browse_driver.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/browse_driver_CDROM.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/browse_driver_CDROM.png new file mode 100644 index 00000000..648245d5 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/browse_driver_CDROM.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/c_drive_extended_to_take_all_unallocated_space.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/c_drive_extended_to_take_all_unallocated_space.png new file mode 100644 index 00000000..fc594160 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/c_drive_extended_to_take_all_unallocated_space.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/cloudinit-final-setup.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/cloudinit-final-setup.png new file mode 100644 index 00000000..ed71c365 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/cloudinit-final-setup.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/coludbase-init-serial-port-com1.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/coludbase-init-serial-port-com1.png new file mode 100644 index 00000000..7bd3f398 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/coludbase-init-serial-port-com1.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/console_win_instance.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/console_win_instance.png new file mode 100644 index 00000000..980438ad Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/console_win_instance.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/create_volume.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/create_volume.png new file mode 100644 index 00000000..a4cf4fdf Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/create_volume.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/device-manager-update-drivers.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/device-manager-update-drivers.png new file mode 100644 index 00000000..97566164 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/device-manager-update-drivers.png differ diff 
--git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/disk_partition_manager_delete_partition_3.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/disk_partition_manager_delete_partition_3.png new file mode 100644 index 00000000..5db62e01 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/disk_partition_manager_delete_partition_3.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/download_icow2_win2022_image.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/download_icow2_win2022_image.png new file mode 100644 index 00000000..8f24edd0 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/download_icow2_win2022_image.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/extend_C_drive_using_disk_manager.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/extend_C_drive_using_disk_manager.png new file mode 100644 index 00000000..30fa0cb0 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/extend_C_drive_using_disk_manager.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/finalize_win_installtion_with_user.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/finalize_win_installtion_with_user.png new file mode 100644 index 00000000..b4a7a4eb Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/finalize_win_installtion_with_user.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/install_cloudbase-init.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/install_cloudbase-init.png new file mode 100644 index 00000000..0796215b Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/install_cloudbase-init.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/installed_driver.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/installed_driver.png new file mode 100644 index 00000000..e9243825 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/installed_driver.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/launch_instance_from_volume.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/launch_instance_from_volume.png new file mode 100644 index 00000000..b4497ece Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/launch_instance_from_volume.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/login_administrator.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/login_administrator.png new file mode 100644 index 00000000..71784bb6 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/login_administrator.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/new_c_drive.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/new_c_drive.png new file mode 100644 index 00000000..3e737403 Binary files /dev/null and 
b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/new_c_drive.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/new_password_administrator.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/new_password_administrator.png new file mode 100644 index 00000000..af2dc2c4 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/new_password_administrator.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/ok_to_change_password_administrator.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/ok_to_change_password_administrator.png new file mode 100644 index 00000000..4efd92f7 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/ok_to_change_password_administrator.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/password_changed_success.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/password_changed_success.png new file mode 100644 index 00000000..97486a49 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/password_changed_success.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/proceed_change_password_administrator.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/proceed_change_password_administrator.png new file mode 100644 index 00000000..883a3731 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/proceed_change_password_administrator.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/prompted_administrator_login.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/prompted_administrator_login.png new file mode 100644 index 00000000..dc7da852 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/prompted_administrator_login.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/prompted_rdp_connection.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/prompted_rdp_connection.png new file mode 100644 index 00000000..a5b69f5d Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/prompted_rdp_connection.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/rdp-enable.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/rdp-enable.png new file mode 100644 index 00000000..45db2afb Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/rdp-enable.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/rdp-network-level-auth-not-required.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/rdp-network-level-auth-not-required.png new file mode 100644 index 00000000..d48ed1e8 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/rdp-network-level-auth-not-required.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/remote_connected_instance.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/remote_connected_instance.png new file mode 
100644 index 00000000..e79ca6d9 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/remote_connected_instance.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/remote_connection_floating_ip.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/remote_connection_floating_ip.png new file mode 100644 index 00000000..2e11ce9b Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/remote_connection_floating_ip.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/security_group_for_rdp.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/security_group_for_rdp.png new file mode 100644 index 00000000..18ec5878 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/security_group_for_rdp.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/select_attached_driver.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/select_attached_driver.png new file mode 100644 index 00000000..8d4680e4 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/select_attached_driver.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/send_ctrl_alt_delete_key.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/send_ctrl_alt_delete_key.png new file mode 100644 index 00000000..aa54a47d Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/send_ctrl_alt_delete_key.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/setup_admininstrator_profile.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/setup_admininstrator_profile.png new file mode 100644 index 00000000..6dc916e5 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/setup_admininstrator_profile.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/stack_image_to_volume.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/stack_image_to_volume.png new file mode 100644 index 00000000..30e2e242 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/stack_image_to_volume.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/stack_images_windows.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/stack_images_windows.png new file mode 100644 index 00000000..bb47c44b Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/stack_images_windows.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/sysprep_in_progress.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/sysprep_in_progress.png new file mode 100644 index 00000000..f0839142 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/sysprep_in_progress.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/update_driver.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/update_driver.png new file mode 100644 index 00000000..21bae4e2 Binary files 
/dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/update_driver.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/win2k22_instance_running.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/win2k22_instance_running.png new file mode 100644 index 00000000..2660a499 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/win2k22_instance_running.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/win_instance_add_floating_ip.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/win_instance_add_floating_ip.png new file mode 100644 index 00000000..3ca90b3d Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/win_instance_add_floating_ip.png differ diff --git a/openstack/advanced-openstack-topics/setting-up-your-own-images/images/windows-successful-login.png b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/windows-successful-login.png new file mode 100644 index 00000000..d04b8cc3 Binary files /dev/null and b/openstack/advanced-openstack-topics/setting-up-your-own-images/images/windows-successful-login.png differ diff --git a/openstack/advanced-openstack-topics/terraform/images/Ansible vs Terraform.jfif b/openstack/advanced-openstack-topics/terraform/images/Ansible vs Terraform.jfif new file mode 100644 index 00000000..8e5de867 Binary files /dev/null and b/openstack/advanced-openstack-topics/terraform/images/Ansible vs Terraform.jfif differ diff --git a/openstack/advanced-openstack-topics/terraform/images/NERC-Terrform.png b/openstack/advanced-openstack-topics/terraform/images/NERC-Terrform.png new file mode 100644 index 00000000..c21d781f Binary files /dev/null and b/openstack/advanced-openstack-topics/terraform/images/NERC-Terrform.png differ diff --git a/openstack/advanced-openstack-topics/terraform/terraform-on-NERC/index.html b/openstack/advanced-openstack-topics/terraform/terraform-on-NERC/index.html new file mode 100644 index 00000000..e525e394 --- /dev/null +++ b/openstack/advanced-openstack-topics/terraform/terraform-on-NERC/index.html @@ -0,0 +1,3606 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Provisioning the NERC resources using Terraform

    +

    Terraform +is an open-source Infrastructure as Code (IaC) software tool that works +with NERC and allows you to orchestrate, provision, and manage infrastructure +resources quickly and easily. Terraform codifies cloud application programming +interfaces (APIs) into human-readable, declarative configuration (*.tf) files. +These files are used to manage underlying infrastructure rather than through +NERC's web-based graphical interface - Horizon. +Terraform allows you to build, change, and manage your infrastructure in a safe, +consistent, and repeatable way by defining resource configurations that you can +version, reuse, and share. Terraform’s main job is to create, modify, and destroy +compute instances, private networks and other NERC resources.

    +

    Benefits of Terraform

    +

If you manage multiple instances/VMs for your work or research, using an automation tool like Terraform makes the setup simpler and more reproducible.

    +

    Installing Terraform

    +

    To use Terraform you will need to install it from here.

    +

    Basic Template to use Terraform on your NERC Project

    +

You can clone our base Terraform template with git clone https://github.com/nerc-project/terraform-nerc.git and run it to provision some basic NERC OpenStack resources using the terraform-nerc repo.

    +
    +

    Note

    +

The main branch of this git repo should be a good starting point for developing your own terraform code.

    +
    +

    Template to setup R Shiny server using Terraform on your NERC Project

    +

You can clone this template with git clone https://github.com/nerc-project/terraform-nerc-r-shiny.git and run it locally using terraform to provision an R Shiny server on NERC's OpenStack resources using the terraform-nerc-r-shiny repo.

    +
    +

    Important Note

    +

Please make sure to review the bash script install-R-Shiny.sh located in this repo, which is referenced by the user-data-path variable in example.tfvars. This repo includes the script required to set up the R Shiny server. You can apply the same approach to any other project that needs custom user-defined scripts when launching an instance. If you want to change or update this script, just edit this file and then run the terraform plan and terraform apply commands, pointing to this example.tfvars file.

    +
    +

    How Terraform Works

    +

    Terraform reads configuration files and provides an execution plan of changes, which +can be reviewed for safety and then applied and provisioned. Terraform reads all +files with the extension .tf in your current directory. Resources can be in a +single file, or organised across several different files.

    +

    The basic Terraform deployment workflow is:

    +

    i. Scope - Identify the infrastructure for your project.

    +

    ii. Author - Write the configuration for your infrastructure in which you +declare the elements of your infrastructure that you want to create.

    +

    The format of the resource definition is straightforward and looks like this:

    +
resource "type_of_resource" "resource_name" {
+    attribute = "attribute value"
+    ...
+}
    +
    +

    iii. Initialize - Install the plugins Terraform needs to manage the infrastructure.

    +

    iv. Plan - Preview the changes Terraform will make to match your configuration.

    +

    v. Apply - Make the planned changes.

    +

    Running Terraform

    +

    The Terraform deployment workflow on the NERC looks like this:

    +

    Automating NERC resources using Terraform

    +

    Prerequisite

    +
      +
    1. +

You can download the "NERC's OpenStack RC File" with the credentials for your NERC project from NERC's OpenStack dashboard. Then you need to source that RC file using: source *-openrc.sh. You can read here on how to do this.

      +
    2. +
    3. +

Set up an SSH key pair by running ssh-keygen -t rsa -f username-keypair and then make sure the newly generated SSH key pair exists in your ~/.ssh folder (see the sketch after this list).

      +
    4. +
    +
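For reference, a minimal shell sketch of these prerequisite steps might look like the following; the RC file and key pair names are illustrative and will differ for your project:

# Load your NERC project credentials from the downloaded OpenStack RC file
source *-openrc.sh

# Generate an SSH key pair and keep it in your ~/.ssh folder (key name is illustrative)
ssh-keygen -t rsa -f ~/.ssh/username-keypair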

    Terraform Init

    +

    The first command that should be run after writing a new Terraform configuration +or cloning an existing one is terraform init. This command is used to initialize +a working directory containing Terraform configuration files and install the plugins.

    +
    +

    Information

    +

    You will need to run terraform init if you make any changes to providers.

    +
    +

    Terraform Plan

    +

The terraform plan command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure based on your configuration files.

    +

    Terraform Apply

    +

    When you use terraform apply without passing it a saved plan file, it incorporates +the terraform plan command functionality and so the planning options are also +available while running this command.

    +

    Input Variables on the Command Line

    +

You can use the -var 'NAME=VALUE' command line option to specify values for input variables declared in your root module, e.g. terraform plan -var 'name=value'.

    +

In most cases, it is more convenient to set values for the potentially many input variables declared in the root module of the configuration in a "tfvars" file and pass that file with the -var-file=FILENAME option, e.g. terraform plan -var-file=FILENAME.
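For example, assuming your variable definitions live in the example.tfvars file mentioned above, a typical plan-and-apply run might look like this:

# Preview the changes using the variable definitions from the tfvars file
terraform plan -var-file=example.tfvars

# Apply the changes, again pointing at the same tfvars file
terraform apply -var-file=example.tfvars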

    +

    Track your infrastructure and Collaborate

    +

    Terraform keeps track of your real infrastructure in a state file, which acts as +a source of truth for your environment. Terraform uses the state file to determine +the changes to make to your infrastructure so that it will match your configuration. +Terraform's state allows you to track resource changes throughout your deployments. +You can securely share your state with your teammates, provide a stable environment +for Terraform to run in, and prevent race conditions when multiple people make +configuration changes at once.

    +

    Some useful Terraform commands

    +
    terraform init
    +
    +terraform fmt
    +
    +terraform validate
    +
    +terraform plan
    +
    +terraform apply
    +
    +terraform show
    +
    +terraform destroy
    +
    +terraform output
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/backup/backup-with-snapshots/index.html b/openstack/backup/backup-with-snapshots/index.html new file mode 100644 index 00000000..d2ef1f13 --- /dev/null +++ b/openstack/backup/backup-with-snapshots/index.html @@ -0,0 +1,3775 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Backup with snapshots

    +

    When you start a new instance, you can choose the Instance Boot Source from the +following list:

    +
      +
    • boot from image
    • +
    • boot from instance snapshot
    • +
    • boot from volume
    • +
    • boot from volume snapshot
    • +
    +

By default, when an instance is launched from an Image or an Instance Snapshot, persistent storage is enabled by selecting the Yes option for "Create New Volume". Additionally, the "Delete Volume on Instance Delete" setting is pre-set to No, as indicated here:

    +

    Launching an Instance Boot Source

    +
    +

    Very Important: How do you make your VM setup and data persistent?

    +

    For more in-depth information on making your VM setup and data persistent, +you can explore the details here.

    +
    +

    Create and use Instance snapshots

    +

The OpenStack snapshot mechanism allows you to create new images from your instances while they are either running or stopped. An instance snapshot captures the current state of a running VM along with its storage, configuration, and memory. It includes the VM's disk image, memory state, and any configuration settings. This makes it useful for preserving the entire state of a VM, including its running processes and in-memory data.

    +

    This mainly serves two purposes:

    +
      +
    • +

      As a backup mechanism: save the main disk of your instance to an image in +Horizon dashboard under Project -> Compute -> Images and later boot a new instance +from this image with the saved data.

      +
    • +
    • +

      As a templating mechanism: customise and upgrade a base image and save it to +use as a template for new instances.

      +
    • +
    +
    +

    Considerations: using Instance snapshots

    +

Instance snapshots consume more storage space because they include the memory state, so make sure your project's Storage allocation is sufficient to hold them all. They are suitable for scenarios where maintaining the exact VM state is crucial. The creation time of an instance snapshot will be proportional to the size of the VM state.

    +
    +

    How to create an instance snapshot

    +

    Using the CLI

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +

    To snapshot an instance to an image using the CLI, do this:

    +
    Using the openstack client
    +
    openstack server image create --name <name of my snapshot> --wait <instance name or uuid>
    +
    +
    To view newly created snapshot image
    +
    openstack image show --fit-width <name of my snapshot>
    +
    +

    Using this snapshot, the VM can be rolled back to the previous state with a +server rebuild.

    +
    openstack server rebuild --image <name of my snapshot> <existing instance name or uuid>
    +
    +

    For e.g.

    +
    openstack server image create --name my-snapshot --wait test-nerc-0
    +
    +openstack image show --fit-width my-snapshot
    +
    +openstack server rebuild --image my-snapshot test-nerc-0
    +
    +
    +

    Important Information

    +

    During the time it takes to do the snapshot, the machine can become unresponsive.

    +
    +

    Using Horizon dashboard

    +

Once you're logged in to NERC's Horizon dashboard, you can create a snapshot via the "Compute -> Instances" page by clicking on the "Create snapshot" action button on the desired instance as shown below:

    +

    Create Instance Snapshot

    +

    Instance Snapshot Information

    +
    +

    Live snapshots and data consistency

    +

    We call a snapshot taken against a running instance with no downtime a +"live snapshot". These snapshots are simply disk-only snapshots, and may be +inconsistent if the instance's OS is not aware of the snapshot being taken. +This is why we highly recommend, if possible, to Shut Off the instance +before creating snapshots.

    +
    +

    How to restore from Instance snapshot

    +

    Once created, you can find the image listed under Images in the Horizon dashboard.

    +

    Navigate to Project -> Compute -> Images.

    +

    Snapshot Instance Created

    +

    You have the option to launch this image as a new instance, or by clicking on the +arrow next to Launch, create a volume from the image, edit details about the +image, update the image metadata, or delete it:

    +

    Snapshot Instance Options

    +

You can then select the snapshot when creating a new instance, or directly click the "Launch" button to use the snapshot image to launch a new instance.
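If you prefer the CLI, you can also boot a new instance from the snapshot image. The sketch below assumes the snapshot is named my-snapshot and reuses the flavor and key pair names shown elsewhere in this documentation; the instance name is arbitrary, and depending on your project you may also need to pass a --network option:

openstack server create --flavor cpu-su.4 \
    --image my-snapshot \
    --key-name my-key \
    my-restored-instance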

    +

    Take and use Volume Snapshots

    +

    Volume snapshots

    +

You can also create snapshots of a volume, which can later be used to create other volumes or to roll back to a previous point in time. You can take a snapshot of a volume whether or not it is attached to an instance. Snapshots of available volumes (volumes not attached to an instance) do not affect the data on the volume. A snapshot of a volume serves as a backup for the persistent data on the volume at a given point in time. Snapshots are the size of the actual data existing on the volume at the time the snapshot is taken. Volume snapshots are pointers in the read/write history of a volume. The creation of a snapshot takes a few seconds and can be done while the volume is in use.

    +
    +

    Warning

    +

    Taking snapshots of volumes that are in use or attached to active instances +can result in data inconsistency on the volume. This is why we highly recommend, +if possible, to Shut Off the instance before creating snapshots.

    +
    +

Once you have the snapshot, you can use it to create other volumes based on it. Creation time for these volumes may depend on the type of volume you are creating, as it may entail some data transfer. This is efficient for backup and recovery of specific data without needing the complete VM state, and it consumes less storage space compared to instance snapshots.

    +

    How to create a volume snapshot

    +

    Using the OpenStack CLI

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +

To create a snapshot of a volume using the CLI, do this:

    +
    Using the openstack client commands
    +

    openstack volume snapshot create --volume <volume name or uuid> <name of my snapshot>

    +

    For e.g.

    +
    openstack volume snapshot create --volume test_volume my-volume-snapshot
    ++-------------+--------------------------------------+
    +| Field       | Value                                |
    ++-------------+--------------------------------------+
    +| created_at  | 2022-04-12T19:48:42.707250           |
    +| description | None                                 |
    +| id          | f1cf6846-4aba-4eb8-b3e4-2ff309f8f599 |
    +| name        | my-volume-snapshot                   |
    +| properties  |                                      |
    +| size        | 25                                   |
    +| status      | creating                             |
    +| updated_at  | None                                 |
    +| volume_id   | f2630d21-f8f5-4f02-adc7-14a3aa72cc9d |
    ++-------------+--------------------------------------+
    +
    +
    +

    Important Information

    +

If the volume is in use, you may need to specify --force, as shown in the example below.

    +
    +
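For example, to snapshot an attached (in-use) volume, the same command with the --force flag looks like this:

openstack volume snapshot create --volume <volume name or uuid> --force <name of my snapshot>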

    You can list the volume snapshots with the following command.

    +
    openstack volume snapshot list
    +
    +For e.g.
    +
    +openstack volume snapshot list
    ++--------------------------------------+--------------------+-------------+-----------+------+
    +| ID                                   | Name               | Description | Status    | Size |
    ++--------------------------------------+--------------------+-------------+-----------+------+
    +| f1cf6846-4aba-4eb8-b3e4-2ff309f8f599 | my-volume-snapshot | None        | available |   25 |
    ++--------------------------------------+--------------------+-------------+-----------+------+
    +
    +

Once the volume snapshot is in the available state, you can create other volumes based on that snapshot. You don't need to specify the size of the volume; it will default to the size of the snapshot.

    +
openstack volume create --snapshot <name of my snapshot> --description "Volume from a snapshot" <new volume name>
    +
    +

You can delete a snapshot by issuing the following command:

    +
    openstack volume snapshot delete <name of my snapshot>
    +
    +For e.g.
    +
    +openstack volume snapshot delete my-volume-snapshot
    +
    +

    Using NERC's Horizon dashboard

    +

Once you're logged in to NERC's Horizon dashboard, you can create a snapshot via the "Volumes" menu by clicking on the "Create Snapshot" action button on the desired volume as shown below:

    +

    Create Volume Snapshot

    +

    In the dialog box that opens, enter a snapshot name and a brief description.

    +

    Volume Snapshot Information

    +

    How to restore from Volume snapshot

    +

    Once a snapshot is created and is in "Available" status, you can view and manage +it under the Volumes menu in the Horizon dashboard under Volume Snapshots.

    +

    Navigate to Project -> Volumes -> Snapshots.

    +

    Volume Snapshots List

    +

You have the option to launch an instance directly from this snapshot by clicking on the arrow next to "Create Volume" and selecting "Launch as Instance".

    +

    Launch an Instance from Volume Snapshot

    +

It also has other options, i.e. to create a volume from the snapshot, edit details about the snapshot, delete it, or update the snapshot metadata.

    +

Here, we will first create a volume from the snapshot by clicking the "Create Volume" button as shown below:

    +

    Create Volume from Volume Snapshot

    +

    In the dialog box that opens, enter a volume name and a brief description.

    +

    Create Volume Popup

    +

    Any snapshots made into volumes can be found under Volumes:

    +

    Navigate to Project -> Volumes -> Volumes.

    +

    New Volume from Volume Snapshot

    +

    Then using this newly created volume, you can launch it as an instance by clicking +on the arrow next to "Edit Volume" and selecting "Launch as Instance" as shown +below:

    +

    Launch an Instance from Volume

    +
    +

    Very Important: Requested/Approved Allocated Storage Quota and Cost

    +

    Please remember that any volumes and snapshots stored will consume your +Storage quotas, which represent the storage space allocated to your project. +For NERC (OpenStack) Resource Allocations, storage quotas are specified +by the "OpenStack Volume Quota (GiB)" and "OpenStack Swift Quota (GiB)" +allocation attributes. You can delete any volumes and snapshots that are no +longer needed to free up space. However, even if you delete volumes and snapshots, +you will still be billed based on your approved and reserved storage allocation, +which reserves storage from the total NESE storage pool.

    +

    If you request additional storage by specifying a changed quota value for +the "OpenStack Volume Quota (GiB)" and "OpenStack Swift Quota (GiB)" +allocation attributes through NERC's ColdFront interface, +invoicing for the extra storage will take place upon fulfillment or approval +of your request, as explained in our +Billing FAQs.

    +

    Conversely, if you request a reduction in the Storage quotas, specified +by the "OpenStack Volume Quota (GiB)" and "OpenStack Swift Quota (GiB)", +through a change request using ColdFront, +your invoicing will be adjusted accordingly when the request is submitted.

    +

    In both scenarios, 'invoicing' refers to the accumulation of hours +corresponding to the added or removed storage quantity.

    +
    +
    +

    Help Regarding Billing

    +

    Please send your questions or concerns regarding Storage and Cost by emailing +us at help@nerc.mghpcc.org +or, by submitting a new ticket at the NERC's Support Ticketing System.

    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/backup/images/create-instance-snapshot.png b/openstack/backup/images/create-instance-snapshot.png new file mode 100644 index 00000000..828a4d72 Binary files /dev/null and b/openstack/backup/images/create-instance-snapshot.png differ diff --git a/openstack/backup/images/create-volume-from-volume-snapshot-info.png b/openstack/backup/images/create-volume-from-volume-snapshot-info.png new file mode 100644 index 00000000..42fa961b Binary files /dev/null and b/openstack/backup/images/create-volume-from-volume-snapshot-info.png differ diff --git a/openstack/backup/images/create-volume-from-volume-snapshot.png b/openstack/backup/images/create-volume-from-volume-snapshot.png new file mode 100644 index 00000000..eb37419c Binary files /dev/null and b/openstack/backup/images/create-volume-from-volume-snapshot.png differ diff --git a/openstack/backup/images/instance-boot-source-options.png b/openstack/backup/images/instance-boot-source-options.png new file mode 100644 index 00000000..3bd0b6d1 Binary files /dev/null and b/openstack/backup/images/instance-boot-source-options.png differ diff --git a/openstack/backup/images/instance-image-snapshot.png b/openstack/backup/images/instance-image-snapshot.png new file mode 100644 index 00000000..dafd7b97 Binary files /dev/null and b/openstack/backup/images/instance-image-snapshot.png differ diff --git a/openstack/backup/images/instance-snapshot-info.png b/openstack/backup/images/instance-snapshot-info.png new file mode 100644 index 00000000..9eff3cc7 Binary files /dev/null and b/openstack/backup/images/instance-snapshot-info.png differ diff --git a/openstack/backup/images/launch-instance-from-volume-snapshot.png b/openstack/backup/images/launch-instance-from-volume-snapshot.png new file mode 100644 index 00000000..000a8227 Binary files /dev/null and b/openstack/backup/images/launch-instance-from-volume-snapshot.png differ diff --git a/openstack/backup/images/launch_instance_from_volume.png b/openstack/backup/images/launch_instance_from_volume.png new file mode 100644 index 00000000..105cecbf Binary files /dev/null and b/openstack/backup/images/launch_instance_from_volume.png differ diff --git a/openstack/backup/images/new-volume-from-snapshot.png b/openstack/backup/images/new-volume-from-snapshot.png new file mode 100644 index 00000000..1174b44a Binary files /dev/null and b/openstack/backup/images/new-volume-from-snapshot.png differ diff --git a/openstack/backup/images/snapshot-instance-options.png b/openstack/backup/images/snapshot-instance-options.png new file mode 100644 index 00000000..fb0b8b8f Binary files /dev/null and b/openstack/backup/images/snapshot-instance-options.png differ diff --git a/openstack/backup/images/volume-create-snapshot.png b/openstack/backup/images/volume-create-snapshot.png new file mode 100644 index 00000000..e5d8127d Binary files /dev/null and b/openstack/backup/images/volume-create-snapshot.png differ diff --git a/openstack/backup/images/volume-snapshot-info.png b/openstack/backup/images/volume-snapshot-info.png new file mode 100644 index 00000000..ebdba2ba Binary files /dev/null and b/openstack/backup/images/volume-snapshot-info.png differ diff --git a/openstack/backup/images/volume-snapshots-list.png b/openstack/backup/images/volume-snapshots-list.png new file mode 100644 index 00000000..6708c47a Binary files /dev/null and b/openstack/backup/images/volume-snapshots-list.png differ diff --git 
a/openstack/create-and-connect-to-the-VM/assign-a-floating-IP/index.html b/openstack/create-and-connect-to-the-VM/assign-a-floating-IP/index.html new file mode 100644 index 00000000..1cbc1c6e --- /dev/null +++ b/openstack/create-and-connect-to-the-VM/assign-a-floating-IP/index.html @@ -0,0 +1,3397 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Assign a Floating IP

    +

    When an instance is created in OpenStack, it is automatically assigned a fixed +IP address in the network to which the instance is assigned. This IP address is +permanently associated with the instance until the instance is terminated.

    +

    However, in addition to the fixed IP address, a Floating IP address can also be +attached to an instance. Unlike fixed IP addresses, Floating IP addresses can +have their associations modified at any time, regardless of the state of the +instances involved. Floating IPs are a limited resource, so your project will +have a quota based on its needs. +You should only assign public IPs to VMs that need them. This procedure details +the reservation of a Floating IP address from an existing pool of addresses and +the association of that address with a specific instance.

    +

By attaching a Floating IP to your instance, you can SSH into your VM from your local machine.

    +

    Make sure you are using key forwarding as described in Create a Key Pair.

    +

    Allocate a Floating IP

    +

    Navigate to Project -> Compute -> Instances.

    +

    Next to Instance Name -> Click Actions dropdown arrow (far right) -> Choose +Associate Floating IP

    +

    Floating IP Associate

    +

    If you have some floating IPs already allocated to your project which are not +yet associated with a VM, they will be available in the dropdown list on this +screen.

    +

    Floating IP Successfully Allocated

    +

    If you have no floating IPs allocated, or all your allocated IPs are in use +already, the dropdown list will be empty.

    +

    Floating IP Not Available

    +

    Click the "+" icon to allocate an IP. You will see the following screen.

    +

    Floating IP Allocated

    +

    Make sure 'provider' appears in the dropdown menu, and that you have not +already met your quota of allocated IPs.

    +

In this example, the project has a quota of 50 Floating IPs; since 5 have already been allocated, we can still allocate up to 45 more.

    +

    Click "Allocate IP".

    +

You will get a green "success" popup in the top right corner showing your public IP address, which is then listed as an option in the "IP Address" dropdown list.

    +

    Floating IP Successfully Allocated

    +

You will be able to select from multiple Floating IPs under the "IP Address" dropdown and any unassociated VMs from the "Port to be associated" dropdown:

    +

    Floating IP Successfully Allocated

    +

Now click on the "Associate" button.

    +

Then a green "success" popup appears in the top left, and you can see the Floating IP attached to your VM on the Instances page:

    +

    Floating IP Successfully Associated

    +
    +

    Floating IP Quota Exceed

    +

If you have already exceeded your quota, you will get a red error message saying "You are already using all of your available floating IPs", as shown below:

    +

    Floating IP Quota Exceed

    +

    NOTE: By default, each approved project is provided with only 2 OpenStack +Floating IPs, regardless of the units requested in the quota, as +described here. +Your PI or Project Manager(s) can adjust the quota and request additional +Floating IPs as needed, following this documentation. +This is controlled by the "OpenStack Floating IP Quota" attribute.

    +
    +
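If you prefer the command line, a minimal sketch of the same allocate-and-associate flow looks like the following; the external network is named provider, as noted above, while the instance name and IP address are placeholders:

# Allocate a new Floating IP from the 'provider' external network
openstack floating ip create provider

# Associate the allocated Floating IP with your instance
openstack server add floating ip <instance name or uuid> <floating ip address>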

    Disassociate a Floating IP

    +

    You may need to disassociate a Floating IP from an instance which no longer +needs it, so you can assign it to one that does.

    +

    Navigate to Project -> Compute -> Instances.

    +

    Find the instance you want to remove the IP from in the list. Click the red +"Disassociate Floating IP" to the right.

    +

    This IP will be disassociated from the instance, but it will still remain +allocated to your project.

    +

    Floating IP Disassociate

    +

    Release a Floating IP

    +

    You may discover that your project does not need all the floating IPs that are +allocated to it.

    +

We can release a Floating IP while disassociating it; we just need to check the "Release Floating IP" option as shown here:

    +

    Floating IP Successfully Disassociated

    +

    OR,

    +

    Navigate to Project -> Network -> Floating IPs.

    +

    To release the Floating IP address back into the Floating IP pool, click the +Release Floating IP option in the Actions column.

    +

    Release Floating IP

    +
    +

    Pro Tip

    +

    You can also choose multiple Floating IPs and release them all at once.

    +
    +
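The equivalent CLI sketch for disassociating and then releasing a Floating IP is shown below; the instance name and IP address are placeholders:

# Detach the Floating IP from the instance; it stays allocated to your project
openstack server remove floating ip <instance name or uuid> <floating ip address>

# Release the Floating IP back into the pool
openstack floating ip delete <floating ip address>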
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/bastion_host_demo_sg.png b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/bastion_host_demo_sg.png new file mode 100644 index 00000000..7e470006 Binary files /dev/null and b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/bastion_host_demo_sg.png differ diff --git a/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/bastion_host_security_group.png b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/bastion_host_security_group.png new file mode 100644 index 00000000..2a7ae686 Binary files /dev/null and b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/bastion_host_security_group.png differ diff --git a/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/bastion_host_ssh_tunnel.png b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/bastion_host_ssh_tunnel.png new file mode 100644 index 00000000..a8449770 Binary files /dev/null and b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/bastion_host_ssh_tunnel.png differ diff --git a/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/floating_ip.png b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/floating_ip.png new file mode 100644 index 00000000..6809f131 Binary files /dev/null and b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/floating_ip.png differ diff --git a/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/private1_sg.png b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/private1_sg.png new file mode 100644 index 00000000..e8894642 Binary files /dev/null and b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/private1_sg.png differ diff --git a/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/private_instances_sg.png b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/private_instances_sg.png new file mode 100644 index 00000000..ad3c7d23 Binary files /dev/null and b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/private_instances_sg.png differ diff --git a/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/security_groups.png b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/security_groups.png new file mode 100644 index 00000000..69302b86 Binary files /dev/null and b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/security_groups.png differ diff --git a/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/select_bastion_sg_as_remote.png b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/select_bastion_sg_as_remote.png new file mode 100644 index 00000000..9d8eaa5a Binary files /dev/null and b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/select_bastion_sg_as_remote.png differ diff --git a/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/ssh_connection_successful.png b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/ssh_connection_successful.png new file mode 100644 index 00000000..d559b22b Binary files /dev/null and b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/images/ssh_connection_successful.png differ diff --git a/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.html 
b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.html new file mode 100644 index 00000000..d09a851e --- /dev/null +++ b/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.html @@ -0,0 +1,3365 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Bastion Host

    +

A bastion host is a server that provides secure access to private networks over SSH from an external network, such as the Internet. We can leverage a bastion host to record all SSH sessions established with private network instances, which enables auditing and can help with efforts to comply with regulatory requirements.

    +

    The following diagram illustrates the concept of using an SSH bastion host to +provide access to Linux instances running inside OpenStack cloud network.

    +

    Bastion Host SSH tunnel

    +

In OpenStack, users can deploy instances in a private tenant network. In order to make these instances accessible externally via the internet, the tenant must assign each instance a Floating IP address, i.e., an external public IP. Nevertheless, users may still want a way to deploy instances without having to assign a Floating IP address to every instance.

    +

This is useful in the context of an OpenStack project, as you don't necessarily want to reserve a Floating IP for every instance. This way you can isolate certain resources so that there is only a single point of access to them, and conserve Floating IP addresses so that you don't need as large a quota.

    +

    Leveraging an SSH bastion host allows this sort of configuration while still +enabling SSH access to the private instances.

    +

    Before trying to access instances from the outside world using SSH tunneling +via Bastion Host, you need to make sure you have followed these steps:

    +
      +
You followed the instructions in Create a Key Pair to set up a public ssh key. You can use the same key for both the bastion host and the remote instances, or different keys; you'll just need to ensure that the keys are loaded by ssh-agent appropriately so they can be used as needed. Please read this instruction on how to start ssh-agent and load your private key using the ssh-add command to access the bastion host.
    • +
    +

Verify you have an SSH agent running. It should have loaded the same key you used when launching your instances.

    +
    ssh-add -l
    +
    +

    If you need to add the key to your agent:

    +
    ssh-add path/to/private/key
    +
    +

    Now you can SSH into the bastion host:

    +
    ssh -A <user>@<bastion-floating-IP>
    +
    +
      +
    • +

      Your public ssh-key was selected (in the Access and Security tab) while +launching the instance.

      +
    • +
    • +

Add two Security Groups: one will be used by the bastion host and the other by any private instances.

      +
    • +
    +

    Security Groups

    +

    i. Bastion Host Security Group:

    +

    Allow inbound SSH (optional ICMP) for this security group. Make sure you have +added rules in the Security Groups to allow ssh to the bastion host.

    +

    Bastion Host Security Group

    +

    ii. Private Instances Security Group:

    +

You need to select "Security Group" in the Remote dropdown option, and then select the "Bastion Host Security Group" under the Security Group option as shown below:

    +

    Bastion Host Security Group as SG

    +

    Private Instances Security Group

    + +
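If you prefer to create these two security groups from the CLI, a minimal sketch looks like this; the group names bastion_sg and private_sg are illustrative:

# Bastion host security group: allow inbound SSH from anywhere
openstack security group create --description 'Bastion host SSH' bastion_sg
openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 bastion_sg

# Private instances security group: allow SSH only from the bastion's security group
openstack security group create --description 'Private instances SSH' private_sg
openstack security group rule create --protocol tcp --dst-port 22 --remote-group bastion_sg private_sg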

    Make a note of the Floating IP you have associated to your instance.

    +

    Floating IP

    +

    While adding the Bastion host and private instance, please select appropriate +Security Group as shown below:

    +

    private1:

    +

    private1 Instance Security Group

    +

    bastion_host_demo:

    +

    Bastion Host Security Group

    +

Finally, you'll want to configure the ProxyJump setting for the remote instances in your SSH configuration file (typically found in ~/.ssh/config). In the SSH configuration file, we can define multiple hosts by pet names, specify custom ports, hostnames, users, etc. For example, let's say that you had a remote instance named "private1" and you wanted to run SSH connections through a bastion host called "bastion". The appropriate SSH configuration file might look something like this:

    +
    Host bastion
    +  HostName 140.247.152.139
    +  User ubuntu
    +
    +Host private1
    +  Hostname 192.168.0.40
    +  User ubuntu
    +  ProxyJump bastion
    +
    +

    ProxyJump makes it super simple to jump from one host to another totally transparently.

    +

    OR,

    +

If you don't have keys loaded via the ssh-add command with ssh-agent running on your local machine, you can specify the private key using the IdentityFile option in the SSH configuration file as shown below:

    +
    Host private1
    +  Hostname 192.168.0.40
    +  User ubuntu
    +  IdentityFile ~/.ssh/cloud.key
    +  ProxyJump bastion
    +
    +Host bastion
    +  HostName 140.247.152.139
    +  User ubuntu
    +  IdentityFile ~/.ssh/cloud.key
    +
    +

With this configuration in place, when you type ssh private1, SSH will establish a connection to the bastion host and then, through the bastion host, connect to "private1", using the agent-loaded keys or the specified private keys.
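For a one-off connection without editing the configuration file, the same hop can be expressed with the -J (ProxyJump) flag, using the example addresses above:

ssh -J ubuntu@140.247.152.139 ubuntu@192.168.0.40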

    +

    In this sort of arrangement, SSH traffic to private servers that are not +directly accessible via SSH is instead directed through a bastion host, which +proxies the connection between the SSH client and the remote servers. The +bastion host runs on an instance that is typically in a public subnet with +attached floating public IP. Private instances are in a subnet that is not +publicly accessible, and they are set up with a security group that allows SSH +access from the security group attached to the underlying instance running the +bastion host.

    +

    The user won't see any of this; he or she will just see a shell for +"private1" appear. If you dig a bit further, though (try running who on the +remote node), you'll see the connections are coming from the bastion host, not +the original SSH client.

    +

    Successful SSH Connection

    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/create-and-connect-to-the-VM/create-a-Windows-VM/index.html b/openstack/create-and-connect-to-the-VM/create-a-Windows-VM/index.html new file mode 100644 index 00000000..3eb27885 --- /dev/null +++ b/openstack/create-and-connect-to-the-VM/create-a-Windows-VM/index.html @@ -0,0 +1,3725 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Create a Windows virtual machine

    +

    Launch an Instance using a boot volume

    +

In this example, we will illustrate how to use a boot volume to launch a Windows virtual machine; similar steps can be used for other types of virtual machines. The following steps show how to create a virtual machine which boots from an external volume:

    +
      +
    • +

      Create a volume with source data from the image

      +
    • +
    • +

      Launch a VM with that volume as the system disk

      +
    • +
    +
    +

    Recommendations

    +
      +
    • +

      The recommended method to create a Windows desktop virtual machine is boot +from volume, although you can also launch a Windows-based instance following +the normal process using boot from image as described here.

      +
    • +
    • +

      To ensure smooth upgrade and maintenance of the system, select at least +100 GiB for the size of the volume.

      +
    • +
    • +

      Make sure your project has sufficient storage quotas.

      +
    • +
    +
    +

    Create a volume from image

    +

    1. Using NERC's Horizon dashboard

    +

    Navigate: Project -> Compute -> Images.

    +

Make sure the MS-Windows-2022 image is available in the Images list for your project as shown below:

    +

    MS-Windows-2022 OpenStack Image

    +

    Create a Volume using that Windows Image:

    +

    MS-Winodws-2022 Image to Volume Create

    +

    To ensure smooth upgrade and maintenance of the system, select at least 100 GiB +for the size of the volume as shown below:

    +

    Create Volume

    +

    2. Using the OpenStack CLI

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +

    To create a volume from image using the CLI, do this:

    +

    Using the openstack client commands

    +

    Identify the image for the initial volume contents from openstack image list.

    +
    openstack image list
    ++--------------------------------------+---------------------+--------+
    +| ID                                   | Name                | Status |
    ++--------------------------------------+---------------------+--------+
    +| a9b48e65-0cf9-413a-8215-81439cd63966 | MS-Windows-2022     | active |
    +...
    ++--------------------------------------+---------------------+--------+
    +
    +

    In the example above, this is image id a9b48e65-0cf9-413a-8215-81439cd63966 for +MS-Windows-2022.

    +

Create a volume from this image with a size of 100 GiB, named "my-volume", as follows.

    +
    openstack volume create --image a9b48e65-0cf9-413a-8215-81439cd63966 --size 100 --description "Using MS Windows Image" my-volume
    ++---------------------+--------------------------------------+
    +| Field               | Value                                |
    ++---------------------+--------------------------------------+
    +| attachments         | []                                   |
    +| availability_zone   | nova                                 |
    +| bootable            | false                                |
    +| consistencygroup_id | None                                 |
    +| created_at          | 2024-02-03T23:38:50.000000           |
    +| description         | Using MS Windows Image               |
    +| encrypted           | False                                |
    +| id                  | d8a5da4c-41c8-4c2d-b57a-8b6678ce4936 |
    +| multiattach         | False                                |
    +| name                | my-volume                            |
    +| properties          |                                      |
    +| replication_status  | None                                 |
    +| size                | 100                                  |
    +| snapshot_id         | None                                 |
    +| source_volid        | None                                 |
    +| status              | creating                             |
    +| type                | tripleo                              |
    +| updated_at          | None                                 |
    +| user_id             | 938eb8bfc72e4cb3ad2b94e2eb4059f7     |
    ++---------------------+--------------------------------------+
    +
    +

    Checking the status again using openstack volume show my-volume will allow the +volume creation to be followed.

    +

    "downloading" means that the volume contents is being transferred from the image +service to the volume service

    +

    "available" means the volume can now be used for booting. A set of volume_image +meta data is also copied from the image service.

    +

    Launch instance from existing bootable volume

    +

    1. Using Horizon dashboard

    +

    Navigate: Project -> Volumes -> Volumes.

    +

Once the volume is successfully created, we can use it to launch an instance as shown below:

    +

    Launch Instance from Volume

    +
    +

    How do you make your VM setup and data persistent?

    +

    Only one instance at a time can be booted from a given volume. Make sure +"Delete Volume on Instance Delete" is selected as No if you want the +volume to persist even after the instance is terminated, which is the +default setting, as shown below:

    +

    Instance Persistent Storage Option

    +

    NOTE: For more in-depth information on making your VM setup and data persistent, +you can explore the details here.

    +
    +

Add other information and set up a Security Group that allows RDP (port: 3389) as shown below:

    +

    Launch Instance Security Group for RDP

    +
    +

    Very Important: Setting Administrator Credentials to Log into Your VM.

    +

    To access this Windows VM, you must log in using Remote Desktop, as +described here. +To configure a password for the "Administrator" user account, proceed to the +"Configuration" section and enter the supplied PowerShell-based Customized Script. +Make sure to substitute <Your_Own_Admin_Password> with your preferred password, +which will enable Remote Desktop login to the Windows VM.

    +
    #ps1
    +
    +net user Administrator <Your_Own_Admin_Password>
    +
    +

    Please ensure that your script in the "Configuration" section resembles the +following syntax: +Setting Administrator Password Custom Script

    +
    +

After some time, the instance will be Active and in the Running state, as shown below:

    +

    Running Windows Instance

    +

    Attach a Floating IP to your instance:

    +

    Associate Floating IP

    +

    2. Using the OpenStack CLI from the terminal

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +

    To launch an instance from existing bootable volume using the CLI, do this:

    +

    Using the openstack client commands from terminal

    +

    Get the flavor name using openstack flavor list:

    +
    openstack flavor list | grep cpu-su.4
    +| b3f5dded-efe3-4630-a988-2959b73eba70 | cpu-su.4      |  16384 |   20 |         0 |     4 | True      |
    +
    +

    To access this Windows VM, you must log in using Remote Desktop, as +described here. Before +launching the VM using the OpenStack CLI, we'll prepare a PowerShell-based Customized +Script as "user-data".

    +
    +

    What is a user data file?

    +

    A user data file is a text file that you can include when running the +openstack server create command. This file is used to customize your +instance during boot.

    +
    +

    You can place user data in a local file and pass it through the +--user-data <user-data-file> parameter at instance creation. You'll create a +local file named admin_password.ps1 with the following content. Please remember +to replace <Your_Own_Admin_Password> with your chosen password, which will be +used to log in to the Windows VM via Remote Desktop.

    +
    #ps1
    +
    +net user Administrator <Your_Own_Admin_Password>
    +
    +

To set up a Security Group named "rdp_test" that allows RDP (port: 3389) using the CLI, use the command openstack security group create <group-name>:

    +
    openstack security group create --description 'Allows RDP' rdp_test
    +
    +openstack security group rule create --protocol tcp --dst-port 3389 rdp_test
    +
    +

    To create a Windows VM named "my-vm" using the specified parameters, including the +flavor name "cpu-su.4", existing key pair "my-key", security group "rdp_test", +user data from the file "admin_password.ps1" created above, and the volume with +name "my-volume" created above, you can run the following command:

    +
    openstack server create --flavor cpu-su.4 \
    +    --key-name my-key \
    +    --security-group rdp_test \
    +    --user-data admin_password.ps1 \
    +    --volume my-volume \
    +    my-vm
    +
    +

    To list all Floating IP addresses that are allocated to the current project, run:

    +
    openstack floating ip list
    +
    ++--------------------------------------+---------------------+------------------+------+
    +| ID                                   | Floating IP Address | Fixed IP Address | Port |
    ++--------------------------------------+---------------------+------------------+------+
    +| 760963b2-779c-4a49-a50d-f073c1ca5b9e | 199.94.60.220       | 192.168.0.195    | None |
    ++--------------------------------------+---------------------+------------------+------+
    +
    +
    +

    More About Floating IP

    +

    If the above command returns an empty list, meaning you don't have any +available floating IPs, please refer to this documentation +on how to allocate a new Floating IP to your project.

    +
    +

    Attach a Floating IP to your instance:

    +
    openstack server add floating ip INSTANCE_NAME_OR_ID FLOATING_IP_ADDRESS
    +
    +

    For example:

    +
    openstack server add floating ip my-vm 199.94.60.220
    +
    +

    Accessing the graphical console in the Horizon dashboard

    +

    You can access the graphical console using the browser once the VM is in status +ACTIVE. It can take up to 15 minutes to reach this state.

    +

    The console is accessed by selecting the Instance Details for the machine and the +'Console' tab as shown below:

    +

    View Console of Instance

    +

    Administrator Sign in Prompt

    +

    How to add Remote Desktop login to your Windows instance

    +

    When the build and the Windows installation steps have completed, you can access +the console using the Windows Remote Desktop application. Remote Desktop login +should work with the Floating IP associated with the instance:

    +

    Search Remote Desktop Protocol locally

    +

    Connect to Remote Instance using Floating IP

    +

    Prompted Administrator Login

    +
    +

    What is the user login for Windows Server 2022?

    +

    The default username is "Administrator," and the password is the one you set +using the user data PowerShell script during the launch.

    +
    +

    Prompted RDP connection

    +

    Successfully Remote Connected Instance

    +
    +

    Storage and Volume

    +
      +
System disks are the first disk of an instance, sized according to the flavor's disk space, and are generally used to store the operating system created from an image when the virtual machine is booted.
    • +
    • Volumes are +persistent virtualized block devices independent of any particular instance. +Volumes may be attached to a single instance at a time, but may be detached +or reattached to a different instance while retaining all data, much like a +USB drive. The size of the volume can be selected when it is created within +the storage quota limits for the particular resource allocation.
    • +
    +
    +

    Connect additional disk using volume

    +

To attach an additional disk to a running Windows machine, you can follow this documentation. This guide provides instructions on formatting and mounting a volume as an attached disk within a Windows virtual machine.
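If you prefer the command line, the volume can also be attached to the running instance with the OpenStack client; the server and volume names below are placeholders:

openstack server add volume my-vm my-volume
openstack volume list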

    +

    Nova flavors

    +

    In NERC OpenStack, flavors define the compute, memory, and storage capacity of +nova computing instances. In other words, a flavor is an available hardware +configuration for a server.

    +
    +

    Note

    +

Flavors are visible only while you are launching an instance, under the "Flavor" tab, as explained here.
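If you have the OpenStack CLI configured for your project, you can usually also review the available flavors and their resources from the command line, for example:

openstack flavor list
openstack flavor show cpu-su.4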

    +
    +

    The important fields are

    + + + + + + + + + + + + + + + + + + + + + + + + + +
| Field | Description |
| --- | --- |
| RAM | Memory size in MiB |
| Disk | Size of disk in GiB |
| Ephemeral | Size of a second disk. 0 means no second disk is defined and mounted. |
| VCPUs | Number of virtual cores |
    +

    Comparison Between CPU and GPU

    +

    Here are the key differences between CPUs and GPUs:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| CPUs | GPUs |
| --- | --- |
| Work mostly in sequence. While several cores and excellent task switching give the impression of parallelism, a CPU is fundamentally designed to run one task at a time. | Are designed to work in parallel. A vast number of cores and threading managed in hardware enable GPUs to perform many simple calculations simultaneously. |
| Are designed for task parallelism. | Are designed for data parallelism. |
| Have a small number of cores that can complete single complex tasks at very high speeds. | Have a large number of cores that work in tandem to compute many simple tasks. |
| Have access to a large amount of relatively slow RAM with low latency, optimizing them for latency (operation). | Have access to a relatively small amount of very fast RAM with higher latency, optimizing them for throughput. |
| Have a very versatile instruction set, allowing the execution of complex tasks in fewer cycles but creating overhead in others. | Have a limited (but highly optimized) instruction set, allowing them to execute their designed tasks very efficiently. |
| Task switching (as a result of running the OS) creates overhead. | Task switching is not used; instead, numerous serial data streams are processed in parallel from point A to point B. |
| Will always work for any given use case but may not provide adequate performance for some tasks. | Would only be a valid choice for some use cases but would provide excellent performance in those cases. |
    +

    In summary, for applications such as Machine Learning (ML), Artificial +Intelligence (AI), or image processing, a GPU can provide a performance increase +of 50x to 200x compared to a typical CPU performing the same tasks.

    +

    Currently, our setup supports and offers the following flavors

    +

    NERC offers the following flavors based on our Infrastructure-as-a-Service +(IaaS) - OpenStack offerings (Tiers of Service).

    +
    +

    Pro Tip

    +

    Choose a flavor for your instance from the available Tier that suits your +requirements, use-cases, and budget when launching a VM as shown here.

    +
    +

    1. Standard Compute Tier

    +

    The standard compute flavor "cpu-su" is provided from Lenovo SD530 (2x Intel +8268 2.9 GHz, 48 cores, 384 GB memory) server. The base unit is 1 vCPU, 4 GB +memory with default of 20 GB root disk at a rate of $0.013 / hr of wall time.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Flavor | SUs | GPU | vCPU | RAM (GiB) | Storage (GiB) | Cost / hr |
| --- | --- | --- | --- | --- | --- | --- |
| cpu-su.1 | 1 | 0 | 1 | 4 | 20 | $0.013 |
| cpu-su.2 | 2 | 0 | 2 | 8 | 20 | $0.026 |
| cpu-su.4 | 4 | 0 | 4 | 16 | 20 | $0.052 |
| cpu-su.8 | 8 | 0 | 8 | 32 | 20 | $0.104 |
| cpu-su.16 | 16 | 0 | 16 | 64 | 20 | $0.208 |
    +

    2. Memory Optimized Tier

    +

The memory optimized flavor "mem-su" is provided from the same servers as "cpu-su" but with 8 GB of memory per core. The base unit is 1 vCPU, 8 GB memory with a default of 20 GB root disk at a rate of $0.026 / hr of wall time.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Flavor | SUs | GPU | vCPU | RAM (GiB) | Storage (GiB) | Cost / hr |
| --- | --- | --- | --- | --- | --- | --- |
| mem-su.1 | 1 | 0 | 1 | 8 | 20 | $0.026 |
| mem-su.2 | 2 | 0 | 2 | 16 | 20 | $0.052 |
| mem-su.4 | 4 | 0 | 4 | 32 | 20 | $0.104 |
| mem-su.8 | 8 | 0 | 8 | 64 | 20 | $0.208 |
| mem-su.16 | 16 | 0 | 16 | 128 | 20 | $0.416 |
    +

    3. GPU Tier

    +

NERC also supports the most demanding workloads, including Artificial Intelligence (AI), Machine Learning (ML) training and Deep Learning modeling, simulation, data analytics, data visualization, distributed databases, and more. For such demanding workloads, NERC's GPU-based distributed computing flavors are recommended; they are backed by specialized hardware (GPUs) that provides significant performance boosts for technical computing workloads.

    +
    +

    Guidelines for Utilizing GPU-Based Flavors in Active Resource Allocation

    +

    To effectively utilize GPU-based flavors on any NERC (OpenStack) resource allocation, +the Principal Investigator (PI) or project manager(s) must submit a +change request +for their currently active NERC (OpenStack) resource allocation. This request +should specify the number of GPUs they intend to use by setting the "OpenStack +GPU Quota" attribute. We recommend ensuring that this count accurately reflects +the current GPU usage. Additionally, they need to adjust the quota values for +"OpenStack Compute RAM Quota (MiB)" and "OpenStack Compute vCPU Quota" to sufficiently +accommodate the GPU flavor they wish to use when launching a VM in their +OpenStack Project.

    +

    Once the change request is reviewed and approved by the NERC's admin, users +will be able to select the appropriate GPU-based flavor during the flavor +selection tab +when launching a new VM.

    +
    +

    There are four different options within the GPU tier, featuring the newer +NVIDIA A100 SXM4, NVIDIA A100s, NVIDIA V100s, and NVIDIA K80s.

    +
    +

    How can I get customized A100 SXM4 GPUs not listed in the current flavors?

    +

    We also provide customized A100 SXM4 GPU-based flavors, which are not publicly +listed on our NVIDIA A100 SXM4 40GB GPU Tiers list. These options are exclusively +available for demanding projects and are subject to availability.

    +

    To request access, please fill out this form. +Our team will review your request and reach out to you to discuss further.

    +
    +

    i. NVIDIA A100 SXM4 40GB

    +

    The "gpu-su-a100sxm4" flavor is provided from Lenovo SD650-N V2 (2x Intel Xeon +Platinum 8358 32C 250W 2.6GHz, 128 cores, 1024 GB RAM 4x NVIDIA HGX A100 40GB) servers. +The higher number of tensor cores available can significantly enhance the speed +of machine learning applications. The base unit is 32 vCPU, 240 GB memory with +default of 20 GB root disk at a rate of $2.078 / hr of wall time.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Flavor | SUs | GPU | vCPU | RAM (GiB) | Storage (GiB) | Cost / hr |
| --- | --- | --- | --- | --- | --- | --- |
| gpu-su-a100sxm4.1 | 1 | 1 | 32 | 240 | 20 | $2.078 |
| gpu-su-a100sxm4.2 | 2 | 2 | 64 | 480 | 20 | $4.156 |
    +
    +

    How to setup NVIDIA driver for "gpu-su-a100sxm4" flavor based VM?

    +

    After launching a VM with an NVIDIA A100 SXM4 GPU flavor, you will need to +setup the NVIDIA driver in order to use GPU-based codes and libraries. +Please run the following commands to setup the NVIDIA driver and CUDA +version required for these flavors in order to execute GPU-based codes. +NOTE: These commands are ONLY applicable for the VM based on +"ubuntu-22.04-x86_64" image. You might need to find corresponding +packages for your own OS of choice.

    +
    sudo apt update
    +sudo apt -y install nvidia-driver-495
    +# Just click *Enter* if any popups appear!
    +# Confirm and verify that you can see the NVIDIA device attached to your VM
    +lspci | grep -i nvidia
    +# 00:05.0 3D controller: NVIDIA Corporation GA100 [A100 SXM4 40GB] (rev a1)
    +sudo reboot
    +# SSH back to your VM and then you will be able to use nvidia-smi command
    +nvidia-smi
    +
    +
    +

    ii. NVIDIA A100 40GB

    +

    The "gpu-su-a100" flavor is provided from Lenovo SR670 (2x Intel 8268 2.9 GHz, +48 cores, 384 GB memory, 4x NVIDIA A100 40GB) servers. These latest GPUs deliver +industry-leading high throughput and low latency networking. The base unit is 24 +vCPU, 74 GB memory with default of 20 GB root disk at a rate of $1.803 / hr of +wall time.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Flavor | SUs | GPU | vCPU | RAM (GiB) | Storage (GiB) | Cost / hr |
| --- | --- | --- | --- | --- | --- | --- |
| gpu-su-a100.1 | 1 | 1 | 24 | 74 | 20 | $1.803 |
| gpu-su-a100.2 | 2 | 2 | 48 | 148 | 20 | $3.606 |
    +
    +

    How to setup NVIDIA driver for "gpu-su-a100" flavor based VM?

    +

    After launching a VM with an NVIDIA A100 GPU flavor, you will need to +setup the NVIDIA driver in order to use GPU-based codes and libraries. +Please run the following commands to setup the NVIDIA driver and CUDA +version required for these flavors in order to execute GPU-based codes. +NOTE: These commands are ONLY applicable for the VM based on +"ubuntu-22.04-x86_64" image. You might need to find corresponding +packages for your own OS of choice.

    +
    sudo apt update
    +sudo apt -y install nvidia-driver-495
    +# Just click *Enter* if any popups appear!
    +# Confirm and verify that you can see the NVIDIA device attached to your VM
    +lspci | grep -i nvidia
    +# 0:05.0 3D controller: NVIDIA Corporation GA100 [A100 PCIe 40GB] (rev a1)
    +sudo reboot
    +# SSH back to your VM and then you will be able to use nvidia-smi command
    +nvidia-smi
    +
    +
    +

    iii. NVIDIA V100 32GB

    +

    The "gpu-su-v100" flavor is provided from Dell R740xd (2x Intel Xeon Gold 6148, +40 cores, 768GB memory, 1x NVIDIA V100 32GB) servers. The base unit is 48 vCPU, +192 GB memory with default of 20 GB root disk at a rate of $1.214 / hr of wall time.

    + + + + + + + + + + + + + + + + + + + + + + + +
| Flavor | SUs | GPU | vCPU | RAM (GiB) | Storage (GiB) | Cost / hr |
| --- | --- | --- | --- | --- | --- | --- |
| gpu-su-v100.1 | 1 | 1 | 48 | 192 | 20 | $1.214 |
    +
    +

    How to setup NVIDIA driver for "gpu-su-v100" flavor based VM?

    +

    After launching a VM with an NVIDIA V100 GPU flavor, you will need to +setup the NVIDIA driver in order to use GPU-based codes and libraries. +Please run the following commands to setup the NVIDIA driver and CUDA +version required for these flavors in order to execute GPU-based codes. +NOTE: These commands are ONLY applicable for the VM based on +"ubuntu-22.04-x86_64" image. You might need to find corresponding +packages for your own OS of choice.

    +
    sudo apt update
    +sudo apt -y install nvidia-driver-470
    +# Just click *Enter* if any popups appear!
    +# Confirm and verify that you can see the NVIDIA device attached to your VM
    +lspci | grep -i nvidia
    +# 00:05.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] (rev a1)
    +sudo reboot
    +# SSH back to your VM and then you will be able to use nvidia-smi command
    +nvidia-smi
    +
    +
    +

    iv. NVIDIA K80 12GB

    +

    The "gpu-su-k80" flavor is provided from Supermicro X10DRG-H (2x Intel +E5-2620 2.40GHz, 24 cores, 128GB memory, 4x NVIDIA K80 12GB) servers. The base unit +is 6 vCPU, 28.5 GB memory with default of 20 GB root disk at a rate of $0.463 / +hr of wall time.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Flavor | SUs | GPU | vCPU | RAM (GiB) | Storage (GiB) | Cost / hr |
| --- | --- | --- | --- | --- | --- | --- |
| gpu-su-k80.1 | 1 | 1 | 6 | 28.5 | 20 | $0.463 |
| gpu-su-k80.2 | 2 | 2 | 12 | 57 | 20 | $0.926 |
| gpu-su-k80.4 | 4 | 4 | 24 | 114 | 20 | $1.852 |
    +
    +

    How to setup NVIDIA driver for "gpu-su-k80" flavor based VM?

    +

    After launching a VM with an NVIDIA K80 GPU flavor, you will need to +setup the NVIDIA driver in order to use GPU-based codes and libraries. +Please run the following commands to setup the NVIDIA driver and CUDA +version required for these flavors in order to execute GPU-based codes. +NOTE: These commands are ONLY applicable for the VM based on +"ubuntu-22.04-x86_64" image. You might need to find corresponding +packages for your own OS of choice.

    +
    sudo apt update
    +sudo apt -y install nvidia-driver-470
    +# Just click *Enter* if any popups appear!
    +# Confirm and verify that you can see the NVIDIA device attached to your VM
    +lspci | grep -i nvidia
    +# 00:05.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)
    +sudo reboot
    +# SSH back to your VM and then you will be able to use nvidia-smi command
    +nvidia-smi
    +
    +
    +
    +

    NERC IaaS Storage Tiers Cost

    +

Storage, both OpenStack Swift (object storage) and Cinder (block storage/volumes), is charged separately at a rate of $0.009 / TiB / hr (i.e., $9.00E-6 / GiB / hr). More about cost can be found here, and some of the common billing-related FAQs are listed here.
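For example, at $0.009 / TiB / hr, a 1 TiB allocation kept for a 30-day month (720 hours) costs roughly 720 × $0.009 ≈ $6.48.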

    +
    +

    How can I get customized A100 SXM4 GPUs not listed in the current flavors?

    +

    We also provide customized A100 SXM4 GPU-based flavors, which are not publicly +listed on our NVIDIA A100 SXM4 40GB GPU Tiers list. These options are exclusively +available for demanding projects and are subject to availability.

    +

    To request access, please fill out this form. +Our team will review your request and reach out to you to discuss further.

    +

    How to Change Flavor of an instance

    +

    Using Horizon dashboard

    +

    Once you're logged in to NERC's Horizon dashboard, you can navigate to +Project -> Compute -> Instances.

    +

    You can select the instance you wish to extend or change the flavor. Here, you +will see several options available under the Actions menu located on the right-hand +side of your instance, as shown here:

    +

    Resize VM's Instance

    +

    Click "Resize Instance".

    +

    In the Resize Instance dialog box, select the new flavor of your choice under the +"New Flavor" dropdown options. In this example, we are changing the current flavor +"cpu-su.1" to the new flavor "cpu-su.2" for our VM, as shown below:

    +

    Resize Instance Dialog

    +

Once you have reviewed and verified the new flavor details, press the "Resize" button.

    +
    +

    Very Important Information

    +

    You will only be able to choose flavors that are within your current available +resource quotas, i.e., vCPUs and RAM.

    +
    +

    You will see the status of the resize in the following page.

    +

    When it says "Confirm or Revert Resize/Migrate", login to the instance and verify +that it worked as intended (meaning the instance is working as before but with +the new flavor).

    +

If you are happy with the result, press "Confirm Resize/Migrate" in the drop-down to the far right (it should be pre-selected) as shown below:

    +

    Confirm Resize/Migrate

    +

    This will finalise the process and make it permanent.

    +

If you are unhappy with the result (for example, because the process failed), you can instead press "Revert Resize/Migrate" (available in the drop-down). This will revert the process.

    +

    Using the CLI

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +

    If you want to change the flavor that is bound to a VM, then you can run the +following openstack client commands, here we are changing flavor of an existing +VM i.e. named "test-vm" from mem-su.2 to mem-su.4:

    +

    First, stop the running VM using:

    +
    openstack server stop test-vm
    +
    +

    Then, verify the status is "SHUTOFF" and also the used flavor is mem-su.2 as +shown below:

    +
    openstack server list
    ++--------------------------------------+------+---------+--------------------------------------------+--------------------------+---------+
    +| ID | Name | Status | Networks | Image | Flavor |
    ++--------------------------------------+------+---------+--------------------------------------------+--------------------------+---------+
    +| cd51dbba-fe95-413c-9afc-71370be4d4fd | test-vm | SHUTOFF | default_network=192.168.0.58, 199.94.60.10 | N/A (booted from volume) | mem-su.2 |
    ++--------------------------------------+------+---------+--------------------------------------------+--------------------------+---------+
    +
    +

    Then, resize the flavor from mem-su.2 to mem-su.4 by running:

    +
    openstack server resize --flavor mem-su.4 cd51dbba-fe95-413c-9afc-71370be4d4fd
    +
    +

    Confirm the resize:

    +
    openstack server resize confirm cd51dbba-fe95-413c-9afc-71370be4d4fd
    +
    +

    Then, start the VM:

    +
    openstack server start cd51dbba-fe95-413c-9afc-71370be4d4fd
    +
    +

    Verify the VM is using the new flavor of mem-su.4 as shown below:

    +
    openstack server list
    ++--------------------------------------+------+--------+--------------------------------------------+--------------------------+---------+
    +| ID | Name | Status | Networks | Image | Flavor |
    ++--------------------------------------+------+--------+--------------------------------------------+--------------------------+---------+
    +| cd51dbba-fe95-413c-9afc-71370be4d4fd | test-vm | ACTIVE | default_network=192.168.0.58, 199.94.60.10 | N/A (booted from volume) | mem-su.4 |
    ++--------------------------------------+------+--------+--------------------------------------------+--------------------------+---------+
    +
    +

    Images

    +

An image is a virtual collection of a kernel, operating system, and configuration.

    +

    Glance

    +

    Glance is the API-driven OpenStack image service that provides services and associated +libraries to store, browse, register, distribute, and retrieve bootable disk images. +It acts as a registry for virtual machine images, allowing users to copy server +images for immediate storage. These images can be used as templates when setting +up new instances.
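For example, you can query the Glance image registry directly with the OpenStack CLI:

openstack image list
openstack image show ubuntu-22.04-x86_64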

    +

    NERC Images List

    +

    Once you're logged in to NERC's Horizon dashboard.

    +

    Navigate to Project -> Compute -> Images.

    +

    NERC provides a set of default images that can be used as source while launching +an instance:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| ID | Name |
| --- | --- |
| a9b48e65-0cf9-413a-8215-81439cd63966 | MS-Windows-2022 |
| cfecb5d4-599c-4ffd-9baf-9cbe35424f97 | almalinux-8-x86_64 |
| 263f045e-86c6-4344-b2de-aa475dbfa910 | almalinux-9-x86_64 |
| 41fa5991-89d5-45ae-8268-b22224c772b2 | debian-10-x86_64 |
| 99194159-fcd1-4281-b3e1-15956c275692 | fedora-36-x86_64 |
| 74a33f77-fc42-4dd1-a5a2-55fb18fc50cc | rocky-8-x86_64 |
| d7d41e5f-58f4-4ba6-9280-7fef9ac49060 | rocky-9-x86_64 |
| 75a40234-702b-4ab7-9d83-f436b05827c9 | ubuntu-18.04-x86_64 |
| 8c87cf6f-32f9-4a4b-91a5-0d734b7c9770 | ubuntu-20.04-x86_64 |
| da314c41-19bf-486a-b8da-39ca51fd17de | ubuntu-22.04-x86_64 |
    +

    How to create and upload own custom images?

    +

Besides the above-mentioned system-provided images, users can customize and upload their own images to NERC, as documented in this documentation.

    +

    Please refer to this guide +to learn more about how to obtain other publicly available virtual machine images +for the NERC OpenStack platform within your project space.
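As a rough sketch, uploading a custom image from the command line looks like the following; the file name, disk format, and image name are placeholders, and the linked guides above describe the exact requirements:

openstack image create --disk-format qcow2 --container-format bare --file my-custom-image.qcow2 my-custom-image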

    +

    How to launch an Instance

    +

    Prerequisites:

    +
      +
    • +

      You followed the instruction in Create a Key Pair +to set up a public ssh key.

      +
    • +
    • +

      Make sure you have added rules in the +Security Groups to +allow ssh using Port 22 access to the instance.

      +
    • +
    +

    Using Horizon dashboard

    +

    Once you're logged in to NERC's Horizon dashboard.

    +

    Navigate: Project -> Compute -> Instances.

    +

    Click on "Launch Instance" button:

    +

    VM Launch Instance

    +

    In the Launch Instance dialog box, specify the following values:

    +

    Details Tab

    +

Instance Name: Assign a name to the virtual machine.

    +
    +

    Important Note

    +

    The instance name you assign here becomes the initial host name of the server. +If the name is longer than 63 characters, the Compute service truncates it +automatically to ensure dnsmasq works correctly.

    +
    +

    Availability Zone: By default, this value is set to the availability zone given +by the cloud provider i.e. nova.

    +

    Count: To launch multiple instances, enter a value greater than 1. The default +is 1.

    +

    VM Launch Instance Detail

    +

    Source Tab

    +

Double-check the option selected in the "Select Boot Source" dropdown.

    +

    When you start a new instance, you can choose the Instance Boot Source from the +following list:

    +
      +
    • boot from image
    • +
    • boot from instance snapshot
    • +
    • boot from volume
    • +
    • boot from volume snapshot
    • +
    +

    In its default configuration, when the instance is launched from an Image or +an Instance Snapshot, the choice for utilizing persistent storage is configured +by selecting the Yes option for "Create New Volume". Additionally, the "Delete +Volume on Instance Delete" setting is pre-set to No, as indicated here:

    +

    Launching an Instance Boot Source

    +

    If you set the "Create New Volume" option to No, the instance will boot +from either an image or a snapshot, with the instance only being attached to an +ephemeral disk as described here. +To mitigate potential data loss, we strongly recommend regularly taking a snapshot +of such a running ephemeral instance, referred to as an "instance snapshot", +especially if you want to safeguard or recover important states of your instance.
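As a sketch, such an instance snapshot can also be taken from the command line (the snapshot and server names below are placeholders):

openstack server image create --name my-vm-snapshot my-vm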

    +

    When deploying a non-ephemeral instance, which involves creating a new volume and +selecting Yes for "Delete Volume on Instance Delete", deleting the instance +will also remove the associated volume. Consequently, all data on that disk is +permanently lost, which is undesirable when the data on attached volumes needs +to persist even after the instance is deleted. Ideally, selecting "Yes" for this +setting should be reserved for instances where persistent data storage is not required.

    +
    +

    Very Important: How do you make your VM setup and data persistent?

    +

    For more in-depth information on making your VM setup and data persistent, +you can explore the details here.

    +
    +

To start a VM for the first time, we will need a base image, so please make sure the "Image" dropdown option is selected. In this example we chose ubuntu-22.04-x86_64; you may choose any available image.

    +
    +

    Bootable Images

    +

    NERC has made several Public bootable images available to the users as +listed here. Customers can also upload their own custom images, +as documented in this guide.

    +

    To view them, Navigate: Project -> Compute -> Images.

    +

    VM Images

    +
    +

    VM Launch Instance Source

    +
    +

    How to override the flavor's Default root disk volume size

    +

If you don't specify a custom value for the "Volume Size (GB)", it will be set to the root disk size of your selected Flavor. For more about the default root disk size, you can refer to this documentation. You can override this value by entering your own custom value (in GiB), which becomes a Volume attached to the instance to enable persistent storage.

    +
    +

    Flavor Tab

    +

    Specify the size of the instance to launch. Choose cpu-su.4 from the 'Flavor' +tab by clicking on the "+" icon.

    +
    +

    Important Note

    +

    In NERC OpenStack, flavors define the compute, memory, and storage +capacity of nova computing instances. In other words, a flavor is an +available hardware configuration for a server.

    +

    Some of the flavors will not be available for your use as per your resource +Quota limits and will be shown as below:

    +

    Flavor Not Avaliable due to Your Quota

    +

    NOTE: More details about available flavors can be found here +and how to change request the current allocation quota attributes can be found +here.

    +
    +

    After choosing cpu-su.4, you should see it moved up to "Allocated".

    +

    VM Launch Instance Flavor

    +
    +

    Storage and Volume

    +
      +
    • System disks are the first disk based on the flavor disk space and are +generally used to store the operating system created from an image when the +virtual machine is booted.
    • +
    • Volumes are +persistent virtualized block devices independent of any particular instance. +Volumes may be attached to a single instance at a time, but may be detached +or reattached to a different instance while retaining all data, much like a +USB drive. The size of the volume can be selected when it is created within +the storage quota limits for the particular resource allocation.
    • +
    +
    +

    Networks Tab

    +

    Make sure the Default Network that is created by default is moved up to "Allocated". +If not, you can click on the "+" icon in "Available".

    +

    VM Launch Instance Networks

    +

    Security Groups Tab

    +

    Make sure to add the security group where you enabled SSH. To add an SSH +security group first, see here.
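If you have not created such a security group yet, a minimal sketch from the command line (the group name "ssh_only" is a placeholder) is:

openstack security group create --description 'Allows SSH' ssh_only
openstack security group rule create --protocol tcp --dst-port 22 ssh_only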

    +

    VM Launch Instance Security Groups

    +
    +

    How to update New Security Group(s) on any running VM?

    +

If you want to attach or detach any new Security Group(s) to or from a running VM after it has launched, first create the new Security Group(s) with all the required rules. Following this guide, you'll be able to attach the created security group(s), with all the required rules, to a running VM. You can modify the rules of any Security Group(s), but doing so will affect all VMs using those security groups.

    +
    +

    Key Pair Tab

    +

    Add the key pair you created for your local machine/laptop to use with this VM. +To add a Key Pair first create and add them to your Project as described here.
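Key pairs can also be created or imported from the command line, for example (the key name and public key path below are placeholders):

openstack keypair create --public-key ~/.ssh/id_rsa.pub my-key
openstack keypair list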

    +

    VM Launch Instance Key Pairs

    +
    +

    Important Note

    +

    If you did not provide a key pair, security groups, or rules, users can +access the instance only from inside the cloud through VNC. Even pinging the +instance is not possible without an ICMP rule configured. We recommend limiting +access as much as possible for best security practices.

    +
    +

    Ignore other Tabs

    +

Network Ports, Configuration, Server Groups, Scheduler Hints, and Metadata tabs: you can ignore these tabs, as they are only needed for advanced setups.

    +
    +

    How to use 'Configuration' tab

    +

    If you want to specify a customization script that runs after your instance +launches then you can write those custom script inside the +"Customization Script" text area. For example: +VM Launch Instance Configuration Script
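As a simple sketch, for an Ubuntu image the customization script can be a plain shell script that runs on first boot; the packages below are only an illustration:

#!/bin/bash
# Runs as root on first boot via cloud-init user data
apt-get update
apt-get install -y git htop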

    +
    +

    You are now ready to launch your VM - go ahead and click "Launch Instance". This +will initiate an instance.

    +

    On a successful launch you would be redirected to Compute -> Instances tab and +can see the VM spawning.

    +

    Once your VM is successfully running you will see the Power State changes +from "No State" to "running".

    +

    VM Launch Instance Successful

    +
    +

    Note

    +

    Here we explained about launching an instance using Image but you can also +launch an instance from the "instance snapshot" or "volume" or "volume snapshot" +option similar to the steps above. If you want to use OpenStack CLI to launch +a VM you can read this +or if you want to provision the NERC resources using Terraform you can +read this.

    +
    +

    SSH to the VM

    +

Secure Shell (SSH) is used to administer and manage Linux workloads. Before trying to access instances from the outside world, you need to make sure you have followed these steps:

    +
      +
    • +

      You followed the instruction in Create a Key Pair +to set up a public ssh key.

      +
    • +
    • +

      Your public ssh-key has selected (in "Key Pair" tab) while +launching the instance.

      +
    • +
    • +

      Assign a Floating IP to the instance in order to +access it from outside world.

      +
    • +
    • +

      Make sure you have added rules in the +Security Groups to +allow ssh using Port 22 access to the instance.

      +
    • +
    +
    +

    How to update New Security Group(s) on any running VM?

    +

If you want to attach or detach any new Security Group(s) to or from a running VM after it has launched, first create the new Security Group(s) with all the required rules. Following this guide, you'll be able to attach the created security group(s), with all the required rules, to a running VM.

    +
    +

    Make a note of the Floating IP you have associated to your instance.

    +

    Associated Instance Floating IP

    +

    In our example, the IP is 199.94.60.66.

    +

    Default usernames for all the base images are:

    +
      +
    • all Ubuntu images: ubuntu
    • +
    • all AlmaLinux images: almalinux
    • +
    • all Rocky Linux images: rocky
    • +
    • all Fedora images: fedora
    • +
    • all Debian images: debian
    • +
    • all RHEL images: cloud-user
    • +
    +
    +

    Removed Centos Images

    +

    If you still have VMs running with deleted CentOS images, you need to use +the following default username for your CentOS images: centos.

    +
      +
    • all CentOS images: centos
    • +
    +
    +

Our example VM was launched with the ubuntu-22.04-x86_64 base image, so the user we need is 'ubuntu'.

    +

    Open a Terminal window and type:

    +
    ssh ubuntu@199.94.60.66
    +
    +

    Since you have never connected to this VM before, you will be asked if you are +sure you want to connect. Type yes.

    +

    SSH To VM Successful

    +
    +

    Important Note

    +

    If you haven't added your key to ssh-agent, you may need to specify the +private key file, like this: ssh -i ~/.ssh/cloud.key ubuntu@199.94.60.66

    +

    To add your private key to the ssh-agent you can follow the following steps:

    +
      +
    1. +

      eval "$(ssh-agent -s)"

      +

      Output: Agent pid 59566

      +
    2. +
    3. +

      ssh-add ~/.ssh/cloud.key

      +

      If your private key is password protected, you'll be prompted to enter the +passphrase.

      +
    4. +
    5. +

      Verify that the key has been added by running ssh-add -l.

      +
    6. +
    +
    +

    SSH to the VM using SSH Config

    +

Alternatively, you can configure settings for remote instances in your SSH configuration file (typically found in ~/.ssh/config). The SSH configuration file might include an entry for your newly launched VM like this:

    +
    Host ExampleHostLabel
    +    HostName 199.94.60.66
    +    User ubuntu
    +    IdentityFile ~/.ssh/cloud.key
    +
    +

    Here, the Host value can be any label you want. The HostName value is the +Floating IP you have associated to your instance that you want to access, the +User value specifies the default account username based on your base OS image +used for the VM and IdentityFile specify the path to your Private Key on +your local machine. With this configuration defined, you can connect to the account +by simply using the Host value set as "ExampleHostLabel". You do not have to type +the username, hostname, and private key each time.

    +

    So, you can SSH into your host VM by running:

    +
    ssh ExampleHostLabel
    +
    +
    +

    Setting a password

    +

    When the VMs are launched, a strong, randomly-generated password is created for +the default user, and then discarded.

    +

    Once you connect to your VM, you will want to set a password in case you ever +need to log in via the console in the web dashboard.

    +

    For example, if your network connections aren't working correctly.

    +
    +

    Setting a password is necessary to use Remote Desktop Protocol (RDP)

    +

Remote Desktop Protocol (RDP) is widely used for Windows remote connections, but you can also access and interact with the graphical user interface of a remote Linux server by using a tool like xrdp, an open-source implementation of the RDP server. You can use xrdp to remotely access the Linux desktop via an RDP client. Because xrdp provides a login to the remote machine over Microsoft RDP, a user with a password is necessary to access the VM. You can refer to this guide on how to install and configure an RDP server using xrdp on an Ubuntu server and access it using an RDP client from your local machine.

    +
    +

    Since you are not using it to log in over SSH or to sudo, it doesn't really +matter how hard it is to type, and we recommend using a randomly-generated +password.

    +

    Create a random password like this:

    +
    ubuntu@test-vm:~$ cat /dev/urandom | base64 | dd count=14 bs=1
    +T1W16HCyfZf8V514+0 records in
    +14+0 records out
    +14 bytes copied, 0.00110367 s, 12.7 kB/s
    +
    +

    The 'count' parameter controls the number of characters.

    +

The first [count] characters of the output are your randomly generated password, followed immediately by "[count]+0", so in the above example the password is T1W16HCyfZf8V5.

    +

    Set the password for ubuntu using the command:

    +
    ubuntu@test-vm:~$ sudo passwd ubuntu
    +New password:
    +Retype new password:
    +... password updated successfully
    +
    +

    Store the password in a secure place. Don't send it over email, post it on your +wall on a sticky note, etc.

    +

    Adding other people's SSH keys to the instance

    +

    You were able to log in using your own SSH key.

    +

Right now OpenStack only permits one key to be added at launch, so you need to add your teammates' keys manually.

    +

Get your teammates' public keys. If they used ssh-keygen to create their key, the public key will be in a file ending in .pub on their machine.

    +

    If they created a key via the dashboard, or imported the key created with +ssh-keygen, their public key is viewable from the Key Pairs tab.

    +

    Click on the key pair name. The public key starts with 'ssh-rsa' and looks +something like this:

    +
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDL6O5qNZHfgFwf4vnnib2XBub7ZU6khy6z6JQl3XRJg6I6gZ
    ++Ss6tNjz0Xgax5My0bizORcka/TJ33S36XZfzUKGsZqyEl/ax1Xnl3MfE/rgq415wKljg4
    ++QvDznF0OFqXjDIgL938N8G4mq/
    +cKKtRSMdksAvNsAreO0W7GZi24G1giap4yuG4XghAXcYxDnOSzpyP2HgqgjsPdQue919IYvgH8shr
    ++sPa48uC5sGU5PkTb0Pk/ef1Y5pLBQZYchyMakQvxjj7hHZaT/
    +Lw0wIvGpPQay84plkjR2IDNb51tiEy5x163YDtrrP7RM2LJwXm+1vI8MzYmFRrXiqUyznd
    +test_user@demo
    +
    +

    Create a file called something like 'teammates.txt' and paste in your team's +public keys, one per line.

    +

    Hang onto this file to save yourself from having to do all the copy/pasting +every time you launch a new VM.

    +

    Copy the file to the vm:

    +
    [you@your-laptop ~]$ scp teammates.txt ubuntu@199.94.60.66:~
    +
    +

    If the copy works, you will see the output:

    +
    teammates.txt                  100%    0     0KB/s   00:00
    +
    +

    Append the file's contents to authorized_keys:

    +
[ubuntu@test-vm ~]$ cat teammates.txt >> ~/.ssh/authorized_keys
    +
    +

    Now your teammates should also be able to log in.

    +
    +

    Important Note

    +

    Make sure to use >> instead of > to avoid overwriting your own key.

    +
    +
    +

    Adding users to the instance

    +

    You may decide that each teammate should have their own user on the VM instead +of everyone logging in to the default user.

    +

    Once you log into the VM, you can create another user like this.

    +
    +

    Note

    +

    The 'sudo_group' is different for different OS - in CentOS and Red Hat, the +group is called 'wheel', while in Ubuntu, the group is called 'sudo'.

    +
      sudo su
    +  # useradd -m <username>
    +  # passwd <username>
    +  # usermod -aG <sudo_group> <username>    <-- skip this step for users who
    +  # should not have root access
    +  # su username
    +  cd ~
    +  mkdir .ssh
    +  chmod 700 .ssh
    +  cd .ssh
    +  vi authorized_keys   <-- paste the public key for that user in this file
    +  chmod 600 authorized_keys
    +
    +
    +

    How To Enable Remote Desktop Protocol Using xrdp on Ubuntu

    +

    Log in to the server with Sudo access

    +

    In order to install the xrdp, you need to login to the server with sudo access +to it.

    +
    ssh username@your_server_ip
    +
    +

    For example:

    +
    ssh ubuntu@199.94.60.66
    +
    +

    Installing a Desktop Environment

    +

    After connecting to your server using SSH update the list of available packages +using the following command:

    +
    sudo apt update
    +
    +

Next, install the xfce4 and xfce4-goodies packages on your server:

    +
    sudo apt install xfce4 xfce4-goodies -y
    +
    +
    +

    Select Display Manager

    +

    If prompted to choose a display manager, which manages graphical login mechanisms +and user sessions, you can select any option from the list of available display +managers. For instance, here we have gdm3 as the default selection.

    +

    xrdp Display Manager

    +
    +

    Installing xrdp

    +

    To install xrdp, run the following command in the terminal:

    +
    sudo apt install xrdp -y
    +
    +

    After installing xrdp, verify the status of xrdp using systemctl:

    +
    sudo systemctl status xrdp
    +
    +

    This command will show the status as active (running):

    +

    Output:

    +
    ● xrdp.service - xrdp daemon
    +    Loaded: loaded (/lib/systemd/system/xrdp.service; enabled; vendor preset: enab>
    +    Active: active (running) since Mon 2024-02-12 21:33:01 UTC; 9s ago
    +    ...
    +    CGroup: /system.slice/xrdp.service
    +            └─8839 /usr/sbin/xrdp
    +
    +
    +

    What if xrdp is not Running?

    +

    If the status of xrdp is not running, you may have to start the service manually +with this command: sudo systemctl start xrdp. After executing the above command, +verify the status again to ensure xrdp is in a running state.

    +
    +
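
    A short sketch of starting the service now and also enabling it at boot (standard systemctl usage):

    +
    sudo systemctl start xrdp
    + sudo systemctl enable xrdp
    + sudo systemctl status xrdp
    +
    +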

    Make xrdp use the desktop environment we previously installed:

    +
    sudo sed -i.bak '/fi/a #xrdp multiple users configuration \n startxfce4 \n' /etc/xrdp/startwm.sh
    +
    +
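
    If you would rather not edit the system-wide startup script, a common per-user alternative (an assumption here, not part of the original guide) is to point the session at Xfce through an .xsession file in the home directory of the user who will connect:

    +
    echo "xfce4-session" > ~/.xsession
    + sudo systemctl restart xrdp
    +
    +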

    Configuring xrdp and Updating Security Groups

    +

    If you want to customize the default xrdp configuration (optional), you will need +to review the default configuration of xrdp, which is stored under /etc/xrdp/xrdp.ini. +xrdp.ini is the default configuration file to set up RDP connections to the +xrdp server. The configuration file can be modified and customized to meet the +RDP connection requirements.

    +

    Add a new security group with an RDP (port 3389) rule open to the public, and attach that security group to your instance as described here.

    +
    +

    How to Update Security Group(s) on a Running VM?

    +

    Following this guide, +you'll be able to attach created security group(s) with all the +required rules to a running VM.

    +
    +
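
    If you prefer the OpenStack CLI over the dashboard, a rough sketch of the same steps (assuming your credentials are sourced and your instance is named my-vm; the group name and instance name below are placeholders):

    +
    openstack security group create rdp
    + openstack security group rule create --protocol tcp --dst-port 3389 --remote-ip 0.0.0.0/0 rdp
    + openstack server add security group my-vm rdp
    +
    +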

    Restart the xrdp server to make sure all the above changes are reflected:

    +
    sudo systemctl restart xrdp
    +
    +

    Testing the RDP Connection

    +

    You should now be able to connect to the Ubuntu VM via xrdp.

    +
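
    Optionally, before opening an RDP client you can check from your local machine that port 3389 is reachable (nc is netcat; replace the IP with your VM's Floating IP):

    +
    nc -vz 199.94.60.66 3389
    +
    +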

    Testing the RDP Connection on Windows

    +

    If you are using Windows as your local desktop, an RDP client (Remote Desktop Connection) is installed by default on your machine.

    +

    Enter your VM's Floating IP and username into the fillable text boxes for Computer +and User name.

    +

    RDP Windows

    +

    You may need to press the down arrow for "Show Options" to input the username i.e. +ubuntu:

    +

    Show Options To Enter Username

    +

    Press the Connect button. If you receive an alert that the "Remote Desktop can't connect to the remote computer", check that you have properly attached the security group with an RDP (port 3389) rule open to the public to your VM as described here.

    +

    Press Yes if you receive the identity verification popup:

    +

    RDP Windows Popup

    +

    Then, enter your VM's username (ubuntu) and the password you created for the user ubuntu following these steps.

    +

    Press Ok.

    +

    xrdp Login Popup

    +

    Once you have logged in, you should be able to access your Ubuntu Desktop environment:

    +

    xrdp Desktop

    +

    Testing the RDP Connection on macOS

    +

    To test the connection using the Remote Desktop Connection client on macOS, first +launch the Microsoft Remote Desktop Connection app.

    +

    Press Add PC, then enter your remote server's Floating IP in the PC name +fillable box:

    +

    xrdp Add PC

    +

    You can Add a user account when setting up the connection:

    +

    xrdp Add User Account

    +

    Once you have logged in, you can access your Ubuntu remote desktop. You can close +it with the exit button.

    +

    Testing the RDP Connection on Linux

    +

    If you are using Linux as your local desktop, you can connect to the server via Remmina.

    +
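
    Remmina is available in most distributions' package repositories; on Debian/Ubuntu, for example, it can typically be installed along with its RDP plugin like this:

    +
    sudo apt install remmina remmina-plugin-rdp
    +
    +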
    diff --git a/openstack/create-and-connect-to-the-VM/using-vpn/openvpn/index.html b/openstack/create-and-connect-to-the-VM/using-vpn/openvpn/index.html new file mode 100644

    OpenVPN

    +

    OpenVPN is a full-featured SSL VPN which implements OSI layer 2 or 3 secure +network extension using the industry standard SSL/TLS protocol, supports +flexible client authentication methods based on certificates, smart cards, and/ +or username/password credentials, and allows user or group-specific access +control policies using firewall rules applied to the VPN virtual interface.

    +

    OpenVPN offers a scalable client/server mode, allowing multiple clients to +connect to a single OpenVPN server process over a single TCP or UDP port.

    +

    Installing OpenVPN Server

    +

    You can read the official documentation here.

    +

    You can spin up a new instance with "ubuntu-22.04-x86_64" or any available +Ubuntu OS image, named "openvpn_server" on OpenStack, with "default" +and "ssh_only" Security Groups attached to it.

    +

    Available instances

    +

    Also, attach a Floating IP to this instance so you can ssh into it from outside.

    +

    Create a new Security Group i.e. "openvpn" that is listening on +UDP port 1194 as shown below:

    +

    OpenVPN Security Rule

    +

    The Security Groups attached to the OpenVPN server include "default", "ssh_only" and "openvpn". They should look similar to the image shown below:

    +

    Security Groups

    +

    Finally, you'll want to configure the settings for the remote instance in your SSH configuration file (typically found in ~/.ssh/config). The SSH configuration file might include an entry for your newly created OpenVPN server like this:

    +
    Host openvpn
    +  HostName 199.94.60.66
    +  User ubuntu
    +  IdentityFile ~/.ssh/cloud.key
    +
    +
      +
    1. +

      Then you can ssh into the OpenVPN Server running: ssh openvpn

      +

      SSH OpenVPN server

      +
    2. +

      Also note that OpenVPN must be installed and run by a user who has +administrative/root privileges. So, we need to run the command: sudo su

      +
    3. +

      We are using this repo to install +OpenVPN server on this ubuntu server.

      +

      For that, run the script and follow the assistant:

      +

      wget https://git.io/vpn -O openvpn-install.sh && bash openvpn-install.sh

      +

      Generating first client

      +

      You can press Enter for all default values. And, while entering a name for the first client you can give "nerc" as the client name; this will generate a new configuration file (.ovpn file) named "nerc.ovpn". Based on your client's name it will name the config file as "<client_name>.ovpn"

      +

      Setup Client completed

      +
    4. +

      Copy the generated config file from "/root/nerc.ovpn" to "/home/ubuntu/ +nerc.ovpn" by running: cp /root/nerc.ovpn .

      +
    5. +

      Update the ownership of the config file to ubuntu user and ubuntu group by +running the following command: chown ubuntu:ubuntu nerc.ovpn

      +
    6. +

      You can exit from the root shell and the ssh session altogether, and then copy the configuration file to your local machine by running the following command on your local machine's terminal: scp openvpn:nerc.ovpn .

      +
    +

    To add a new client user

    +

    Once the script finishes, you can run it again to add more users, remove some of them, or even completely uninstall OpenVPN.

    +

    For this, run the script and follow the assistant:

    +

    wget https://git.io/vpn -O openvpn-install.sh && bash openvpn-install.sh

    +

    Second Client Generate

    +

    Here, you give the client name as "mac_client", and that will generate a new configuration file at "/root/mac_client.ovpn". You can repeat steps 4 to 6 above to copy this new client's configuration file and share it with the new client.

    +
    +
    +

    Important Note

    +

    You need to contact your project administrator to get your own OpenVPN configuration file (file with the .ovpn extension). Download it and keep it on your local machine so that you can use this client profile file in the next steps.

    +
    +

    An OpenVPN client or compatible software is needed to connect to the OpenVPN server. Please install one of these clients depending on your device. The client program must be configured with a client profile to connect to the OpenVPN server.

    +
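
    For reference, a client profile (.ovpn file) produced by such an installer usually follows the standard OpenVPN client layout, roughly like the sketch below; the address, port and inline sections are illustrative placeholders:

    +
    client
    + dev tun
    + proto udp
    + remote 199.94.60.66 1194
    + nobind
    + persist-key
    + persist-tun
    + verb 3
    + # inline certificate and key sections follow, e.g. <ca>...</ca>, <cert>...</cert>, <key>...</key>
    +
    +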

    Windows

    +

    OpenVPN source code and Windows installers can be downloaded here. The OpenVPN executable should be installed +on both server and client machines since the single executable provides both +client and server functions. Please see the OpenVPN client setup guide for +Windows.

    +

    Mac OS X

    +

    The client we recommend and support for Mac OS is Tunnelblick. To install +Tunnelblick, download the dmg installer file from the Tunnelblick site, mount the dmg, and drag the Tunnelblick +application to Applications. Please refer to +this guide for more information.

    +

    Linux

    +

    OpenVPN is available through the package management system on most Linux distributions.

    +

    On Debian/Ubuntu:

    +
    sudo apt-get install openvpn
    +
    +

    On RedHat/Rocky/AlmaLinux:

    +
    sudo dnf install openvpn
    +
    +

    Then, to run OpenVPN using the client profile:

    +

    Move the VPN client profile (configuration) file to /etc/openvpn/ :

    +
    sudo mv nerc.ovpn /etc/openvpn/client.conf
    +
    +

    Restart the OpenVPN daemon (this will start the OpenVPN connection, and it will automatically run on boot):

    +
    sudo /etc/init.d/openvpn start
    +
    +

    OR,

    +
    sudo systemctl enable --now openvpn@client
    +sudo systemctl start openvpn@client
    +
    +

    Checking the status:

    +
    systemctl status openvpn@client
    +
    +

    Alternatively, if you want to run OpenVPN manually each time, then run:

    +
    sudo openvpn --config /etc/openvpn/client.conf
    +
    +

    OR,

    +
    sudo openvpn --config nerc.ovpn
    +
    +
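
    To confirm the tunnel is up, you can check that a tun interface exists and that your traffic now exits through the VPN server. A quick sketch (the IP lookup service used here is just an example):

    +
    ip addr show tun0
    + curl https://ifconfig.me
    +
    +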
    diff --git a/openstack/create-and-connect-to-the-VM/using-vpn/openvpn/openvpn_gui_for_windows/index.html b/openstack/create-and-connect-to-the-VM/using-vpn/openvpn/openvpn_gui_for_windows/index.html new file mode 100644

    OpenVPN-GUI

    +

    Official OpenVPN Windows installers +include a Windows +OpenVPN-GUI, which +allows managing OpenVPN connections from a system tray applet.

    +

    Find your client account credentials

    +

    You need to contact your project administrator to get your own OpenVPN configuration file (file with the .ovpn extension). Download it and keep it on your local machine so that you can use this client profile file in the next steps.

    +

    Download and install OpenVPN-GUI

    +
      +
    1. +

      Download the OpenVPN client installer:

      +

      OpenVPN for Windows can be installed from the self-installing exe file on the +OpenVPN download page. Also note +that OpenVPN must be installed and run by a user who has administrative +privileges (this restriction is imposed by Windows, not OpenVPN)

      +
    2. +

      Launch the installer and follow the prompts as directed.

      +

      Windows Installer

      +
    3. +

      Clicking the "Customize" button shows the settings and features of the OpenVPN GUI client.

      +

      Installation Customization

      +
    4. +

      Click "Install Now" to continue.

      +

      Installation Complete

      +
    5. +

      Click the "Close" button.

      +
    6. +

      Since the newly installed OpenVPN GUI has no client configuration profile yet, it will show a pop-up alert:

      +

      No Config Alert

      +
    +

    Set up the VPN with OpenVPN GUI

    +

    After you've run the Windows installer, OpenVPN is ready for use and will +associate itself with files having the .ovpn extension.

    +
      +
    1. +

      You can use the previously downloaded .ovpn file from your Downloads folder +to setup the connection profiles.

      +

      a. Either right-click on the OpenVPN configuration file (.ovpn) and select "Start OpenVPN on this config file":

      +

      Start OpenVPN on selected config file

      +

      b. OR, you can use the "Import file…" menu to select the previously downloaded .ovpn file.

      +

      Import file from taskbar app

      +

      Once done, it will show:

      +

      File Imported Successful Alert

      +

      c. OR, you can manually copy the config file to one of OpenVPN's +configuration directories:

      +
      C:\Program Files\OpenVPN\config (global configs)
      +C:\Program Files\OpenVPN\config-auto (autostarted global configs)
      +%USERPROFILE%\OpenVPN\config (per-user configs)
      +
      +
    +

    Connect to a VPN server location

    +

    To launch an OpenVPN connection, click on the OpenVPN GUI tray applet. OpenVPN GUI is a system-tray applet used to launch VPN connections on demand, so its icon appears in the taskbar notification area in the lower-right corner of the screen. Right-click on the system tray icon; if you have multiple configurations, a menu should appear showing the names of your OpenVPN configuration profiles and giving you the option to connect. If you have only one configuration, you can just click the "Connect" menu.

    +

    Connect Menu

    +

    Connection Successful

    +

    When you are successfully connected to the OpenVPN server, you will see a popup message as shown below. That's it! You are now connected to the VPN.

    +

    Connected Notification

    +

    Once you are connected to the OpenVPN server, you can run commands like the one shown below in your terminal to connect to the private instances: ssh ubuntu@192.168.0.40 -A -i cloud.key

    +

    SSH VPN Server

    +

    Disconnect VPN server

    +

    To disconnect, right-click on the system tray icon in the taskbar notification area and select Disconnect from the menu.

    +

    Disconnect VPN server

    +
    diff --git a/openstack/create-and-connect-to-the-VM/using-vpn/openvpn/tunnelblick_for_macos/index.html b/openstack/create-and-connect-to-the-VM/using-vpn/openvpn/tunnelblick_for_macos/index.html new file mode 100644

    Tunnelblick

    +

    Tunnelblick is a free, open-source GUI (graphical user interface) for OpenVPN on macOS and OS X; more details can be found here. To use it, you need access to a VPN server: your computer is one end of the tunnel and the VPN server is the other end.

    +

    Find your client account credentials

    +

    You need to contact your project administrator to get your own OpenVPN configuration file (file with the .ovpn extension). Download it and keep it on your local machine so that you can use this client profile file in the next steps.

    +

    Download and install Tunnelblick

    +
      +
    1. +

      Download Tunnelblick, a free and +user-friendly app for managing OpenVPN connections on macOS.

      +

      Tunnelblick Download

      +
    2. +

      Navigate to your Downloads folder and double-click the Tunnelblick +installation file (.dmg installer file) you have just downloaded.

      +

      dmg Installer File

      +
    3. +

      In the window that opens, double-click on the Tunnelblick icon.

      +

      Tunnelblick Interface

      +
    4. +

      A new dialogue box will pop up, asking you if you are sure you want to open +the app. Click Open.

      +

      Popup Open Confirmation

      +

      Access Popup

      +
    5. +

      You will be asked to enter your device password. Enter it and click OK:

      +

      User Password prompt to Authorize

      +
    6. +

      Select Allow or Don't Allow for your notification preference.

      +

      Notification Settings

      +
    7. +

      Once the installation is complete, you will see a pop-up notification asking +you if you want to launch Tunnelblick now. (An administrator username and +password will be required to secure Tunnelblick). Click Launch.

      +
    +

    Alternatively, you can click on the Tunnelblick icon in the status bar +and select VPN Details...:

    +

    VPN Details Menu

    +

    Configuration

    +

    Set up the VPN with Tunnelblick

    +
      +
    1. +

      A new dialogue box will appear. Click I have configuration files.

      +

      Configuration File Options

      +
    2. +

      Another notification will pop-up, instructing you how to import +configuration files. Click OK.

      +

      Add A Configuration

      +
    3. +

      Drag and drop the previously downloaded .ovpn file from your Downloads +folder to the Configurations tab in Tunnelblick.

      +

      Load Client Config File

      +

      OR,

      +

      You can just drag and drop the provided OpenVPN configuration file (file with the .ovpn extension) directly onto the Tunnelblick icon in the status bar at the top-right corner of your screen.

      +

      Load config on Tunnelblick

      +
    4. +

      A pop-up will appear, asking you if you want to install the configuration +profile for your current user only or for all users on your Mac. Select your +preferred option. If the VPN is intended for all accounts on your Mac, select +All Users. If the VPN will only be used by your current account, select +Only Me.

      +

      Configuration Installation Setting

      +
    5. +

      You will be asked to enter your Mac password.

      +

      User Login for Authentication

      +

      Loaded Client Configuration

      +

      Then the screen reads "Tunnelblick successfully installed one configuration".

      +

      VPN Configuration Installed Successfully

      +
    +

    You can see the configuration setting is loaded and installed successfully.

    +

    Connect to a VPN server location

    +
      +
    1. +

      To connect to a VPN server location, click the Tunnelblick icon in status +bar at the top-right corner of your screen.

      +

      Tunnelblick icon in status bar

      +
    2. +

      From the drop-down menu, select the server and click Connect [name of the .ovpn configuration file].

      +

      Connect VPN

      +

      Alternatively, you can select "VPN Details" from the menu and then click the "Connect" button:

      +

      Tunnelblick Configuration Interface

      +

      This will show the connection log on the dialog:

      +

      Connection Log

      +
    3. +

      When you are successfully connected to the OpenVPN server, you will see a popup message as shown below. That's it! You are now connected to the VPN.

      +

      Tunnel Successful

      +
    4. +

      Once you are connected to the OpenVPN server, you can run commands like the one shown below to connect to the private instances:

      +
      ssh ubuntu@192.168.0.40 -A -i cloud.key
      +
      +

      Private Instance SSH Accessible

      +
    +

    Disconnect VPN server

    +

    To disconnect, click on the Tunnelblick icon in your status bar and select +Disconnect in the drop-down menu.

    +

    Disconnect using Tunnelblick icon

    +

    While disconnecting, the connection log will be shown in a popup as shown below: Preview Connection Log

    +
    diff --git a/openstack/create-and-connect-to-the-VM/using-vpn/sshuttle/index.html b/openstack/create-and-connect-to-the-VM/using-vpn/sshuttle/index.html new file mode 100644

    sshuttle

    +

    sshuttle is a lightweight SSH-encrypted VPN. It is a Python-based script that allows you to tunnel connections through SSH in a far more efficient way than traditional ssh proxying.

    +

    Installing sshuttle Server

    +

    You can spin up a new instance with "ubuntu-22.04-x86_64" or any available +Ubuntu OS image, named "sshuttle_server" on OpenStack, with +"default" and "ssh_only" Security Groups attached to it.

    +

    Available instances

    +

    Also, attach a Floating IP to this instance so you can ssh into it from outside.

    +

    Security Groups

    +

    Finally, you'll want to configure the settings for the remote instance in your SSH configuration file (typically found in ~/.ssh/config). The SSH configuration file might include an entry for your newly created sshuttle server like this:

    +
    Host sshuttle
    +  HostName 140.247.152.244
    +  User ubuntu
    +  IdentityFile ~/.ssh/cloud.key
    +
    +
      +
    1. Then you can ssh into the sshuttle Server running: ssh sshuttle
    +

    SSH sshuttle server

    +
    +

    Note

    +

    Unlike other VPN servers, for sshuttle you don't need to install +anything on the server side. As long as you have an SSH server (with +python3 installed) you're good to go.

    +
    +

    To connect from a new client

    +

    Install sshuttle

    +

    Windows

    +

    Currently there is no built-in support for running sshuttle directly on Microsoft Windows. What you can do is create a Linux VM with Vagrant (or simply VirtualBox if you like) and then connect via that VM. For more details, read here.

    +

    Mac OS X

    +

    Install using Homebrew:

    +
    brew install sshuttle
    +
    +

    OR, via MacPorts

    +
    sudo port selfupdate
    +sudo port install sshuttle
    +
    +

    Linux

    +

    sshuttle is available through the package management system on most Linux distributions.

    +

    On Debian/Ubuntu:

    +
    sudo apt-get install sshuttle
    +
    +

    On RedHat/Rocky/AlmaLinux:

    +
    sudo dnf install sshuttle
    +
    +

    It is also possible to install into a virtualenv as a non-root user.

    +
      +
    • From PyPI:
    +
    virtualenv -p python3 /tmp/sshuttle
    + . /tmp/sshuttle/bin/activate
    + pip install sshuttle
    +
    +
      +
    • Clone:
    +
    virtualenv -p python3 /tmp/sshuttle
    + . /tmp/sshuttle/bin/activate
    + git clone https://github.com/sshuttle/sshuttle.git
    + cd sshuttle
    + ./setup.py install
    +
    +

    How to Connect

    +

    Tunnel to all networks (0.0.0.0/0):

    +
    sshuttle -r ubuntu@140.247.152.244 0.0.0.0/0
    +
    +

    OR, shorthand:

    +
    sudo sshuttle -r ubuntu@140.247.152.244 0/0
    +
    +

    If you would also like your DNS queries to be proxied through the DNS server of the server you are connected to:

    +
    sshuttle --dns -r ubuntu@140.247.152.244 0/0
    +
    +

    sshuttle Client connected

    +
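
    If you only need to reach the private network behind the server rather than routing all of your traffic, you can restrict the tunnel to that subnet. A sketch, assuming the private network is 192.168.0.0/24 (replace it with your own range):

    +
    sudo sshuttle -r ubuntu@140.247.152.244 192.168.0.0/24
    +
    +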
    diff --git a/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.html b/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.html new file mode 100644

    WireGuard

    +

    WireGuard is an extremely simple yet fast and +modern VPN that utilizes state-of-the-art cryptography.

    +

    Here's what it will look like:

    +

    WireGuard setup

    +

    Installing WireGuard Server

    +

    You can spin up a new instance with "ubuntu-22.04-x86_64" or any available +Ubuntu OS image, named "wireguard_server" on OpenStack, with +"default" and "ssh_only" Security Groups attached to it.

    +

    Available instances

    +

    Also, attach a Floating IP to this instance so you can ssh into it from outside.

    +

    Create a new Security Group i.e. "wireguard" that is listening on +UDP port 51820 as shown below:

    +

    WireGuard Security Rule

    +

    The Security Groups attached to the WireGuard server include "default", "ssh_only" and "wireguard". They should look similar to the image shown below:

    +

    Security Groups

    +

    Finally, you'll want to configure the settings for the remote instance in your SSH configuration file (typically found in ~/.ssh/config). The SSH configuration file might include an entry for your newly created WireGuard server like this:

    +
    Host wireguard
    +  HostName 140.247.152.188
    +  User ubuntu
    +  IdentityFile ~/.ssh/cloud.key
    +
    +
      +
    1. +

      Then you can ssh into the WireGuard Server running: ssh wireguard

      +

      SSH sshuttle server

      +
    2. +

      Also note that WireGuard must be installed and run by a user who has +administrative/root privileges. So, we need to run the command: sudo su

      +
    3. +

      We are using this repo to +install WireGuard server on this ubuntu server.

      +

      For that, run the script and follow the assistant:

      +

      wget https://git.io/wireguard -O wireguard-install.sh && bash wireguard-install.sh

      +

      Generating first client

      +

      You can press Enter for all default values. And, while entering a name for the first client you can give "nerc" as the client name; this will generate a new configuration file (.conf file) named "nerc.conf". Based on your client's name it will name the config file as "<client_name>.conf"

      +

      Setup Client completed

      +

      NOTE: For each peer, the client configuration files comply with the following template:

      +

      Client Config Template

      +
    4. +

      Copy the generated config file from "/root/nerc.conf" to +"/home/ubuntu/nerc.conf" by running: cp /root/nerc.conf .

      +
    5. +

      Update the ownership of the config file to ubuntu user and ubuntu group by +running the following command: chown ubuntu:ubuntu nerc.conf

      +
    6. +

      You can exit from the root shell and the ssh session altogether, and then copy the configuration file to your local machine by running the following command on your local machine's terminal: scp wireguard:nerc.conf .

      +
    +

    To add a new client user

    +

    Once the script finishes, you can run it again to add more users, remove some of them, or even completely uninstall WireGuard.

    +

    For this, run the script and follow the assistant:

    +

    wget https://git.io/wireguard -O wireguard-install.sh && bash wireguard-install.sh

    +

    Second Client Generate

    +

    Here, you give the client name as "mac_client", and that will generate a new configuration file at "/root/mac_client.conf". You can repeat steps 4 to 6 above to copy this new client's configuration file and share it with the new client.

    +

    Authentication Mechanism

    +

    It would be kind of pointless to have our VPN server allow anyone to connect. +This is where our public & private keys come into play.

    +
      +
• Each client's public key needs to be added to the SERVER'S configuration file
    • +
• The server's public key needs to be added to the CLIENT'S configuration file (see the sketch after this list)
    • +
    +
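On the server side, the install script records each client as a [Peer] block in the server's own configuration; a hedged excerpt (placeholders only):

[Peer]
PublicKey = <client public key>
AllowedIPs = 10.7.0.2/32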

    Useful commands

    +

    To view server config: wg show or, wg

    +

To activate config: wg-quick up /path/to/file_name.config

    +

    To deactivate config: wg-quick down /path/to/file_name.config

    +

    Read more:

    +

    https://git.zx2c4.com/wireguard-tools/about/src/man/wg.8

    +

    https://git.zx2c4.com/wireguard-tools/about/src/man/wg-quick.8

    +
    +
    +

    Important Note

    +

You need to contact your project administrator to get your own WireGuard configuration file (a file with the .conf extension). Download it and keep it on your local machine so that we can use this client configuration profile in the next steps.

    +
    +

    A WireGuard client or compatible software is needed to connect to the WireGuard +VPN server. Please install +one of these clients depending on your +device. The client program must be configured with a client profile to connect +to the WireGuard VPN server.

    +

    Windows

    +

The WireGuard client can be downloaded here. The WireGuard executable should be installed on client machines. After the installation, you should see the WireGuard icon in the taskbar notification area in the lower-right corner of the screen.

    +

    WireGuard taskbar icon

    +

    Set up the VPN with WireGuard GUI

    +

    Next, we configure the VPN tunnel. This includes setting up the endpoints and +exchanging the public keys.

    +

    Open the WireGuard GUI and either click on Add Tunnel -> Import tunnel(s) +from file… OR,

    +

    click on "Import tunnel(s) from file" button located at the center.

    +

    Import Config File

    +

    The software automatically loads the client configuration. Also, it creates a +public key for this new tunnel and displays it on the screen.

    +

    Imported Config

    +

    Either, Right Click on your tunnel name and select +"Edit selected tunnel…" menu OR, click on +"Edit" button at the lower left.

    +

    Edit selected Tunnel Config

    +

    Checking Block untunneled traffic (kill-switch) will make sure that all +your traffic is being routed through this new VPN server.

    +

    Block Untunnelled Traffic Option

    +

    Test your connection

    +

On your Windows machine, press the "Activate" button. You should see a successful connection being made:

    +

    Tunnel Activated

    +

    After a few seconds, the status should change to Active.

    +

    If the connection is routed through the VPN, it should show the IP address of +the WireGuard server as the public address.

    +

If that's not the case, check the "Log" tab to troubleshoot, and verify the client and server configuration.

    +

    Clicking " Deactivate" button closes the VPN connection.

    +

    Deactivate Connection

    +

    Mac OS X

    +

    I. Using HomeBrew

    +

This allows more than one WireGuard tunnel to be active at a time, unlike the WireGuard GUI app.

    +
      +
    1. +

      Install WireGuard CLI on macOS through brew: brew install wireguard-tools

      +
    2. +
    3. +

      Copy the ".conf" file to +"/usr/local/etc/wireguard/" (or "/etc/wireguard/"). +You'll need to create the " wireguard" directory first. For your +example, you will have your config file located at: " /usr/local/etc +/wireguard/mac_client.conf" or, "/etc/wireguard/mac_client.conf"

      +
    4. +
    5. +

To activate the VPN: "wg-quick up [name of the conf file without the .conf extension]". For example, in your case, run wg-quick up mac_client. If the peer system is already configured and its interface is up, then the VPN connection should establish automatically, and you should be able to start routing traffic through the peer.

      +
    6. +
    +

    Use wg-quick down mac_client to take the VPN connection down.

    +

    II. Using WireGuard GUI App

    +
      +
    1. +

      Download WireGuard Client from the macOS App Store

      +

      You can find the official WireGuard Client app on the App Store here.

      +

      WireGuard Client App

      +
    2. +
    3. +

      Set up the VPN with WireGuard

      +

      Next, we configure the VPN tunnel. This includes setting up the endpoints +and exchanging the public keys.

      +

      Open the WireGuard GUI by directly clicking WireGuard icon in status bar at +the top-right corner of your screen.

      +

      WireGuard app icon

      +

      And then click on "Import tunnel(s) from file" menu to load your +client config file.

      +

      Import Config File in Mac

      +

      OR,

      +

Find and click the WireGuard GUI from your Launchpad and then either click on Add Tunnel -> Import tunnel(s) from file… or just click on the "Import tunnel(s) from file" button located at the center.

      +

      Import Config File in Mac

      +

      Browse to the configuration file:

      +

      Browse and Locate Import Config File

      +

      The software automatically loads the client configuration. Also, it creates +a public key for this new tunnel and displays it on the screen.

      +

      Add VPN Config Popup

      +

      Tunnel Public Info

      +

If you would like your computer to automatically connect to the WireGuard VPN server as soon as either (or both) the Ethernet or Wi-Fi network adapter becomes active, check the relevant 'On-Demand' checkboxes for "Ethernet" and "Wi-Fi".

      +

Checking Exclude private IPs will generate a list of networks which excludes the server IP address and adds them to the AllowedIPs list. This setting routes all your traffic through your WireGuard VPN EXCLUDING private address ranges like 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.

      +

      On-Demand Option for Ethernet and WiFi

      +
    4. +
    5. +

      Test your connection

      +

On your Mac, press the "Activate" button. You should see a successful connection being made:

      +
    6. +
    +

Tunnel Activated in Mac

    +

    After a few seconds, the status should change to Active.

    +

    Clicking "Deactivate" button from the GUI's interface or + directly clicking "Deactivate" menu from the WireGuard icon in + status bar at the top-right corner of your screen closes the VPN connection.

    +

    Deactivate Connection

    +

    Linux

    +

    WireGuard is available through the package management system on most Linux distributions.

    +

    On Debian/Ubuntu:

    +
    sudo apt update
    +sudo apt-get install wireguard resolvconf -y
    +
    +

    On RedHat/Rocky/AlmaLinux:

    +
    sudo dnf install wireguard
    +
    +

    Then, to run WireGuard using the client profile: +Move the VPN client profile (configuration) file to /etc/wireguard/:

    +
    sudo mv nerc.conf /etc/wireguard/client.conf
    +
    +

Start the WireGuard daemon (this will start the WireGuard connection and set it to run automatically on boot):

    +
    sudo /etc/init.d/wireguard start
    +
    +

    OR,

    +
    sudo systemctl enable --now wg-quick@client
    +sudo systemctl start wg-quick@client
    +
    +

    OR,

    +
    wg-quick up /etc/wireguard/client.conf
    +
    +

    Checking the status:

    +
    systemctl status wg-quick@client
    +
    +

    Alternatively, if you want to run WireGuard manually each time, then run:

    +
    sudo wireguard --config /etc/wireguard/client.conf
    +
    +

    OR,

    +
    sudo wireguard --config nerc.conf
    +
    +

    To test the connection

    +

Once you are connected to the WireGuard server, you can run commands like the one shown below in your terminal to connect to the private instances: ssh ubuntu@192.168.0.40 -A -i cloud.key

    +

    SSH VPN Server

    +

    Data Transfer To/From NERC VM

    +

    Transfer using Volume

    +

You may wish to transfer a volume, including all of its data, to a different project, which can be your own (with access via the project dropdown list) or one belonging to external collaborators within NERC. For this you can follow this guide.

    +
    +

    Very Important Note

    +

If you transfer the volume, it will be removed from the source project and will only be available in the destination project.

    +
    +

    Using Globus

    +

    Globus is a web-based service that is the preferred method +for transferring substantial data between NERC VM and other locations. It effectively +tackles the typical obstacles researchers encounter when moving, sharing, and +storing vast quantities of data. By utilizing Globus, you can delegate data transfer +tasks to a managed service that oversees the entire process. This service monitors +performance and errors, retries failed transfers, automatically resolves issues +whenever feasible, and provides status updates to keep you informed. This allows +you to concentrate on your research while relying on Globus to handle data movement +efficiently. For information on the user-friendly web interface of Globus and its +flexible REST/API for creating scripted tasks and operations, please visit +Globus.org.

    +
    +

    Important Information

    +

    For large data sets and/or for access by external users, consider using Globus. +An institutional endpoint/collection is not required to use Globus - you can +set up a personal endpoint on your NERC VM and also on your local machine if +you need to transfer large amounts of data.

    +
    +

    Setting up a Personal Globus Endpoint on NERC VM

    +

    You can do this using Globus Connect Personal +to configure an endpoint on your NERC VM. In general, it is always fastest to setup +a Personal endpoint on your NERC VM, and then use that endpoint for transfers +to/from a local machine or any other shared or private Globus endpoints.

    +

    You can find instructions for downloading and installing the Globus Connect Personal +on the Globus web site.

    +
    +

    Helpful Tip

    +

    You may get a "Permission Denied" error for certain paths with Globus Connect +Personal. If you do, you may need to add this path to your list of allowed +paths for Globus Connect Personal. You can do this by editing the +~/.globusonline/lta/config-paths file and adding the new path as a line in +the end of the list. The path must be followed by sharing (0/1) and +R/W (0/1) flags.

    +

    For example, to enable read-write access to the /data/tables directory, add +the following line i.e. /data/tables,0,1.

    +
    +
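As a small illustration, after adding that line the ~/.globusonline/lta/config-paths file might look like this (the home-directory entry is assumed to already exist by default):

~/,0,1
/data/tables,0,1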

    Usage of Globus

    +

Once a Personal Endpoint is set up on a NERC VM, you will be able to find that named collection in the Globus file explorer, and it can then be chosen as the source or destination for data transfer to/from another Guest Collection (Globus Shared Endpoints). Log into the Globus web interface, select your organization, which will allow you to log in to Globus, and land on the File Manager page.

    +

If your account belongs to a Globus Subscription, you will be able to run data transfers between two personal endpoints, i.e. you can set up your local machine as another personal endpoint.

    +

    Globus Transfer

    +

    Using SCP

    +
    +

    Important Information

    +

    SCP is suggested for smaller files (<~10GB), otherwise use Globus. +When you want to transfer many small files in a directory, we recommend Globus.

    +
    +

We generally recommend using SCP (Secure Copy) to copy data to and from your VM. SCP is used to securely transfer files between two hosts using the Secure Shell (ssh) protocol. Its usage is simple, but the order in which file locations are specified is crucial. SCP always expects the 'from' location first, then the 'to' destination. Depending on which is the remote system, you will prefix your username and the Floating IP of your NERC VM.

    +

    scp [username@Floating_IP:][location of file] [destination of file]

    +

    or,

    +

    scp [location of file] [username@Floating_IP:][destination of file]

    +

    Usage

    +

    Below are some examples of the two most common scenarios of SCP to copy to and from +various sources.

    +
    +

    Helpful Tip

    +

We use '~' in the examples. The tilde '~' is a Unix shorthand that means "my home directory". So if user almalinux uses ~/ this is the same as typing out the full path to the almalinux user's home directory (easier to remember than /home/almalinux/). You can, of course, specify other paths (e.g. /user/almalinux/output/files.zip). Also, we use . in the examples to specify the current directory path from where the command is issued. This can be replaced with the actual path.

    +
    +

    i. Copying Files From the NERC VM to Another Computer:

    +

    From a terminal/shell from your local machine, you'll issue your SCP command by +specifying the SSH Private Key to connect with the VM that has included corresponding +SSH Public Key. The syntax is:

    +
    scp -i <Your SSH Private Key including Path> <Default User name based on OS>@<Your Floating IP of VM>:~/<File In VM> .
    +
    +

    This copies the file <File In VM> from your VM's default user's directory (~ +is a Unix shortcut for my home directory) on your VM to your current directory +(. is a Unix shortcut the current directory) on your computer from where the command +is issued or you can specify the actual path instead of ..

    +

    For e.g.

    +
    scp -i ~/.ssh/your_pem_key_file.pem almalinux@199.94.60.219:~/myfile.zip /my_local_directory/
    +
    +

    ii. Copying Files From Another Computer to the NERC VM:

    +

    From a terminal/shell on your computer (or another server or cluster) where you +have access to the SSH Private Key, you'll issue your SCP command. The syntax is:

    +
scp -i <Your SSH Private Key including Path> ./<Your Local File> <Default User name based on OS>@<Your Floating IP of VM>:~/
    +
    +

    This copies the file <Your Local File> from the current directory on the computer +you issued the command from, to your home directory on your NERC VM. (recall that +. is a Unix shortcut for the current directory path and ~ is a Unix shortcut +for my home directory)

    +

    For e.g.

    +
    scp -i ~/.ssh/your_pem_key_file.pem ./myfile.zip almalinux@199.94.60.219:~/myfile.zip
    +
    +
    +

    Important Note

    +

    While it’s probably best to compress all the files you intend to transfer into +one file, this is not always an option. To copy the contents of an entire directory, +you can use the -r (for recursive) flag.

    +

    For e.g.

    +
    scp -i ~/.ssh/your_pem_key_file.pem -r almalinux@<Floating_IP>:~/mydata/ ./destination_directory/
    +
    +

This copies all the files from ~/mydata/ on the VM to the current directory (i.e. .) on the computer you issued the command from. Here we can replace ./ with the actual full path on your local machine and also ~/ with the actual full path on your NERC VM.

    +
    +

    Using tar+ssh

    +

    When you want to transfer many small files in a directory, we recommend +Globus. If you don't wish to use Globus, you can consider using +ssh piped with tar.

    +

    i. Send a directory to NERC VM:

    +
    tar cz /local/path/dirname | ssh -i <Your SSH Private Key including Path> <Default User name based on OS>@<Your Floating IP of VM> tar zxv -C /remote/path
    +
    +

    ii. Get a directory from NERC VM:

    +
    ssh -i <Your SSH Private Key including Path> <Default User name based on OS>@<Your Floating IP of VM> tar cz /remote/path/dirname | tar zxv -C /local/path
    +
    +

    Using rsync

    +

    Rsync is a fast, versatile, remote (and local) +file-copying tool. It is famous for its delta-transfer algorithm, which reduces +the amount of data sent over the network by sending only the differences between +the source files and the existing files in the destination. This can often lead +to efficiencies in repeat-transfer scenarios, as rsync only copies files that are +different between the source and target locations (and can even transfer partial +files when only part of a file has changed). This can be very useful in reducing +the amount of copies you may perform when synchronizing two datasets.

    +

    The basic syntax is: rsync SOURCE DESTINATION where SOURCE and DESTINATION +are filesystem paths. They can be local, either absolute or relative to the current +working directory, or they can be remote but prefixing something like +USERNAME@HOSTNAME: to the front of them.

    +

    i. Synchronizing from a local machine to NERC VM:

    +
    rsync -avxz ./source_directory/ -e "ssh -i ~/.ssh/your_pem_key_file.pem" <user_name>@<Floating_IP>:~/destination_directory/
    +
    +

    ii. Synchronizing from NERC VM to a local machine:

    +
    rsync -avz -e "ssh -i ~/.ssh/your_pem_key_file.pem" -r <user_name>@<Floating_IP>:~/source_directory/ ./destination_directory/
    +
    +

    iii. Update a previously made copy of "foo" on the NERC VM after you’ve made changes +to the local copy:

    +
    rsync -avz --delete foo/ -e "ssh -i ~/.ssh/your_pem_key_file.pem" <user_name>@<Floating_IP>:~/foo/
    +
    +
    +

    Be careful with this option!

    +

    The --delete option has no effect when making a new copy, and therefore can +be used in the previous example too (making the commands identical), but since +it recursively deletes files, it’s best to use it sparingly. If you want to +maintain a mirror (i.e. the DESTINATION is to be an exact copy of the +SOURCE) then you will want to add the --delete option. This deletes +files/directories in the DESTINATION that are no longer in the SOURCE.

    +
    +

    iv. Update a previously made copy of "foo" on the NERC VM after you or someone +else has already updated it from a different source:

    +
    rsync -aAvz --update foo/ -e "ssh -i ~/.ssh/your_pem_key_file.pem" <user_name>@<Floating_IP>:~/foo/
    +
    +
    +

    Information

    +

    The --update option has no effect when making a new copy and can also be +specified in that case. If you're updating a master copy (i.e. the +DESTINATION may have files that are newer than the version(s) in SOURCE) +then you will also want to add the --update option. This will leave those +files alone and not revert them to the older copy in SOURCE.

    +
    +

    Progress, Verbosity, Statistics

    +

    -v +Verbose mode — list each file transferred. +Adding more vs makes it more verbose.

    +

    --progress +Show a progress meter for each individual file transfer that is part of the +entire operation. If you have many small files then this option can significantly +slow down the transfer.

    +

    --stats +Print a short paragraph of statistics at the end of the session (e.g. average transfer +rate, total number of files transferred, etc).

    +

    Other Useful Options

    +

    --dry-run +Perform a dry-run of the session instead of actually modifying the DESTINATION. +Mostly useful when adding multiple -v options, especially for verifying --delete +is doing what you want.

    +

--exclude PATTERN Skip files/directories in the SOURCE that match a given pattern (supports wildcard patterns)

    +
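Putting a few of these options together, a hedged example of a verbose dry run that excludes temporary files (paths and the key file are placeholders) is:

rsync -avz --progress --stats --dry-run --exclude '*.tmp' -e "ssh -i ~/.ssh/your_pem_key_file.pem" ./source_directory/ <user_name>@<Floating_IP>:~/destination_directory/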

    Using Rclone

    +

    rclone is a convenient and performant command-line tool for transferring files +and synchronizing directories directly between your local file systems and a +given NERC VM.

    +

    Prerequisites:

    +

    To run the rclone commands, you need to have:

    + +

    Configuring Rclone

    +

    First you'll need to configure rclone. The filesystem protocols, especially, +can have complicated authentication parameters so it's best to place these details +in a config file.

    +

    If you run rclone config file you will see where the default location is for +your current user.

    +
    +

    Note

    +

    For Windows users, you may need to specify the full path to the Rclone +executable file if it's not included in your system's %PATH% variable.

    +
    +

    Edit the config file's content on the path location described by +rclone config file command and add the following entry with the name [nerc]:

    +
    [nerc]
    +type = sftp
    +host = 199.94.60.219
    +user = almalinux
    +port =
    +pass =
    +key_file = C:\Users\YourName\.ssh\cloud.key
    +shell_type = unix
    +
    +

    More about the config for SFTP can be found here.

    +

    OR, You can locally copy this content to a new config file and then use this +flag to override the config location, e.g. rclone --config=FILE

    +
    +

    Interactive Configuration

    +

    Run rclone config to setup. See Rclone config docs +for more details.

    +
    +

    How to use Rclone

    +

    rclone supports many subcommands (see +the complete list of Rclone subcommands). +A few commonly used subcommands (assuming you configured the NERC VM filesystem +as nerc):

    +

    Listing Files and Folders

    +

    Once your NERC VM filesystem has been configured in Rclone, you can then use the +Rclone interface to List all the directories with the "lsd" command:

    +
    rclone lsd "nerc:"
    +
    +

    or,

    +
    rclone lsd "nerc:" --config=rclone.conf
    +
    +

    For e.g.

    +
    rclone lsd "nerc:" --config=rclone.conf
    +        -1 2023-07-06 12:18:24        -1 .ssh
    +        -1 2023-07-06 19:27:19        -1 destination_directory
    +
    +

    To list the files and folders available within the directory (i.e. +"destination_directory") we can use the "ls" command:

    +
    rclone ls "nerc:destination_directory/"
    +  653 README.md
    +    0 image.png
    +   12 test-file
    +
    +

    Uploading and Downloading Files and Folders

    +

rclone supports a variety of options to allow you to copy, sync, and move files from one destination to another.

    +

    A simple example of this can be seen below where we copy/upload the file +upload.me to the <your-directory> directory:

    +
    rclone copy "./upload.me" "nerc:<your-directory>/"
    +
    +

    Another example, to copy/download the file upload.me from the remote +directory, <your-directory>, to your local machine:

    +
    rclone -P copy "nerc:<your-directory>/upload.me" "./"
    +
    +

Also, to sync files into the <your-directory> directory, it's recommended to try with --dry-run first. This will give you a preview of what would be synced without actually performing any transfers.

    +
    rclone --dry-run sync /path/to/files nerc:<your-directory>
    +
    +

Then sync for real:

    +
    rclone sync --interactive /path/to/files nerc:<your-directory>
    +
    +

    Mounting VM filesystem on local filesystem

    +

    Linux:

    +

    First, you need to create a directory on which you will mount your filesystem:

    +

    mkdir ~/mnt-rclone

    +

    Then you can simply mount your filesystem with:

    +

    rclone -vv --vfs-cache-mode writes mount nerc: ~/mnt-rclone

    +

    Windows:

    +

First you have to download WinFsp:

    +

    WinFsp is an open source Windows File System Proxy which provides a FUSE +emulation layer.

    +

    Then you can simply mount your VM's filesystem with (no need to create the directory +in advance):

    +

    rclone -vv --vfs-cache-mode writes mount nerc: C:/mnt-rclone

    +

    The vfs-cache-mode flag enables file caching. You can use either the writes +or full option. For further explanation you can see the official documentation.

    +

    Now that your VM's filesystem is mounted locally, you can list, create, and delete +files in it.

    +

    Unmount NERC VM filesystem

    +

    To unmount, simply press CTRL-C and the mount will be interrupted.

    +

    Using Graphical User Interface (GUI) Tools

    +

    i. WinSCP

    +

    WinSCP is a popular and free open-source SFTP +client, SCP client, and FTP client for Windows. Its main function is file transfer +between a local and a remote computer, with some basic file management functionality +using FTP, FTPS, SCP, SFTP, WebDAV, or S3 file transfer protocols.

    +

    Prerequisites:

    +
      +
    • +

      WinSCP installed, see Download and Install the latest version of the WinSCP +for more information.

      +
    • +
    • +

      Go to WinSCP menu and open "View > Preferences".

      +
    • +
    • +

      When the "Preferences" dialog window appears, select "Transfer" in the options +on the left pane.

      +
    • +
    • +

      Click on the "Edit" button.

      +
    • +
    • +

      Then, in the popup dialog box, review the "Common options" group and uncheck the +"Preserve timestamp" option as shown below:

      +
    • +
    +

    Disable Preserve TimeStamp

    +

    Configuring WinSCP

    +
      +
    • Click on the "New Tab" button as shown below:
    • +
    +

    Login

    +
      +
    • Select either "SFTP" or "SCP" from the "File protocol" dropdown options +as shown below:
    • +
    +

    Choose SFTP or SCP File Protocol

    +
      +
    • +

      Provide the following required information:

      +

      "File protocol": Choose either ""SFTP" or "SCP""

      +

      "Host name": "<Your Floating IP of VM>"

      +

      "Port number": "22"

      +

      "User name": "<Default User name based on OS>"

      +
      +

      Default User name based on OS

      +
        +
      • all Ubuntu images: ubuntu
      • +
      • all AlmaLinux images: almalinux
      • +
      • all Rocky Linux images: rocky
      • +
      • all Fedora images: fedora
      • +
      • all Debian images: debian
      • +
      • all RHEL images: cloud-user
      • +
      +

      If you still have VMs running with deleted CentOS images, you need to +use the following default username for your CentOS images: centos.

      +
      +

      "Password": "<Leave blank as you using SSH key>"

      +
    • +
    • +

      Change Authentication Options

      +
    • +
    +

    Before saving, click the "Advanced" button. +In the "Advanced Site Settings", under "SSH >> Authentication" settings, check +"Allow agent forwarding" and select the private key file with .ppk extension from +the file picker.

    +

    Advanced Site Settings for SSH Authentication

    +
    +

    Helpful Tip

    +

    You can save your above configured site with some preferred name by +clicking the "Save" button and then giving a proper name to your site. +This prevents needing to manually enter all of your configuration again the +next time you need to use WinSCP. +Save Site WinSCP

    +
    +

    Using WinSCP

    +

    You can follow the above steps to manually add a new site the next time you open +WinSCP, or you can connect to your previously saved site. Saved sites will be +listed in the popup dialog and can be selected by clicking on the site name.

    +

    Then click the "Login" button to connect to your NERC project's VM as shown below:

    +

    Login

    +

    Successful connection

    +

    You should now be connected to the VM's remote directories/files. You can drag +and drop your files to/from file windows to begin transfer. When you're finished, +click the "X" icon in the top right to disconnect.

    +

    ii. Cyberduck

    +

    Cyberduck is a libre server and cloud +storage browser for Mac and Windows. Its user-friendly interface enables seamless +connections to servers, enterprise file sharing, and various cloud storage platforms.

    +

    Prerequisites:

    + +

    Configuring Cyberduck

    +
      +
    • Click on the "Open Connection" button as shown below:
    • +
    +

    Open Connection

    +
      +
    • Select either "SFTP" or "FTP" from the dropdown options as shown below:
    • +
    +

    Choose Amazon S3

    +
      +
    • +

      Provide the following required information:

      +

      "Server": "<Your Floating IP of VM>"

      +

      "Port": "22"

      +

      "User name": "<Default User name based on OS>"

      +
      +

      Default User name based on OS

      +
        +
      • all Ubuntu images: ubuntu
      • +
      • all AlmaLinux images: almalinux
      • +
      • all Rocky Linux images: rocky
      • +
      • all Fedora images: fedora
      • +
      • all Debian images: debian
      • +
      • all RHEL images: cloud-user
      • +
      +
      +

      "Password": "<Leave blank as you using SSH key>"

      +

      "SSH Private Key": "Choose the appropriate SSH Private Key from your local +machine that has the corresponding public key attached to your VM"

      +
    • +
    +

    Cyberduck SFTP or FTP Configuration

    +

    Using Cyberduck

    +

    Then click the "Connect" button to connect to your NERC VM as shown below:

    +

    Successful connection

    +

    You should now be connected to the VM's remote directories/files. You can drag +and drop your files to/from file windows to begin transfer. When you're +finished, click the "X" icon in the top right to disconnect.

    +

    iii. Filezilla

    +

Filezilla is a free and open source SFTP client which is built on modern standards. It is available cross-platform (Mac, Windows and Linux) and is actively maintained. You can transfer files to/from your VM from your computer or any resources connected to your computer (shared drives, Dropbox, etc.)

    +

    Prerequisites:

    + +

    Configuring Filezilla

    +
      +
    • Click on "Site Manager" icon as shown below:
    • +
    +

    Site Manager

    +
      +
    • Click on "New Site" as shown below:
    • +
    +

    Click New Site

    +
      +
    • Select either "SFTP" or "FTP" from the dropdown options as shown below:
    • +
    +

    Select Protocol

    +
      +
    • +

      Provide the following required information:

      +

      "Server": "<Your Floating IP of VM>"

      +

      "Port": "22"

      +

      "Logon Type": "Key file" from the dropdown option

      +

      "User": "<Default User name based on OS>"

      +
      +

      Default User name based on OS

      +
        +
      • all Ubuntu images: ubuntu
      • +
      • all AlmaLinux images: almalinux
      • +
      • all Rocky Linux images: rocky
      • +
      • all Fedora images: fedora
      • +
      • all Debian images: debian
      • +
      • all RHEL images: cloud-user
      • +
      +

      If you still have VMs running with deleted CentOS images, you need to +use the following default username for your CentOS images: centos.

      +
      +

      "Key file": "Browse and choose the appropriate SSH Private Key from you +local machine that has corresponding Public Key attached to your VM"

      +
    • +
    +

    Filezilla SFTP or FTP Configuration

    +

    Using Filezilla

    +

    Then click "Connect" button to connect to your NERC VM as shown below:

    +

    Successful connection

    +

    You should now be connected to the VM and see your local files in the left-hand +pane and the remote files in the right-hand pane. You can drag and drop between +them or drag and drop to/from file windows on your computer. When you're +finished, click the "X" icon in the top right to disconnect.

    +

    Decommission Your NERC OpenStack Resources

    +

    You can decommission all of your NERC OpenStack resources sequentially as outlined +below.

    +

    Prerequisite

    +
      +
    • +

Backup: Back up any critical data or configurations stored on the resources that are going to be decommissioned. This ensures that important information is not lost during the process. You can refer to this guide to initiate and carry out data transfer to and from the virtual machine.

      +
    • +
    • +

      Shutdown Instances: If applicable, Shut Off any running instances +to ensure they are not actively processing data during decommissioning.

      +
    • +
    • +

      Setup OpenStack CLI, see OpenStack Command Line setup +for more information.

      +
    • +
    +

    Delete all VMs

    +

    For instructions on deleting instance(s), please refer to this documentation.

    +
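If you have the OpenStack CLI configured (see the prerequisites above), a hedged sketch of the equivalent commands is:

openstack server list
openstack server delete <instance-name-or-ID>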

    Delete volumes and snapshots

    +

    For instructions on deleting volume(s), please refer to this documentation.

    +

To delete snapshot(s), provided the snapshot is not used by any running instance:

    +

    Navigate to Project -> Volumes -> Snapshots.

    +

    Delete Snapshots

    +
    +

    Unable to Delete Snapshots

    +

You must first delete all volumes and instances (and their attached volumes) that were created using the snapshot; until then, you will not be able to delete the volume snapshots.

    +
    +
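A hedged CLI sketch for removing volumes and their snapshots (delete the snapshots before the volumes they were created from):

openstack volume snapshot list
openstack volume snapshot delete <snapshot-name-or-ID>
openstack volume list
openstack volume delete <volume-name-or-ID>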

    Delete all custom built Images and Instance Snapshot built Images

    +

    Navigate to Project -> Compute -> Images.

    +

Select all of the custom-built images that have Visibility set to "Private" and delete them.

    +

    Delete your all private Networks, Routers and Internal Interfaces on the Routers

    +

    To review all Network and its connectivities, you need to:

    +

    Navigate to Project -> Network -> Network Topology.

    +

This shows a view of the current networks in your project in Graph or Topology view. Make sure no instances are connected to your private network, which was set up by following this documentation. If there are any instances, refer to this to delete those VMs.

    +

    Network Topology

    +

First, delete all Routers used to create private networks (set up by following this documentation), except default_router, from:

    +

    Navigate to Project -> Network -> Routers.

    +

Then, delete all other private Networks except default_network and provider; only then will you be able to delete the Networks from:

    +

    Navigate to Project -> Network -> Networks.

    +
    +

    Unable to Delete Networks

    +

First delete all instances and then delete all routers; only then will you be able to delete the associated private networks.

    +
    +

    Release all Floating IPs

    +

    Navigate to Project -> Network -> Floating IPs.

    +

    Release all Floating IPs

    +

    For instructions on releasing your allocated Floating IP back into the NERC floating +IP pool, please refer to this documentation.

    +
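From the CLI, releasing a Floating IP back to the pool can be done with:

openstack floating ip list
openstack floating ip delete <floating-IP-address>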

    Clean up all added Security Groups

    +

First, delete all other security groups except default, and also make sure the default security group does not have any extra rules. To view all Security Groups:

    +

    Navigate to Project -> Network -> Security Groups.

    +
    +

    Unable to Delete Security Groups

    +

First delete all instances; only then will you be able to delete the security groups. A security group that is attached to a VM cannot be deleted.

    +
    +

    Delete all of your stored Key Pairs

    +

    Navigate to Project -> Compute -> Key Pairs.

    +
    +

    Unable to Delete Key Pairs

    +

First delete all instances that are using the selected Key Pairs; only then will you be able to delete them.

    +
    +

    Delete all buckets and objects

    +

    For instructions on deleting bucket(s) along with all objects, please refer to +this documentation.

    +


    Navigate to Project -> Object Store -> Containers.

    +

    Delete Containers

    +
    +

    Unable to Delete Container with Objects inside

    +

You must first delete all objects inside a Container; only then will you be able to delete the container. Please make sure any critical object data has already been remotely backed up before deleting it. You can also use the openstack client to recursively delete containers that have multi-level objects inside, as described here, so you don't need to manually delete all objects inside a container prior to deleting the container. This will save a lot of your time and effort.

    +
    +
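For reference, the recursive delete mentioned above is roughly:

openstack container delete --recursive <container-name>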

    Use ColdFront to reduce the Storage Quota to Zero

    +

    Each allocation, whether requested or approved, will be billed based on the +pay-as-you-go model. The exception is for Storage quotas, where the cost +is determined by your requested and approved allocation values +to reserve storage from the total NESE storage pool. For NERC (OpenStack) +Resource Allocations, storage quotas are specified by the "OpenStack Volume Quota +(GiB)" and "OpenStack Swift Quota (GiB)" allocation attributes.

    +

Even if you have deleted all volumes, snapshots, and object storage buckets and objects in your OpenStack project, it is essential to adjust the approved values for your NERC (OpenStack) resource allocations to zero (0); otherwise you will still incur a charge for the approved storage as explained in Billing FAQs.

    +

To achieve this, you must submit a final change request to reduce the Storage Quotas for the "OpenStack Volume Quota (GiB)" and "OpenStack Swift Quota (GiB)" to zero (0) for your NERC (OpenStack) resource type. You can review and manage these resource allocations by visiting the resource allocations page. Here, you can filter the allocation of interest and then proceed to submit a change request.

    +

    Please make sure your change request looks like this:

    +

    Change Request to Set Storage Quotas Zero

    +

    Wait until the requested resource allocation gets approved by the NERC's admin.

    +

    After approval, kindly review and verify that the quotas are accurately +reflected in your resource allocation +and OpenStack project. Please ensure that the +approved quota values are accurately displayed as explained here.

    +

    Review your Block Storage(Volume/Cinder) Quota

    +

    Please confirm and verify that the gigabytes resource value that specifies total +space in external volumes is set to +a limit of zero (0) in correspondence with the approved "OpenStack Volume Quota (GiB)" +of your allocation when running openstack quota show openstack client command +as shown below:

    +
    openstack quota show
    ++-----------------------+--------+
    +| Resource              |  Limit |
    ++-----------------------+--------+
    +...
    +| gigabytes             |      0 |
    +...
    ++-----------------------+--------+
    +
    +

    Review your Object Storage(Swift) Quota

    +

To check the overall space used, you can use the following command.

    +

Also, please confirm and verify that the Quota-Bytes property value is set to one (1) in correspondence with the approved "OpenStack Swift Quota (GiB)" of your allocation, and also check that the overall space used in Bytes is zero (0), along with no Containers and Objects, when running the openstack object store account show openstack client command as shown below:

    +
    openstack object store account show
    ++------------+---------------------------------------+
    +| Field      | Value                                 |
    ++------------+---------------------------------------+
    +| Account    | AUTH_5e1cbcfe729a4c7e8fb2fd5328456eea |
    +| Bytes      | 0                                     |
    +| Containers | 0                                     |
    +| Objects    | 0                                     |
    +| properties | Quota-Bytes='1'                       |
    ++------------+---------------------------------------+
    +
    +

    Review your Project Usage

    +

    Several commands are available to access project-level resource utilization details. +The openstack limits show --absolute command offers a comprehensive view of the +most crucial resources and also allows you to view your current resource consumption.


    +
    +

    Very Important: Ensure No Resources that will be Billed are Used

    +

    Most importantly, ensure that there is no active usage for any of your +currently allocated project resources.

    +
    +

    Please ensure the output appears as follows, with all used resources having a value +of zero (0), except for totalSecurityGroupsUsed.

    +
    openstack limits show --absolute
    ++--------------------------+-------+
    +| Name                     | Value |
    ++--------------------------+-------+
    +...
    +| totalRAMUsed             |     0 |
    +| totalCoresUsed           |     0 |
    +| totalInstancesUsed       |     0 |
    +| totalFloatingIpsUsed     |     0 |
    +| totalSecurityGroupsUsed  |     1 |
    +| totalServerGroupsUsed    |     0 |
    +...
    +| totalVolumesUsed         |     0 |
    +| totalGigabytesUsed       |     0 |
    +| totalSnapshotsUsed       |     0 |
    +| totalBackupsUsed         |     0 |
    +| totalBackupGigabytesUsed |     0 |
    ++--------------------------+-------+
    +
    +

    Review your Project's Resource Quota from the OpenStack Dashboard

    +

    After removing all OpenStack resources and updating the Storage Quotas to set them +to zero (0), you can review and verify that these changes are reflected in your +Horizon Dashboard Overview.

    +

    Navigate to Project -> Compute -> Overview.

    +

    Horizon Dashboard

    +

    Finally, Archive your ColdFront Project

    +

As a PI, you will now be able to Archive your ColdFront Project via accessing NERC's ColdFront interface. Please refer to these instructions on how to archive your projects that need to be decommissioned.

    +

    OpenStack Tutorial Index

    +

    If you're just starting out, we recommend starting from

    +

    Access the OpenStack Dashboard +and going through the tutorial in order.

    +

    If you just need to review a specific step, you can find the page you need in +the list below.

    +

    Logging In

    + +

    Access and Security

    + +

    Create & Connect to the VM

    + +

    OpenStack CLI

    + +

    Persistent Storage

    +

    Block Storage/ Volumes/ Cinder

    + +

    Object Storage/ Swift

    + +

    Data Transfer

    + +

    Backup your instance and data

    + +

    VM Management

    + +

    Decommission OpenStack Resources

    + +
    +

    Advanced OpenStack Topics

    +
    +

    Setting Up Your Own Network

    + +

    Domain or Host Name for your VM

    + +

    Using Terraform to provision NERC resources

    + +

    Python SDK

    + +

    Setting Up Your Own Images


    Access the OpenStack Dashboard

    +

    The OpenStack Dashboard which is a web-based graphical interface, code named +Horizon, is located at https://stack.nerc.mghpcc.org.

    +

The NERC Authentication supports CILogon using Keycloak for gateway authentication and authorization, which provides federated login via your institutional accounts and is the recommended authentication method.

    +

    Make sure you are selecting "OpenID Connect" (which is selected by default) as +shown here:

    +

    OpenID Connect

    +

    Next, you will be redirected to CILogon welcome page as shown below:

    +

    CILogon Welcome Page

    +

    MGHPCC Shared Services (MSS) Keycloak will request approval of access to the +following information from the user:

    +
      +
    • Your CILogon user identifier
    • +
    • Your name
    • +
    • Your email address
    • +
    • Your username and affiliation from your identity provider
    • +
    +

    which are required in order to allow access your account on NERC's OpenStack +dashboard.

    +

    From the "Selected Identity Provider" dropdown option, please select your institution's +name. If you would like to remember your selected institution name for future +logins please check the "Remember this selection" checkbox this will bypass the +CILogon welcome page on subsequent visits and proceed directly to the selected insitution's +identity provider(IdP). Click "Log On". This will redirect to your respective institutional +login page where you need to enter your institutional credentials.

    +
    +

    Important Note

    +

The NERC does not see or have access to your institutional account credentials; it points to your selected institution's identity provider and redirects back once authenticated.

    +
    +

Once you successfully authenticate, you should see an overview of resources like Compute (instances, VCPUs, RAM, etc.), Volume, and Network. You can also see a usage summary for a provided date range.

    +

    OpenStack Horizon dashboard

    +
    +

    I can't find my virtual machine

    +

If you are a member of several projects, i.e. ColdFront NERC (OpenStack) allocations, you may need to switch projects before you can see and use the OpenStack resources you or your team have created. Clicking on the project dropdown displayed near the top right will pop up the list of projects you are in. You can select a new project by hovering over and clicking on the project name in that list as shown below:

    +

    OpenStack Project List

    +
    +

    Dashboard Overview

    +

    When you are logged-in, you will be redirected to the Compute panel which is under +the Project tab. In the top bar, you can see the two small tabs: "Project" and "Identity".

    +

    Beneath that you can see six panels in larger print: "Project", "Compute", +"Volumes", "Network", "Orchestration", and "Object Store".

    +

    Project Panel

    +

    Navigate: Project -> Project

    +
      +
    • API Access: View API endpoints.
    • +
    +

    Project API Access

    +

    Compute Panel

    +

    Navigate: Project -> Compute

    +
      +
    • Overview: View reports for the project.
    • +
    +

    Compute dashboard

    +
      +
    • +

      Instances: View, launch, create a snapshot from, stop, pause, or reboot +instances, or connect to them through VNC.

      +
    • +
    • +

      Images: View images and instance snapshots created by project users, plus any +images that are publicly available. Create, edit, and delete images, and launch +instances from images and snapshots.

      +
    • +
    • +

      Key Pairs: View, create, edit, import, and delete key pairs.

      +
    • +
    • +

      Server Groups: View, create, edit, and delete server groups.

      +
    • +
    +

    Volume Panel

    +

    Navigate: Project -> Volume

    +
      +
    • +

Volumes: View, create, edit, and delete volumes, and accept volume transfers.

      +
    • +
    • +

      Backups: View, create, edit, and delete backups.

      +
    • +
    • +

      Snapshots: View, create, edit, and delete volume snapshots.

      +
    • +
    • +

      Groups: View, create, edit, and delete groups.

      +
    • +
    • +

      Group Snapshots: View, create, edit, and delete group snapshots.

      +
    • +
    +

    Network Panel

    +

    Navigate: Project -> Network

    +
      +
    • Network Topology: View the network topology.
    • +
    +

    Network Topology

    +
      +
    • +

      Networks: Create and manage public and private networks.

      +
    • +
    • +

      Routers: Create and manage routers.

      +
    • +
    • +

Security Groups: View, create, edit, and delete security groups and security group rules.

      +
    • +
    • +

      Load Balancers: View, create, edit, and delete load balancers.

      +
    • +
    • +

      Floating IPs: Allocate an IP address to or release it from a project.

      +
    • +
    • +

Trunks: View, create, edit, and delete trunks.

      +
    • +
    +

    Orchestration Panel

    +

Navigate: Project -> Orchestration

    +
      +
    • +

      Stacks: Use the REST API to orchestrate multiple composite cloud applications.

      +
    • +
    • +

Resource Types: View various resource types and their details.

      +
    • +
    • +

Template Versions: View different Heat template versions.

      +
    • +
    • +

Template Generator: A GUI to generate and save templates using drag-and-drop resources.

      +
    • +
    +

    Object Store Panel

    +

Navigate: Project -> Object Store

    +
      +
Containers: Create and manage containers and objects. In the future, you can use this tab to create Swift object storage for your projects as needed.
    • +
    +

    Swift Object Containers

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/logging-in/images/CILogon_interface.png b/openstack/logging-in/images/CILogon_interface.png new file mode 100644 index 00000000..fd1c073f Binary files /dev/null and b/openstack/logging-in/images/CILogon_interface.png differ diff --git a/openstack/logging-in/images/horizon_dashboard.png b/openstack/logging-in/images/horizon_dashboard.png new file mode 100644 index 00000000..4814d2ba Binary files /dev/null and b/openstack/logging-in/images/horizon_dashboard.png differ diff --git a/openstack/logging-in/images/network_topology.png b/openstack/logging-in/images/network_topology.png new file mode 100644 index 00000000..1f98944f Binary files /dev/null and b/openstack/logging-in/images/network_topology.png differ diff --git a/openstack/logging-in/images/object_containers.png b/openstack/logging-in/images/object_containers.png new file mode 100644 index 00000000..5b76ca81 Binary files /dev/null and b/openstack/logging-in/images/object_containers.png differ diff --git a/openstack/logging-in/images/openstack_login.png b/openstack/logging-in/images/openstack_login.png new file mode 100644 index 00000000..ea48c9f7 Binary files /dev/null and b/openstack/logging-in/images/openstack_login.png differ diff --git a/openstack/logging-in/images/openstack_project_list.png b/openstack/logging-in/images/openstack_project_list.png new file mode 100644 index 00000000..74a5878c Binary files /dev/null and b/openstack/logging-in/images/openstack_project_list.png differ diff --git a/openstack/logging-in/images/project_API_access.png b/openstack/logging-in/images/project_API_access.png new file mode 100644 index 00000000..b2d4c8a9 Binary files /dev/null and b/openstack/logging-in/images/project_API_access.png differ diff --git a/openstack/management/images/delete_multiple_instances.png b/openstack/management/images/delete_multiple_instances.png new file mode 100644 index 00000000..f7ccec88 Binary files /dev/null and b/openstack/management/images/delete_multiple_instances.png differ diff --git a/openstack/management/images/edit_instance.png b/openstack/management/images/edit_instance.png new file mode 100644 index 00000000..b0cc3cd2 Binary files /dev/null and b/openstack/management/images/edit_instance.png differ diff --git a/openstack/management/images/edit_instance_to_rename.png b/openstack/management/images/edit_instance_to_rename.png new file mode 100644 index 00000000..f03b86a9 Binary files /dev/null and b/openstack/management/images/edit_instance_to_rename.png differ diff --git a/openstack/management/images/instance_actions.png b/openstack/management/images/instance_actions.png new file mode 100644 index 00000000..5ddbae08 Binary files /dev/null and b/openstack/management/images/instance_actions.png differ diff --git a/openstack/management/images/rescue_instance_popup.png b/openstack/management/images/rescue_instance_popup.png new file mode 100644 index 00000000..e90d550c Binary files /dev/null and b/openstack/management/images/rescue_instance_popup.png differ diff --git a/openstack/management/vm-management/index.html b/openstack/management/vm-management/index.html new file mode 100644 index 00000000..7e78e2fb --- /dev/null +++ b/openstack/management/vm-management/index.html @@ -0,0 +1,3744 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    VM Management

    +

RedHat OpenStack offers numerous functionalities for managing virtual machines, and comprehensive information can be found in the official OpenStack user guide. Please keep in mind that certain features may not be fully implemented on NERC OpenStack.

    +

    Instance Management Actions

    +

After launching an instance (on the left sidebar, click Project -> Compute -> Instances), several options are available under the Actions menu located on the right-hand side of your screen, as shown here:

    +

    Instance Management Actions

    +

    Renaming VM

    +

Once a VM is created, its name is set based on the user-specified Instance Name provided when launching the instance from the Horizon dashboard, or on the name given in the openstack server create ... command when using the openstack client.

    +

    To rename a VM, navigate to Project -> Compute -> Instances.

    +

    Select an instance.

    +

    In the menu list in the actions column, select "Edit Instance" by clicking on +the arrow next to "Create Snapshot" as shown below:

    +

    Edit Instance to Rename

    +

Then edit the Name and, optionally, the Description in the "Information" tab and save it:

    +

    Edit Instance

    +
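If you prefer the command line, the same rename can be done with the openstack client. A minimal sketch, assuming your instance is currently named "my-vm" ("my-new-vm-name" is just an example new name):

# rename the instance "my-vm" to the example name "my-new-vm-name"
openstack server set --name my-new-vm-name my-vm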

    Stopping and Starting

    +

Virtual machines can be stopped and started using various methods, and these actions are executed through the openstack command with the relevant parameters.

    +
      +
    1. +

      Reboot is equivalent to powering down the machine and then restarting it. A +complete boot sequence takes place and thus the machine returns to use in a few +minutes.

      +

      Soft Reboot:

      +
        +
      • +

A soft reboot attempts a graceful shutdown and restart of the instance. It sends an ACPI restart request to the VM, similar to sending a reboot command to a physical computer.

        +
      • +
      • +

        Click Action -> Soft Reboot Instance.

        +
      • +
      • +

        Status will change to Reboot.

        +
      • +
      +

      Hard Reboot:

      +
        +
      • +

A hard reboot power cycles the instance. This forcibly restarts your VM, similar to cycling the power on a physical computer.

        +
      • +
      • +

        Click Action -> Hard Reboot Instance.

        +
      • +
      • +

        Status will change to Hard Reboot.

        +
      • +
      +
    2. +
    3. +

      The Pause & Resume feature enables the temporary suspension of the VM. While +in this state, the VM is retained in memory but doesn't receive any allocated +CPU time. This proves handy when conducting interventions on a group of servers, +preventing the VM from processing during the intervention.

      +
        +
      • +

        Click Action -> Pause Instance.

        +
      • +
      • +

        Status will change to Paused.

        +
      • +
      • +

        The Resume operation typically completes in less than a second by clicking +Action -> Resume Instance.

        +
      • +
      +
    4. +
    5. +

      The Suspend & Resume function saves the VM onto disk and swiftly restores +it (in less than a minute). This process is quicker than the stop/start method, +and the VM resumes from where it was suspended, avoiding a new boot cycle.

      +
        +
      • +

        Click Action -> Suspend Instance.

        +
      • +
      • +

        Status will change to Suspended.

        +
      • +
      • +

        The Resume operation typically completes in less than a second by clicking +Action -> Resume Instance.

        +
      • +
      +
    6. +
    7. +

      Shelve & Unshelve

      +
        +
      • +

        Click Action -> Shelve Instance.

        +
      • +
      • +

Shelving stops all computing and stores a snapshot of the instance. Shelved instances are imaged as part of the shelving process and appear in Project -> Compute -> Images with a name ending in "_shelved".

        +
      • +
      • +

        We strongly recommend detaching volumes before shelving.

        +
      • +
      • +

        Status will change to Shelved Offloaded.

        +
      • +
      • +

        To unshelve the instance, click Action -> Unshelve Instance.

        +
      • +
      +
    8. +
    9. +

      Shut Off & Start Instance

      +
        +
      • +

        Click Action -> Shut Off Instance.

        +
      • +
      • +

When shut off, the instance stops active computing and consumes fewer resources than a suspended instance.

        +
      • +
      • +

        Status will change to Shutoff.

        +
      • +
      • +

        To start the shut down VM, click Action -> Start Instance.

        +
      • +
      +
    10. +
    +

    Using openstack client commands

    +

The above-mentioned actions can all be performed by running openstack client commands with the following syntax:

    +
    openstack server <operation> <INSTANCE_NAME_OR_ID>
    +
    +

    such as,

    +
openstack server stop my-vm
+
+openstack server reboot my-vm
    +
    +
    +

    Pro Tip

    +

    If your instance name <INSTANCE_NAME_OR_ID> includes spaces, you need to +enclose the name of your instance in quotes, i.e. "<INSTANCE_NAME_OR_ID>"

    +

For example: openstack server reboot "My Test Instance"

    +
    +
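For reference, the instance actions described above map to openstack client subcommands roughly as follows. This is a minimal sketch, assuming an instance named "my-vm"; availability of individual subcommands may vary with your OpenStack release:

openstack server reboot my-vm          # soft reboot
openstack server reboot --hard my-vm   # hard reboot
openstack server pause my-vm
openstack server unpause my-vm
openstack server suspend my-vm
openstack server resume my-vm
openstack server shelve my-vm
openstack server unshelve my-vm
openstack server stop my-vm
openstack server start my-vm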

    Create Snapshot

    +
      +
    • +

      Click Action -> Create Snapshot.

      +
    • +
    • +

Instances must have status Active, Suspended, or Shutoff to create a snapshot.

      +
    • +
    • +

This creates an image template from a VM instance, also known as an "Instance Snapshot", as described here (a CLI sketch is given below).

      +
    • +
    • +

      The menu will automatically shift to Project -> Compute -> Images once the +image is created.

      +
    • +
    • +

The sole distinction between an image uploaded directly to the image data service, Glance, and an image generated through a snapshot is that the snapshot-created image possesses additional properties in the Glance database and defaults to being private.

      +
    • +
    +
    +

    Glance Image Service

    +

Glance is a central image repository which provides discovery, registration, and retrieval services for disk and server images. More about this service can be found here.

    +
    +
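If you prefer the command line, an instance snapshot can also be created with the openstack client. A minimal sketch, assuming an instance named "my-vm" and a hypothetical snapshot image name "my-vm-snapshot":

# create an image (instance snapshot) from the running instance
openstack server image create --name my-vm-snapshot my-vm
# the resulting image then appears in the image list
openstack image list | grep my-vm-snapshot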

    Rescue a VM

    +

There are cases where a virtual machine may encounter a boot failure due to reasons like misconfiguration or issues with the system disk. To diagnose and address the problem, the virtual machine console offers valuable diagnostic information on the underlying cause.

    +

    Alternatively, utilizing OpenStack's rescue functions involves booting the +virtual machine using the original image, with the system disk provided as a +secondary disk. This allows manipulation of the disk, such as using fsck to +address filesystem issues or mounting and editing the configuration.

    +
    +

    Important Note

    +

We cannot rescue a volume-backed instance; this means ONLY instances running on an Ephemeral disk can be rescued. Also, this procedure has not been tested for Windows virtual machines.

    +
    +

    VMs can be rescued using either the OpenStack dashboard by clicking +Action -> Rescue Instance or via the openstack client using +openstack server rescue ... command.

    +

To rescue the virtual machine using the openstack client, the following command can be run:

    +
    openstack server rescue <INSTANCE_NAME_OR_ID>
    +
    +

    or, using Horizon dashboard:

    +

    Navigate to Project -> Compute -> Instances.

    +

    Select an instance.

    +

    Click Action -> Rescue Instance.

    +
    +

    When to use Rescue Instance

    +

The rescue mode is only for emergency purposes, for example in case of a system or access failure. This will shut down your instance and mount the root disk to a temporary server. Then, you will be able to connect to this server, repair the system configuration, or recover your data. You may optionally select an image and set a password on the rescue instance server.

    +
    +

    Rescue Instance Popup

    +

    Troubleshoot the disk

    +

    This will reboot the virtual machine and you can then log in using the key pair +previously defined. You will see two disks, /dev/vda which is the new system disk +and /dev/vdb which is the old one to be repaired.

    +
    ubuntu@my-vm:~$ lsblk
    +NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
    +loop0     7:0    0   62M  1 loop /snap/core20/1587
    +loop1     7:1    0 79.9M  1 loop /snap/lxd/22923
    +loop2     7:2    0   47M  1 loop /snap/snapd/16292
    +vda     252:0    0  2.2G  0 disk
    +├─vda1  252:1    0  2.1G  0 part /
    +├─vda14 252:14   0    4M  0 part
    +└─vda15 252:15   0  106M  0 part /boot/efi
    +vdb     252:16   0   20G  0 disk
    +├─vdb1  252:17   0 19.9G  0 part
    +├─vdb14 252:30   0    4M  0 part
    +└─vdb15 252:31   0  106M  0 part
    +
    +

    The old one can be mounted and configuration files edited or fsck'd.

    +
    # lsblk
    +# cat /proc/diskstats
    +# mkdir /tmp/rescue
    +# mount /dev/vdb1 /tmp/rescue
    +
    +
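If the filesystem itself needs repair, one option is to run a filesystem check on the old root partition before mounting it. A minimal sketch, assuming the old root filesystem on /dev/vdb1 is ext4 (adjust the tool and device for your setup):

# run the check while /dev/vdb1 is NOT mounted
umount /tmp/rescue        # only needed if it was already mounted
e2fsck -f /dev/vdb1
mount /dev/vdb1 /tmp/rescue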

    Unrescue the VM

    +

On completion, the VM can be returned to the active state with the openstack server unrescue ... client command, and then rebooted.

    +
    openstack server unrescue <INSTANCE_NAME_OR_ID>
    +
    +

    Then the secondary disk is removed as shown below:

    +
    ubuntu@my-vm:~$ lsblk
    +NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
    +loop0     7:0    0   47M  1 loop /snap/snapd/16292
    +vda     252:0    0   20G  0 disk
    +├─vda1  252:1    0 19.9G  0 part /
    +├─vda14 252:14   0    4M  0 part
    +└─vda15 252:15   0  106M  0 part /boot/efi
    +
    +

    Alternatively, using Horizon dashboard:

    +

    Navigate to Project -> Compute -> Instances.

    +

    Select an instance.

    +

    Click Action -> Unrescue Instance.

    +

    And then Action -> Soft Reboot Instance.

    +

    Delete Instance

    +

    VMs can be deleted using either the OpenStack dashboard by clicking +Action -> Delete Instance or via the openstack client openstack server delete +command.

    +
    +

    How can I delete multiple instances at once?

    +

Using the Horizon dashboard, navigate to Project -> Compute -> Instances. In the Instances panel, you should see a list of all instances running in your project. Select the instances you want to delete by ticking the checkboxes next to their names. Then, click on the "Delete Instances" button located on the top right side, as shown below: Delete Multiple Instances At Once

    +
    +
    +

    Important Note

    +

    This will immediately terminate the instance, delete all contents of the +virtual machine and erase the disk. This operation is not recoverable.

    +
    +

    There are other options available if you wish to keep the virtual machine for +future usage. These do, however, continue to use quota for the project even though +the VM is not running.

    +
      +
Snapshot the VM to keep an offline copy of the virtual machine; this can be performed as described here.
    • +
    +

    If however, the virtual machine is no longer required and no data on the +associated system or ephemeral disk needs to be preserved, the following command +can be run:

    +
    openstack server delete <INSTANCE_NAME_OR_ID>
    +
    +

    or, using Horizon dashboard:

    +

    Navigate to Project -> Compute -> Instances.

    +

    Select an instance.

    +

    Click Action -> Delete Instance.

    +
    +

    Important Note: Unmount volumes first

    +

Be sure to unmount any volumes attached to your instance before initiating the deletion process; failure to do so may lead to corruption of both your data and the associated volume.

    +
    +
      +
    • +

If the instance is using an Ephemeral disk: It stops and removes the instance along with the ephemeral disk. All data will be permanently lost!

      +
    • +
    • +

If the instance is using a Volume-backed disk: It stops and removes the instance. If "Delete Volume on Instance Delete" was explicitly set to Yes, all data will be permanently lost! If set to No (which is selected by default while launching an instance), the volume may be used to boot a new instance, though any data stored in memory will be permanently lost. For more in-depth information on making your VM setup and data persistent, you can explore the details here.

      +
    • +
    • +

      Status will briefly change to Deleting while the instance is being removed.

      +
    • +
    +

The quota associated with this virtual machine will be returned to the project, and you can review and verify that by looking at your OpenStack dashboard overview, or from the CLI as sketched below.

    +
      +
    • Navigate to Project -> Compute -> Overview.
    • +
    +
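From the command line, a minimal sketch of checking the project's current usage against its quota with the openstack client:

openstack limits show --absolute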
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/openstack-cli/images/openstack_cli_cred.png b/openstack/openstack-cli/images/openstack_cli_cred.png new file mode 100644 index 00000000..9656f0dd Binary files /dev/null and b/openstack/openstack-cli/images/openstack_cli_cred.png differ diff --git a/openstack/openstack-cli/launch-a-VM-using-openstack-CLI/index.html b/openstack/openstack-cli/launch-a-VM-using-openstack-CLI/index.html new file mode 100644 index 00000000..55df56af --- /dev/null +++ b/openstack/openstack-cli/launch-a-VM-using-openstack-CLI/index.html @@ -0,0 +1,3657 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    Launch a VM using OpenStack CLI

    +

First, find the following details using the openstack command; these details will be required when creating the virtual machine.

    +
      +
    • Flavor
    • +
    • Image
    • +
    • Network
    • +
    • Security Group
    • +
    • Key Name
    • +
    +

    Get the flavor list using below openstack command:

    +
      openstack flavor list
    +  +--------------------------------------+-------------+--------+------+-----------+-------+-----------+
    +  | ID                                   | Name        |    RAM | Disk | Ephemeral | VCPUs | Is Public |
    +  +--------------------------------------+-------------+--------+------+-----------+-------+-----------+
    +  | 12ded228-1a7f-4d35-b994-7dd394a6ca90 |gpu-su-a100.2| 196608 |   20 |         0 |    24 | True      |
    +  | 15581358-3e81-4cf2-a5b8-c0fd2ad771b4 | mem-su.8    |  65536 |   20 |         0 |     8 | True      |
    +  | 17521416-0ecf-4d85-8d4c-ec6fd1bc5f9d | cpu-su.1    |   2048 |   20 |         0 |     1 | True      |
    +  | 2b1dbea2-736d-4b85-b466-4410bba35f1e | cpu-su.8    |  16384 |   20 |         0 |     8 | True      |
    +  | 2f33578f-c3df-4210-b369-84a998d77dac | mem-su.4    |  32768 |   20 |         0 |     4 | True      |
    +  | 4498bfdb-5342-4e51-aa20-9ee74e522d59 | mem-su.1    |   8192 |   20 |         0 |     1 | True      |
    +  | 7f2f5f4e-684b-4c24-bfc6-3fce9cf1f446 | mem-su.16   | 131072 |   20 |         0 |    16 | True      |
    +  | 8c05db2f-6696-446b-9319-c32341a09c41 | cpu-su.16   |  32768 |   20 |         0 |    16 | True      |
    +  | 9662b5b2-aeaa-4d56-9bd3-450deee668af | cpu-su.4    |   8192 |   20 |         0 |     4 | True      |
    +  | b3377fdd-fd0f-4c88-9b4b-3b5c8ada0732 |gpu-su-a100.1|  98304 |   20 |         0 |    12 | True      |
    +  | e9125ab0-c8df-4488-a252-029c636cbd0f | mem-su.2    |  16384 |   20 |         0 |     2 | True      |
    +  | ee6417bd-7cd4-4431-a6ce-d09f0fba3ba9 | cpu-su.2    |   4096 |   20 |         0 |     2 | True      |
    +  +--------------------------------------+------------+--------+------+-----------+-------+------------+
    +
    +

    Get the image name and its ID,

    +
      openstack image list  | grep almalinux-9
    +  | 263f045e-86c6-4344-b2de-aa475dbfa910 | almalinux-9-x86_64  | active |
    +
    +

    Get Private Virtual network details, which will be attached to the VM:

    +
      openstack network list
    +  +--------------------------------------+-----------------+--------------------------------------+
    +  | ID                                   | Name            | Subnets                              |
    +  +--------------------------------------+-----------------+--------------------------------------+
    +  | 43613b84-e1fb-44a4-b1ea-c530edc49018 | provider        | 1cbbb98d-3b57-4f6d-8053-46045904d910 |
    +  | 8a91900b-d43c-474d-b913-930283e0bf43 | default_network | e62ce2fd-b11c-44ce-b7cc-4ca943e75a23 |
    +  +--------------------------------------+-----------------+--------------------------------------+
    +
    +

    Find the Security Group:

    +
      openstack security group list
    +  +--------------------------------------+----------------------------------+------------------------+----------------------------------+------+
    +  | ID                                   | Name                             | Description            | Project                          | Tags |
    +  +--------------------------------------+----------------------------------+------------------------+----------------------------------+------+
    +  | 8285530a-34e3-4d96-8e01-a7b309a91f9f | default                          | Default security group | 8ae3ae25c3a84c689cd24c48785ca23a | []   |
    +  | bbb738d0-45fb-4a9a-8bc4-a3eafeb49ba7 | ssh_only                         |                        | 8ae3ae25c3a84c689cd24c48785ca23a | []   |
    +  +--------------------------------------+----------------------------------+------------------------+----------------------------------+------+
    +
    +

Find the key pair; in this example it is cloud_key, but you should choose your own:

    +
      openstack keypair list | grep -i cloud_key
    +  | cloud_key | d5:ab:dc:1f:e5:08:44:7f:a6:21:47:23:85:32:cc:04 | ssh  |
    +
    +
    +

    Note

    +

The above details will differ based on your project and environment.

    +
    +

    Launch an instance from an Image

    +

Now that we have all the details, let's create a virtual machine using the "openstack server create" command.

    +

    Syntax :

    +
      openstack server create --flavor {Flavor-Name-Or-Flavor-ID } \
    +      --image {Image-Name-Or-Image-ID} \
    +      --nic net-id={Network-ID} \
    +      --user-data USER-DATA-FILE \
    +      --security-group {Security_Group_ID} \
    +      --key-name {Keypair-Name} \
    +      --property KEY=VALUE \
    +      <Instance_Name>
    +
    +
    +

    Important Note

    +

    If you boot an instance with an "Instance_Name" greater than 63 +characters, Compute truncates it automatically when turning it into a +hostname to ensure the correct functionality of dnsmasq.

    +
    +

    Optionally, you can provide a key name for access control and a security group +for security.

    +

You can also include metadata key and value pairs using the --property KEY=VALUE parameter. For example, you can add a description for your server by providing --property description="My Server".

    +

    You can pass user data in a local file at instance launch by using the +--user-data USER-DATA-FILE parameter. If you do not provide a key pair, you +will be unable to access the instance.

    +

You can also place arbitrary local files into the instance file system at creation time by using the --file <dest-filename=source-filename> parameter. You can store up to five files. For example, if you have a special authorized keys file named special_authorized_keysfile that you want to put on the instance rather than using the regular SSH key injection, you can add the --file option as shown in the following example.

    +
      --file /root/.ssh/authorized_keys=special_authorized_keysfile
    +
    +

To create a VM on a specific availability zone and compute host, specify --availability-zone {Availability-Zone-Name}:{Compute-Host} in the above syntax.

    +

    Example:

    +
      openstack server create --flavor cpu-su.2 \
    +      --image almalinux-8-x86_64 \
    +      --nic net-id=8ee63932-464b-4999-af7e-949190d8fe93 \
    +      --security-group default \
    +      --key-name cloud_key \
    +      --property description="My Server" \
    +      my-vm
    +
    +

NOTE: To get more help on the "openstack server create" command, use:

    +
  openstack server create --help
    +
    +

    Detailed syntax:

    +
      openstack server create
    +    (--image <image> | --volume <volume>)
    +    --flavor <flavor>
    +    [--security-group <security-group>]
    +    [--key-name <key-name>]
    +    [--property <key=value>]
    +    [--file <dest-filename=source-filename>]
    +    [--user-data <user-data>]
    +    [--availability-zone <zone-name>]
    +    [--block-device-mapping <dev-name=mapping>]
    +    [--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid,auto,none>]
    +    [--network <network>]
    +    [--port <port>]
    +    [--hint <key=value>]
    +    [--config-drive <config-drive-volume>|True]
    +    [--min <count>]
    +    [--max <count>]
    +    [--wait]
    +    <server-name>
    +
    +
    +

    Note

    +

Similarly, we can launch a VM using a bootable "Volume" as described here.

    +
    +

Now verify that the test VM "my-vm" is running using the following commands:

    +
      openstack server list | grep my-vm
    +
    +

    OR,

    +
      openstack server show my-vm
    +
    +

    Check console of virtual machine

    +

    The console for a Linux VM can be displayed using console log.

    +
      openstack console log show --line 20 my-vm
    +
    +

    Associating a Floating IP to VM

    +

To associate a Floating IP with the VM, first get an unused Floating IP using the following command:

    +
      openstack floating ip list | grep None | head -2
    +  | 071f08ac-cd10-4b89-aee4-856ead8e3ead | 169.144.107.154 | None |
    +  None                                 |
    +  | 1baf4232-9cb7-4a44-8684-c604fa50ff60 | 169.144.107.184 | None |
    +  None                                 |
    +
    +
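If no unused Floating IP is listed, you may be able to allocate a new one from the external network first. A minimal sketch, assuming the external network is the one named "provider" in the network list above (the name may differ in your project):

openstack floating ip create provider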

Now associate the first IP with the server using the following command:

    +
      openstack server add floating ip my-vm 169.144.107.154
    +
    +

    Use the following command to verify whether Floating IP is assigned to the VM +or not:

    +
      openstack server list | grep my-vm
    +  | 056c0937-6222-4f49-8405-235b20d173dd | my-vm | ACTIVE  | ...
    +  nternal=192.168.15.62, 169.144.107.154 |
    +
    +

    Remove existing floating ip from the VM

    +
      openstack server remove floating ip <INSTANCE_NAME_OR_ID> <FLOATING_IP_ADDRESS>
    +
    +

Get all available security groups in your project

    +
      openstack security group list
    +  +--------------------------------------+----------+------------------------+----------------------------------+------+
    +  | 3ca248ac-56ac-4e5f-a57c-777ed74bbd7c | default  | Default security group |
    +  f01df1439b3141f8b76e68a3b58ef74a | []   |
    +  | 5cdc5f33-78fc-4af8-bf25-60b8d4e5db2a | ssh_only | Enable SSH access.     |
    +  f01df1439b3141f8b76e68a3b58ef74a | []   |
    +  +--------------------------------------+----------+------------------------+----------------------------------+------+
    +
    +

    Add existing security group to the VM

    +
      openstack server add security group <INSTANCE_NAME_OR_ID> <SECURITY_GROUP>
    +
    +

    Example:

    +
      openstack server add security group my-vm ssh_only
    +
    +

    Remove existing security group from the VM

    +
      openstack server remove security group <INSTANCE_NAME_OR_ID> <SECURITY_GROUP>
    +
    +

    Example:

    +
      openstack server remove security group my-vm ssh_only
    +
    +

    Alternatively, you can use the openstack port unset command to remove the +group from a port:

    +
      openstack port unset --security-group <SECURITY_GROUP> <PORT>
    +
    +

    Adding volume to the VM

    +
      openstack server add volume
    +    [--device <device>]
    +    <INSTANCE_NAME_OR_ID>
    +    <VOLUME_NAME_OR_ID>
    +
    +
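For example, a minimal sketch that attaches the volume "my-volume" to the VM "my-vm"; the --device hint is optional and may not be honored by all hypervisors:

openstack server add volume --device /dev/vdb my-vm my-volume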

    Remove existing volume from the VM

    +
      openstack server remove volume <INSTANCE_NAME_OR_ID> <volume>
    +
    +

    Reboot a virtual machine

    +
    openstack server reboot my-vm
    +
    +

    Deleting Virtual Machine from Command Line

    +
      openstack server delete my-vm
    +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/openstack-cli/openstack-CLI/index.html b/openstack/openstack-cli/openstack-CLI/index.html new file mode 100644 index 00000000..b0dfb378 --- /dev/null +++ b/openstack/openstack-cli/openstack-CLI/index.html @@ -0,0 +1,3568 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    OpenStack CLI

    +

    References

    +

    OpenStack Command Line Client(CLI) Cheat Sheet

    +

    The OpenStack CLI is designed for interactive use. OpenStackClient (aka OSC) is +a command-line client for OpenStack that brings the command set for Compute, +Identity, Image, Object Storage and Block Storage APIs together in a single +shell with a uniform command structure. OpenStackClient is primarily configured +using command line options and environment variables. Most of those settings +can also be placed into a configuration file to simplify managing multiple +cloud configurations. Most global options have a corresponding environment +variable that may also be used to set the value. If both are present, the +command-line option takes priority.

    +

    It's also possible to call it from a bash script or similar, but typically it +is too slow for heavy scripting use.

    +

    Command Line setup

    +

To use the CLI, you must create an application credential and set the appropriate environment variables.

    +

    You can download the environment file with the credentials from the OpenStack dashboard.

    +
      +
    • +

      Log in to the NERC's OpenStack dashboard, choose +the project for which you want to download the OpenStack RC file.

      +
    • +
    • +

      Navigate to Identity -> Application Credentials.

      +
    • +
    • +

Click on the "Create Application Credential" button and provide a Name and Roles for the application credential. All other fields are optional; leaving the "Secret" field empty will cause it to be autogenerated (recommended).

      +
    • +
    +

    OpenStackClient Credentials Setup

    +
    +

    Important Note

    +

    Please note that an application credential is only valid for a single +project, and to access multiple projects you need to create an application +credential for each. You can switch projects by clicking on the project name +at the top right corner and choosing from the dropdown under "Project".

    +
    +

After clicking the "Create Application Credential" button, the ID and Secret will be displayed, and you will be prompted to Download openrc file or to Download clouds.yaml. Both of these are different methods of configuring the client for CLI access. Please save the file.

    +

    Configuration

    +

    The CLI is configured via environment variables and command-line options as +listed in Authentication.

    +

    Configuration Files

    +

    OpenStack RC File

    +

Find the file (by default it will be named after the application credential name with the suffix -openrc.sh, i.e. app-cred-<Credential_Name>-openrc.sh).

    +

    Source your downloaded OpenStack RC File:

    +
      source app-cred-<Credential_Name>-openrc.sh
    +
    +
    +

    Important Note

    +

    When you source the file, environment variables are set for your current +shell. The variables enable the openstack client commands to communicate with +the OpenStack services that run in the cloud. This just stores your entry into +the environment variable - there's no validation at this stage. You can inspect +the downloaded file to retrieve the ID and Secret if necessary and see what +other environment variables are set.

    +
    +
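One quick way to confirm which variables were set after sourcing the file (the exact set depends on the generated RC file):

env | grep OS_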

    clouds.yaml

    +

    clouds.yaml is a configuration file that contains everything needed to +connect to one or more clouds. It may contain private information and is +generally considered private to a user.

    +

    For more information on configuring the OpenStackClient with clouds.yaml +please see the OpenStack documentation.

    +
    +
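When using clouds.yaml, commands typically select the cloud entry by name. A minimal sketch, assuming the downloaded file defines a cloud named "openstack" (check your clouds.yaml for the actual name):

# select the cloud entry for the whole shell session
export OS_CLOUD=openstack
openstack server list
# or, equivalently, per command
openstack --os-cloud openstack server list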

    Install the OpenStack command-line clients

    +

    For more information on configuring the OpenStackClient please see the +OpenStack documentation.

    +

    OpenStack Hello World

    +

    Generally, the OpenStack terminal client offers the following methods:

    +
      +
    • +

      list: Lists information about objects currently in the cloud.

      +
    • +
    • +

      show: Displays information about a single object currently in the cloud.

      +
    • +
    • +

      create: Creates a new object in the cloud.

      +
    • +
    • +

      set: Edits an existing object in the cloud.

      +
    • +
    +

    To test that you have everything configured, try out some commands. The +following command lists all the images available to your project:

    +
      openstack image list
    ++--------------------------------------+---------------------+--------+
    +| ID                                   | Name                | Status |
    ++--------------------------------------+---------------------+--------+
    +| a9b48e65-0cf9-413a-8215-81439cd63966 | MS-Windows-2022     | active |
    +| cfecb5d4-599c-4ffd-9baf-9cbe35424f97 | almalinux-8-x86_64  | active |
    +| 263f045e-86c6-4344-b2de-aa475dbfa910 | almalinux-9-x86_64  | active |
    +| 41fa5991-89d5-45ae-8268-b22224c772b2 | debian-10-x86_64    | active |
    +| 99194159-fcd1-4281-b3e1-15956c275692 | fedora-36-x86_64    | active |
    +| 74a33f77-fc42-4dd1-a5a2-55fb18fc50cc | rocky-8-x86_64      | active |
    +| d7d41e5f-58f4-4ba6-9280-7fef9ac49060 | rocky-9-x86_64      | active |
    +| 75a40234-702b-4ab7-9d83-f436b05827c9 | ubuntu-18.04-x86_64 | active |
    +| 8c87cf6f-32f9-4a4b-91a5-0d734b7c9770 | ubuntu-20.04-x86_64 | active |
    +| da314c41-19bf-486a-b8da-39ca51fd17de | ubuntu-22.04-x86_64 | active |
    ++--------------------------------------+---------------------+--------+
    +
    +

    If you have launched some instances already, the following command shows a list +of your project's instances:

    +
      openstack server list --fit-width
    +  +--------------------------------------+------------------+--------+----------------------------------------------+--------------------------+--------------+
    +  | ID                                   | Name             | Status | Networks                                     | Image                    |  Flavor      |
    +  +--------------------------------------+------------------+--------+----------------------------------------------+--------------------------+--------------+
    +  | 1c96ba49-a20f-4c88-bbcf-93e2364365f5 |    vm-test       | ACTIVE | default_network=192.168.0.146, 199.94.60.4   | N/A (booted from volume) |  cpu-su.4     |
    +  | dd0d8053-ab88-4d4f-b5bc-97e7e2fe035a |    gpu-test      | ACTIVE | default_network=192.168.0.146, 199.94.60.4   | N/A (booted from volume) |  gpu-su-a100.1  |
    +  +--------------------------------------+------------------+--------+----------------------------------------------+--------------------------+--------------+
    +
    +
    +

    How to fit the CLI output to your terminal?

    +

You can use --fit-width at the end of the command to fit the output to your terminal.

    +
    +

    If you don't have any instances, you will get the error list index out of +range, which is why we didn't suggest this command for your first test:

    +
      openstack server list
    +  list index out of range
    +
    +

    If you see this error:

    +
      openstack server list
    +  The request you have made requires authentication. (HTTP 401) (Request-ID: req-6a827bf3-d5e8-47f2-984c-b6edeeb2f7fb)
    +
    +

    Then your environment variables are likely not configured correctly.

    +

    The most common reason is that you made a typo when entering your password. +Try sourcing the OpenStack RC file again and retyping it.

    +

    You can type openstack -h to see a list of available commands.

    +
    +

    Note

    +

    This includes some admin-only commands.

    +
    +

    If you try one of these by mistake, you might see this output:

    +
      openstack user list
    +  You are not authorized to perform the requested action: identity:list_users.
    +  (HTTP 403) (Request-ID: req-cafe1e5c-8a71-44ab-bd21-0e0f25414062)
    +
    +

    Depending on your needs for API interaction, this might be sufficient.

    +

    If you just occasionally want to run 1 or 2 of these commands from your +terminal, you can do it manually or write a quick bash script that makes use of +this CLI.

    +

    However, this isn't a very optimized way to do complex interactions with +OpenStack. For that, you want to write scripts that interact with the python +SDK bindings directly.

    +
    +

    Pro Tip

    +

    If you find yourself fiddling extensively with awk and grep to extract things +like project IDs from the CLI output, it's time to move on to using the client +libraries or the RESTful API directly in your scripts.

    +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/persistent-storage/attach-the-volume-to-an-instance/index.html b/openstack/persistent-storage/attach-the-volume-to-an-instance/index.html new file mode 100644 index 00000000..847243f7 --- /dev/null +++ b/openstack/persistent-storage/attach-the-volume-to-an-instance/index.html @@ -0,0 +1,3407 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Attach The Volume To An Instance

    +

    Using Horizon dashboard

    +

Once you're logged in to NERC's Horizon dashboard:

    +

    Navigate to Project -> Volumes -> Volumes.

    +

    In the Actions column, click the dropdown and select "Manage Attachments".

    +

    Volume Dropdown Options

    +

    From the menu, choose the instance you want to connect the volume to from +Attach to Instance, and click "Attach Volume".

    +

    Attach Volume

    +

The volume now has a status of "In-use", and the "Attached To" column shows which instance it is attached to and what device name it has.

    +

    This will be something like /dev/vdb but it can vary depending on the state +of your instance, and whether you have attached volumes before.

    +

    Make note of the device name of your volume.

    +

    Attaching Volume Successful

    +

    Using the CLI

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +

    To attach the volume to an instance using the CLI, do this:

    +

    Using the openstack client

    +

    When the status is 'available', the volume can be attached to a virtual machine +using the following openstack client command syntax:

    +
    openstack server add volume <INSTANCE_NAME_OR_ID> <VOLUME_NAME_OR_ID>
    +
    +

    For example:

    +
    openstack server add volume test-vm my-volume
    ++-----------------------+--------------------------------------+
    +| Field                 | Value                                |
    ++-----------------------+--------------------------------------+
    +| ID                    | 5b5380bd-a15b-408b-8352-9d4219cf30f3 |
    +| Server ID             | 8a876a17-3407-484c-85c4-8a46fbac1607 |
    +| Volume ID             | 5b5380bd-a15b-408b-8352-9d4219cf30f3 |
    +| Device                | /dev/vdb                             |
    +| Tag                   | None                                 |
    +| Delete On Termination | False                                |
    ++-----------------------+--------------------------------------+
    +
    +

    where "test-vm" is the virtual machine and the second parameter, "my-volume" is +the volume created before.

    +
    +

    Pro Tip

    +

    If your instance name <INSTANCE_NAME_OR_ID> and volume name <VOLUME_NAME_OR_ID> +include spaces, you need to enclose them in quotes, i.e. "<INSTANCE_NAME_OR_ID>" +and "<VOLUME_NAME_OR_ID>".

    +

For example: openstack server add volume "My Test Instance" "My Volume"

    +
    +

    To verify the volume is attached to the VM

    +
    openstack volume list
    ++--------------------------------------+-----------------+--------+------+----------------------------------+
    +| ID                                   | Name            | Status | Size | Attached to                      |
    ++--------------------------------------+-----------------+--------+------+----------------------------------+
    +| 563048c5-d27b-4397-bb4e-034e0f4d9fa7 |                 | in-use |   20 | Attached to test-vm on /dev/vda  |
    +| 5b5380bd-a15b-408b-8352-9d4219cf30f3 | my-volume       | in-use |   20 | Attached to test-vm on /dev/vdb  |
    ++--------------------------------------+-----------------+--------+------+----------------------------------+
    +
    +

The volume now has a status of "in-use", and the "Attached To" column shows which instance it is attached to and what device name it has.

    +

    This will be something like /dev/vdb but it can vary depending on the state +of your instance, and whether you have attached volumes before.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/persistent-storage/create-an-empty-volume/index.html b/openstack/persistent-storage/create-an-empty-volume/index.html new file mode 100644 index 00000000..a34809de --- /dev/null +++ b/openstack/persistent-storage/create-an-empty-volume/index.html @@ -0,0 +1,3415 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Create An Empty Volume

    +

    An empty volume is like an unformatted USB stick. We'll attach it to an +instance, create a filesystem on it, and mount it to the instance.

    +

    Using Horizon dashboard

    +

    Once you're logged in to NERC's Horizon dashboard, you can create a volume via +the "Volumes -> Volumes" page by clicking on the "Create Volume" button.

    +

    Navigate to Project -> Volumes -> Volumes.

    +

    Volumes

    +

    Click "Create Volume".

    +

    In the Create Volume dialog box, give your volume a name. The description +field is optional.

    +

    Create Volume

    +

Choose "empty volume" from the Source dropdown. This will create a volume that is like an unformatted hard disk. Choose a size (in GiB) for your volume. Leave Type and Availability Zone as they are; only NERC OpenStack admins are able to manage volume types.

    +

    Click "Create Volume" button.

    +

    Checking the status of created volume will show:

    +

    "downloading" means that the volume contents is being transferred from the image +service to the volume service

    +

In a few moments, the newly created volume will appear in the Volumes list with the Status "available". "available" means the volume can now be used for booting. A set of volume_image metadata is also copied from the image service.

    +

    Volumes List

    +

    Using the CLI

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +

    To create a volume using the CLI, do this:

    +

    Using the openstack client

    +

This allows an arbitrarily sized disk to be attached to your virtual machine, like plugging in a USB stick. The steps below create a disk of 20 gibibytes (GiB) named "my-volume".

    +
    openstack volume create --size 20 my-volume
    +
    ++---------------------+--------------------------------------+
    +| Field               | Value                                |
    ++---------------------+--------------------------------------+
    +| attachments         | []                                   |
    +| availability_zone   | nova                                 |
    +| bootable            | false                                |
    +| consistencygroup_id | None                                 |
    +| created_at          | 2024-02-03T17:06:05.000000           |
    +| description         | None                                 |
    +| encrypted           | False                                |
    +| id                  | 5b5380bd-a15b-408b-8352-9d4219cf30f3 |
    +| multiattach         | False                                |
    +| name                | my-volume                            |
    +| properties          |                                      |
    +| replication_status  | None                                 |
    +| size                | 20                                   |
    +| snapshot_id         | None                                 |
    +| source_volid        | None                                 |
    +| status              | creating                             |
    +| type                | tripleo                              |
    +| updated_at          | None                                 |
    +| user_id             | 938eb8bfc72e4ca3ad2b94e2eb4059f7     |
    ++---------------------+--------------------------------------+
    +
    +

    To view newly created volume

    +
    openstack volume list
    ++--------------------------------------+-----------------+-----------+------+----------------------------------+
    +| ID                                   | Name            | Status    | Size | Attached to                      |
    ++--------------------------------------+-----------------+-----------+------+----------------------------------+
    +| 563048c5-d27b-4397-bb4e-034e0f4d9fa7 |                 | in-use    |   20 | Attached to test-vm on /dev/vda  |
    +| 5b5380bd-a15b-408b-8352-9d4219cf30f3 | my-volume       | available |   20 |                                  |
    ++--------------------------------------+-----------------+-----------+------+----------------------------------+
    +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/persistent-storage/delete-volumes/index.html b/openstack/persistent-storage/delete-volumes/index.html new file mode 100644 index 00000000..4c095223 --- /dev/null +++ b/openstack/persistent-storage/delete-volumes/index.html @@ -0,0 +1,3372 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Delete Volumes

    +

    Using Horizon dashboard

    +

Once you're logged in to NERC's Horizon dashboard:

    +

    Navigate to Project -> Volumes -> Volumes.

    +

    Select the volume or volumes that you want to delete.

    +

    Click "Delete Volumes" button.

    +

    In the Confirm Delete Volumes window, click the Delete Volumes button to +confirm the action.

    +
    +

    Unable to Delete Volume

    +

You cannot delete a bootable volume that is actively in use by a running VM. If you really want to delete such a volume, first delete the instance; you can then delete the detached volume. Before deleting, please make sure the instance was launched with the default selection of No for the "Delete Volume on Instance Delete" configuration option. If you set this option to Yes, deleting the instance will automatically remove the associated volume. Launch Instance With Persistent Volume

    +
    +

    Using the CLI

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +

    To delete a volume using the CLI, do this:

    +

    Using the openstack client

    +

    The following openstack client command syntax can be used to delete a volume:

    +
    openstack volume delete <VOLUME_NAME_OR_ID>
    +
    +

    For example:

    +
    openstack volume delete my-volume
    +
    +
    +

    Pro Tip

    +

If your volume name <VOLUME_NAME_OR_ID> includes spaces, you need to enclose it in quotes, i.e. "<VOLUME_NAME_OR_ID>".

    +

    For example: openstack volume delete "My Volume"

    +
    +

    Your volume will now go into state 'deleting' and completely disappear from the +openstack volume list output.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/openstack/persistent-storage/detach-a-volume/index.html b/openstack/persistent-storage/detach-a-volume/index.html new file mode 100644 index 00000000..1d2530a4 --- /dev/null +++ b/openstack/persistent-storage/detach-a-volume/index.html @@ -0,0 +1,3436 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Detach A Volume and Attach it to an instance

    +

    Detach A Volume

    +

    Using Horizon dashboard

    +

Once you're logged in to NERC's Horizon dashboard:

    +

    Navigate to Project -> Volumes -> Volumes.

    +

To detach a mounted volume, go back to "Manage Attachments" and choose Detach Volume.

    +

    This will popup the following interface to proceed:

    +

    Detach a volume

    +
    +

    Unable to Detach Volume

    +

If your bootable volume is attached to a VM, that volume cannot be detached because it is a root device volume. This bootable volume is created when you launch an instance from an Image or an Instance Snapshot, and the choice to use persistent storage is configured by selecting the Yes option for "Create New Volume". If you explicitly chose No for this option, no attached volume is created for the instance; Ephemeral disk storage is used instead. Launch Instance Set Create New Volume

    +
    +

    Using the CLI

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +

    Using the openstack client

    +

    The following openstack client command syntax can be used to detach a volume +from a VM:

    +
    openstack server remove volume <INSTANCE_NAME_OR_ID> <VOLUME_NAME_OR_ID>
    +
    +

    For example:

    +
    openstack server remove volume test-vm my-volume
    +
    +

where "test-vm" is the virtual machine and the second parameter, "my-volume", is the volume created earlier and attached to the VM, which can be seen in the openstack volume list output.

    +
    +

    Pro Tip

    +

    If your instance name <INSTANCE_NAME_OR_ID> and volume name <VOLUME_NAME_OR_ID> +include spaces, you need to enclose them in quotes, i.e. "<INSTANCE_NAME_OR_ID>" +and "<VOLUME_NAME_OR_ID>".

    +

    For example: openstack server remove volume "My Test Instance" "My Volume"

    +
    +

    Check that the volume is in state 'available' again.

    +
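A quick way to check this from the CLI; a minimal sketch, assuming the volume is named "my-volume":

openstack volume show -c status my-volume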

    If that's the case, the volume is now ready to either be attached to another +virtual machine or, if it is not needed any longer, to be completely deleted +(please note that this step cannot be reverted!).

    +

    Attach the detached volume to an instance

    +

    Once it is successfully detached, you can use "Manage Attachments" to attach it +to another instance if desired as explained here.

    +

    OR,

    +

    You can attach the existing volume (Detached!) to the new instance as shown below:

    +

    Attaching Volume to an Instance

    +

After this, run the following commands as the root user to mount it:

    +
    mkdir /mnt/test_volume
    +mount /dev/vdb /mnt/test_volume
    +
    +
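To confirm the volume is mounted where you expect, one option is:

lsblk
df -h /mnt/test_volume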

All the data from the previous instance will be available under the mount point at /mnt/test_volume.

    +
    +

    Very Important Note

    +

    Also, a given volume might not get the same device name the second time you +attach it to an instance.

    +
    +

    Extending Volume

    +

A volume can be made larger while maintaining the existing contents, assuming the file system supports resizing. We can extend a volume that is not attached to any VM and is in the "Available" status.

    +

    The steps are as follows:

    +
      +
    • +

      Extend the volume to its new size

      +
    • +
    • +

      Extend the filesystem to its new size

      +
    • +
    +

    Using Horizon dashboard

    +

    Once you're logged in to NERC's Horizon dashboard.

    +

    Navigate to Project -> Volumes -> Volumes.

    +

    Extending Volume

    +

Specify the new extended size in GiB:

    +

    Volume New Extended Size

    +

    Using the CLI

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +

    Using the openstack client

    +

The following openstack client command syntax can be used to extend any existing volume from its previous size to a new size:

    +
    openstack volume set --size <NEW_SIZE_IN_GiB> <VOLUME_NAME_OR_ID>
    +
    +

    For example:

    +
    openstack volume set --size 100 my-volume
    +
    +

    where "my-volume" is the existing volume with a size of 80 GiB and is going +to be extended to a new size of 100 GiB."

    +
    +

    Pro Tip

    +

    If your volume name <VOLUME_NAME_OR_ID> includes spaces, you need to enclose +them in quotes, i.e. "<VOLUME_NAME_OR_ID>".

    +

    For example: openstack volume set --size 100 "My Volume"

    +
    +

For Windows systems, please follow the provider documentation.

    +
    +

    Please note

    +
      +
    • Volumes can be made larger, but not smaller. There is no support for +shrinking existing volumes.
    • +
• The procedure given above has been tested with ext4 and XFS filesystems only (a sketch of the matching filesystem resize commands is shown after this note).
    • +
    +
    +
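After the volume itself has been extended and re-attached to a VM, the filesystem on it still needs to be grown to use the new space. A minimal sketch for the ext4 and XFS cases noted above, assuming the volume appears as /dev/vdb and (for XFS) is mounted at /mnt/test_volume as in the other examples in this documentation:

    # ext4: grow the filesystem to fill the extended volume
    sudo resize2fs /dev/vdb

    # XFS: grow the filesystem via its mount point
    sudo xfs_growfs /mnt/test_volume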

    Format And Mount The Volume

    +

    Prerequisites:

    +

    Before formatting and mounting the volume, you need to have already created a +new volume as referred here and attached it to any +running VM, as described here.

    +

    For Linux based virtual machine

    +

    To verify that the newly created volume, "my-volume", exists and is attached to +a VM, "test-vm", run this openstack client command:

    +
    openstack volume list
    ++--------------------------------------+-----------------+--------+------+----------------------------------+
    +| ID                                   | Name            | Status | Size | Attached to                      |
    ++--------------------------------------+-----------------+--------+------+----------------------------------+
    +| 563048c5-d27b-4397-bb4e-034e0f4d9fa7 |                 | in-use |   20 | Attached to test-vm on /dev/vda  |
    +| 5b5380bd-a15b-408b-8352-9d4219cf30f3 | my-volume       | in-use |   20 | Attached to test-vm on /dev/vdb  |
    ++--------------------------------------+-----------------+--------+------+----------------------------------+
    +
    +

The volume has a status of "in-use", and the "Attached To" column shows which instance it is attached to and what device name it has.

    +

    This will be something like /dev/vdb but it can vary depending on the state +of your instance, and whether you have attached volumes before.

    +

    Make note of the device name of your volume.

    +

    SSH into your instance. You should now see the volume as an additional disk in +the output of sudo fdisk -l or lsblk or cat /proc/partitions.

    +
    # lsblk
    +NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    +...
    +vda     254:0    0   10G  0 disk
    +├─vda1  254:1    0  9.9G  0 part /
    +├─vda14 254:14   0    4M  0 part
    +└─vda15 254:15   0  106M  0 part /boot/efi
    +vdb     254:16   0    1G  0 disk
    +
    +

Here, we see the volume as the disk vdb, which matches the /dev/vdb we previously noted in the "Attached To" column.

    +

    Create a filesystem on the volume and mount it. In this example, we will create +an ext4 filesystem:

    +

    Run the following commands as root user:

    +
    mkfs.ext4 /dev/vdb
    +mkdir /mnt/test_volume
    +mount /dev/vdb /mnt/test_volume
    +df -H
    +
    +

    The volume is now available at the mount point:

    +
    lsblk
    +NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    +...
    +vda     254:0    0   10G  0 disk
    +├─vda1  254:1    0  9.9G  0 part /
    +├─vda14 254:14   0    4M  0 part
    +└─vda15 254:15   0  106M  0 part /boot/efi
    +vdb     254:16   0    1G  0 disk /mnt/test_volume
    +
    +

    If you place data in the directory /mnt/test_volume, detach the volume, and +mount it to another instance, the second instance will have access to the data.
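For example, before detaching the volume from the first instance, you might write a test file and unmount the volume cleanly (a sketch using the paths from the example above):

    # Write a small test file to the mounted volume
    echo "hello from the first instance" | sudo tee /mnt/test_volume/hello.txt

    # Unmount the volume cleanly before detaching it from this instance
    sudo umount /mnt/test_volume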

    +
    +

    Important Note

    +

    In this case it's easy to spot because there is only one additional disk attached +to the instance, but it's important to keep track of the device name, especially +if you have multiple volumes attached.

    +
    +

    For Windows virtual machine

    +

    Here, we create an empty volume following the steps outlined in this documentation.

    +

Please make sure you are creating a volume of size 100 GiB:

    +

    Create Volume for Windows VM

    +

    Then attach the newly created volume to a running Windows VM:

    +

    Attach Volume to a running Windows VM

    +

Log in via remote desktop using the Floating IP attached to the Windows VM:

    +

    Connect to Remote Instance using Floating IP

    +

    Prompted Administrator Login

    +
    +

    What is the user login for Windows Server 2022?

    +

    The default username is "Administrator," and the password is the one you set +using the user data PowerShell script during the launch as +described here.

    +
    +

    Successfully Remote Connected Instance

    +

Once connected, search for "Disk Management" in the Windows search box. This will show all attached disks as Unknown and Offline, as shown here:

    +

    Windows Disk Management

    +

    In Disk Management, select and hold (or right-click) the disk you want to +initialize, and then select "Initialize Disk". If the disk is listed as Offline, +first select and hold (or right-click) the disk, and then select "Online".

    +

    Windows Set Disk Online

    +

    Windows Initialize Disk

    +

    In the Initialize Disk dialog box, make sure the correct disk is selected, and +then choose OK to accept the default partition style. If you need to change the +partition style (GPT or MBR), see Compare partition styles - GPT and MBR.

    +

    Windows Disk Partition Style

    +

    Format the New Volume:

    +
      +
    • Select and hold (or right-click) the unallocated space of the new disk.
    • +
    • Select "New Simple Volume" and follow the wizard to create a new partition.
    • +
    +

    Windows Simple Volume Wizard Start

    +
      +
    • Choose the file system (usually NTFS for Windows).
    • +
    • Assign a drive letter or mount point.
    • +
    +

    Complete Formatting:

    +
      +
    • +

      Complete the wizard to format the new volume.

      +
    • +
    • +

      Once formatting is complete, the new volume should be visible in File Explorer + as shown below:

      +
    • +
    +

    Windows Simple Volume Wizard Start

    +

    Mount The Object Storage To An Instance

    +

    Pre-requisite

    +

We are using the following setup to mount the object storage to a NERC OpenStack VM:

    +
      +
    • +

      1 Linux machine, ubuntu-22.04-x86_64 or your choice of Ubuntu OS image, +cpu-su.2 flavor with 2vCPU, 8GB RAM, 20GB storage - also assign Floating IP +to this VM.

      +
    • +
    • +

      Setup and enable your S3 API credentials:

      +

      To access the API credentials, you must login through the OpenStack Dashboard +and navigate to "Projects > API Access" where you can download the "Download +OpenStack RC File" as well as the "EC2 Credentials".

      +

      EC2 Credentials

      +

Clicking on "EC2 Credentials" will download a zip file that includes an ec2rc.sh file with content similar to that shown below. The important parts are EC2_ACCESS_KEY and EC2_SECRET_KEY; keep them noted.

      +
      #!/bin/bash
      +
      +NOVARC=$(readlink -f "${BASH_SOURCE:-${0}}" 2>/dev/null) || NOVARC=$(python -c 'import os,sys; print os.path.abspath(os.path.realpath(sys.argv[1]))' "${BASH_SOURCE:-${0}}")
      +NOVA_KEY_DIR=${NOVARC%/*}
      +export EC2_ACCESS_KEY=...
      +export EC2_SECRET_KEY=...
      +export EC2_URL=https://localhost/notimplemented
      +export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
      +export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
      +export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
      +export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
      +export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
      +
      +alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
      +alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"
      +
      +

      Alternatively, you can obtain your EC2 access keys using the openstack client:

      +
      sudo apt install python3-openstackclient
      +
      +openstack ec2 credentials list
      ++------------------+------------------+--------------+-----------+
      +| Access           | Secret           | Project ID   | User ID   |
      ++------------------+------------------+--------------+-----------+
      +| <EC2_ACCESS_KEY> | <EC2_SECRET_KEY> | <Project_ID> | <User_ID> |
      ++------------------+------------------+--------------+-----------+
      +
      +

      OR, you can even create a new one by running:

      +
      openstack ec2 credentials create
      +
      +
    • +
    • +

      Source the downloaded OpenStack RC File from Projects > API Access by using: +source *-openrc.sh command. Sourcing the RC File will set the required environment +variables.

      +
    • +
    • +

Enable the Allow Other User option by editing the fuse config file /etc/fuse.conf and uncommenting the "user_allow_other" option (a non-interactive way to do this is sketched after this list).

      +
      sudo nano /etc/fuse.conf
      +
      +

The output is going to look like this:

      +

      Fuse Config to Allow Other User

      +
    • +
    +
    +
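A non-interactive way to uncomment that option (an equivalent sketch; only needed if the line is still commented out):

    # Uncomment user_allow_other in the fuse configuration
    sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf

    # Confirm the option is now active
    grep user_allow_other /etc/fuse.conf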

    A comparative analysis of Mountpoint for S3, Goofys, and S3FS.

    +

    When choosing between S3 clients that enable the utilization of an object store +with applications expecting files, it's essential to consider the specific use +case and whether the convenience and compatibility provided by FUSE clients +match the project's requirements.

    +

    To delve into a comparative analysis of Mountpoint for S3, Goofys, and +S3FS, please read this blog post.

    +
    +

    1. Using Mountpoint for Amazon S3

    +

    Mountpoint for Amazon S3 is a high-throughput +open-source file client designed to mount an Amazon S3 bucket as a local file system. +Mountpoint is optimized for workloads that need high-throughput read and write +access to data stored in S3 Object Storage through a file system interface.

    +
    +

    Very Important Note

    +

    Mountpoint for Amazon S3 intentionally does not implement the full POSIX +standard specification for file systems. Mountpoint supports file-based workloads +that perform sequential and random reads, sequential (append only) writes, +and that don’t need full POSIX semantics.

    +
    +

    Install Mountpoint

    +

    Access your virtual machine using SSH. Update the packages on your system and +install wget to be able to download the mount-s3 binary directly to your VM:

    +
    sudo apt update && sudo apt upgrade
    +sudo apt install wget
    +
    +

    Now, navigate to your home directory:

    +
    cd
    +
    +
      +
    1. +

      Download the Mountpoint for Amazon S3 package using wget command:

      +
      wget https://s3.amazonaws.com/mountpoint-s3-release/latest/x86_64/mount-s3.deb
      +
      +
    2. +
    3. +

      Install the package by entering the following command:

      +
      sudo apt-get install ./mount-s3.deb
      +
      +
    4. +
    5. +

      Verify that Mountpoint for Amazon S3 is successfully installed by entering the +following command:

      +
      mount-s3 --version
      +
      +

      You should see output similar to the following:

      +
      mount-s3 1.6.0
      +
      +
    6. +
    +

    Configuring and using Mountpoint

    +

    Make a folder to store your credentials:

    +
    mkdir ~/.aws/
    +
    +

    Create file ~/.aws/credentials using your favorite text editor (for example +nano or vim). Add the following contents to it which requires the EC2_ACCESS_KEY +and EC2_SECRET_KEY keys that you noted from ec2rc.sh file (during the "Setup +and enable your S3 API credentials" step):

    +
    [nerc]
    +aws_access_key_id=<EC2_ACCESS_KEY>
    +aws_secret_access_key=<EC2_SECRET_KEY>
    +
    +

    Save the file and exit the text editor.
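If you prefer to create the credentials file non-interactively, the following sketch is equivalent (the <EC2_ACCESS_KEY> and <EC2_SECRET_KEY> placeholders must be replaced with your own values):

    mkdir -p ~/.aws

    # Write the [nerc] profile; replace the placeholders with your own keys
    cat > ~/.aws/credentials <<'EOF'
    [nerc]
    aws_access_key_id=<EC2_ACCESS_KEY>
    aws_secret_access_key=<EC2_SECRET_KEY>
    EOF

    # Restrict the file to owner-only permissions
    chmod 600 ~/.aws/credentials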

    +

    Create a local directory as a mount point

    +
    mkdir -p ~/bucket1
    +
    +

    Mount the Container locally using Mountpoint

    +

    The object storage container i.e. "bucket1" will be mounted in the directory ~/bucket1

    +
    mount-s3 --profile "nerc" --endpoint-url "https://stack.nerc.mghpcc.org:13808" --allow-other --force-path-style --debug bucket1 ~/bucket1/
    +
    +

    In this command,

    +
      +
    • +

mount-s3 is the Mountpoint for Amazon S3 binary; since it is installed in the /usr/bin/ path, we don't need to specify the full path.

      +
    • +
    • +

      --profile corresponds to the name given on the ~/.aws/credentials file i.e. +[nerc].

      +
    • +
    • +

      --endpoint-url corresponds to the Object Storage endpoint url for NERC Object +Storage. You don't need to modify this url.

      +
    • +
    • +

      --allow-other: Allows other users to access the mounted filesystem. This is +particularly useful when multiple users need to access the mounted S3 bucket. Only +allowed if user_allow_other is set in /etc/fuse.conf.

      +
    • +
    • +

      --force-path-style: Forces the use of path-style URLs when accessing the S3 +bucket. This is necessary when working with certain S3-compatible storage services +that do not support virtual-hosted-style URLs.

      +
    • +
    • +

      --debug: Enables debug mode, providing additional information about the mounting +process.

      +
    • +
    • +

      bucket1 is the name of the container which contains the NERC Object Storage +resources.

      +
    • +
    • +

      ~/bucket1 is the location of the folder in which you want to mount the Object +Storage filesystem.

      +
    • +
    +
    +

    Important Note

    +

    Mountpoint automatically configures reasonable defaults for file system settings +such as permissions and performance. However, if you require finer control over +how the Mountpoint file system behaves, you can adjust these settings accordingly. +For further details, please refer to this resource.

    +
    +

    In order to test whether the mount was successful, navigate to the directory in +which you mounted the NERC container repository, for example:

    +
    cd ~/bucket1
    +
    +

    Use the ls command to list its content. You should see the output similar to this:

    +
    ls
    +
    +README.md   image.png   test-file
    +
    +

    The NERC Object Storage container repository has now been mounted using Mountpoint.

    +
    +

    Very Important Information

    +

Please note that none of these mounts are persistent if your VM is stopped or rebooted in the future. After each reboot, you will need to execute the mounting command as mentioned above again.

    +
    +

    Automatically mounting an S3 bucket at boot

    +

    Mountpoint does not currently support automatically mounting a bucket at system +boot time by configuring them in the /etc/fstab. If you would like your bucket/s +to automatically mount when the machine is started you will need to either set up +a Cron Job in crontab +or using a service manager +like systemd.

    +

    Using a Cron Job

    +

    You need to create a Cron job so that the script runs each time your VM reboots, +remounting S3 Object Storage to your VM.

    +
    crontab -e
    +
    +

    Add this command to the end of the file

    +
    @reboot sh /<Path_To_Directory>/script.sh
    +
    +

    For example,

    +
    @reboot sh /home/ubuntu/script.sh
    +
    +

Create a script.sh file and paste the below code into it.

    +
    #!/bin/bash
    +mount-s3 [OPTIONS] <BUCKET_NAME> <DIRECTORY>
    +
    +

    For example,

    +
    #!/bin/bash
    +mount-s3 --profile "nerc" --endpoint-url "https://stack.nerc.mghpcc.org:13808" --allow-other --force-path-style --debug bucket1 ~/bucket1/
    +
    +

    Make the file executable by running the below command

    +
    chmod +x script.sh
    +
    +

    Reboot your VM:

    +
    sudo reboot
    +
    +

    Using a service manager like systemd by creating systemd unit file

    +

    Create directory in /root folder in which you will store the credentials:

    +
    sudo mkdir /root/.aws
    +
    +

    Copy the credentials you created in your local directory to the .aws directory +in the /root folder:

    +
    sudo cp ~/.aws/credentials /root/.aws/
    +
    +
    Create systemd unit file i.e. mountpoint-s3.service
    +

Create a systemd service unit file that is going to execute the above mount command and dynamically mount or unmount the container:

    +
    sudo nano /etc/systemd/system/mountpoint-s3.service
    +
    +

    Edit the file to look like the below:

    +
    [Unit]
    +Description=Mountpoint for Amazon S3 mount
    +Documentation=https://docs.aws.amazon.com/AmazonS3/latest/userguide/mountpoint.html
    +#Wants=network.target
    +Wants=network-online.target
    +#Requires=network-online.target
    +AssertPathIsDirectory=/home/ubuntu/bucket1
    +After=network-online.target
    +
    +[Service]
    +Type=forking
    +User=root
    +Group=root
    +ExecStart=/usr/bin/mount-s3 bucket1 /home/ubuntu/bucket1 \
    +        --profile "nerc" \
    +        --endpoint-url "https://stack.nerc.mghpcc.org:13808" \
    +        --allow-other \
    +        --force-path-style \
    +        --debug
    +
    +ExecStop=/bin/fusermount -u /home/ubuntu/bucket1
    +Restart=always
    +RestartSec=10
    +
    +[Install]
    +#WantedBy=remote-fs.target
    +WantedBy=default.target
    +
    +
    +

    Important Note

    +

The network-online.target lines ensure that mounting is not attempted until there is a network connection available. The service is launched as soon as the network is up and running; it mounts the bucket and remains active.

    +
    +
    Launch the service
    +

Now reload the systemd daemon:

    +
    sudo systemctl daemon-reload
    +
    +

    Start your service

    +
    sudo systemctl start mountpoint-s3.service
    +
    +

    To check the status of your service

    +
    sudo systemctl status mountpoint-s3.service
    +
    +

    To enable your service on every reboot

    +
    sudo systemctl enable --now mountpoint-s3.service
    +
    +
    +

    Information

    +

    The service name is based on the file name i.e. /etc/systemd/system/mountpoint-s3.service +so you can just use mountpoint-s3 instead of mountpoint-s3.service on all +above systemctl commands.

    +

    To debug you can use:

    +

    sudo systemctl status mountpoint-s3.service -l --no-pager or, +journalctl -u mountpoint-s3 --no-pager | tail -50

    +
    +

Verify that the service is running successfully in the background as the root user:

    +
    ps aux | grep mount-s3
    +
    +root       13585  0.0  0.0 1060504 11672 ?       Sl   02:00   0:00 /usr/bin/mount-s3 bucket1 /home/ubuntu/bucket1 --profile nerc --endpoint-url https://stack.nerc.mghpcc.org:13808 --read-only --allow-other --force-path-style --debug
    +
    +
    Stopping the service
    +

    Stopping the service causes the container to unmount from the mount point.

    +

    To disable your service on every reboot:

    +
    sudo systemctl disable --now mountpoint-s3.service
    +
    +

    Confirm the Service is not in "Active" Status:

    +
    sudo systemctl status mountpoint-s3.service
    +
    +○ mountpoint-s3.service - Mountpoint for Amazon S3 mount
    +    Loaded: loaded (/etc/systemd/system/mountpoint-s3.service; disabled; vendor p>
    +    Active: inactive (dead)
    +
    +

    Unmount the local mount point:

    +

If the local directory "bucket1" is already mounted, unmount it (replace ~/bucket1 with the location in which you have it mounted):

    +
    fusermount -u ~/bucket1
    +
    +

    Or,

    +
    sudo umount -l ~/bucket1
    +
    +

    Now reboot your VM:

    +
    sudo reboot
    +
    +
    +

    Further Reading

    +

    For further details, including instructions for downloading and installing +Mountpoint on various Linux operating systems, please refer to this resource.

    +
    +

    2. Using Goofys

    +

    Install goofys

    +

    Access your virtual machine using SSH. Update the packages on your system and +install wget to be able to download the goofys binary directly to your VM:

    +
    sudo apt update && sudo apt upgrade
    +sudo apt install wget
    +
    +

    Now, navigate to your home directory:

    +
    cd
    +
    +

    Use wget to download the goofys binary:

    +
    wget https://github.com/kahing/goofys/releases/latest/download/goofys
    +
    +

    Make the goofys binary executable:

    +
    chmod +x goofys
    +
    +

    Copy the goofys binary to somewhere in your path

    +
    sudo cp goofys /usr/bin/
    +
    +
    +

    To update goofys in the future

    +

In order to update to a newer version of the goofys binary, you need to do the following:

    +
      +
    • +

      make sure that the data in the NERC Object Storage container is not actively +used by any applications on your VM.

      +
    • +
    • +

      remove the goofys binary from ubuntu's home directory as well as from /usr/bin/.

      +
    • +
    • +

      execute the above commands (those starting with wget and chmod) from your +home directory again and copy it to your path i.e. /usr/bin/.

      +
    • +
    • +

      reboot your VM.

      +
    • +
    +
    +

    Provide credentials to configure goofys

    +

    Make a folder to store your credentials:

    +
    mkdir ~/.aws/
    +
    +

    Create file ~/.aws/credentials using your favorite text editor (for example +nano or vim). Add the following contents to it which requires the EC2_ACCESS_KEY +and EC2_SECRET_KEY keys that you noted from ec2rc.sh file (during the "Setup +and enable your S3 API credentials" step):

    +
    [nerc]
    +aws_access_key_id=<EC2_ACCESS_KEY>
    +aws_secret_access_key=<EC2_SECRET_KEY>
    +
    +

    Save the file and exit the text editor.

    +

    Create a local directory as a mount folder

    +
    mkdir -p ~/bucket1
    +
    +

    Mount the Container locally using goofys

    +

    The object storage container i.e. "bucket1" will be mounted in the directory ~/bucket1

    +
    goofys -o allow_other --region RegionOne --profile "nerc" --endpoint "https://stack.nerc.mghpcc.org:13808" bucket1 ~/bucket1
    +
    +

    In this command,

    +
      +
    • +

goofys is the goofys binary; since we already copied it into the /usr/bin/ path, we don't need to specify the full path.

      +
    • +
    • +

-o passes mount options to goofys (here, allow_other) and is handled differently from the double-dash flags such as --profile.

      +
    • +
    • +

allow_other allows other users to access the mounted filesystem; this is only permitted if user_allow_other is set in /etc/fuse.conf.

      +
    • +
    • +

      --profile corresponds to the name given on the ~/.aws/credentials file i.e. +[nerc].

      +
    • +
    • +

      --endpoint corresponds to the Object Storage endpoint url for NERC Object Storage. +You don't need to modify this url.

      +
    • +
    • +

      bucket1 is the name of the container which contains the NERC Object Storage +resources.

      +
    • +
    • +

      ~/bucket1 is the location of the folder in which you want to mount the Object +Storage filesystem.

      +
    • +
    +

    In order to test whether the mount was successful, navigate to the directory in +which you mounted the NERC container repository, for example:

    +
    cd ~/bucket1
    +
    +

    Use the ls command to list its content. You should see the output similar to this:

    +
    ls
    +
    +README.md   image.png   test-file
    +
    +

    The NERC Object Storage container repository has now been mounted using goofys.
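To double-check from the shell that the FUSE mount is active, you can also inspect the mount table, for example:

    # The goofys mount should appear as a fuse filesystem on ~/bucket1
    mount | grep bucket1

    # Show the reported size and usage of the mounted container
    df -h ~/bucket1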

    +
    +

    Very Important Information

    +

Please note that none of these mounts are persistent if your VM is stopped or rebooted in the future. After each reboot, you will need to execute the mounting command as mentioned above again.

    +
    +

    Mounting on system startup

    +

Mounts can be set to occur automatically during system initialization so that mounted file systems persist even after the VM reboots.

    +

    Create directory in /root folder in which you will store the credentials:

    +
    sudo mkdir /root/.aws
    +
    +

    Copy the credentials you created in your local directory to the .aws directory +in the /root folder:

    +
    sudo cp ~/.aws/credentials /root/.aws/
    +
    +

    Configure mounting of the bucket1 container

    +

    Open the file /etc/fstab using your favorite command line text editor for editing. +You will need sudo privileges for that. For example, if you want to use nano, execute +this command:

    +
    sudo nano /etc/fstab
    +
    +

    Proceed with one of the methods below depending on whether you wish to have the +"bucket1" repository automatically mounted at system startup:

    +
    Method 1: Mount the repository automatically on system startup
    +

    Add the following line to the /etc/fstab file:

    +
    /usr/bin/goofys#bucket1 /home/ubuntu/bucket1 fuse _netdev,allow_other,--dir-mode=0777,--file-mode=0666,--region=RegionOne,--profile=nerc,--endpoint=https://stack.nerc.mghpcc.org:13808 0 0
    +
    +
    Method 2: Do NOT mount the repository automatically on system startup
    +

    Add the following line to the /etc/fstab file:

    +
    /usr/bin/goofys#bucket1 /home/ubuntu/bucket1 fuse noauto,_netdev,allow_other,--dir-mode=0777,--file-mode=0666,--region=RegionOne,--profile=nerc,--endpoint=https://stack.nerc.mghpcc.org:13808 0 0
    +
    +

    The difference between this code and the code mentioned in Method 1 is the addition +of the option noauto.

    +
    +

    Content of /etc/fstab

    +

    In the /etc/fstab content as added above:

    +
    grep goofys /etc/fstab
    +
    +/usr/bin/goofys#bucket1 /home/ubuntu/bucket1 fuse _netdev,allow_other,--dir-mode=0777,--file-mode=0666,--region=RegionOne,--profile=nerc,--endpoint=https://stack.nerc.mghpcc.org:13808 0 0
    +
    +
      +
    • +

/usr/bin/goofys is the location of your goofys binary.

      +
    • +
    • +

      /home/ubuntu/bucket1 is the location in which you wish to mount bucket1 +container from your NERC Object Storage.

      +
    • +
    • +

      --profile=nerc is the name you mentioned on the ~/.aws/credentials file +i.e. [nerc].

      +
    • +
    +
    +

    Once you have added that line to your /etc/fstab file, reboot the VM. After the +system has restarted, check whether the NERC Object Storage repository i.e. bucket1 +is mounted in the directory specified by you i.e. in /home/ubuntu/bucket1.

    +
    +

    Important Information

    +

    If you just want to test your mounting command written in /etc/fstab without +"Rebooting" the VM you can also do that by running sudo mount -a. +And if you want to stop automatic mounting of the container from the NERC +Object Storage repository i.e. bucket1, remove the line you added in the +/etc/fstab file. You can also comment it out by adding # character in front +of that line. After that, reboot the VM. Optionally, you can also remove the +goofys binary and the credentials file located at ~/.aws/credentials if +you no longer want to use goofys.

    +
    +

    3. Using S3FS

    +

    Install S3FS

    +

    Access your virtual machine using SSH. Update the packages on your system and install +s3fs:

    +
    sudo apt update && sudo apt upgrade
    +sudo apt install s3fs
    +
    +
    +

    For RedHat/Rocky/AlmaLinux

    +

The RedHat/Rocky/AlmaLinux repositories do not have s3fs. Therefore, you will need to compile it yourself.

    +

    First, using your local computer, visit the following website (it contains +the releases of s3fs): https://github.com/s3fs-fuse/s3fs-fuse/releases/latest.

    +

    Then, in the section with the most recent release find the part Assets. +From there, find the link to the zip version of the Source code.

    +

    S3FS  Latest Assets Download

    +

Right-click on one of the Source code links, i.e. "v1.94.zip", and select "Copy link address". You will need this link later as a parameter for the wget command to download it to your virtual machine.

    +

    Access your VM on the NERC OpenStack using the web console or SSH.

    +

    Update your packages:

    +
    sudo dnf update -y
    +
    +

    Install the prerequisites including fuse, the C++ compiler and make:

    +
    sudo dnf config-manager --set-enabled crb
    +
    +sudo dnf install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel wget unzip
    +
    +# OR, sudo dnf --enablerepo=crb install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel wget unzip
    +
    +

    Now, use wget to download the source code. Replace https://github.com/s3fs-fuse/s3fs-fuse/archive/refs/tags/v1.94.zip with the link to the source code you found previously:

    +
    wget https://github.com/s3fs-fuse/s3fs-fuse/archive/refs/tags/v1.94.zip
    +
    +

    Use the ls command to verify that the zip archive has been downloaded:

    +
    ls
    +
    +

    Unzip the archive (replace v1.94.zip with the name of the archive you downloaded):

    +
    unzip v1.94.zip
    +
    +

    Use the ls command to find the name of the folder you just extracted:

    +
    ls
    +
    +

    Now, navigate to that folder (replace s3fs-fuse-1.94 with the name of the folder you just extracted):

    +
    cd s3fs-fuse-1.94
    +
    +

    Perform the compilation by executing the following commands in order:

    +
    ./autogen.sh
    +./configure
    +make
    +sudo make install
    +
    +

    s3fs should now be installed in /usr/local/bin/s3fs.
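You can confirm that the build and installation succeeded by checking the version, for example:

    # Print the installed s3fs version
    s3fs --version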

    +
    +

    Create a file which will store the S3 Credentials

    +

Store your S3 credentials in the file ${HOME}/.passwd-s3fs and set "owner-only" permissions. Run the following command to store the pair of EC2_ACCESS_KEY and EC2_SECRET_KEY keys that you noted from the ec2rc.sh file (above) in the file.

    +
    echo EC2_ACCESS_KEY:EC2_SECRET_KEY > ${HOME}/.passwd-s3fs
    +
    +

    Change the permissions of this file to 600 to set "owner-only" permissions:

    +
    chmod 600 ${HOME}/.passwd-s3fs
    +
    +

    Create a Container in the NERC Project's Object storage

    +

    We create it using the OpenStack Swift client:

    +
    sudo apt install python3-swiftclient
    +
    +

    Let's call the Container "bucket1"

    +
    swift post bucket1
    +
    +
    +
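You can verify that the container exists by listing the containers in your project with the same Swift client, for example:

    # List all containers in the current project; "bucket1" should appear
    swift list

    # Show details (object count, size) for the new container
    swift stat bucket1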

    More about Swift Interface

    +

    You can read more about using Swift Interface for NERC Object Storage here.

    +
    +

    Create a local directory as a mount point in your VM

    +
    mkdir -p ~/bucket1
    +
    +

    Mount the Container locally using s3fs

    +

    The object storage container i.e. "bucket1" will be mounted in the directory ~/bucket1

    +
    s3fs bucket1 ~/bucket1 -o passwd_file=~/.passwd-s3fs -o url=https://stack.nerc.mghpcc.org:13808 -o use_path_request_style -o umask=0002
    +
    +

    Unmount the local mount point

    +

If the local directory "bucket1" is already mounted, unmount it (replace ~/bucket1 with the location in which you have it mounted):

    +
    sudo umount -l ~/bucket1
    +
    +

    Configure mounting of the bucket1 repository

    +

    Open the file /etc/fstab using your favorite command line text editor for editing. +You will need sudo privileges for that. For example, if you want to use nano, execute +this command:

    +
    sudo nano /etc/fstab
    +
    +

    Proceed with one of the methods below depending on whether you wish to have the +"bucket1" repository automatically mounted at system startup:

    +

    Method 1: Mount the repository automatically on startup

    +

    Add the following line to the /etc/fstab file:

    +
    /usr/bin/s3fs#bucket1 /home/ubuntu/bucket1 fuse passwd_file=/home/ubuntu/.passwd-s3fs,_netdev,allow_other,use_path_request_style,uid=0,umask=0222,mp_umask=0222,gid=0,url=https://stack.nerc.mghpcc.org:13808 0 0
    +
    +

    Method 2: Do NOT mount the repository automatically on startup

    +

    Add the following line to the /etc/fstab file:

    +
    /usr/bin/s3fs#bucket1 /home/ubuntu/bucket1 fuse noauto,passwd_file=/home/ubuntu/.passwd-s3fs,_netdev,allow_other,use_path_request_style,uid=0,umask=0222,mp_umask=0222,gid=0,url=https://stack.nerc.mghpcc.org:13808 0 0
    +
    +

    The difference between this code and the code mentioned in Method 1 is the addition +of the option noauto.

    +
    +

    Content of /etc/fstab

    +

    In the /etc/fstab content as added above:

    +
      +
    • +

      /usr/bin/s3fs is the location of your s3fs binary. If you installed +it using apt on Debian or Ubuntu, you do not have to change anything here. +If you are using a self-compiled version of s3fs created on RedHat/Rocky/AlmaLinux +as explained above, that location is /usr/local/bin/s3fs.

      +
    • +
    • +

      /home/ubuntu/.passwd-s3fs is the location of the file which contains +the key pair used for mounting the "bucket1" repository as we named it in previous +step.

      +
    • +
    +
    +

    4. Using Rclone

    +

    Installing Rclone

    +

Install rclone as described here, or, for our Ubuntu-based VM, simply SSH into the VM and run the following command as the default ubuntu user:

    +
    curl -sSL https://rclone.org/install.sh | sudo bash
    +
    +

    Configuring Rclone

    +

If you run rclone config file, you will see where the default config location is for you.

    +
    rclone config file
    +Configuration file doesn't exist, but rclone will use this path:
    +/home/ubuntu/.config/rclone/rclone.conf
    +
    +

So create the config file at the path mentioned above, /home/ubuntu/.config/rclone/rclone.conf, and add the following entry with the name [nerc]:

    +
    [nerc]
    +type = s3
    +env_auth = false
    +provider = Other
    +endpoint = https://stack.nerc.mghpcc.org:13808
    +acl = public-read
    +access_key_id = <YOUR_EC2_ACCESS_KEY_FROM_ec2rc_FILE>
    +secret_access_key = <YOUR_EC2_SECRET_KEY_FROM_ec2rc_FILE>
    +location_constraint =
    +server_side_encryption =
    +
    +

    More about the config for AWS S3 compatible API can be seen here.

    +
    +

    Important Information

    +

Note that if you set env_auth = true, rclone will take the credentials from environment variables, so you should not put them in the config file in that case.

    +
    +

    Listing the Containers and Contents of a Container

    +

    Once your Object Storage has been configured in Rclone, you can then use the +Rclone interface to List all the Containers with the "lsd" command

    +
    rclone lsd "nerc:"
    +
    +

    Or,

    +
    rclone lsd "nerc:" --config=rclone.conf
    +
    +

    For e.g.,

    +
    rclone lsd "nerc:" --config=rclone.conf
    +        -1 2024-04-23 20:21:43        -1 bucket1
    +
    +

To list the files and folders available within a container, i.e. "bucket1" in this case, we can use the "ls" command:

    +
    rclone ls "nerc:bucket1/"
    +  653 README.md
    +    0 image.png
    +   12 test-file
    +
    +

    Create a mount point directory

    +
    mkdir -p bucket1
    +
    +

    Mount the container with Rclone

    +

Start the mount like this, where /home/ubuntu/bucket1 is an empty existing directory:

    +
    rclone -vv --vfs-cache-mode full mount nerc:bucket1 /home/ubuntu/bucket1 --allow-other --allow-non-empty
    +
    +

    On Linux, you can run mount in either foreground or background (aka daemon) +mode. Mount runs in foreground mode by default. Use the --daemon flag to force +background mode i.e.

    +
    rclone mount remote:path/to/files /path/to/local/mount --daemon
    +
    +

    When running in background mode the user will have to stop the mount manually:

    +
    fusermount -u /path/to/local/mount
    +
    +

    Or,

    +
    sudo umount -l /path/to/local/mount
    +
    +

Now we have the mount running, with background mode also enabled. Let's say there is a scenario where we want the mount to be persistent after a server/machine reboot. There are a few ways to do it:

    +

    Create systemd unit file i.e. rclone-mount.service

    +

Create a systemd service unit file that is going to execute the above mount command and dynamically mount or unmount the container:

    +
    sudo nano /etc/systemd/system/rclone-mount.service
    +
    +

    Edit the file to look like the below:

    +
    [Unit]
    +Description=rclone mount
    +Documentation=http://rclone.org/docs/
    +AssertPathIsDirectory=/home/ubuntu/bucket1
    +After=network-online.target
    +
    +[Service]
    +Type=simple
    +User=root
    +Group=root
    +ExecStart=/usr/bin/rclone mount \
+        --config=/home/ubuntu/.config/rclone/rclone.conf \
    +        --vfs-cache-mode full \
    +        nerc:bucket1 /home/ubuntu/bucket1 \
    +                --allow-other \
    +                --allow-non-empty
    +
    +ExecStop=/bin/fusermount -u /home/ubuntu/bucket1
    +Restart=always
    +RestartSec=10
    +
    +[Install]
    +WantedBy=default.target
    +
    +

    The service is launched as soon as the network is up and running, it mounts the +bucket and remains active. Stopping the service causes the container to unmount +from the mount point.

    +

    Launch the service using a service manager

    +

Now reload the systemd daemon:

    +
    sudo systemctl daemon-reload
    +
    +

    Start your service

    +
    sudo systemctl start rclone-mount.service
    +
    +

    To check the status of your service

    +
    sudo systemctl status rclone-mount.service
    +
    +

    To enable your service on every reboot

    +
    sudo systemctl enable --now rclone-mount.service
    +
    +
    +

    Information

    +

    The service name is based on the file name i.e. /etc/systemd/system/rclone-mount.service +so you can just use rclone-mount instead of rclone-mount.service on all +above systemctl commands.

    +

    To debug you can use:

    +

    sudo systemctl status rclone-mount.service -l --no-pager or, +journalctl -u rclone-mount --no-pager | tail -50

    +
    +

    Verify, if the container is mounted successfully:

    +
    df -hT | grep rclone
    +nerc:bucket1   fuse.rclone  1.0P     0  1.0P   0% /home/ubuntu/bucket1
    +
    +

    5. Using JuiceFS

    +

    Preparation

    +

    A JuiceFS file system consists of two parts:

    +
      +
    • +

      Object Storage: Used for data storage.

      +
    • +
    • +

      Metadata Engine: A database used for storing metadata. In this case, we will +use a durable Redis in-memory database service that +provides extremely fast performance.

      +
    • +
    +

    Installation of the JuiceFS client

    +

    Access your virtual machine using SSH. Update the packages on your system and install +the JuiceFS client:

    +
    sudo apt update && sudo apt upgrade
    +# default installation path is /usr/local/bin
    +curl -sSL https://d.juicefs.com/install | sh -
    +
    +

Check whether the JuiceFS client is running in the background:

    +
    ps aux | grep juicefs
    +ubuntu     16275  0.0  0.0   7008  2212 pts/0    S+   18:44   0:00 grep --color=auto juicefs
    +
    +
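
Alternatively, assuming the install script placed the juicefs binary on your PATH (the default installation path is /usr/local/bin, as noted above), you can also confirm the client is installed by printing its version:

+
juicefs version
+
+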

    Installing and Configuring Redis database

    +

    Install Redis by running:

    +
    sudo apt install redis-server
    +
    +

    This will download and install Redis and its dependencies. Following this, there +is one important configuration change to make in the Redis configuration file, +which was generated automatically during the installation.

    +

You can find the line number of the supervised directive by running:

    +
    sudo cat /etc/redis/redis.conf -n | grep supervised
    +
    +228  #   supervised no      - no supervision interaction
    +229  #   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
    +231  #   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
    +232  #   supervised auto    - detect upstart or systemd method based on
    +236  supervised no
    +
    +

    Open this file with your preferred text editor:

    +
    sudo nano /etc/redis/redis.conf -l
    +
    +

    Inside the config file, find the supervised directive. This directive allows you +to declare an init system to manage Redis as a service, providing you with more +control over its operation. The supervised directive is set to no by default. +Since you are running Ubuntu, which uses the systemd +init system, change this to systemd as shown here:
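
After the change, that line of /etc/redis/redis.conf would read:

+
supervised systemd
+
+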

    +

    Redis Server Config

    +
      +
    • Binding to localhost:
    • +
    +

By default, Redis is only accessible from localhost. We can verify that by locating the bind line:

    +
    sudo cat /etc/redis/redis.conf -n | grep bind
    +
    +...
    +68  bind 127.0.0.1 ::1
    +...
    +
    +

    and make sure it is uncommented (remove the # if it exists) by editing this file +with your preferred text editor.

    +

Save and close the file when you are finished. If you used nano to edit the file, do so by pressing CTRL + X, Y, then ENTER.

    +

    Then, restart the Redis service to reflect the changes you made to the configuration +file:

    +
    sudo systemctl restart redis.service
    +
    +

    With that, you've installed and configured Redis and it's running on your machine. +Before you begin using it, you should first check whether Redis is functioning +correctly.

    +

    Start by checking that the Redis service is running:

    +
    sudo systemctl status redis
    +
    +

    If it is running without any errors, this command will show "active (running)" Status.

    +

    To test that Redis is functioning correctly, connect to the server using redis-cli, +Redis's command-line client:

    +
    redis-cli
    +
    +

    In the prompt that follows, test connectivity with the ping command:

    +
    ping
    +
    +

    Output:

    +
    PONG
    +
    +

    Also, check that binding to localhost is working fine by running the following +netstat command:

    +
    sudo netstat -lnp | grep redis
    +
    +tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      16967/redis-server
    +tcp6       0      0 ::1:6379                :::*                    LISTEN      16967/redis-server
    +
    +
    +

    Important Note

    +

    The netstat command may not be available on your system by default. If this +is the case, you can install it (along with a number of other handy networking +tools) with the following command: sudo apt install net-tools.

    +
    +
    Configuring a Redis Password
    +

    Configuring a Redis password enables one of its two built-in security features — +the auth command, which requires clients to authenticate to access the database. +The password is configured directly in Redis's configuration file, +/etc/redis/redis.conf.

    +

    First, we need to locate the line where the requirepass directive is mentioned:

    +
    sudo cat /etc/redis/redis.conf -n | grep requirepass
    +
    +...
    +790  # requirepass foobared
    +...
    +
    +

Then open the Redis config file, i.e. /etc/redis/redis.conf, again with your preferred editor:

    +
    sudo nano /etc/redis/redis.conf -l
    +
    +

    Uncomment it by removing the #, and change foobared to a secure password.
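
Using the <your_redis_password> placeholder used throughout this guide, the edited line would look like:

+
requirepass <your_redis_password>
+
+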

    +
    +

    How to generate random password?

    +

You can use openssl to generate a random password by running the following command locally:

    +

    openssl rand 12 | openssl base64 -A

    +

    <your_redis_password>

    +
    +

After saving and closing the file, you need to restart the Redis service to reflect the changes you made to the configuration file by running:

    +
    sudo systemctl restart redis.service
    +
    +

    To test that the password works, open up the Redis client:

    +
    redis-cli
    +
    +

    The following shows a sequence of commands used to test whether the Redis password +works. The first command tries to set a key to a value before authentication:

    +
    127.0.0.1:6379> set key1 10
    +
    +

    That won’t work because you didn't authenticate, so Redis returns an error:

    +

    Output:

    +

    (error) NOAUTH Authentication required.

    +

    The next command authenticates with the password specified in the Redis configuration +file:

    +
    127.0.0.1:6379> auth <your_redis_password>
    +
    +

    Redis acknowledges:

    +

    Output:

    +
    OK
    +
    +

    After that, running the previous command again will succeed:

    +
    127.0.0.1:6379> set key1 10
    +
    +

    Output:

    +
    OK
    +
    +

    get key1 queries Redis for the value of the new key.

    +
    127.0.0.1:6379> get key1
    +
    +

    Output:

    +
    "10"
    +
    +

    After confirming that you're able to run commands in the Redis client after +authenticating, you can exit redis-cli:

    +
    127.0.0.1:6379> quit
    +
    +

Authorizing S3 access using juicefs config

    +

You can store the S3 credentials using juicefs config, which allows you to add the Access Key and Secret Key for the file system, by running:

    +
    juicefs config \
    +--access-key=<EC2_ACCESS_KEY> \
    +--secret-key=<EC2_SECRET_KEY> \
    +redis://default:<your_redis_password>@127.0.0.1:6379/1
    +
    +

    Formatting file system

    +
    sudo juicefs format --storage s3 --bucket https://stack.nerc.mghpcc.org:13808/<your_container> redis://default:<your_redis_password>@127.0.0.1:6379/1 myjfs
    +
    +

    Mounting file system manually

    +
    Create a local directory as a mount point folder
    +
    mkdir -p ~/bucket1
    +
    +
    Mount the Container locally using juicefs
    +

    The formatted file system "myjfs" will be mounted in the directory ~/bucket1 by +running the following command:

    +
    juicefs mount redis://default:<your_redis_password>@127.0.0.1:6379/1 ~/bucket1
    +
    +

    Mount JuiceFS at Boot Time

    +

    After JuiceFS has been successfully formatted, follow this guide to set up auto-mount +on boot.

    +

We can specify the --update-fstab option on the mount command, which will automatically set up the mount at boot:

    +
    sudo juicefs mount --update-fstab --max-uploads=50 --writeback --cache-size 204800 <META-URL> <MOUNTPOINT>
    +
    +grep <MOUNTPOINT> /etc/fstab
    +<META-URL> <MOUNTPOINT> juicefs _netdev,max-uploads=50,writeback,cache-size=204800 0 0
    +
    +ls -l /sbin/mount.juicefs
    +lrwxrwxrwx 1 root root 22 Apr 24 20:25 /sbin/mount.juicefs -> /usr/local/bin/juicefs
    +
    +

    For example,

    +
    sudo juicefs mount --update-fstab --max-uploads=50 --writeback --cache-size 204800 redis://default:<your_redis_password>@127.0.0.1:6379/1 ~/bucket1
    +
    +grep juicefs /etc/fstab
    +redis://default:<your_redis_password>@127.0.0.1:6379/1  /home/ubuntu/bucket1  juicefs  _netdev,cache-size=204800,max-uploads=50,writeback  0 0
    +
    +ls -l /sbin/mount.juicefs
    +lrwxrwxrwx 1 root root 22 Apr 24 20:25 /sbin/mount.juicefs -> /usr/local/bin/juicefs
    +
    +

    Automating Mounting with systemd service unit file

    +

If you're using JuiceFS and need to apply settings like the database access password, S3 access key, and secret key, which are hidden from the command line using environment variables for security reasons, it may not be easy to configure them in the /etc/fstab file. In such cases, you can utilize systemd to mount your JuiceFS instance.

    +

    Here's how you can set up your systemd configuration file:

    +

Create a systemd service unit file that is going to execute the above mount command and dynamically mount or unmount the container:

    +
    sudo nano /etc/systemd/system/juicefs-mount.service
    +
    +

    Edit the file to look like the below:

    +
    [Unit]
    +Description=JuiceFS mount
    +Documentation=https://juicefs.com/docs/
    +AssertPathIsDirectory=/home/ubuntu/bucket1
    +After=network-online.target
    +
    +[Service]
    +Type=simple
    +User=root
    +Group=root
    +ExecStart=/usr/local/bin/juicefs mount \
    +"redis://default:<your_redis_password>@127.0.0.1:6379/1" \
    +/home/ubuntu/bucket1 \
    +--no-usage-report \
    +--writeback \
    +--cache-size 102400 \
    +--cache-dir /home/juicefs_cache \
    +--buffer-size 2048 \
    +--open-cache 0 \
    +--attr-cache 1 \
    +--entry-cache 1 \
    +--dir-entry-cache 1 \
    +--cache-partial-only false \
    +--free-space-ratio 0.1 \
    +--max-uploads 20 \
    +--max-deletes 10 \
    +--backup-meta 0 \
    +--log /var/log/juicefs.log \
    +--get-timeout 300 \
    +--put-timeout 900 \
    +--io-retries 90 \
    +--prefetch 1
    +
    +ExecStop=/usr/local/bin/juicefs umount /home/ubuntu/bucket1
    +Restart=always
    +RestartSec=10
    +
    +[Install]
    +WantedBy=default.target
    +
    +
    +

    Important Information

    +

Feel free to modify the options and environments according to your needs. Please make sure you change <your_redis_password> to your own Redis password that was set up by following this step.

    +
    +
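
As a sketch of the environment-variable approach mentioned above, you could keep the Redis password out of the unit file by loading it from a root-only environment file with systemd's EnvironmentFile= directive. The /etc/juicefs/juicefs.env path below is just an example, and META_PASSWORD is a variable that recent JuiceFS releases document for supplying the metadata password; verify that your installed version supports it before relying on this.

+
# /etc/juicefs/juicefs.env  (example path; restrict access, e.g. chmod 600)
+META_PASSWORD=<your_redis_password>
+
+# Then, in the [Service] section of /etc/systemd/system/juicefs-mount.service,
+# load the file and drop the password from the metadata URL, keeping your
+# other mount options as before:
+EnvironmentFile=/etc/juicefs/juicefs.env
+ExecStart=/usr/local/bin/juicefs mount \
+redis://127.0.0.1:6379/1 \
+/home/ubuntu/bucket1 \
+--log /var/log/juicefs.log
+
+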

The service is launched as soon as the network is up and running; it mounts the bucket and remains active. Stopping the service causes the container to unmount from the mount point.

    +
    Launch the service as daemon
    +

Now reload the systemd daemon:

    +
    sudo systemctl daemon-reload
    +
    +

    Start your service

    +
    sudo systemctl start juicefs-mount.service
    +
    +

    To check the status of your service

    +
    sudo systemctl status juicefs-mount.service
    +
    +

    To enable your service on every reboot

    +
    sudo systemctl enable --now juicefs-mount.service
    +
    +
    +

    Information

    +

    The service name is based on the file name i.e. /etc/systemd/system/juicefs-mount.service +so you can just use juicefs-mount instead of juicefs-mount.service on all +above systemctl commands.

    +

    To debug you can use:

    +

    sudo systemctl status juicefs-mount.service -l --no-pager or, +journalctl -u juicefs-mount --no-pager | tail -50

    +
    +

    Verify, if the container is mounted successfully:

    +
    df -hT | grep juicefs
    +JuiceFS:myjfs  fuse.juicefs  1.0P  4.0K  1.0P   1% /home/ubuntu/bucket1
    +
    +

    Data Synchronization

    +

juicefs sync is a powerful data migration tool which can copy data across all supported storage systems, including object storage, JuiceFS itself, and local file systems; you can freely copy data between any of these systems.

    +

    Command Syntax

    +

To synchronize data from SRC, i.e. the source data address or path, to DST, i.e. the destination address or path; it works for both directories and files.

    +
    juicefs sync [command options] SRC DST
    +
    +
    +

    More Information

    +

    [command options] are synchronization options. See command reference +for more details.

    +
    +

    Address format:

    +
    [NAME://][ACCESS_KEY:SECRET_KEY[:TOKEN]@]BUCKET[.ENDPOINT][/PREFIX]
    +
    +# MinIO only supports path style
    +minio://[ACCESS_KEY:SECRET_KEY[:TOKEN]@]ENDPOINT/BUCKET[/PREFIX]
    +
    +

    Synchronize between Object Storage and JuiceFS

    +

The following command synchronizes the movies container on Object Storage to your local JuiceFS file system, i.e. ~/jfs:

    +
    # create local folder
    +mkdir -p ~/jfs
    +# mount JuiceFS
    +juicefs mount -d redis://default:<your_redis_password>@127.0.0.1:6379/1 ~/jfs
    +# synchronize
    +juicefs sync --force-update s3://<EC2_ACCESS_KEY>:<EC2_SECRET_KEY>@movies.stack.nerc.mghpcc.org:13808/ ~/jfs/
    +
    +

The following command synchronizes the images directory from your local JuiceFS file system, i.e. ~/jfs, to the movies container on Object Storage:

    +
    # mount JuiceFS
    +juicefs mount -d redis://default:<your_redis_password>@127.0.0.1:6379/1 ~/jfs
    +# create local folder and add some file to this folder
    +mkdir -p ~/jfs/images/
    +cp "test.image" ~/jfs/images/
    +# synchronization
    +juicefs sync --force-update ~/jfs/images/ s3://<EC2_ACCESS_KEY>:<EC2_SECRET_KEY>@movies.stack.nerc.mghpcc.org:13808/images/
    +
    +

    How to destroy a file system

    +

    After JuiceFS has been successfully formatted, follow this guide to clean up.

    +

    JuiceFS client provides the destroy command to completely destroy a file system, +which will result in:

    +
      +
    • +

      Deletion of all metadata entries of this file system

      +
    • +
    • +

      Deletion of all data blocks of this file system

      +
    • +
    +

    Use this command in the following format:

    +
    juicefs destroy <METADATA URL> <UUID>
    +
    +

    Here,

    +

    <METADATA URL>: The URL address of the metadata engine

    +

    <UUID>: The UUID of the file system

    +

    Find the UUID of existing mount file system

    +

You can run either juicefs config redis://default:<your_redis_password>@127.0.0.1:6379/1 or juicefs status redis://default:<your_redis_password>@127.0.0.1:6379/1 to get detailed information about the mounted file system, i.e. "myjfs", that was set up by following this step. The output looks similar to what is shown here:

    +
    {
    +...
    +"Name": "myjfs",
    +"UUID": "<UUID>",
    +...
    +}
    +
    +

    Destroy a file system

    +

    Please note the "UUID" that you will need to run juicefs destroy command as +shown below:

    +
    juicefs destroy redis://default:<your_redis_password>@127.0.0.1:6379/1 <UUID> --force
    +
    +

    When destroying a file system, the client will issue a confirmation prompt. Please +make sure to check the file system information carefully and enter y after confirming +it is correct.

    +
    +

    Danger

    +

    The destroy operation will cause all the data in the database and the object +storage associated with the file system to be deleted. Please make sure to +back up the important data before operating!

    +
    +
\ No newline at end of file
diff --git a/openstack/persistent-storage/object-storage/index.html b/openstack/persistent-storage/object-storage/index.html
new file mode 100644
index 00000000..7e9ca97e
--- /dev/null
+++ b/openstack/persistent-storage/object-storage/index.html
@@ -0,0 +1,5246 @@

    Object Storage

    +

    OpenStack Object Storage (Swift) is a highly available, distributed, eventually consistent +object/blob store. Object Storage is used to manage cost-effective and long-term +preservation and storage of large amounts of data across clusters of standard server +hardware. The common use cases include the storage, backup and archiving of unstructured +data, such as documents, static web content, images, video files, and virtual +machine images, etc.

    +

End-users can interact with the object storage system through a RESTful HTTP API, i.e. the Swift API, or use one of the many client libraries that exist for all of the popular programming languages, such as Java, Python, Ruby, and C#, based on provisioned quotas. Swift is also compatible with Amazon's Simple Storage Service (S3) API, which makes it easier for end-users to move data between multiple storage endpoints and supports hybrid cloud setups.

    +

    1. Access by Web Interface i.e. Horizon Dashboard

    +

    To get started, navigate to Project -> Object Store -> Containers.

    +

    Object Store

    +

    Create a Container

    +

    In order to store objects, you need at least one Container to put them in. +Containers are essentially top-level directories. Other services use the +terminology buckets.

    +

    Click Create Container. Give your container a name.

    +

    Create a Container

    +
    +

    Important Note

    +

    The container name needs to be unique, not just within your project but +across all of our OpenStack installation. If you get an error message +after trying to create the container, try giving it a more unique name.

    +
    +

    For now, leave the "Container Access" set to Private.

    +

    Upload a File

    +

    Click on the name of your container, and click the Upload File icon as shown below:

    +

    Container Upload File

    +

    Click Browse and select a file from your local machine to upload.

    +

    It can take a while to upload very large files, so if you're just testing it out +you may want to use a small text file or similar.

    +

    Container Upload Popup

    +

    By default the File Name will be the same as the original file, but you can change +it to another name. Click "Upload File". Your file will appear inside the container +as shown below once successful:

    +

    Successful File Upload

    +

    Using Folders

    +

Object storage does not, by definition, organize objects into folders, but you can use folders to keep your data organized.

    +

    On the backend, the folder name is actually just prefixed to the object name, but +from the web interface (and most other clients) it works just like a folder.
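
For example (the names here are purely illustrative), a file cat.png uploaded into a folder named photos is stored as a single object whose name carries the prefix:

+
photos/cat.png
+
+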

    +

    To add a folder, click on the "+ folder" icon as shown below:

    +

    Upload Folder on Container

    +

    Make a container public

    +

    Making a container public allows you to send your collaborators a URL that gives +access to the container's contents.

    +
    +

    Hosting a static website using public Container

    +

You can use a public Container to host a static website. On a static website, individual webpages include static website content (HTML, CSS etc.). They might also contain client-side scripts (e.g. JavaScript).

    +
    +

    Click on your container's name, then check the "Public Access" checkbox. Note that +"Public Access" changes from "Disabled" to "Link".

    +

    Setting Container Public Access

    +

Click "Link" to see a list of objects in the container. This is the URL of your container.

    +
    +

    Important Note

    +

    Anyone who obtains the URL will be able to access the container, so this +is not recommended as a way to share sensitive data with collaborators.

    +
    +

    In addition, everything inside a public container is public, so we recommend creating +a separate container specifically for files that should be made public.

    +

    To download the file test-file we would use the following url.
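
+
https://stack.nerc.mghpcc.org:13808/v1/AUTH_4c5bccef73c144679d44cbc96b42df4e/unique-container-test/test-file
+
+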

    +
    +

    Very Important Information

    +

Here 4c5bccef73c144679d44cbc96b42df4e is the specific Tenant ID or Project ID. You can get this value when you click on the public container's Link in a new browser tab.

    +
    +

    Or, you can just click on "Download" next to the file's name as shown below:

    +

    Download File From Container

    +

    You can also interact with public objects using a utility such as curl:

    +
    curl https://stack.nerc.mghpcc.org:13808/v1/AUTH_4c5bccef73c144679d44cbc96b42df4e/unique-container-test
    +test-file
    +
    +

    To download a file:

    +
    curl -o local-file.txt https://stack.nerc.mghpcc.org:13808/v1/AUTH_4c5bccef73c144679d44cbc96b42df4e/unique-container-test/test-file
    +
    +

    Make a container private

    +

    You can make a public container private by clicking on your container's name, +then uncheck the "Public Access" checkbox. Note that "Public Access" changes +from "Link" to "Disabled".

    +

    This will deactivate the public URL of the container and then it will show "Disabled".

    +

    Disable Container Public Access

    +

    2. Access by using APIs

    +

    i. OpenStack CLI

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +

    Some Object Storage management examples

    +
    Create a container
    +

    In order to create a container in the Object Storage service, you can use the +openstack client with the following command.

    +
    openstack container create mycontainer
    ++---------------------------------------+-------------+------------------------------------+
    +| account                               | container   | x-trans-id                         |
    ++---------------------------------------+-------------+------------------------------------+
    +| AUTH_4c5bccef73c144679d44cbc96b42df4e | mycontainer | txb875f426a011476785171-00624b37e8 |
    ++---------------------------------------+-------------+------------------------------------+
    +
    +

    Once created you can start adding objects.

    +
    Manipulate objects in a container
    +

    To upload files to a container you can use the following command

    +
    openstack object create --name my_test_file mycontainer test_file.txt
    ++--------------+-------------+----------------------------------+
    +| object       | container   | etag                             |
    ++--------------+-------------+----------------------------------+
    +| my_test_file | mycontainer | e3024896943ee80422d1e5ff44423658 |
    ++--------------+-------------+----------------------------------+
    +
    +

    Once uploaded you can see the metadata through:

    +
    openstack object show mycontainer my_test_file
    ++----------------+---------------------------------------+
    +| Field          | Value                                 |
    ++----------------+---------------------------------------+
    +| account        | AUTH_4c5bccef73c144679d44cbc96b42df4e |
    +| container      | mycontainer                           |
    +| content-length | 26                                    |
    +| content-type   | application/octet-stream              |
    +| etag           | e3024896943ee80422d1e5ff44423658      |
    +| last-modified  | Mon, 04 Apr 2022 18:27:14 GMT         |
    +| object         | my_test_file                          |
    ++----------------+---------------------------------------+
    +
    +

    You can save the contents of the object from your container to your local machine +by using:

    +

    openstack object save mycontainer my_test_file --file test_file.txt

    +
    +

    Very Important

    +

    Please note that this will overwrite the file in the local directory.

    +
    +

    Finally you can delete the object with the following command

    +

    openstack object delete mycontainer my_test_file

    +
    Delete the container
    +

    If you want to delete the container, you can use the following command

    +

    openstack container delete mycontainer

    +

    If the container has some data, you can trigger the recursive option to delete +the objects internally.

    +
    openstack container delete mycontainer
    +Conflict (HTTP 409) (Request-ID: tx6b53c2b3e52d453e973b4-00624b400f)
    +
    +

So, try to delete the container recursively using the command

    +

    openstack container delete --recursive mycontainer

    +
    List existing containers
    +

    You can check the existing containers with

    +
    openstack container list
    ++---------------+
    +| Name          |
    ++---------------+
    +| mycontainer   |
    ++---------------+
    +
    +
    Swift quota utilization
    +

    To check the overall space used, you can use the following command

    +
    openstack object store account show
    ++------------+---------------------------------------+
    +| Field      | Value                                 |
    ++------------+---------------------------------------+
    +| Account    | AUTH_4c5bccef73c144679d44cbc96b42df4e |
    +| Bytes      | 665                                   |
    +| Containers | 1                                     |
    +| Objects    | 3                                     |
    ++------------+---------------------------------------+
    +
    +

    To check the space used by a specific container

    +
    openstack container show mycontainer
    ++----------------+---------------------------------------+
    +| Field          | Value                                 |
    ++----------------+---------------------------------------+
    +| account        | AUTH_4c5bccef73c144679d44cbc96b42df4e |
    +| bytes_used     | 665                                   |
    +| container      | mycontainer                           |
    +| object_count   | 3                                     |
    +| read_acl       | .r:*,.rlistings                       |
    +| storage_policy | Policy-0                              |
    ++----------------+---------------------------------------+
    +
    +

    ii. Swift Interface

    +

    This is a python client for the Swift API. There's a Python API +(the swiftclient module), and a command-line script (swift).

    +
      +
    • +

      This example uses a Python3 virtual environment, but you are free to choose +any other method to create a local virtual environment like Conda.

      +
      python3 -m venv venv
      +
      +
      +

      Choosing Correct Python Interpreter

      +

      Make sure you are able to use python or python3 or py -3 (For +Windows Only) to create a directory named venv (or whatever name you +specified) in your current working directory.

      +
      +
    • +
    • +

      Activate the virtual environment by running:

      +

      on Linux/Mac: source venv/bin/activate

      +

      on Windows: venv\Scripts\activate

      +
    • +
    +

    Install Python Swift Client page at PyPi

    +
      +
    • +

      Once virtual environment is activated, install python-swiftclient and python-keystoneclient

      +

      pip install python-swiftclient python-keystoneclient

      +
    • +
    • +

Swift authenticates using a user, tenant, and key, which map to your OpenStack username, project, and password.

      +
    • +
    +

    For this, you need to download the "NERC's OpenStack RC File" with the +credentials for your NERC project from the NERC's OpenStack dashboard. +Then you need to source that RC file using: source *-openrc.sh. You can +read here +on how to do this.

    +

By sourcing the "NERC's OpenStack RC File", you will set all the required environment variables.

    +
    Check your authentication variables
    +

    Check what the swift client will use as authentication variables:

    +
    swift auth
    +
    +
    Create your first container
    +

Let's create your first container by using the following command:

    +
    swift post <container_name>
    +
    +

    For example:

    +
    swift post unique-container-test
    +
    +
    Upload files
    +

    Upload a file to your container:

    +
    swift upload <container_name> <file_or_folder>
    +
    +

To upload a file to the above listed container, i.e. unique-container-test, you can run the following command:

    +
    swift upload unique-container-test ./README.md
    +
    +
    Show containers
    +

Then type the following command to get a list of your containers:

    +
    swift list
    +
    +

This will output the existing containers in your project, e.g. unique-container-test

    +

    Show objects inside your container:

    +
swift list <container_name>
    +
    +

    For example:

    +
    swift list unique-container-test
    +README.md
    +
    +
    Show statistics of your containers and objects
    +

You can see statistics, ranging from specific objects to the entire account. Use the following command to see statistics of a specific container:

    +
    swift stat <container_name>
    +
    +

    You can also use swift stat <container_name> <filename> to check stats of +individual files.
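
For example, to check the stats of the README.md object uploaded earlier:

+
swift stat unique-container-test README.md
+
+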

    +

    If you want to see stats from your whole account, you can type:

    +
    swift stat
    +
    +
    Download objects
    +

    You can download single objects by using the following command:

    +
    swift download <container_name> <your_object> -o /path/to/local/<your_object>
    +
    +

    For example:

    +
    swift download unique-container-test README.md -o ./README.md
    +README.md [auth 2.763s, headers 2.907s, total 2.907s, 0.000 MB/s]
    +
    +

    It's possible to test downloading an object/container without actually downloading, +for testing purposes:

    +
    swift download <container-name> --no-download
    +
    +
    Download all objects from specific container
    +
    swift download <container_name> -D </path/to/folder/>
    +
    +
    Download all objects from your account
    +
    swift download --all -D </path/to/folder/>
    +
    +
    Delete objects
    +

    Delete specific object by issuing the following command:

    +
    swift delete <container_name> <object_name>
    +
    +

    For example:

    +
    swift delete unique-container-test README.md
    +README.md
    +
    +

    And finally delete specific container by typing the following:

    +
    swift delete <container_name>
    +
    +

    For example:

    +
    swift delete unique-container-test
    +
    +

    Other helpful Swift commands:

    +
    delete               Delete a container or objects within a container.
    +download             Download objects from containers.
    +list                 Lists the containers for the account or the objects
    +                    for a container.
    +post                 Updates meta information for the account, container,
    +                    or object; creates containers if not present.
    +copy                 Copies object, optionally adds meta
    +stat                 Displays information for the account, container,
    +                    or object.
    +upload               Uploads files or directories to the given container.
    +capabilities         List cluster capabilities.
    +tempurl              Create a temporary URL.
    +auth                 Display auth related environment variables.
    +bash_completion      Outputs option and flag cli data ready for
    +                    bash_completion.
    +
    +
    +

    Helpful Tip

    +

Type swift -h to learn more about using the swift commands. The client has a --debug flag, which can be useful if you are facing any issues.

    +
    +

    iii. Using AWS CLI

    +

    The Ceph Object Gateway supports basic operations through the Amazon S3 interface.

    +

    You can use both high-level (s3) commands with the AWS CLI +and API-Level (s3api) commands with the AWS CLI +to access object storage on your NERC project.

    +

    Prerequisites:

    +

    To run the s3 or s3api commands, you need to have:

    +
      +
    • +

      AWS CLI installed, see Installing or updating the latest version of the AWS CLI +for more information.

      +
    • +
    • +

      The NERC's Swift End Point URL: https://stack.nerc.mghpcc.org:13808

      +
    • +
    • +

      Understand these Amazon S3 terms:

      +

      i. Bucket – A top-level Amazon S3 folder.

      +

      ii. Prefix – An Amazon S3 folder in a bucket.

      +

      iii. Object – Any item that's hosted in an Amazon S3 bucket.

      +
    • +
    +

    Configuring the AWS CLI

    +

To access this interface, you must log in through the OpenStack Dashboard and navigate to "Projects > API Access", where you can download the "Download OpenStack RC File" as well as the "EC2 Credentials".

    +

    EC2 Credentials

    +

Clicking on "EC2 Credentials" will download a zip file including an ec2rc.sh file that has content similar to what is shown below. The important parts are EC2_ACCESS_KEY and EC2_SECRET_KEY; keep them noted.

    +
    #!/bin/bash
    +
    +NOVARC=$(readlink -f "${BASH_SOURCE:-${0}}" 2>/dev/null) || NOVARC=$(python -c 'import os,sys; print os.path.abspath(os.path.realpath(sys.argv[1]))' "${BASH_SOURCE:-${0}}")
    +NOVA_KEY_DIR=${NOVARC%/*}
    +export EC2_ACCESS_KEY=...
    +export EC2_SECRET_KEY=...
    +export EC2_URL=https://localhost/notimplemented
    +export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
    +export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
    +export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
    +export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
    +export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
    +
    +alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
    +alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"
    +
    +

    Alternatively, you can obtain your EC2 access keys using the openstack client:

    +
    sudo apt install python3-openstackclient
    +
    +openstack ec2 credentials list
    ++------------------+------------------+--------------+-----------+
    +| Access           | Secret           | Project ID   | User ID   |
    ++------------------+------------------+--------------+-----------+
    +| <EC2_ACCESS_KEY> | <EC2_SECRET_KEY> | <Project_ID> | <User_ID> |
    ++------------------+------------------+--------------+-----------+
    +
    +

    OR, you can even create a new one by running:

    +
    openstack ec2 credentials create
    +
    +
      +
    • Source the downloaded OpenStack RC File from Projects > API Access by using: +source *-openrc.sh command. Sourcing the RC File will set the required environment +variables.
    • +
    +

Then run the aws configure command, which requires the EC2_ACCESS_KEY and EC2_SECRET_KEY keys that you noted from the ec2rc.sh file (during the "Configuring the AWS CLI" step):

    +
        $> aws configure --profile "'${OS_PROJECT_NAME}'"
    +    AWS Access Key ID [None]: <EC2_ACCESS_KEY>
    +    AWS Secret Access Key [None]: <EC2_SECRET_KEY>
    +    Default region name [None]:
    +    Default output format [None]:
    +
    +

This will create the AWS CLI configuration file ~/.aws/config in your home directory, with an EC2 profile based on your ${OS_PROJECT_NAME}, and the ~/.aws/credentials file with the Access and Secret keys that you provided above.

    +

    The EC2 profile is stored here:

    +
        cat ~/.aws/config
    +
    +    [profile ''"'"'${OS_PROJECT_NAME}'"'"'']
    +
    +

Whereas the credentials are stored here:

    +
        cat ~/.aws/credentials
    +
    +    ['${OS_PROJECT_NAME}']
    +    aws_access_key_id = <EC2_ACCESS_KEY>
    +    aws_secret_access_key = <EC2_SECRET_KEY>
    +
    +

Alternatively, you can manually create the configuration file for the AWS CLI in your home directory ~/.aws/config with the EC2 profile and credentials as shown below:

    +
    cat ~/.aws/config
    +
    +['${OS_PROJECT_NAME}']
    +aws_access_key_id = <EC2_ACCESS_KEY>
    +aws_secret_access_key = <EC2_SECRET_KEY>
    +
    +
    +

    Information

    +

The profile that you use must have permissions that allow the AWS operations to be performed.

    +
    +

    Listing buckets using aws-cli

    +

    i. Using s3api:

    +
    aws --profile "'${OS_PROJECT_NAME}'" --endpoint-url=https://stack.nerc.mghpcc.org:13808 \
    +    s3api list-buckets
    +
    +{
    +    "Buckets": [
    +        {
    +            "Name": "unique-container-test",
    +            "CreationDate": "2009-02-03T16:45:09+00:00"
    +        }
    +    ],
    +    "Owner": {
    +        "DisplayName": "Test Project-f69dcff:mmunakami@fas.harvard.edu",
    +        "ID": "Test Project-f69dcff:mmunakami@fas.harvard.edu"
    +    }
    +}
    +
    +

    ii. Alternatively, you can do the same using s3:

    +
    aws --profile "'${OS_PROJECT_NAME}'" --endpoint-url=https://stack.nerc.mghpcc.org:13808 \
    +    s3 ls
    +
    +

    Output:

    +
    2009-02-03 11:45:09 unique-container-test
    +
    +

    To list contents inside bucket

    +
    aws --profile "'${OS_PROJECT_NAME}'" --endpoint-url=https://stack.nerc.mghpcc.org:13808 \
    +    s3 ls s3://<your-bucket>
    +
    +

    To make a bucket

    +
    aws --profile "'${OS_PROJECT_NAME}'" --endpoint-url=https://stack.nerc.mghpcc.org:13808 \
    +    s3 mb s3://<your-bucket>
    +
    +

    Adding/ Copying files from one container to another container

    +
      +
    1. +

      Single file copy using cp command:

      +

      The aws tool provides a cp command to move files to your s3 bucket:

      +
      aws --profile "'${OS_PROJECT_NAME}'" --endpoint-url=https://stack.nerc.mghpcc.org:13808 \
      +    s3 cp <Your-file> s3://<your-bucket>/
      +
      +

      Output:

      +
      upload: .\<Your-file> to s3://<your-bucket>/<Your-file>
      +
      +
    2. +
    3. +

      Whole directory copy using the --recursive flag

      +
      aws --profile "'${OS_PROJECT_NAME}'" --endpoint-url=https://stack.nerc.mghpcc.org:13808 \
      +    s3 cp <Your-directory> s3://<your-bucket>/ --recursive
      +
      +

      Output:

      +
      upload: <your-directory>/<file0> to s3://<your-bucket>/<file0>
      +upload: <your-directory>/<file1> to s3://<your-bucket>/<file1>
      +...
      +upload: <your-directory>/<fileN> to s3://<your-bucket>/<fileN>
      +
      +
    4. +
    +

    You can then use aws s3 ls to check that your files have been properly uploaded:

    +
    aws --profile "'${OS_PROJECT_NAME}'" --endpoint-url=https://stack.nerc.mghpcc.org:13808 \
    +    s3 ls s3://<your-bucket>/
    +
    +

    Output:

    +
    2022-04-04 16:32:38          <size> <file0>
    +2022-04-04 16:32:38          <size> <file1>
    +...
    +2022-04-04 16:25:50          <size> <fileN>
    +
    +
    +

    Other Useful Flags

    +

Additionally, aws s3 cp provides an --exclude flag to filter out files that should not be transferred; the syntax is --exclude "<pattern>" (UNIX-style wildcards such as *).

    +
    +
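
For example (reusing the placeholders above, with *.tmp as an illustrative pattern), to skip temporary files during a recursive copy:

+
aws --profile "'${OS_PROJECT_NAME}'" --endpoint-url=https://stack.nerc.mghpcc.org:13808 \
+    s3 cp <Your-directory> s3://<your-bucket>/ --recursive --exclude "*.tmp"
+
+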

    To delete an object from a bucket

    +
    aws --profile "'${OS_PROJECT_NAME}'" --endpoint-url=https://stack.nerc.mghpcc.org:13808 \
    +    s3 rm s3://<your-bucket>/argparse-1.2.1.tar.gz
    +
    +

    To remove a bucket

    +
    aws --profile "'${OS_PROJECT_NAME}'" --endpoint-url=https://stack.nerc.mghpcc.org:13808 \
    +    s3 rb s3://<your-bucket>
    +
    +

    iv. Using s3cmd

    +

    S3cmd is a free command-line tool and client for uploading, retrieving and +managing data in Amazon S3 and other cloud storage service providers that use +the S3 protocol.

    +

    Prerequisites:

    + +

    Configuring s3cmd

    +

    The EC2_ACCESS_KEY and EC2_SECRET_KEY keys that you noted from ec2rc.sh +file can then be plugged into s3cfg config file.

    +

    The .s3cfg file requires the following configuration to work with our Object +storage service:

    +
    # Setup endpoint
    +host_base = stack.nerc.mghpcc.org:13808
    +host_bucket = stack.nerc.mghpcc.org:13808
    +use_https = True
    +
    +# Setup access keys
    +access_key = <YOUR_EC2_ACCESS_KEY_FROM_ec2rc_FILE>
    +secret_key = <YOUR_EC2_SECRET_KEY_FROM_ec2rc_FILE>
    +
    +# Enable S3 v4 signature APIs
    +signature_v2 = False
    +
    +

We are assuming that the configuration file is placed in the default location, i.e. $HOME/.s3cfg. If that is not the case, you need to add the parameter --config=FILE with the location of your configuration file to override the default config location.
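
For example, with a hypothetical path:

+
s3cmd --config=/path/to/your/s3cfg ls
+
+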

    +

    Using s3cmd

    +
    To list buckets
    +

    Use the following command to list all s3 buckets

    +
    s3cmd ls
    +
    +

    Or,

    +
    s3cmd ls s3://
    +
    +2009-02-03 16:45  s3://nerc-test-container
    +2009-02-03 16:45  s3://second-mycontainer
    +2009-02-03 16:45  s3://unique-container-test
    +
    +
    Create a new bucket
    +

    In order to create a bucket, you can use s3cmd with the following command

    +
    s3cmd mb s3://mybucket
    +
    +Bucket 's3://mybucket/' created
    +
    +s3cmd ls
    +2009-02-03 16:45  s3://mybucket
    +
    +2009-02-03 16:45  s3://nerc-test-container
    +2009-02-03 16:45  s3://second-mycontainer
    +2009-02-03 16:45  s3://unique-container-test
    +
    +
    To copy an object to bucket
    +

The command below will upload the file file.txt to the bucket using the s3cmd command.

    +
    s3cmd put ~/file.txt s3://mybucket/
    +
    +upload: 'file.txt' -> 's3://mybucket/file.txt'  [1 of 1]
    +0 of 0     0% in    0s     0.00 B/s  done
    +
    +

s3cmd also allows you to set additional properties on the stored objects. In the example below, we set the content type with the --mime-type option and the cache-control parameter to 1 hour with --add-header.

    +
    s3cmd put --mime-type='application/json' --add-header='Cache-Control: max-age=3600' ~/file.txt s3://mybucket
    +
    +
    Uploading Directory in bucket
    +

If you need to upload an entire directory, use -r to upload it recursively, as shown below.

    +
    s3cmd put -r <your-directory> s3://mybucket/
    +
    +upload: 'backup/hello.txt' -> 's3://mybucket/backup/hello.txt'  [1 of 1]
    +0 of 0     0% in    0s     0.00 B/s  done
    +
    +
    List the objects of bucket
    +

    List the objects of the bucket using ls switch with s3cmd.

    +
    s3cmd ls s3://mybucket/
    +
    +                       DIR   s3://mybucket/backup/
    +2022-04-05 03:10         0   s3://mybucket/file.txt
    +2022-04-05 03:14         0   s3://mybucket/hello.txt
    +
    +
    To copy/ download an object to local system
    +

    Use the following command to download files from the bucket:

    +
    s3cmd get s3://mybucket/file.txt
    +
    +download: 's3://mybucket/file.txt' -> './file.txt'  [1 of 1]
    +0 of 0     0% in    0s     0.00 B/s  done
    +
    +
    To sync local file/directory to a bucket
    +
    s3cmd sync newdemo s3://mybucket
    +
    +upload: 'newdemo/newdemo_file.txt' -> 's3://mybucket/newdemo/newdemo_file.txt'  [1 of 1]
    +0 of 0     0% in    0s     0.00 B/s  done
    +
    +

    To sync bucket or object with local filesystem

    +
    s3cmd sync  s3://unique-container-test otherlocalbucket
    +
    +download: 's3://unique-container-test/README.md' -> 'otherlocalbucket/README.md'  [1 of 3]
    +653 of 653   100% in    0s     4.54 kB/s  done
    +download: 's3://unique-container-test/image.png' -> 'otherlocalbucket/image.png'  [2 of 3]
    +0 of 0     0% in    0s     0.00 B/s  done
    +download: 's3://unique-container-test/test-file' -> 'otherlocalbucket/test-file'  [3 of 3]
    +12 of 12   100% in    0s    83.83 B/s  done
    +Done. Downloaded 665 bytes in 1.0 seconds, 665.00 B/s.
    +
    +
    To delete an object from bucket
    +

    You can delete files from the bucket with the following s3cmd command

    +
    s3cmd del s3://unique-container-test/README.md
    +
    +delete: 's3://unique-container-test/README.md'
    +
    +
    To delete directory from bucket
    +
    s3cmd del s3://mybucket/newdemo
    +
    +delete: 's3://mybucket/newdemo'
    +
    +
    To delete a bucket
    +
    s3cmd rb s3://mybucket
    +
    +ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty
    +
    +
    +

    Important Information

    +

The above command failed because the bucket was not empty! You can remove all objects inside the bucket and then use the command again. Or, you can run the following command with the -r or --recursive flag, i.e. s3cmd rb s3://mybucket -r or s3cmd rb s3://mybucket --recursive.

    +
    +

    v. Using Rclone

    +

    rclone is a convenient and performant command-line tool for transferring files +and synchronizing directories directly between your local file systems and the +NERC's containers.

    +

    Prerequisites:

    +

    To run the rclone commands, you need to have:

    + +

    Configuring Rclone

    +

First, you'll need to configure rclone. As object storage systems have quite complicated authentication, the credentials are kept in a config file.

    +

    If you run rclone config file you will see where the default location is +for you.

    +
    +

    Note

    +

For Windows users, you may need to specify the full path to the Rclone executable file if it's not included in your system's PATH variable.

    +
    +

    The EC2_ACCESS_KEY and EC2_SECRET_KEY keys that you noted from ec2rc.sh +file can then be plugged into rclone config file.

    +

Edit the config file at the location reported by the rclone config file command and add the following entry with the name [nerc]:

    +
    [nerc]
    +type = s3
    +env_auth = false
    +provider = Other
    +endpoint = https://stack.nerc.mghpcc.org:13808
    +acl = public-read
    +access_key_id = <YOUR_EC2_ACCESS_KEY_FROM_ec2rc_FILE>
    +secret_access_key = <YOUR_EC2_SECRET_KEY_FROM_ec2rc_FILE>
    +location_constraint =
    +server_side_encryption =
    +
    +

    More about the config for AWS S3 compatible API can be seen here.

    +
    +

    Important Information

    +

Note that if you set env_auth = true, Rclone will take the credentials from environment variables, so you should not set the access keys in the config file in that case.

    +
    +

Or, you can locally copy this content to a new config file and then use this flag to override the config location, e.g. rclone --config=FILE

    +
    +

    Interactive Configuration

    +

    Run rclone config to setup. See rclone config docs +for more details.

    +
    +

    Using Rclone

    +

    rclone supports many subcommands (see +the complete list of Rclone subcommands). +A few commonly used subcommands (assuming you configured the NERC Object Storage +as nerc):

    +
Listing the Containers and Contents of a Container
    +

Once your Object Storage has been configured in Rclone, you can then use the Rclone interface to list all the containers with the "lsd" command:

    +
    rclone lsd "nerc:"
    +
    +

    Or,

    +
    rclone lsd "nerc:" --config=rclone.conf
    +
    +

For example,

    +
    rclone lsd "nerc:" --config=rclone.conf
    +        -1 2009-02-03 11:45:09        -1 second-mycontainer
    +        -1 2009-02-03 11:45:09        -1 unique-container-test
    +
    +

To list the files and folders available within a container, i.e. "unique-container-test" in this case, we can use the "ls" command:

    +
    rclone ls "nerc:unique-container-test/"
    +  653 README.md
    +    0 image.png
    +   12 test-file
    +
    +
    Uploading and Downloading Files and Folders
    +

rclone supports a variety of options that allow you to copy, sync, and move files from one destination to another.

    +

    A simple example of this can be seen below, where we copy (Upload) the file +"upload.me" to the <your-bucket> container:

    +
    rclone copy "./upload.me" "nerc:<your-bucket>/"
    +
    +

    Another example, to copy (Download) the file "upload.me" from the +<your-bucket> container to your local:

    +
    rclone -P copy "nerc:<your-bucket>/upload.me" "./"
    +
    +

Also, to sync files into the <your-bucket> container, try with --dry-run first:

    +
    rclone --dry-run sync /path/to/files nerc:<your-bucket>
    +
    +

    Then sync for real

    +
    rclone sync /path/to/files nerc:<your-bucket>
    +
    +
    Mounting object storage on local filesystem
    +

    Linux:

    +

    First, you need to create a directory on which you will mount your filesystem:

    +

    mkdir ~/mnt-rclone

    +

    Then you can simply mount your object storage with:

    +

    rclone -vv --vfs-cache-mode writes mount nerc: ~/mnt-rclone

    +
    +

    More about using Rclone

    +

    You can read more about Rclone Mounting here.

    +
    +

    Windows:

    +

    First you have to download Winfsp:

    +

    WinFsp is an open source Windows File System Proxy which provides a FUSE +emulation layer.

    +

    Then you can simply mount your object storage with (no need to create the directory +in advance):

    +

    rclone -vv --vfs-cache-mode writes mount nerc: C:/mnt-rclone

    +

The vfs-cache-mode flag enables file caching; you can use either the writes or full option. For further explanation, you can see the official documentation.

    +

    Now that your object storage is mounted, you can list, create and delete files +in it.

    +
    Unmount object storage
    +

    To unmount, simply press CTRL-C and the mount will be interrupted.

    +

    vi. Using client (Python) libraries

    +

a. The EC2_ACCESS_KEY and EC2_SECRET_KEY keys that you noted from the ec2rc.sh file can then be plugged into your application. See the example below using the Python Boto3 library, which connects through the S3 API interface using EC2 credentials and performs some basic operations on the available buckets and files that the user has access to.

    +
    import boto3
    +
    +# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#bucket
    +s3 = boto3.resource('s3',
    +    aws_access_key_id='YOUR_EC2_ACCESS_KEY_FROM_ec2rc_FILE',
    +    aws_secret_access_key='YOUR_EC2_SECRET_KEY_FROM_ec2rc_FILE', #pragma: allowlist secret
    +    endpoint_url='https://stack.nerc.mghpcc.org:13808',
    +)
    +
    +# List all containers
    +for bucket in s3.buckets.all():
    +    print(' ->', bucket)
    +
    +# List all objects in a container i.e. unique-container-test is your current Container
    +bucket = s3.Bucket('unique-container-test')
    +for obj in bucket.objects.all():
    +    print(' ->', obj)
    +
    +# Download an S3 object i.e. test-file a file available in your unique-container-test Container
    +s3.Bucket('unique-container-test').download_file('test-file', './test-file.txt')
    +
    +# Add an image to the bucket
    +# bucket.put_object(Body=open('image.png', mode='rb'), Key='image.png')
    +
    +

We can configure the Python Boto3 library to work with the saved AWS profile.

    +
    import boto3
    +
    +# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html
    +session = boto3.Session(profile_name='<YOUR_CONFIGURED_AWS_PROFILE_NAME>')
    +
    +# List all containers
    +s3 = boto3.client('s3', endpoint_url='https://stack.nerc.mghpcc.org:13808',)
    +response = s3.list_buckets()
    +
    +for bucket in response['Buckets']:
    +    print(' ->', bucket)
    +
    +

b. The EC2_ACCESS_KEY and EC2_SECRET_KEY keys that you noted from the ec2rc.sh file can then be plugged into your application. See the example below using the Python Minio library, which connects through the S3 API interface using EC2 credentials and performs some basic operations on the available buckets and files that the user has access to.

    +
    from minio import Minio
    +
    +# Create client with access key and secret key.
    +# https://docs.min.io/docs/python-client-api-reference.html
    +client = Minio(
    +    "stack.nerc.mghpcc.org:13808",
    +    access_key='YOUR_EC2_ACCESS_KEY_FROM_ec2rc_FILE',
    +    secret_key='YOUR_EC2_SECRET_KEY_FROM_ec2rc_FILE', #pragma: allowlist secret
    +)
    +
    +# List all containers
    +buckets = client.list_buckets()
    +for bucket in buckets:
    +    # print(bucket.name, bucket.creation_date)
    +    print(' ->', bucket)
    +
    +# Make 'nerc-test-container' container if not exist.
    +found = client.bucket_exists("nerc-test-container")
    +if not found:
    +    client.make_bucket("nerc-test-container")
    +else:
    +    print("Bucket 'nerc-test-container' already exists")
    +
    +# Upload './nerc-backup.zip' as object name 'nerc-backup-2022.zip'
    +# to bucket 'nerc-test-container'.
    +client.fput_object(
    +    "nerc-test-container", "nerc-backup-2022.zip", "./nerc-backup.zip",
    +)
    +
    +

    3. Using Graphical User Interface (GUI) Tools

    +

    i. Using WinSCP

    +

    WinSCP is a popular and free open-source SFTP +client, SCP client, and FTP client for Windows. Its main function is file transfer +between a local and a remote computer, with some basic file management functionality +using FTP, FTPS, SCP, SFTP, WebDAV or S3 file transfer protocols.

    +

    Prerequisites:

    +
      +
    • +

      WinSCP installed, see Download and Install the latest version of the WinSCP +for more information.

      +
    • +
    • +

      Go to WinSCP menu and open "Options > Preferences".

      +
    • +
    • +

      When the "Preferences" dialog window appears, select "Transfer" in the options +on the left pane.

      +
    • +
    • +

      Click on "Edit" button.

      +
    • +
    • +

      Then, on shown popup dialog box review the "Common options" group, uncheck the +"Preserve timestamp" option as shown below:

      +
    • +
    +

    Disable Preserve TimeStamp

    +

    Configuring WinSCP

    +
      +
    • Click on "New Session" tab button as shown below:
    • +
    +

    Login

    +
      +
    • Select "Amazon S3" from the "File protocol" dropdown options as shown below:
    • +
    +

    Choose Amazon S3 File Protocol

    +
      +
    • +

      Provide the following required endpoint information:

      +

      "Host name": "stack.nerc.mghpcc.org"

      +

      "Port number": "13808"

      +

      The EC2_ACCESS_KEY and EC2_SECRET_KEY keys that you noted from ec2rc.sh +file can then be plugged into "Access key ID" and "Secret access key" +respectively.

      +
    • +
    +

    Config WinSCP

    +
    +

    Helpful Tips

    +

You can save your above configured session with a preferred name by clicking the "Save" button and then giving a proper name to your session, so that next time you don't need to manually enter all your configuration again.

    +
    +

    Using WinSCP

    +

You can follow the above steps to manually add a new session the next time you open WinSCP, or you can connect to a previously saved session (the popup dialog will show a list of all your saved session names) by just clicking on the session name.

    +

    Then click "Login" button to connect to your NERC project's Object Storage as +shown below:

    +

    Login

    +

    Successful connection

    +

    ii. Using Cyberduck

    +

Cyberduck is a libre server and cloud storage browser for Mac and Windows. With an easy-to-use interface, it connects to servers, enterprise file sharing, and cloud storage.

    +

    Prerequisites:

    + +

    Configuring Cyberduck

    +
      +
    • Click on "Open Connection" tab button as shown below:
    • +
    +

    Open Connection

    +
      +
    • Select "Amazon S3" from the dropdown options as shown below:
    • +
    +

    Choose Amazon S3

    +
      +
    • +

      Provide the following required endpoint information:

      +

      "Server": "stack.nerc.mghpcc.org"

      +

      "Port": "13808"

      +

The EC2_ACCESS_KEY and EC2_SECRET_KEY keys that you noted from the ec2rc.sh file can then be plugged into "Access key ID" and "Secret Access Key" respectively.

      +
    • +
    +

    Cyberduck Amazon S3 Configuration

    +

    Using Cyberduck

    +

    Then click "Connect" button to connect to your NERC project's Object Storage as +shown below:

    +

    Successful connection

    +
diff --git a/openstack/persistent-storage/transfer-a-volume/index.html b/openstack/persistent-storage/transfer-a-volume/index.html new file mode 100644 index 00000000..e43e02d8 --- /dev/null +++ b/openstack/persistent-storage/transfer-a-volume/index.html @@ -0,0 +1,3465 @@

    Transfer A Volume

    +

    You may wish to transfer a volume to a different project. Volumes are specific +to a project and can only be attached to one virtual machine at a time.

    +
    +

    Important

    +

The volume to be transferred must not be attached to an instance. You can check this in the volume's "Status" column, which needs to read "Available" rather than "In-use", and in its "Attached To" column, which needs to be empty.

    +
    +

    Using Horizon dashboard

    +

    Once you're logged in to NERC's Horizon dashboard.

    +

    Navigate to Project -> Volumes -> Volumes.

    +

Select the volume that you want to transfer, then click the dropdown next to "Edit volume" and choose "Create Transfer".

    +

    Create Transfer of a Volume

    +

    Give the transfer a name.

    +

    Volume Transfer Popup

    +

    You will see a screen like shown below. Be sure to capture the Transfer ID and +the Authorization Key.

    +

    Volume Transfer Initiated

    +
    +

    Important Note

    +

    You can always get the transfer ID later if needed, but there is no way to +retrieve the key. +If the key is lost before the transfer is completed, you will have to cancel +the pending transfer and create a new one.

    +
    +

    Then the volume will show the status like below:

    +

    Volume Transfer Initiated

    +

    Assuming you have access to the receiving project, switch to it using the Project +dropdown at the top right.

    +

    If you don't have access to the receiving project, give the transfer ID and +Authorization Key to a collaborator who does, and have them complete the next steps.

    +

    In the receiving project, go to the Volumes tab, and click "Accept Transfer" +button as shown below:

    +

    Volumes in a New Project

    +

    Enter the "Transfer ID" and the "Authorization Key" that were captured when the +transfer was created in the previous project.

    +

    Volume Transfer Accepted

    +

    The volume should now appear in the Volumes list of the receiving project as shown +below:

    +

    Successful Accepted Volume Transfer

    +
    +

    Important Note

    +

    Any pending transfers can be cancelled if they are not yet accepted, but there +is no way to "undo" a transfer once it is complete. +To send the volume back to the original project, a new transfer would be required.

    +
    +

    Using the CLI

    +

    Prerequisites:

    +

    To run the OpenStack CLI commands, you need to have:

    + +
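As a quick sanity check before proceeding, the hedged sketch below assumes you have already installed the OpenStack client and downloaded your project's openrc file from the Horizon dashboard (the exact file name varies by project):

    # Load your project's credentials into the current shell
    source <your_project>-openrc.sh

    # If authentication works, this lists the volumes in your source project
    openstack volume list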

    Using the openstack client

    +
      +
    • +

      Identifying volume to transfer in your source project

      +
      openstack volume list
      ++--------------------------------------+---------------------+-----------+------+----------------------------------+
      +| ID                                   | Name                | Status    | Size | Attached to                      |
      ++--------------------------------------+---------------------+-----------+------+----------------------------------+
      +| d8a5da4c-41c8-4c2d-b57a-8b6678ce4936 | my-volume           | available |  100 |                                  |
      ++--------------------------------------+---------------------+-----------+------+----------------------------------+
      +
      +
    • +
    • +

      Create the transfer request

      +
      openstack volume transfer request create my-volume
      ++------------+--------------------------------------+
      +| Field      | Value                                |
      ++------------+--------------------------------------+
      +| auth_key   | b92d98fec2766582                     |
      +| created_at | 2024-02-04T14:30:08.362907           |
      +| id         | a16494cf-cfa0-47f6-b606-62573357922a |
      +| name       | None                                 |
      +| volume_id  | d8a5da4c-41c8-4c2d-b57a-8b6678ce4936 |
      ++------------+--------------------------------------+
      +
      +
      +

      Pro Tip

      +

If your volume name includes spaces, you need to enclose it in quotes, i.e. "<VOLUME_NAME_OR_ID>". For example: openstack volume transfer request create "My Volume"

      +
      +
    • +
    • +

The pending transfer can be checked using openstack volume transfer request list as follows, and the volume shows the status awaiting-transfer when running openstack volume show <VOLUME_NAME_OR_ID> as shown below:

      +
      openstack volume transfer request list
      ++--------------------------------------+------+--------------------------------------+
      +| ID                                   | Name | Volume                               |
      ++--------------------------------------+------+--------------------------------------+
      +| a16494cf-cfa0-47f6-b606-62573357922a | None | d8a5da4c-41c8-4c2d-b57a-8b6678ce4936 |
      ++--------------------------------------+------+--------------------------------------+
      +
      +
      openstack volume show my-volume
      ++------------------------------+--------------------------------------+
      +| Field                        | Value                                |
      ++------------------------------+--------------------------------------+
      +...
      +| name                         | my-volume                            |
      +...
      +| status                       | awaiting-transfer                    |
      ++------------------------------+--------------------------------------+
      +
      +
    • +
    • +

The user of the destination project can authenticate and use the authorization key reported above. The transfer can then be accepted:

      +
      openstack volume transfer request accept --auth-key b92d98fec2766582 a16494cf-cfa0-47f6-b606-62573357922a
      ++-----------+--------------------------------------+
      +| Field     | Value                                |
      ++-----------+--------------------------------------+
      +| id        | a16494cf-cfa0-47f6-b606-62573357922a |
      +| name      | None                                 |
      +| volume_id | d8a5da4c-41c8-4c2d-b57a-8b6678ce4936 |
      ++-----------+--------------------------------------+
      +
      +
    • +
    • +

The result can then be confirmed in the volume list of the destination project:

      +
      openstack volume list
      ++--------------------------------------+----------------------------------------+-----------+------+-------------+
      +| ID                                   | Name                                   | Status    | Size | Attached to |
      ++--------------------------------------+----------------------------------------+-----------+------+-------------+
      +| d8a5da4c-41c8-4c2d-b57a-8b6678ce4936 | my-volume                              | available |  100 |             |
      ++--------------------------------------+----------------------------------------+-----------+------+-------------+
      +
      +
    • +
    +
diff --git a/openstack/persistent-storage/volumes/index.html b/openstack/persistent-storage/volumes/index.html new file mode 100644 index 00000000..2ebd04a2 --- /dev/null +++ b/openstack/persistent-storage/volumes/index.html @@ -0,0 +1,3421 @@

    Persistent Storage

    +

    Ephemeral disk

    +

    OpenStack offers two types of block storage: ephemeral storage and persistent volumes. +Ephemeral storage is available only during the instance's lifespan, persisting +across guest operating system reboots. However, once the instance is deleted, +its associated storage is also removed. The size of ephemeral storage is determined +by the virtual machine's flavor and remains constant for all virtual machines of +that flavor. The service level for ephemeral storage relies on the underlying hardware.

    +

    In its default configuration, when the instance is launched from an Image or +an Instance Snapshot, the choice for utilizing persistent storage is configured +by selecting the Yes option for "Create New Volume". Additionally, the "Delete +Volume on Instance Delete" setting is pre-set to No as shown below:

    +

    Instance Persistent Storage Option

    +

    If you set the "Create New Volume" option to No, the instance will boot +from either an image or a snapshot, with the instance only being attached to an +ephemeral disk. It's crucial to note that this configuration does NOT create +persistent block storage in the form of a Volume, which can pose risks. Consequently, +the disk of the instance won't appear in the "Volumes" list. To mitigate potential +data loss, we strongly recommend regularly taking a snapshot +of such a running ephemeral instance, referred to as an "instance snapshot", +especially if you want to safeguard or recover important states of your instance.

    +
    +

    Very Important Note

    +

    Never use Ephemeral disk if you're setting up a production-level environment. +When the instance is deleted, its associated ephemeral storage is also removed.

    +
    +

    Volumes

    +

    A volume is a detachable block storage device, similar to a USB hard drive. You +can attach a volume to only one instance.

    +

    Unlike Ephemeral disk, Volumes are the Block Storage devices that you attach to +instances to enable persistent storage. Users can attach a volume to a running +instance or detach a volume and attach it to another instance at any time.

    +

Ownership of a volume can be transferred to another project as described here.

    +

    Some uses for volumes:

    +
      +
    • Persistent data storage for ephemeral instances.
    • +
    • Transfer of data between projects
    • +
    • Bootable image where disk changes persist
    • +
    • Mounting the disk of one instance to another for troubleshooting
    • +
    +
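For reference, the equivalent create, attach, and detach operations with the OpenStack CLI look roughly like the following sketch; the size, volume name, and instance name are illustrative only:

    # Create a new 10 GiB volume
    openstack volume create --size 10 my-data-volume

    # Attach it to a running instance, and detach it again when no longer needed
    openstack server add volume my-instance my-data-volume
    openstack server remove volume my-instance my-data-volume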

    How do you make your VM setup and data persistent?

    +
      +
    • +

      By default, when the instance is launched from an Image or an +Instance Snapshot, the choice for utilizing persistent storage is configured +by selecting the Yes option for "Create New Volume". It's crucial to +note that this configuration automatically creates persistent block storage +in the form of a Volume instead of using Ephemeral disk, which appears in +the "Volumes" list in the Horizon dashboard: Project -> Volumes -> Volumes.

      +

      Instance Persistent Storage Option

      +
    • +
    • +

      By default, the setting for "Delete Volume on Instance Delete" is configured +to use No. This setting ensures that the volume created during the launch +of a virtual machine remains persistent and won't be deleted alongside the +instance unless explicitly chosen as "Yes". Such instances boot from a +bootable volume, utilizing an existing volume listed in the +Project -> Volumes -> Volumes menu.

      +
    • +
    +

    To minimize the risk of potential data loss, we highly recommend consistently +creating backups through snapshots. +You can opt for a "volume snapshot" if you only need to capture the volume's +data. However, if your VM involves extended running processes and vital +in-memory data, preserving the precise VM state is essential. In such cases, +we recommend regularly taking a snapshot of the entire instance, known as an +"instance snapshot", provided you have sufficient Volume Storage quotas, +specifically the "OpenStack Volume Quota (GiB)" allocated for your resource allocation. +Please ensure that your allocation includes sufficient quota for the "OpenStack +Number of Volumes Quota" to allow for the creation of additional volumes based on +your quota attributes. Utilizing snapshots for backups is of utmost importance, +particularly when safeguarding or recovering critical states and data from your +instance.
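For reference, both kinds of snapshot can also be taken with the OpenStack CLI. This is a hedged sketch with illustrative names; the Horizon workflows linked above work equally well:

    # "volume snapshot": captures only the data on the volume
    openstack volume snapshot create --volume my-volume my-volume-snap

    # "instance snapshot": captures the whole instance as a new image
    openstack server image create --name my-instance-snap my-instance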

    +
    +

    Very Important: Requested/Approved Allocated Storage Quota and Cost

    +

    When you delete virtual machines +backed by persistent volumes, the disk data is retained, continuing to consume +approved storage resources for which you will still be billed. It's important +to note that the Storage quotas for NERC (OpenStack) Resource Allocations, +are specified by the "OpenStack Volume Quota (GiB)" and "OpenStack Swift Quota +(GiB)" allocation attributes. Storage cost is determined by +your requested and approved allocation values +to reserve storage from the total NESE storage pool.

    +

    If you request additional storage by specifying a changed quota value for +the "OpenStack Volume Quota (GiB)" and "OpenStack Swift Quota (GiB)" +allocation attributes through NERC's ColdFront interface, +invoicing for the extra storage will take place upon fulfillment or approval +of your request, as explained in our +Billing FAQs.

    +

    Conversely, if you request a reduction in the Storage quotas by specifying +a reduced quota value for the "OpenStack Volume Quota (GiB)" and "OpenStack Swift +Quota in Gigabytes" allocation attributes through a change request using ColdFront, +your invoicing will be adjusted accordingly when the request is submitted.

    +

    In both scenarios, 'invoicing' refers to the accumulation of hours +corresponding to the added or removed storage quantity.

    +
    +
    +

    Help Regarding Billing

    +

    Please send your questions or concerns regarding Storage and Cost by emailing +us at help@nerc.mghpcc.org +or, by submitting a new ticket at the NERC's Support Ticketing System.

    +
    +
diff --git a/other-tools/CI-CD/CI-CD-pipeline/index.html b/other-tools/CI-CD/CI-CD-pipeline/index.html new file mode 100644 index 00000000..e1ce559b --- /dev/null +++ b/other-tools/CI-CD/CI-CD-pipeline/index.html @@ -0,0 +1,3308 @@

    What is Continuous Integration/Continuous Delivery (CI/CD) Pipeline?

    +

A Continuous Integration/Continuous Delivery (CI/CD) pipeline involves a series of steps that are performed in order to deliver a new version of an application. CI/CD pipelines are a practice focused on improving software delivery through automation.

    +

    Components of a CI/CD pipeline

    +

The steps that form a CI/CD pipeline are distinct subsets of tasks grouped into pipeline stages. Typical pipeline stages include the following (a shell-level sketch follows this list):

    +
      +
    • Build - The stage where the application is compiled.
    • +
    • Test - The stage where code is tested. Automation here can save both time +and effort.
    • +
    • Release - The stage where the application is delivered to the central repository.
    • +
• Deploy - In this stage, the code is deployed to the production environment.
    • +
• Validation and compliance - The steps to validate a build are determined by the needs of your organization. Image security scanning, application security scanning, and code analysis help ensure the quality of both the images and the application code.
    • +
    +
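To make the stages concrete, here is a rough shell-level sketch of what a pipeline might run for a containerized Node.js application; the image name, registry, test command, and manifest file are assumptions for illustration only:

    # Build: package the application into a container image
    docker build -t myregistry/myapp:1.0 .

    # Test: run the automated test suite inside the freshly built image
    docker run --rm myregistry/myapp:1.0 npm test

    # Release: push the image to the central registry
    docker push myregistry/myapp:1.0

    # Deploy: roll the new version out to the production environment
    kubectl apply -f deployment.yml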

    CI/CD Pipeline Stages +Figure: CI/CD Pipeline Stages

    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/CI-CD/github-actions/images/deployed_app.png b/other-tools/CI-CD/github-actions/images/deployed_app.png new file mode 100644 index 00000000..83d384b1 Binary files /dev/null and b/other-tools/CI-CD/github-actions/images/deployed_app.png differ diff --git a/other-tools/CI-CD/github-actions/images/editconfig.png b/other-tools/CI-CD/github-actions/images/editconfig.png new file mode 100644 index 00000000..94ea0ff1 Binary files /dev/null and b/other-tools/CI-CD/github-actions/images/editconfig.png differ diff --git a/other-tools/CI-CD/github-actions/images/gh-cli.png b/other-tools/CI-CD/github-actions/images/gh-cli.png new file mode 100644 index 00000000..f13af4b7 Binary files /dev/null and b/other-tools/CI-CD/github-actions/images/gh-cli.png differ diff --git a/other-tools/CI-CD/github-actions/images/github-actions-successful.png b/other-tools/CI-CD/github-actions/images/github-actions-successful.png new file mode 100644 index 00000000..27cfc7f7 Binary files /dev/null and b/other-tools/CI-CD/github-actions/images/github-actions-successful.png differ diff --git a/other-tools/CI-CD/github-actions/images/github-actions-terminology.png b/other-tools/CI-CD/github-actions/images/github-actions-terminology.png new file mode 100644 index 00000000..f0b43be6 Binary files /dev/null and b/other-tools/CI-CD/github-actions/images/github-actions-terminology.png differ diff --git a/other-tools/CI-CD/github-actions/images/github-secrets.png b/other-tools/CI-CD/github-actions/images/github-secrets.png new file mode 100644 index 00000000..00bb317f Binary files /dev/null and b/other-tools/CI-CD/github-actions/images/github-secrets.png differ diff --git a/other-tools/CI-CD/github-actions/images/running.png b/other-tools/CI-CD/github-actions/images/running.png new file mode 100644 index 00000000..6cc31c24 Binary files /dev/null and b/other-tools/CI-CD/github-actions/images/running.png differ diff --git a/other-tools/CI-CD/github-actions/setup-github-actions-pipeline/index.html b/other-tools/CI-CD/github-actions/setup-github-actions-pipeline/index.html new file mode 100644 index 00000000..7abed540 --- /dev/null +++ b/other-tools/CI-CD/github-actions/setup-github-actions-pipeline/index.html @@ -0,0 +1,3507 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    How to setup GitHub Actions Pipeline

    +

    GitHub Actions gives you the ability to +create workflows to automate the deployment process to OpenShift. GitHub Actions +makes it easy to automate all your CI/CD workflows.

    +

Terminology

    +

GitHub Actions Terminology

    +

    Workflow

    +

    Automation-as-code that you can set up in your repository.

    +

    Events

    +

    30+ workflow triggers, including on schedule and from external systems.

    +

    Actions

    +

    Community-powered units of work that you can use as steps to create a job in a +workflow.

    +

    Deploy an Application to your NERC OpenShift Project

    +
      +
    • +

      Prerequisites

      +

      You must have at least one active NERC-OCP (OpenShift) type resource allocation. +You can refer to this documentation +on how to get allocation and request "NERC-OCP (OpenShift)" type resource allocations.

      +
    • +
    +

    Steps

    +
      +
    1. +

      Access to the NERC's OpenShift Container Platform at https://console.apps.shift.nerc.mghpcc.org +as described here. +To get access to NERC's OCP web console you need to be part of ColdFront's active +allocation.

      +
    2. +
    3. +

      Setup the OpenShift CLI Tools locally and configure the OpenShift CLI to +enable oc commands. Refer to this user guide.

      +
    4. +
    5. +

      Setup Github CLI on your local machine as described here +and verify you are able to run gh commands as shown below:

      +

      GitHub CLI Setup

      +
    6. +
    7. +

      Fork the simple-node-app App in your own Github:

      +

      This application runs a simple node.js server and serves up some static routes +with some static responses. This demo shows a simple container based app can +easily be bootstrapped onto your NERC OpenShift project space.

      +
      +

      Very Important Information

      +

      As you won't have full access to this repository, +we recommend first forking the repository on your own GitHub account. So, +you'll need to update all references to https://github.com/nerc-project/simple-node-app.git +to point to your own forked repository.

      +
      +

      To create a fork of the example simple-node-app repository:

      +
        +
      1. +

        Go to https://github.com/nerc-project/simple-node-app.

        +
      2. +
      3. +

        Cick the "Fork" button to create a fork in your own GitHub account, e.g. "https://github.com/<github_username>/simple-node-app".

        +
      4. +
      +
    8. +
    9. +

      Clone the simple-node-app git repository:

      +
      git clone https://github.com/<github_username>/simple-node-app.git
      +cd simple-node-app
      +
      +
    10. +
    11. +

Run the setsecret.cmd file if you are using Windows, or the setsecret.sh file if you are using a Linux-based machine. Once executed, verify that the GitHub Secrets are set properly under your GitHub repo's Settings >> Secrets and variables >> Actions as shown here:

      +

      GitHub Secrets

      +
    12. +
    13. +

      Enable and Update GitHub Actions Pipeline on your own forked repo:

      +
        +
      • +

        Enable the OpenShift Workflow in the Actions tab of in your GitHub repository.

        +
      • +
      • +

        Update the provided sample OpenShift workflow YAML file i.e. openshift.yml, +which is located at "https://github.com/<github_username>/simple-node-app/actions/workflows/openshift.yml".

        +
        +

        Very Important Information

        +

Workflow execution on OpenShift pipelines follows these steps:

1. Checkout your repository
2. Perform a container image build
3. Push the built image to the GitHub Container Registry (GHCR) or your preferred Registry
4. Log in to your NERC OpenShift cluster's project space
5. Create an OpenShift app from the image and expose it to the internet

        +
        +
      • +
      +
    14. +
    15. +

      Edit the top-level 'env' section as marked with '🖊️' if the defaults are not +suitable for your project.

      +
    16. +
    17. +

      (Optional) Edit the build-image step to build your project:

      +

      The default build type uses a Dockerfile at the root of the repository, +but can be replaced with a different file, a source-to-image build, or a step-by-step +buildah build.

      +
    18. +
    19. +

      Commit and push the workflow file to your default branch to trigger a workflow +run as shown below:

      +

      GitHub Actions Successfully Complete

      +
    20. +
    21. +

      Verify that you can see the newly deployed application on the NERC's OpenShift +Container Platform at https://console.apps.shift.nerc.mghpcc.org +as described here, +and ensure that it can be browsed properly.

      +

      Application Deployed on NERC OCP

      +
    22. +
    +

    That's it! Every time you commit changes to your GitHub repo, GitHub Actions +will trigger your configured Pipeline, which will ultimately deploy your +application to your own NERC OpenShift Project.
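Since the GitHub CLI is already set up as part of the prerequisites, you can also follow those workflow runs from your terminal, for example:

    # List the most recent workflow runs for the current repository
    gh run list --limit 5

    # Interactively pick a run and follow its progress until it completes
    gh run watch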

    +

    Successfully Deployed Application

    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/CI-CD/images/ci-cd-flow.png b/other-tools/CI-CD/images/ci-cd-flow.png new file mode 100644 index 00000000..28248028 Binary files /dev/null and b/other-tools/CI-CD/images/ci-cd-flow.png differ diff --git a/other-tools/CI-CD/jenkins/images/CICD-in-NERC-Kubernetes.png b/other-tools/CI-CD/jenkins/images/CICD-in-NERC-Kubernetes.png new file mode 100644 index 00000000..8dec58d9 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/CICD-in-NERC-Kubernetes.png differ diff --git a/other-tools/CI-CD/jenkins/images/Github-webhook-events.png b/other-tools/CI-CD/jenkins/images/Github-webhook-events.png new file mode 100644 index 00000000..a15dd523 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/Github-webhook-events.png differ diff --git a/other-tools/CI-CD/jenkins/images/Github-webhook-fields.png b/other-tools/CI-CD/jenkins/images/Github-webhook-fields.png new file mode 100644 index 00000000..a28bb070 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/Github-webhook-fields.png differ diff --git a/other-tools/CI-CD/jenkins/images/Github-webhook.png b/other-tools/CI-CD/jenkins/images/Github-webhook.png new file mode 100644 index 00000000..ac144181 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/Github-webhook.png differ diff --git a/other-tools/CI-CD/jenkins/images/Jenkins-pipeline-build-success.png b/other-tools/CI-CD/jenkins/images/Jenkins-pipeline-build-success.png new file mode 100644 index 00000000..2c97122e Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/Jenkins-pipeline-build-success.png differ diff --git a/other-tools/CI-CD/jenkins/images/Jenkins-pipeline-script.png b/other-tools/CI-CD/jenkins/images/Jenkins-pipeline-script.png new file mode 100644 index 00000000..5de93e28 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/Jenkins-pipeline-script.png differ diff --git a/other-tools/CI-CD/jenkins/images/add-credentials.png b/other-tools/CI-CD/jenkins/images/add-credentials.png new file mode 100644 index 00000000..1c0eb90d Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/add-credentials.png differ diff --git a/other-tools/CI-CD/jenkins/images/adding-Jenkins-pipeline.png b/other-tools/CI-CD/jenkins/images/adding-Jenkins-pipeline.png new file mode 100644 index 00000000..b1cbdfa4 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/adding-Jenkins-pipeline.png differ diff --git a/other-tools/CI-CD/jenkins/images/adding-github-build-trigger.png b/other-tools/CI-CD/jenkins/images/adding-github-build-trigger.png new file mode 100644 index 00000000..a51d90f9 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/adding-github-build-trigger.png differ diff --git a/other-tools/CI-CD/jenkins/images/all-credentials.png b/other-tools/CI-CD/jenkins/images/all-credentials.png new file mode 100644 index 00000000..9001fa8a Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/all-credentials.png differ diff --git a/other-tools/CI-CD/jenkins/images/console-output-build-now.png b/other-tools/CI-CD/jenkins/images/console-output-build-now.png new file mode 100644 index 00000000..2657c9ed Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/console-output-build-now.png differ diff --git a/other-tools/CI-CD/jenkins/images/customize-jenkins-installing-plugins.png b/other-tools/CI-CD/jenkins/images/customize-jenkins-installing-plugins.png new file mode 100644 index 00000000..ee0b7092 Binary files /dev/null and 
b/other-tools/CI-CD/jenkins/images/customize-jenkins-installing-plugins.png differ diff --git a/other-tools/CI-CD/jenkins/images/deployed-app-on-k8s-node.png b/other-tools/CI-CD/jenkins/images/deployed-app-on-k8s-node.png new file mode 100644 index 00000000..fa25092b Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/deployed-app-on-k8s-node.png differ diff --git a/other-tools/CI-CD/jenkins/images/docker-hub-credentials.png b/other-tools/CI-CD/jenkins/images/docker-hub-credentials.png new file mode 100644 index 00000000..fc17a5fd Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/docker-hub-credentials.png differ diff --git a/other-tools/CI-CD/jenkins/images/github-settings.png b/other-tools/CI-CD/jenkins/images/github-settings.png new file mode 100644 index 00000000..3325084b Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/github-settings.png differ diff --git a/other-tools/CI-CD/jenkins/images/global-credentials.png b/other-tools/CI-CD/jenkins/images/global-credentials.png new file mode 100644 index 00000000..7d1fddf3 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/global-credentials.png differ diff --git a/other-tools/CI-CD/jenkins/images/install-docker-pipeline-plugin.png b/other-tools/CI-CD/jenkins/images/install-docker-pipeline-plugin.png new file mode 100644 index 00000000..2e131429 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/install-docker-pipeline-plugin.png differ diff --git a/other-tools/CI-CD/jenkins/images/install-kubernetes-cli.png b/other-tools/CI-CD/jenkins/images/install-kubernetes-cli.png new file mode 100644 index 00000000..486e2d5c Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/install-kubernetes-cli.png differ diff --git a/other-tools/CI-CD/jenkins/images/installed-jenkins-plugins.png b/other-tools/CI-CD/jenkins/images/installed-jenkins-plugins.png new file mode 100644 index 00000000..4312ef94 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/installed-jenkins-plugins.png differ diff --git a/other-tools/CI-CD/jenkins/images/jenkins-admin-login.png b/other-tools/CI-CD/jenkins/images/jenkins-admin-login.png new file mode 100644 index 00000000..81002f28 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/jenkins-admin-login.png differ diff --git a/other-tools/CI-CD/jenkins/images/jenkins-continue-as-admin.png b/other-tools/CI-CD/jenkins/images/jenkins-continue-as-admin.png new file mode 100644 index 00000000..237e005b Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/jenkins-continue-as-admin.png differ diff --git a/other-tools/CI-CD/jenkins/images/jenkins-get-started.png b/other-tools/CI-CD/jenkins/images/jenkins-get-started.png new file mode 100644 index 00000000..5883de0a Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/jenkins-get-started.png differ diff --git a/other-tools/CI-CD/jenkins/images/jenkins-pipeline-build.png b/other-tools/CI-CD/jenkins/images/jenkins-pipeline-build.png new file mode 100644 index 00000000..f66c0bdb Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/jenkins-pipeline-build.png differ diff --git a/other-tools/CI-CD/jenkins/images/jenkins-pipeline-from-git.png b/other-tools/CI-CD/jenkins/images/jenkins-pipeline-from-git.png new file mode 100644 index 00000000..c6b57541 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/jenkins-pipeline-from-git.png differ diff --git a/other-tools/CI-CD/jenkins/images/jenkins_admin_password.png b/other-tools/CI-CD/jenkins/images/jenkins_admin_password.png new 
file mode 100644 index 00000000..14b64f3e Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/jenkins_admin_password.png differ diff --git a/other-tools/CI-CD/jenkins/images/jenkins_store.png b/other-tools/CI-CD/jenkins/images/jenkins_store.png new file mode 100644 index 00000000..ea421a25 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/jenkins_store.png differ diff --git a/other-tools/CI-CD/jenkins/images/kubernetes-config-secret-file.png b/other-tools/CI-CD/jenkins/images/kubernetes-config-secret-file.png new file mode 100644 index 00000000..44dc44ee Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/kubernetes-config-secret-file.png differ diff --git a/other-tools/CI-CD/jenkins/images/manage_credentials.png b/other-tools/CI-CD/jenkins/images/manage_credentials.png new file mode 100644 index 00000000..08bf5752 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/manage_credentials.png differ diff --git a/other-tools/CI-CD/jenkins/images/plugins-installation.png b/other-tools/CI-CD/jenkins/images/plugins-installation.png new file mode 100644 index 00000000..297b5ec0 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/plugins-installation.png differ diff --git a/other-tools/CI-CD/jenkins/images/security_groups_jenkins.png b/other-tools/CI-CD/jenkins/images/security_groups_jenkins.png new file mode 100644 index 00000000..d7689633 Binary files /dev/null and b/other-tools/CI-CD/jenkins/images/security_groups_jenkins.png differ diff --git a/other-tools/CI-CD/jenkins/integrate-your-GitHub-repository/index.html b/other-tools/CI-CD/jenkins/integrate-your-GitHub-repository/index.html new file mode 100644 index 00000000..5997ac9e --- /dev/null +++ b/other-tools/CI-CD/jenkins/integrate-your-GitHub-repository/index.html @@ -0,0 +1,3351 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    How to Integrate Your GitHub Repository to Your Jenkins Project

    +

This page explains how to add a GitHub webhook to your Jenkins pipeline, which saves time and keeps your deployed project up to date automatically.

    +
    +

    Prerequisite

    +

    You need to have setup CI/CD Pipelines on NERC's OpenStack by following +this document.

    +
    +

    What is a webhook?

    +

A webhook is an HTTP callback: a simple event notification delivered via an HTTP POST whenever something happens. GitHub provides its own webhook options for such tasks.

    +

    Configuring GitHub

    +

    Let's see how to configure and add a webhook in GitHub:

    +
      +
    1. +

      Go to your forked GitHub project repository.

      +
    2. +
    3. +

      Click on "Settings". in the right corner as shown below:

      +

      GitHub Settings

      +
    4. +
    5. +

      Click on "Webhooks" and then "Click "Add webhooks".

      +

      Github webhook

      +
    6. +
    7. +

      In the "Payload URL" field paste your Jenkins environment URL. At the end of this +URL add /github-webhook/ using http://<Floating-IP>:8080/github-webhook/ +i.e. http://199.94.60.4:8080/github-webhook/. +Select "Content type" as "application/json" and leave the "Secret" field empty.

      +

      Github webhook fields

      +
    8. +
    9. +

      In the page "Which events would you like to trigger this webhook?" select the +option "Let me select individual events". Then, check "Pull Requests" and "Pushes". +At the end of this option, make sure that the "Active" option is checked and then +click on "Add webhook" button.

      +

      Github webhook events

      +
    10. +
    +

We're done with the configuration on GitHub's side! Now let's configure the Jenkins side to use this webhook.
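Before relying on the webhook, it can be useful to confirm that the Jenkins endpoint is reachable from the internet. A quick check from any machine with internet access (Jenkins may reply with an error about the missing GitHub payload, which still confirms that the URL is reachable):

    # Replace <Floating-IP> with your Jenkins server's floating IP
    curl -i -X POST http://<Floating-IP>:8080/github-webhook/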

    +

That's it! In this way we can add a webhook to our job and ensure that every time you commit changes to your GitHub repo, GitHub will trigger your new Jenkins job, since we already set up "GitHub hook trigger for GITScm polling" for our Jenkins pipeline previously.

    +
diff --git a/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline/index.html b/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline/index.html new file mode 100644 index 00000000..3eed4cd0 --- /dev/null +++ b/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline/index.html @@ -0,0 +1,3782 @@

    How to Set Up Jenkins Pipeline on a VM

    +

This document will walk you through how to set up a minimal "CI/CD Pipeline To Deploy To Kubernetes Cluster Using a CI/CD tool called Jenkins" on NERC's OpenStack environment. Jenkins uses the Kubernetes control plane of your K8s cluster to run pipeline tasks, enabling DevOps teams to spend more time coding and testing and less time troubleshooting.

    +
    +

    Prerequisite

    +

You need a Kubernetes cluster running in your OpenStack environment. To set up your K8s cluster, please read this.

    +
    +

    CI/CD Pipeline on NERC +Figure: CI/CD Pipeline To Deploy To Kubernetes Cluster Using Jenkins on NERC

    +

    Setup a Jenkins Server VM

    +
      +
    • +

      Launch 1 Linux machine based on ubuntu-20.04-x86_64 and cpu-su.2 flavor with +2vCPU, 8GB RAM, and 20GB storage.

      +
    • +
    • +

      Make sure you have added rules in the +Security Groups +to allow ssh using Port 22 access to the instance.

      +
    • +
    • +

      Setup a new Security Group with the following rules exposing port 8080 and +attach it to your new instance.

      +

      Jenkins Server Security Group

      +
    • +
    • +

      Assign a Floating IP +to your new instance so that you will be able to ssh into this machine:

      +
      ssh ubuntu@<Floating-IP> -A -i <Path_To_Your_Private_Key>
      +
      +

      For example:

      +
      ssh ubuntu@199.94.60.4 -A -i cloud.key
      +
      +
    • +
    +

Once you have successfully SSH'ed into the machine, install the following dependencies:

    +
    +

    Very Important

    +

    Run the following steps as non-root user i.e. ubuntu.

    +
    +
      +
    • +

      Update the repositories and packages:

      +
      sudo apt-get update && sudo apt-get upgrade -y
      +
      +
    • +
    • +

      Turn off swap

      +
sudo swapoff -a
      +sudo sed -i '/ swap / s/^/#/' /etc/fstab
      +
      +
    • +
    • +

      Install curl and apt-transport-https

      +
      sudo apt-get update && sudo apt-get install -y apt-transport-https curl
      +
      +
    • +
    +
    +

    Download and install the latest version of Docker CE

    +
      +
    • +

      Download and install Docker CE:

      +
      curl -fsSL https://get.docker.com -o get-docker.sh
      +sudo sh get-docker.sh
      +
      +
    • +
    • +

      Configure the Docker daemon:

      +
      sudo usermod -aG docker $USER && newgrp docker
      +
      +
    • +
    +
    +

    Install kubectl

    +

    kubectl: the command line util to talk to your cluster.

    +
      +
    • +

      Download the Google Cloud public signing key and add key to verify releases

      +
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      +
      +
    • +
    • +

      add kubernetes apt repo

      +
      cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
      +deb https://apt.kubernetes.io/ kubernetes-xenial main
      +EOF
      +
      +
    • +
    • +

      Install kubectl

      +
      sudo apt-get update
      +sudo apt-get install -y kubectl
      +
      +
    • +
    • +

      apt-mark hold is used so that these packages will not be updated/removed automatically

      +
      sudo apt-mark hold kubectl
      +
      +
    • +
    +
    +

    Install a Jenkins Server using Docker

    +

    To install a Jenkins server using Docker run the following command:

    +
    docker run -u 0 --privileged --name jenkins -it -d -p 8080:8080 -p 50000:50000 \
    +    -v /var/run/docker.sock:/var/run/docker.sock \
    +    -v $(which docker):/usr/bin/docker \
    +    -v $(which kubectl):/usr/bin/kubectl \
    +    -v /home/jenkins_home:/var/jenkins_home \
    +    jenkins/jenkins:latest
    +
    +

Once the docker run command succeeds, browse to http://<Floating-IP>:8080. This page will show you where to get the initial Administrator password to get started, i.e. /var/jenkins_home/secrets/initialAdminPassword, as shown below:

    +

    Jenkins Successfully Installed

    +

The /var/jenkins_home directory in the Jenkins docker container is a volume mounted from the host's /home/jenkins_home, so you can simply read /home/jenkins_home/secrets/initialAdminPassword on the host machine you SSH'ed into to get the same content as /var/jenkins_home/secrets/initialAdminPassword.
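In other words, either of the following commands, run on the Jenkins VM, should print the initial admin password; the container name jenkins comes from the docker run command above:

    # Read the password from the volume mounted on the host
    sudo cat /home/jenkins_home/secrets/initialAdminPassword

    # Or read it from inside the running container
    docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword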

    +
    +

    Initial Admin Password

    +

    If you can't find the Admin password at /var/jenkins_home/secrets/initialAdminPassword, +then try to locate it at its original location, i.e. /home/jenkins_home/secrets/initialAdminPassword.

    +
    +

OR, you can run docker ps on the machine where you run the Jenkins server, note the name of the docker container, and then run docker logs -f <jenkins_docker_container_name>. This will show the initial Administrator password in the terminal, which you can copy and paste into the web GUI in the browser.

    +
    +

    Initial Admin Password

    +

    When you run docker logs -f <jenkins_docker_container_name>, the initial +password for the "Admin" user can be found between the rows of asterisks +as shown below: +Initial Admin Password

    +
    +
      +
    • +

Once you log in to the Jenkins web UI by entering the admin password shown in the CLI terminal, click on the "Install suggested plugins" button as shown below:

      +

      Install Customize Jenkins Plugins

      +

      Customize Jenkins Installing Plugins

      +

      Continue by selecting 'Skip and continue as admin' first as shown below:

      +

      Skip and continue as admin

      +

      Then click the 'Save and Finish' button as shown below and then, Jenkins is ready +to use.

      +

      Jenkins Get Started

      +
    • +
    +

    Install the required Plugins

    +
      +
    • +

      Jenkins has a wide range of plugin options. From your Jenkins dashboard navigate +to "Manage Jenkins > Manage Plugins" as shown below:

      +

      Jenkins Plugin Installation

      +

      Select the "Available" tab and then locate Docker pipeline by searching +and then click "Install without restart" button as shown below:

      +

      Jenkins Required Plugin To Install

      +

      Also, install the Kubernetes CLI plugin that allows you to configure kubectl +commands on Jenkinsfile to interact with Kubernetes clusters as shown below:

      +

      Install Kubernetes CLI

      +
    • +
    +

    Create the required Credentials

    +
      +
    • +

      Create a global credential for your Docker Hub Registry by providing the username +and password that will be used by the Jenkins pipelines:

      +
        +
      1. +

        Click on the "Manage Jenkins" menu and then click on the "Manage Credentials" + link as shown below:

        +

        Manage Credentials

        +
      2. +
      3. +

        Click on Jenkins Store as shown below:

        +

        Jenkins Store

        +
      4. +
      5. +

        The credentials can be added by clicking the 'Add Credentials' button in +the left pane.

        +

        Adding Credentials

        +
      6. +
      +
    • +
    • +

      First, add the 'DockerHub' credentials as 'Username with password' with the +ID dockerhublogin.

      +

      a. Select the Kind "Username with password" from the dropdown options.

      +

      b. Provide your Docker Hub Registry's username and password.

      +

c. Give it an ID and a short description. The ID is very important, as it will need to be specified in your Jenkinsfile, i.e. dockerhublogin.

      +

      Docker Hub Credentials

      +
    • +
    • +

Configure the 'Kubeconfig' credential as a 'Secret file' that holds the kubeconfig file from the K8s master, i.e. located at /etc/kubernetes/admin.conf, with the ID 'kubernetes'.

      +

      a. Click on the "Add Credentials" button in the left pane.

      +

      b. Select the Kind "Secret file" from the dropdown options.

      +

c. In the File section, choose a config file that contains the EXACT content of your K8s master's kubeconfig file located at /etc/kubernetes/admin.conf (see the snippet after this list for one way to fetch it).

      +

d. Give it an ID and a description that you will use in your Jenkinsfile, i.e. kubernetes.

      +

      Kubernetes Configuration Credentials

      +

e. Once both credentials are successfully added, they are shown as follows:

      +

      Jenkins All Credentials

      +
    • +
    +
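One possible way to fetch that kubeconfig file onto your workstation before uploading it as a Secret file, assuming you can SSH to the K8s master node as the ubuntu user:

    # Copy the admin kubeconfig from the K8s master to a local file
    ssh ubuntu@<k8s-master-floating-ip> "sudo cat /etc/kubernetes/admin.conf" > admin.conf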

    Fork the nodeapp App in your own Github

    +
    +

    Very Important Information

    +

    As you won't have full access to this repository, +we recommend first forking the repository on your own GitHub account. So, you'll +need to update all references to https://github.com/nerc-project/nodeapp.git +to point to your own forked repository.

    +
    +

    To create a fork of the example nodeapp repository:

    +
      +
    1. +

      Go to https://github.com/nerc-project/nodeapp.

      +
    2. +
    3. +

      Cick the "Fork" button to create a fork in your own GitHub account, e.g. "https://github.com/<github_username>/nodeapp".

      +
    4. +
    5. +

      Review the "Jenkinsfile" that is included at the root of the forked git repo.

      +
    6. +
    +
    +

    Very Important Information

    +

    A sample Jenkinsfile is available at the root of our demo application's Git +repository, which we can reference in our Jenkins pipeline steps. For example, +in this case, we are using this repository +where our demo Node.js application resides.

    +
    +

    Modify the Jenkins Declarative Pipeline Script file

    +
      +
    • +

      Modify the provided ‘Jenkinsfile’ to specify your own Docker Hub account and +github repository as specified in "<dockerhub_username>" and "<github_username>".

      +
      +

      Very Important Information

      +

      You need to replace "<dockerhub_username>" and "<github_username>" +with your actual DockerHub and GitHub usernames, respectively. Also, +ensure that the global credentials IDs mentioned above match those used +during the credential saving steps mentioned earlier. For instance, +dockerhublogin corresponds to the DockerHub ID saved during the +credential saving process for your Docker Hub Registry's username and +password. Similarly, kubernetes corresponds to the 'Kubeconfig' ID +assigned for the Kubeconfig credential file.

      +
      +
    • +
    • +

      Below is an example of a Jenkins declarative Pipeline Script file:

      +
      pipeline {
      +
      +  environment {
      +    dockerimagename = "<dockerhub_username>/nodeapp:${env.BUILD_NUMBER}"
      +    dockerImage = ""
      +  }
      +
      +  agent any
      +
      +  stages {
      +
      +    stage('Checkout Source') {
      +      steps {
      +        git branch: 'main', url: 'https://github.com/<github_username>/nodeapp.git'
      +      }
      +    }
      +
      +    stage('Build image') {
      +      steps{
      +        script {
      +          dockerImage = docker.build dockerimagename
      +        }
      +      }
      +    }
      +
      +    stage('Pushing Image') {
      +      environment {
      +        registryCredential = 'dockerhublogin'
      +      }
      +      steps{
      +        script {
      +          docker.withRegistry('https://registry.hub.docker.com', registryCredential){
      +            dockerImage.push()
      +          }
      +        }
      +      }
      +    }
      +
      +    stage('Docker Remove Image') {
      +      steps {
      +        sh "docker rmi -f ${dockerimagename}"
      +        sh "docker rmi -f registry.hub.docker.com/${dockerimagename}"
      +      }
      +    }
      +
      +    stage('Deploying App to Kubernetes') {
      +      steps {
      +        sh "sed -i 's/nodeapp:latest/nodeapp:${env.BUILD_NUMBER}/g' deploymentservice.yml"
      +        withKubeConfig([credentialsId: 'kubernetes']) {
      +          sh 'kubectl apply -f deploymentservice.yml'
      +        }
      +      }
      +    }
      +  }
      +}
      +
      +
    • +
    +
    +

    Other way to Generate Pipeline Jenkinsfile

    +

You can generate your custom Jenkinsfile by clicking the "Pipeline Syntax" link that is shown when you create a new Pipeline via the "New Item" menu link.

    +
    +

    Setup a Pipeline

    +
      +
    • +

      Once you review the provided Jenkinsfile and understand the stages, +you can now create a pipeline to trigger it on your newly setup Jenkins server:

      +

      a. Click on the "New Item" link.

      +

      b. Select the "Pipeline" link.

      +

c. Give a name to your Pipeline, i.e. “jenkins-k8s-pipeline”.

      +

      Adding Jenkins Credentials

      +

      d. Select "Build Triggers" tab and then select +Github hook tirgger for GITScm polling as shown below:

      +

      Adding Github Build Trigger

      +

      e. Select "Pipeline" tab and then select the "Pipeline script from SCM" from +the dropdown options. Then you need to specify the Git as SCM and also "Repository +URL" for your public git repo and also specify your branch and Jenkinsfile's +name as shown below:

      +

      Add Jenkins Pipeline Script From Git

      +

OR, you can copy/paste the contents of your Jenkinsfile into the given textbox. In that case, make sure you select "Pipeline script" from the dropdown options.

      +

      Add Jenkins Pipeline Script Content

      +

      f. Click on "Save" button.

      +
    • +
    +

    How to manually Trigger the Pipeline

    +
      +
    • +

Finally, click the "Build Now" menu link in the right-side navigation. This triggers the Pipeline process, i.e. build the docker image, push the image to your Docker Hub Registry, pull the image from the Docker Registry, remove the local Docker images, and then deploy to the K8s Cluster as shown below:

      +

      Jenkins Pipeline Build Now

      +

Once you see that the deployment to your K8s Cluster is successful, you can browse the output using http://<Floating-IP>:<NodePort> as shown below (see the note after this list for how to look up the NodePort):

      +

      K8s Deployed Node App

      +

You can see the Console Output logs of this pipeline process by clicking the icon in front of the ID of the started Pipeline in the bottom-right corner.

      +

      Jenkins console

      +

After successful completion, the pipeline stages look like the following:

      +

      Jenkins Pipeline Stages Run Successful

      +
    • +
    +
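If you are unsure which NodePort was assigned to the deployed service, a quick way to look it up from any machine configured with the cluster's kubeconfig (the service itself is defined in the repository's deploymentservice.yml, so its name may differ):

    # For NodePort services, the PORT(S) column shows <port>:<NodePort>/TCP
    kubectl get svc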

In the next documentation, we will continue with how to set up a GitHub webhook in your Jenkins pipeline so that Jenkins triggers the build whenever code is committed to the specific branch of your GitHub repository.

    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/apache-spark/images/launch-multiple-worker-instances.png b/other-tools/apache-spark/images/launch-multiple-worker-instances.png new file mode 100644 index 00000000..5abe9735 Binary files /dev/null and b/other-tools/apache-spark/images/launch-multiple-worker-instances.png differ diff --git a/other-tools/apache-spark/images/spark-completed-applications.png b/other-tools/apache-spark/images/spark-completed-applications.png new file mode 100644 index 00000000..a2a82f86 Binary files /dev/null and b/other-tools/apache-spark/images/spark-completed-applications.png differ diff --git a/other-tools/apache-spark/images/spark-nodes.png b/other-tools/apache-spark/images/spark-nodes.png new file mode 100644 index 00000000..3fc57ba7 Binary files /dev/null and b/other-tools/apache-spark/images/spark-nodes.png differ diff --git a/other-tools/apache-spark/images/spark-running-applications.png b/other-tools/apache-spark/images/spark-running-applications.png new file mode 100644 index 00000000..8b2c3235 Binary files /dev/null and b/other-tools/apache-spark/images/spark-running-applications.png differ diff --git a/other-tools/apache-spark/images/spark-web-ui.png b/other-tools/apache-spark/images/spark-web-ui.png new file mode 100644 index 00000000..3f665f29 Binary files /dev/null and b/other-tools/apache-spark/images/spark-web-ui.png differ diff --git a/other-tools/apache-spark/spark/index.html b/other-tools/apache-spark/spark/index.html new file mode 100644 index 00000000..b5fe9bd3 --- /dev/null +++ b/other-tools/apache-spark/spark/index.html @@ -0,0 +1,3747 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Apache Spark Cluster Setup on NERC OpenStack

    +

    Apache Spark Overview

    +

Apache Spark is increasingly recognized as the primary analysis suite for big data, particularly among Python users. Spark offers a robust Python API and includes several valuable built-in libraries such as MLlib for machine learning and Spark Streaming for real-time analysis. In contrast to Apache Hadoop, Spark performs most computations in main memory, which boosts performance.

    +

Many modern computational tasks utilize the MapReduce parallel paradigm. This computational process comprises two stages: Map and Reduce. Before task execution, all data is distributed across the nodes of the cluster. During the "Map" stage, the master node dispatches the executable task to the other nodes, and each worker processes its respective data. The subsequent "Reduce" stage involves the master node collecting all results from the workers and generating the final results based on the workers' outcomes. Apache Spark also implements this model of computation, which is what gives it its big data processing abilities.

    +

    Apache Spark Cluster Setup

    +

    To get a Spark standalone cluster up and running manually, all you need to do is +spawn some VMs and start Spark as master on one of them and worker on the others. +They will automatically form a cluster that you can connect to/from Python, Java, +and Scala applications using the IP address of the master VM.
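For orientation only, once Spark is unpacked under /usr/local/spark (as done in the steps below), starting the processes typically looks like the following sketch; it assumes Spark's default ports (7077 for the master, 8080 for its web UI) and is not a substitute for the remaining setup steps:

    # On the master VM
    /usr/local/spark/sbin/start-master.sh

    # On each worker VM, pointing at the master's IP address
    /usr/local/spark/sbin/start-worker.sh spark://<master-ip>:7077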

    +

    Setup a Master VM

    +
      +
    • +

      To create a master VM for the first time, ensure that the "Image" dropdown option +is selected. In this example, we selected ubuntu-22.04-x86_64 and the cpu-su.2 +flavor is being used.

      +
    • +
    • +

      Make sure you have added rules in the +Security Groups +to allow ssh using Port 22 access to the instance.

      +
    • +
    • +

      Assign a Floating IP +to your new instance so that you will be able to ssh into this machine:

      +
      ssh ubuntu@<Floating-IP> -A -i <Path_To_Your_Private_Key>
      +
      +

      For example:

      +
      ssh ubuntu@199.94.61.4 -A -i cloud.key
      +
      +
    • +
    • +

Once you have successfully accessed the machine, install the following dependencies:

      +
      sudo apt-get -y update
      +sudo apt install default-jre -y
      +
      +
    • +
    • +

      Download and install Scala:

      +
      wget https://downloads.lightbend.com/scala/2.13.10/scala-2.13.10.deb
      +sudo dpkg -i scala-2.13.10.deb
      +sudo apt-get install scala
      +
      +
      +

      Note

      +

      Installing Scala means installing various command-line tools such as the +Scala compiler and build tools.

      +
      +
    • +
    • +

      Download and unpack Apache Spark:

      +
      SPARK_VERSION="3.4.2"
      +APACHE_MIRROR="dlcdn.apache.org"
      +
      +wget https://$APACHE_MIRROR/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-hadoop3-scala2.13.tgz
      +sudo tar -zxvf spark-$SPARK_VERSION-bin-hadoop3-scala2.13.tgz
      +sudo cp -far spark-$SPARK_VERSION-bin-hadoop3-scala2.13 /usr/local/spark
      +
      +
      +

      Very Important Note

      +

      Please ensure you are using the latest Spark version by modifying the +SPARK_VERSION in the above script. Additionally, verify that the version +exists on the APACHE_MIRROR website. Please note the value of SPARK_VERSION +as you will need it during Preparing Jobs for Execution and Examination.

      +
      +
    • +
    • +

      Create an SSH/RSA Key by running ssh-keygen -t rsa without using any passphrase:

      +
      ssh-keygen -t rsa
      +
      +Generating public/private rsa key pair.
      +Enter file in which to save the key (/home/ubuntu/.ssh/id_rsa):
      +Enter passphrase (empty for no passphrase):
      +Enter same passphrase again:
      +Your identification has been saved in /home/ubuntu/.ssh/id_rsa
      +Your public key has been saved in /home/ubuntu/.ssh/id_rsa.pub
      +The key fingerprint is:
      +SHA256:8i/TVSCfrkdV4+Jyqc00RoZZFSHNj8C0QugmBa7RX7U ubuntu@spark-master
      +The key's randomart image is:
      ++---[RSA 3072]----+
      +|      .. ..o..++o|
      +|     o  o.. +o.+.|
      +|    . +o  .o=+.oo|
      +|     +.oo  +o++..|
      +|    o EoS  .+oo  |
      +|     . o   .+B   |
      +|        .. +O .  |
      +|        o.o..o   |
      +|         o..     |
      ++----[SHA256]-----+
      +
      +
    • +
    • +

      Copy and append the contents of SSH public key i.e. ~/.ssh/id_rsa.pub to +the ~/.ssh/authorized_keys file.

      +
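For example, this can be done with a single command on the master VM:

+
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
+
+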
    • +
    +

    Create a Volume Snapshot of the master VM

    +
      +
    • +

Once you're logged in to NERC's Horizon dashboard, you need to Shut Off the +master VM before creating a volume snapshot.

      +

      Click Action -> Shut Off Instance.

      +

      Status will change to Shutoff.

      +
    • +
    • +

      Then, create a snapshot of its attached volume by clicking on the "Create snapshot" +from the Project -> Volumes -> Volumes as described here.

      +
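The same two steps can also be performed with the OpenStack CLI (a sketch only; the volume name is a placeholder and the client is assumed to be configured with your credentials):

+
openstack server stop spark-master
+# --force is needed while the volume is still marked as in-use (attached)
+openstack volume snapshot create --volume <Master_Volume_Name_or_ID> --force spark-master-snapshot
+
+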
    • +
    +

    Create Two Worker Instances from the Volume Snapshot

    +
      +
    • +

      Once a snapshot is created and is in "Available" status, you can view and manage +it under the Volumes menu in the Horizon dashboard under Volume Snapshots.

      +

      Navigate to Project -> Volumes -> Snapshots.

      +
    • +
    • +

      You have the option to directly launch this volume as an instance by clicking +on the arrow next to "Create Volume" and selecting "Launch as Instance".

      +

      NOTE: Specify Count: 2 to launch 2 instances using the volume snapshot +as shown below:

      +

      Launch 2 Workers From Volume Snapshot

      +
      +

      Naming, Security Group and Flavor for Worker Nodes

      +

      You can specify the "Instance Name" as "spark-worker", and for each instance, +it will automatically append incremental values at the end, such as +spark-worker-1 and spark-worker-2. Also, make sure you have attached +the Security Groups +to allow ssh using Port 22 access to the worker instances.

      +
      +
    • +
    +

    Additionally, during launch, you + will have the option to choose your preferred flavor for the worker nodes, + which can differ from the master VM based on your computational requirements.

    +
      +
    • +

      Navigate to Project -> Compute -> Instances.

      +
    • +
    • +

      Restart the shutdown master VM, click Action -> Start Instance.

      +
    • +
    • +

The final setup for our Spark cluster looks like this, with 1 master node and +2 worker nodes:

      +

      Spark Cluster VMs

      +
    • +
    +

    Configure Spark on the Master VM

    +
      +
    • +

      SSH login into the master VM again.

      +
    • +
    • +

      Update the /etc/hosts file to specify all three hostnames with their corresponding +internal IP addresses.

      +
      sudo nano /etc/hosts
      +
      +

Ensure all hosts are resolvable by adding them to /etc/hosts. You can modify +the following content, specifying each VM's internal IP address, and paste +the updated content at the end of the /etc/hosts file. Alternatively, you +can pipe the content through sudo tee -a /etc/hosts to append it directly to the end of +the file (a plain sudo cat >> /etc/hosts will not work, because the shell performs +the redirection before sudo takes effect); see the example after the entries below.

      +
      <Master-Internal-IP> master
      +<Worker1-Internal-IP> worker1
      +<Worker2-Internal-IP> worker2
      +
      +
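For example, a here-document piped through sudo tee -a appends the entries safely (the IPs shown are placeholders for your VMs' internal addresses):

+
cat <<EOF | sudo tee -a /etc/hosts
+<Master-Internal-IP> master
+<Worker1-Internal-IP> worker1
+<Worker2-Internal-IP> worker2
+EOF
+
+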
      +

      Very Important Note

      +

Make sure to append (>> or tee -a) rather than overwrite (> or tee) so that the +existing content of the file is preserved and the new entries are added at the end.

      +
      +

      For example, the end of the /etc/hosts file looks like this:

      +
      sudo cat /etc/hosts
      +...
      +192.168.0.46 master
      +192.168.0.26 worker1
      +192.168.0.136 worker2
      +
      +
    • +
    • +

      Verify that you can SSH into both worker nodes by using ssh worker1 and +ssh worker2 from the Spark master node's terminal.

      +
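For example, a quick check from the master node (it assumes the same ubuntu user and the key pair set up earlier; answer "yes" when asked to trust each host the first time):

+
ssh worker1 hostname
+ssh worker2 hostname
+
+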
    • +
    • +

      Copy the sample configuration file for the Spark:

      +
      cd /usr/local/spark/conf/
      +cp spark-env.sh.template spark-env.sh
      +
      +
    • +
    • +

      Update the environment variables file i.e. spark-env.sh to include the following +information:

      +
      export SPARK_MASTER_HOST='<Master-Internal-IP>'
      +export JAVA_HOME=<Path_of_JAVA_installation>
      +
      +
      +

      Environment Variables

      +

      Executing this command: readlink -f $(which java) will display the path +to the current Java setup in your VM. For example: +/usr/lib/jvm/java-11-openjdk-amd64/bin/java, you need to remove the +last bin/java part, i.e. /usr/lib/jvm/java-11-openjdk-amd64, to set +it as the JAVA_HOME environment variable. +Learn more about other Spark settings that can be configured through environment +variables here.

      +
      +

      For example:

      +
      echo "export SPARK_MASTER_HOST='192.168.0.46'" >> spark-env.sh
      +echo "export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64" >> spark-env.sh
      +
      +
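If you prefer not to hard-code the path, the same JAVA_HOME value can be derived from the readlink output described above; this is just a convenience sketch, not a required step:

+
# resolves the java binary, strips the trailing bin/java, and records the result
echo "export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))" >> spark-env.sh
+
+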
    • +
    • +

      Source the changed environment variables file i.e. spark-env.sh:

      +
      source spark-env.sh
      +
      +
    • +
    • +

      Create a file named slaves in the Spark configuration directory (i.e., +/usr/local/spark/conf/) that specifies all 3 hostnames (nodes) as specified in +/etc/hosts:

      +
      sudo cat slaves
      +master
      +worker1
      +worker2
      +
      +
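The expected contents shown above can be written with a here-document, for example (a sketch; note that newer Spark 3.x releases ship a conf/workers.template and may expect the file to be named workers rather than slaves, so check your conf/ directory and adjust the filename if needed):

+
cat <<EOF | sudo tee /usr/local/spark/conf/slaves
+master
+worker1
+worker2
+EOF
+
+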
    • +
    +

    Run the Spark cluster from the Master VM

    +
      +
    • +

      SSH into the master VM again if you are not already logged in.

      +
    • +
    • +

      You need to run the Spark cluster from /usr/local/spark:

      +
      cd /usr/local/spark
      +
      +# Start all hosts (nodes) including master and workers
      +./sbin/start-all.sh
      +
      +
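After start-all.sh completes, you can optionally confirm that the master processes are listening (a quick sanity check; it assumes the default ports and may need sudo to show process names):

+
# the standalone master listens on 7077 (cluster traffic) and 8080 (web UI) by default
ss -tlnp | grep -E '7077|8080'
+
+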
      +

      How to Stop All Spark Cluster

      +

      To stop all of the Spark cluster nodes, execute ./sbin/stop-all.sh +command from /usr/local/spark.

      +
      +
    • +
    +

    Connect to the Spark WebUI

    +

    Apache Spark provides a suite of +web user interfaces (WebUIs) +that you can use to monitor the status and resource consumption of your Spark cluster.

    +
    +

    Different types of Spark Web UI

    +

    Apache Spark provides different web UIs: Master web UI, Worker web UI, +and Application web UI.

    +
    +
      +
    • +

      You can connect to the Master web UI using +SSH Port Forwarding, aka SSH Tunneling +i.e. Local Port Forwarding from your local machine's terminal by running:

      +
      ssh -N -L <Your_Preferred_Port>:localhost:8080 <User>@<Floating-IP> -i <Path_To_Your_Private_Key>
      +
      +

      Here, you can choose any port that is available on your machine as <Your_Preferred_Port> +and then master VM's assigned Floating IP as <Floating-IP> and associated +Private Key pair attached to the VM as <Path_To_Your_Private_Key>.

      +

      For example:

      +
      ssh -N -L 8080:localhost:8080 ubuntu@199.94.61.4 -i ~/.ssh/cloud.key
      +
      +
    • +
    • +

      Once the SSH Tunneling is successful, please do not close or stop the terminal +where you are running the SSH Tunneling. Instead, log in to the Master web UI +using your web browser: http://localhost:<Your_Preferred_Port> i.e. http://localhost:8080.

      +
    • +
    +

    The Master web UI offers an overview of the Spark cluster, showcasing the following +details:

    +
      +
    • Master URL and REST URL
    • +
    • Available CPUs and memory for the Spark cluster
    • +
    • Status and allocated resources for each worker
    • +
    • Details on active and completed applications, including their status, resources, +and duration
    • +
    • Details on active and completed drivers, including their status and resources
    • +
    +

    The Master web UI appears as shown below when you navigate to http://localhost:<Your_Preferred_Port> +i.e. http://localhost:8080 from your web browser:

    +

    The Master web UI

    +

    The Master web UI also provides an overview of the applications. Through the +Master web UI, you can easily identify the allocated vCPU (Core) and memory +resources for both the Spark cluster and individual applications.

    +

    Preparing Jobs for Execution and Examination

    +
      +
    • +

      To run jobs from /usr/local/spark, execute the following commands:

      +
      cd /usr/local/spark
      +SPARK_VERSION="3.4.2"
      +
      +
      +

      Very Important Note

      +

      Please ensure you are using the same Spark version that you have +downloaded and installed previously as the value +of SPARK_VERSION in the above script.

      +
      +
    • +
    • +

      Single Node Job:

      +

      Let's quickly start to run a simple job:

      +
      ./bin/spark-submit --driver-memory 2g --class org.apache.spark.examples.SparkPi examples/jars/spark-examples_2.13-$SPARK_VERSION.jar 50
      +
      +
    • +
    • +

      Cluster Mode Job:

      +

      Let's submit a longer and more complex job with many tasks that will be +distributed among the multi-node cluster, and then view the Master web UI:

      +
      ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://master:7077 examples/jars/spark-examples_2.13-$SPARK_VERSION.jar 1000
      +
      +

      While the job is running, you will see a similar view on the Master web UI under +the "Running Applications" section:

      +

      Spark Running Application

      +

      Once the job is completed, it will show up under the "Completed Applications" +section on the Master web UI as shown below:

      +

      Spark Completed Application

      +
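Spark also ships with Python examples in the same distribution, so (assuming the cluster is still running) an equivalent PySpark job can be submitted the same way:

+
./bin/spark-submit --master spark://master:7077 examples/src/main/python/pi.py 100
+
+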
    • +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/index.html b/other-tools/index.html new file mode 100644 index 00000000..99d77088 --- /dev/null +++ b/other-tools/index.html @@ -0,0 +1,3391 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Kubernetes

    + +

    i. Kubernetes Development environment

    +
      +
    1. Minikube
    2. +
    3. Kind
    4. +
    5. MicroK8s
    6. +
    7. +

      K3s

      +

5.a. K3s with High Availability (HA) setup

      +

      5.b. Multi-master HA K3s cluster using k3sup

      +

      5.c. Single-Node K3s Cluster using k3d

      +

      5.d. Multi-master K3s cluster setup using k3d

      +
    8. +
    9. +

      k0s

      +
    10. +
    +

    ii. Kubernetes Production environment

    +
      +
    1. +

      Kubeadm

      +

      1.a. Bootstrapping cluster with kubeadm

      +

      1.b. Creating a HA cluster with kubeadm

      +
    2. +
    3. +

      Kubespray

      +
    4. +
    +
    +

CI/CD Tools

    + +

    Apache Spark

    + + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/kubernetes/comparisons/index.html b/other-tools/kubernetes/comparisons/index.html new file mode 100644 index 00000000..35e515ef --- /dev/null +++ b/other-tools/kubernetes/comparisons/index.html @@ -0,0 +1,3301 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Comparison

    +

k3s vs microk8s Comparison

    +

    Kubespray vs Kubeadm

    +

Kubeadm provides domain knowledge of +Kubernetes clusters' life-cycle management, including self-hosted layouts, +dynamic discovery services, and so on. Had it belonged to the new +operators world, it might +have been named a "Kubernetes cluster operator". Kubespray, however, performs generic +configuration management tasks from the "OS operators" Ansible world, plus some +initial K8s clustering (with networking plugins included) and control-plane bootstrapping.

    +

Since v2.3, Kubespray has used kubeadm internally for cluster creation, in order to +consume life-cycle management domain knowledge from kubeadm and to offload +generic OS configuration tasks from it, which benefits both sides.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/kubernetes/images/control_plane_ports_protocols.png b/other-tools/kubernetes/images/control_plane_ports_protocols.png new file mode 100644 index 00000000..9ad79950 Binary files /dev/null and b/other-tools/kubernetes/images/control_plane_ports_protocols.png differ diff --git a/other-tools/kubernetes/images/crc_security_group.png b/other-tools/kubernetes/images/crc_security_group.png new file mode 100644 index 00000000..a262f70c Binary files /dev/null and b/other-tools/kubernetes/images/crc_security_group.png differ diff --git a/other-tools/kubernetes/images/k3d-cluster-info.png b/other-tools/kubernetes/images/k3d-cluster-info.png new file mode 100644 index 00000000..d081ebd8 Binary files /dev/null and b/other-tools/kubernetes/images/k3d-cluster-info.png differ diff --git a/other-tools/kubernetes/images/k3d-cluster-list.png b/other-tools/kubernetes/images/k3d-cluster-list.png new file mode 100644 index 00000000..e76cb3ae Binary files /dev/null and b/other-tools/kubernetes/images/k3d-cluster-list.png differ diff --git a/other-tools/kubernetes/images/k3d-nodes-list.png b/other-tools/kubernetes/images/k3d-nodes-list.png new file mode 100644 index 00000000..4645660c Binary files /dev/null and b/other-tools/kubernetes/images/k3d-nodes-list.png differ diff --git a/other-tools/kubernetes/images/k3d_added_new_node.png b/other-tools/kubernetes/images/k3d_added_new_node.png new file mode 100644 index 00000000..ce4f659c Binary files /dev/null and b/other-tools/kubernetes/images/k3d_added_new_node.png differ diff --git a/other-tools/kubernetes/images/k3d_all.png b/other-tools/kubernetes/images/k3d_all.png new file mode 100644 index 00000000..22170759 Binary files /dev/null and b/other-tools/kubernetes/images/k3d_all.png differ diff --git a/other-tools/kubernetes/images/k3d_ha_all.png b/other-tools/kubernetes/images/k3d_ha_all.png new file mode 100644 index 00000000..cf39f30c Binary files /dev/null and b/other-tools/kubernetes/images/k3d_ha_all.png differ diff --git a/other-tools/kubernetes/images/k3d_ha_nodes.png b/other-tools/kubernetes/images/k3d_ha_nodes.png new file mode 100644 index 00000000..30fae943 Binary files /dev/null and b/other-tools/kubernetes/images/k3d_ha_nodes.png differ diff --git a/other-tools/kubernetes/images/k3d_ha_pods.png b/other-tools/kubernetes/images/k3d_ha_pods.png new file mode 100644 index 00000000..912f3765 Binary files /dev/null and b/other-tools/kubernetes/images/k3d_ha_pods.png differ diff --git a/other-tools/kubernetes/images/k3d_nodes.png b/other-tools/kubernetes/images/k3d_nodes.png new file mode 100644 index 00000000..41dd50cb Binary files /dev/null and b/other-tools/kubernetes/images/k3d_nodes.png differ diff --git a/other-tools/kubernetes/images/k3d_restarted_node.png b/other-tools/kubernetes/images/k3d_restarted_node.png new file mode 100644 index 00000000..0dcc25a4 Binary files /dev/null and b/other-tools/kubernetes/images/k3d_restarted_node.png differ diff --git a/other-tools/kubernetes/images/k3d_self_healing_ha_nodes.png b/other-tools/kubernetes/images/k3d_self_healing_ha_nodes.png new file mode 100644 index 00000000..2d51b04e Binary files /dev/null and b/other-tools/kubernetes/images/k3d_self_healing_ha_nodes.png differ diff --git a/other-tools/kubernetes/images/k3s-vs-microk8s.png b/other-tools/kubernetes/images/k3s-vs-microk8s.png new file mode 100644 index 00000000..c832aedf Binary files /dev/null and b/other-tools/kubernetes/images/k3s-vs-microk8s.png differ diff --git 
a/other-tools/kubernetes/images/k3s_active_agent_status.png b/other-tools/kubernetes/images/k3s_active_agent_status.png new file mode 100644 index 00000000..89966d0d Binary files /dev/null and b/other-tools/kubernetes/images/k3s_active_agent_status.png differ diff --git a/other-tools/kubernetes/images/k3s_active_master_status.png b/other-tools/kubernetes/images/k3s_active_master_status.png new file mode 100644 index 00000000..b9770871 Binary files /dev/null and b/other-tools/kubernetes/images/k3s_active_master_status.png differ diff --git a/other-tools/kubernetes/images/k3s_architecture.png b/other-tools/kubernetes/images/k3s_architecture.png new file mode 100644 index 00000000..4f936bc7 Binary files /dev/null and b/other-tools/kubernetes/images/k3s_architecture.png differ diff --git a/other-tools/kubernetes/images/k3s_ha_architecture.jpg b/other-tools/kubernetes/images/k3s_ha_architecture.jpg new file mode 100644 index 00000000..5cc94c97 Binary files /dev/null and b/other-tools/kubernetes/images/k3s_ha_architecture.jpg differ diff --git a/other-tools/kubernetes/images/k3s_high_availability.png b/other-tools/kubernetes/images/k3s_high_availability.png new file mode 100644 index 00000000..97d79263 Binary files /dev/null and b/other-tools/kubernetes/images/k3s_high_availability.png differ diff --git a/other-tools/kubernetes/images/k3s_security_group.png b/other-tools/kubernetes/images/k3s_security_group.png new file mode 100644 index 00000000..8ff61184 Binary files /dev/null and b/other-tools/kubernetes/images/k3s_security_group.png differ diff --git a/other-tools/kubernetes/images/k3sup.jpg b/other-tools/kubernetes/images/k3sup.jpg new file mode 100644 index 00000000..e3b0a64d Binary files /dev/null and b/other-tools/kubernetes/images/k3sup.jpg differ diff --git a/other-tools/kubernetes/images/k8s-dashboard-docker-app.jpg b/other-tools/kubernetes/images/k8s-dashboard-docker-app.jpg new file mode 100644 index 00000000..db6ce653 Binary files /dev/null and b/other-tools/kubernetes/images/k8s-dashboard-docker-app.jpg differ diff --git a/other-tools/kubernetes/images/k8s-dashboard.jpg b/other-tools/kubernetes/images/k8s-dashboard.jpg new file mode 100644 index 00000000..7b82d05b Binary files /dev/null and b/other-tools/kubernetes/images/k8s-dashboard.jpg differ diff --git a/other-tools/kubernetes/images/k8s_HA_cluster.png b/other-tools/kubernetes/images/k8s_HA_cluster.png new file mode 100644 index 00000000..2e2ce1bf Binary files /dev/null and b/other-tools/kubernetes/images/k8s_HA_cluster.png differ diff --git a/other-tools/kubernetes/images/k8s_components.jpg b/other-tools/kubernetes/images/k8s_components.jpg new file mode 100644 index 00000000..bb010634 Binary files /dev/null and b/other-tools/kubernetes/images/k8s_components.jpg differ diff --git a/other-tools/kubernetes/images/ked-cluster-list.png b/other-tools/kubernetes/images/ked-cluster-list.png new file mode 100644 index 00000000..e76cb3ae Binary files /dev/null and b/other-tools/kubernetes/images/ked-cluster-list.png differ diff --git a/other-tools/kubernetes/images/kubernetes-dashboard-port-type.png b/other-tools/kubernetes/images/kubernetes-dashboard-port-type.png new file mode 100644 index 00000000..7ab83ef2 Binary files /dev/null and b/other-tools/kubernetes/images/kubernetes-dashboard-port-type.png differ diff --git a/other-tools/kubernetes/images/microk8s_dashboard_ports.png b/other-tools/kubernetes/images/microk8s_dashboard_ports.png new file mode 100644 index 00000000..b628bd50 Binary files /dev/null and 
b/other-tools/kubernetes/images/microk8s_dashboard_ports.png differ diff --git a/other-tools/kubernetes/images/microk8s_microbot_app.png b/other-tools/kubernetes/images/microk8s_microbot_app.png new file mode 100644 index 00000000..5efa5848 Binary files /dev/null and b/other-tools/kubernetes/images/microk8s_microbot_app.png differ diff --git a/other-tools/kubernetes/images/minikube_addons.png b/other-tools/kubernetes/images/minikube_addons.png new file mode 100644 index 00000000..dddf37a7 Binary files /dev/null and b/other-tools/kubernetes/images/minikube_addons.png differ diff --git a/other-tools/kubernetes/images/minikube_config.png b/other-tools/kubernetes/images/minikube_config.png new file mode 100644 index 00000000..ad633b5c Binary files /dev/null and b/other-tools/kubernetes/images/minikube_config.png differ diff --git a/other-tools/kubernetes/images/minikube_dashboard_clusterip.png b/other-tools/kubernetes/images/minikube_dashboard_clusterip.png new file mode 100644 index 00000000..5469ca8e Binary files /dev/null and b/other-tools/kubernetes/images/minikube_dashboard_clusterip.png differ diff --git a/other-tools/kubernetes/images/minikube_dashboard_nodeport.png b/other-tools/kubernetes/images/minikube_dashboard_nodeport.png new file mode 100644 index 00000000..bd1b3f4f Binary files /dev/null and b/other-tools/kubernetes/images/minikube_dashboard_nodeport.png differ diff --git a/other-tools/kubernetes/images/minikube_hello-minikube_page.png b/other-tools/kubernetes/images/minikube_hello-minikube_page.png new file mode 100644 index 00000000..7022d94e Binary files /dev/null and b/other-tools/kubernetes/images/minikube_hello-minikube_page.png differ diff --git a/other-tools/kubernetes/images/minikube_nginx_page.png b/other-tools/kubernetes/images/minikube_nginx_page.png new file mode 100644 index 00000000..4b1dc309 Binary files /dev/null and b/other-tools/kubernetes/images/minikube_nginx_page.png differ diff --git a/other-tools/kubernetes/images/minikube_started.png b/other-tools/kubernetes/images/minikube_started.png new file mode 100644 index 00000000..2949a34b Binary files /dev/null and b/other-tools/kubernetes/images/minikube_started.png differ diff --git a/other-tools/kubernetes/images/module_01.svg b/other-tools/kubernetes/images/module_01.svg new file mode 100644 index 00000000..ec0e55f1 --- /dev/null +++ b/other-tools/kubernetes/images/module_01.svg @@ -0,0 +1 @@ +16.07.28_k8s_visual_diagrams diff --git a/other-tools/kubernetes/images/module_02.svg b/other-tools/kubernetes/images/module_02.svg new file mode 100644 index 00000000..d4106ec1 --- /dev/null +++ b/other-tools/kubernetes/images/module_02.svg @@ -0,0 +1 @@ +16.07.28_k8s_visual_diagrams diff --git a/other-tools/kubernetes/images/module_03.svg b/other-tools/kubernetes/images/module_03.svg new file mode 100644 index 00000000..1ecb989c --- /dev/null +++ b/other-tools/kubernetes/images/module_03.svg @@ -0,0 +1 @@ +16.07.28_k8s_visual_diagrams diff --git a/other-tools/kubernetes/images/module_04.svg b/other-tools/kubernetes/images/module_04.svg new file mode 100644 index 00000000..63ad8e47 --- /dev/null +++ b/other-tools/kubernetes/images/module_04.svg @@ -0,0 +1 @@ +16.07.28_k8s_visual_diagrams diff --git a/other-tools/kubernetes/images/module_05.svg b/other-tools/kubernetes/images/module_05.svg new file mode 100644 index 00000000..382a6c27 --- /dev/null +++ b/other-tools/kubernetes/images/module_05.svg @@ -0,0 +1 @@ +16.07.28_k8s_visual_diagrams diff --git a/other-tools/kubernetes/images/module_06.svg 
b/other-tools/kubernetes/images/module_06.svg new file mode 100644 index 00000000..97c73217 --- /dev/null +++ b/other-tools/kubernetes/images/module_06.svg @@ -0,0 +1 @@ +16.07.28_k8s_visual_diagrams diff --git a/other-tools/kubernetes/images/network-layout.png b/other-tools/kubernetes/images/network-layout.png new file mode 100644 index 00000000..404b58a4 Binary files /dev/null and b/other-tools/kubernetes/images/network-layout.png differ diff --git a/other-tools/kubernetes/images/nginx-pod-worker-node.png b/other-tools/kubernetes/images/nginx-pod-worker-node.png new file mode 100644 index 00000000..b16caba3 Binary files /dev/null and b/other-tools/kubernetes/images/nginx-pod-worker-node.png differ diff --git a/other-tools/kubernetes/images/nginx_page.png b/other-tools/kubernetes/images/nginx_page.png new file mode 100644 index 00000000..53c3f3da Binary files /dev/null and b/other-tools/kubernetes/images/nginx_page.png differ diff --git a/other-tools/kubernetes/images/okd_architecture.png b/other-tools/kubernetes/images/okd_architecture.png new file mode 100644 index 00000000..55f1e2ce Binary files /dev/null and b/other-tools/kubernetes/images/okd_architecture.png differ diff --git a/other-tools/kubernetes/images/running-nginx-container-app.jpg b/other-tools/kubernetes/images/running-nginx-container-app.jpg new file mode 100644 index 00000000..682e62a0 Binary files /dev/null and b/other-tools/kubernetes/images/running-nginx-container-app.jpg differ diff --git a/other-tools/kubernetes/images/running_minikube_services.png b/other-tools/kubernetes/images/running_minikube_services.png new file mode 100644 index 00000000..20d4f719 Binary files /dev/null and b/other-tools/kubernetes/images/running_minikube_services.png differ diff --git a/other-tools/kubernetes/images/running_pods.png b/other-tools/kubernetes/images/running_pods.png new file mode 100644 index 00000000..b898dbb2 Binary files /dev/null and b/other-tools/kubernetes/images/running_pods.png differ diff --git a/other-tools/kubernetes/images/running_services.png b/other-tools/kubernetes/images/running_services.png new file mode 100644 index 00000000..0c810b89 Binary files /dev/null and b/other-tools/kubernetes/images/running_services.png differ diff --git a/other-tools/kubernetes/images/single_master_architecture.png b/other-tools/kubernetes/images/single_master_architecture.png new file mode 100644 index 00000000..bb7068c0 Binary files /dev/null and b/other-tools/kubernetes/images/single_master_architecture.png differ diff --git a/other-tools/kubernetes/images/skooner-dashboard.png b/other-tools/kubernetes/images/skooner-dashboard.png new file mode 100644 index 00000000..b5d7b152 Binary files /dev/null and b/other-tools/kubernetes/images/skooner-dashboard.png differ diff --git a/other-tools/kubernetes/images/skooner-pod-worker-node.png b/other-tools/kubernetes/images/skooner-pod-worker-node.png new file mode 100644 index 00000000..e2be83d6 Binary files /dev/null and b/other-tools/kubernetes/images/skooner-pod-worker-node.png differ diff --git a/other-tools/kubernetes/images/skooner_port.png b/other-tools/kubernetes/images/skooner_port.png new file mode 100644 index 00000000..f3dec559 Binary files /dev/null and b/other-tools/kubernetes/images/skooner_port.png differ diff --git a/other-tools/kubernetes/images/the_k8s_dashboard.png b/other-tools/kubernetes/images/the_k8s_dashboard.png new file mode 100644 index 00000000..265e3efb Binary files /dev/null and b/other-tools/kubernetes/images/the_k8s_dashboard.png differ diff --git 
a/other-tools/kubernetes/images/worker_nodes_ports_protocols.png b/other-tools/kubernetes/images/worker_nodes_ports_protocols.png new file mode 100644 index 00000000..c4c98443 Binary files /dev/null and b/other-tools/kubernetes/images/worker_nodes_ports_protocols.png differ diff --git a/other-tools/kubernetes/k0s/index.html b/other-tools/kubernetes/k0s/index.html new file mode 100644 index 00000000..bfc5a1f2 --- /dev/null +++ b/other-tools/kubernetes/k0s/index.html @@ -0,0 +1,3444 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    k0s

    +

    Key Features

    +
      +
    • Available as a single static binary
    • +
    • Offers a self-hosted, isolated control plane
    • +
    • Supports a variety of storage backends, including etcd, SQLite, MySQL (or any +compatible), and PostgreSQL.
    • +
    • Offers an Elastic control plane
    • +
    • Vanilla upstream Kubernetes
    • +
    • Supports custom container runtimes (containerd is the default)
    • +
    • Supports custom Container Network Interface (CNI) plugins (calico is the default)
    • +
    • Supports x86_64 and arm64
    • +
    +

    Pre-requisite

    +

We will need 1 VM to create a single-node Kubernetes cluster using k0s. +We are using the following settings for this purpose:

    +
      +
    • +

1 Linux machine running the ubuntu-22.04-x86_64 image (or your choice of Ubuntu OS image) +with the cpu-su.2 flavor (2 vCPU, 8 GB RAM, 20 GB storage); also assign a Floating IP +to this VM.

      +
    • +
    • +

Set up a unique hostname for the machine using the following commands:

      +
      echo "<node_internal_IP> <host_name>" >> /etc/hosts
      +hostnamectl set-hostname <host_name>
      +
      +

      For example:

      +
      echo "192.168.0.252 k0s" >> /etc/hosts
      +hostnamectl set-hostname k0s
      +
      +
    • +
    +

    Install k0s on Ubuntu

    +

    Run the below command on the Ubuntu VM:

    +
      +
    • +

      SSH into k0s machine

      +
    • +
    • +

      Switch to root user: sudo su

      +
    • +
    • +

      Update the repositories and packages:

      +
      apt-get update && apt-get upgrade -y
      +
      +
    • +
    • +

      Download k0s:

      +
      curl -sSLf https://get.k0s.sh | sudo sh
      +
      +
    • +
    • +

      Install k0s as a service:

      +
      k0s install controller --single
      +
      +INFO[2021-10-12 01:45:52] no config file given, using defaults
      +INFO[2021-10-12 01:45:52] creating user: etcd
      +INFO[2021-10-12 01:46:00] creating user: kube-apiserver
      +INFO[2021-10-12 01:46:00] creating user: konnectivity-server
      +INFO[2021-10-12 01:46:00] creating user: kube-scheduler
      +INFO[2021-10-12 01:46:01] Installing k0s service
      +
      +
    • +
    • +

      Start k0s as a service:

      +
      k0s start
      +
      +
    • +
    • +

      Check service, logs and k0s status:

      +
      k0s status
      +
      +Version: v1.22.2+k0s.1
      +Process ID: 16625
      +Role: controller
      +Workloads: true
      +
      +
    • +
    • +

      Access your cluster using kubectl:

      +
      k0s kubectl get nodes
      +
      +NAME   STATUS   ROLES    AGE    VERSION
      +k0s    Ready    <none>   8m3s   v1.22.2+k0s
      +
      +
      alias kubectl='k0s kubectl'
      +kubectl get nodes -o wide
      +
      +
      kubectl get all
      +NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
      +service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   38s
      +
      +
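As an optional smoke test (nothing k0s-specific here; any small image works, nginx is just convenient), you can deploy a test application and confirm that a pod gets scheduled on the node:

+
kubectl create deployment nginx --image=nginx
+kubectl expose deployment nginx --port=80 --type=NodePort
+kubectl get pods,svc -o wide
+
+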
    • +
    +

    Uninstall k0s

    +
      +
    • +

      Stop the service:

      +
      sudo k0s stop
      +
      +
    • +
    • +

      Execute the k0s reset command - cleans up the installed system service, data +directories, containers, mounts and network namespaces.

      +
      sudo k0s reset
      +
      +
    • +
    • +

      Reboot the system

      +
    • +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/kubernetes/k3s/k3s-ha-cluster-using-k3d/index.html b/other-tools/kubernetes/k3s/k3s-ha-cluster-using-k3d/index.html new file mode 100644 index 00000000..01d4b064 --- /dev/null +++ b/other-tools/kubernetes/k3s/k3s-ha-cluster-using-k3d/index.html @@ -0,0 +1,3453 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    + +
    + + + +
    +
    + + + + + + + + + +

    Set up K3s in High Availability using k3d

    +

First, Kubernetes HA has two possible setups: embedded or external database +(DB). We'll use the embedded DB in this HA K3s cluster setup, for which etcd +is the default embedded DB.

    +

    There are some strongly recommended Kubernetes HA best practices +and also there is Automated HA master deployment doc.

    +

    Pre-requisite

    +

    Make sure you have already installed k3d following this.

    +

    HA cluster with at least three control plane nodes

    +
    k3d cluster create --servers 3 --image rancher/k3s:latest
    +
    +

Here, --servers 3 requests three nodes to be created with the server role, +and --image rancher/k3s:latest specifies the K3s image to be used; here we are +using latest.

    +
      +
    • +

      Switch context to the new cluster:

      +
      kubectl config use-context k3d-k3s-default
      +
      +

      You can now check what has been created from the different points of view:

      +
      kubectl get nodes --output wide
      +
      +

      The output will look like: +k3d HA nodes

      +
      kubectl get pods --all-namespaces --output wide
      +
      +

      OR,

      +
      kubectl get pods -A -o wide
      +
      +

      The output will look like: +k3d HA pods

      +
    • +
    +

    Scale up the cluster

    +

    You can quickly simulate the addition of another control plane node to the HA cluster:

    +
    k3d node create extraCPnode --role=server --image=rancher/k3s:latest
    +
    +INFO[0000] Adding 1 node(s) to the runtime local cluster 'k3s-default'...
    +INFO[0000] Starting Node 'k3d-extraCPnode-0'
    +INFO[0018] Updating loadbalancer config to include new server node(s)
    +INFO[0018] Successfully configured loadbalancer k3d-k3s-default-serverlb!
    +INFO[0019] Successfully created 1 node(s)!
    +
    +

Here, extraCPnode specifies the name for the node, +--role=server sets the role of the node to control plane/server, and +--image rancher/k3s:latest specifies the K3s image to be used; here we are +using latest.

    +
    kubectl get nodes
    +
    +NAME                       STATUS   ROLES         AGE   VERSION
    +k3d-extracpnode-0          Ready    etcd,master   31m   v1.19.3+k3s2
    +k3d-k3s-default-server-0   Ready    etcd,master   47m   v1.19.3+k3s2
    +k3d-k3s-default-server-1   Ready    etcd,master   47m   v1.19.3+k3s2
    +k3d-k3s-default-server-2   Ready    etcd,master   47m   v1.19.3+k3s2
    +
    +

    OR,

    +
    kubectl get nodes --output wide
    +
    +

    The output looks like below: +k3d added new node

    +

    Heavy Armored against crashes

    +

    As we are working with containers, the best way to "crash" a node is to literally +stop the container:

    +
    docker stop k3d-k3s-default-server-0
    +
    +
    +

    Note

    +

    The Docker and k3d commands will show the state change immediately. However, +the Kubernetes (read: K8s or K3s) cluster needs a short time to see the state +change to NotReady.

    +
    +
    kubectl get nodes
    +
    +NAME                       STATUS     ROLES         AGE   VERSION
    +k3d-extracpnode-0          Ready      etcd,master   32m   v1.19.3+k3s2
    +k3d-k3s-default-server-0   NotReady   etcd,master   48m   v1.19.3+k3s2
    +k3d-k3s-default-server-1   Ready      etcd,master   48m   v1.19.3+k3s2
    +k3d-k3s-default-server-2   Ready      etcd,master   48m   v1.19.3+k3s2
    +
    +
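If you would rather watch the status change live instead of re-running the command, kubectl can stream updates (press Ctrl-C to stop; this is purely optional):

+
kubectl get nodes --watch
+
+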

    Now it is a good time to reference again the load balancer k3d uses and how it is +critical in allowing us to continue accessing the K3s cluster.

    +

While the load balancer internally switched to the next available node, from an +external connectivity point of view, we still use the same IP/host. This abstraction +saves us quite some effort, and it's one of the most useful features of k3d.

    +

    Let’s look at the state of the cluster:

    +
    kubectl get all --all-namespaces
    +
    +

    The output looks like below: +k3d HA all

    +

    Everything looks right. If we look at the pods more specifically, then we will +see that K3s automatically self-healed by recreating pods running on the failed +node on other nodes:

    +
    kubectl get pods --all-namespaces --output wide
    +
    +

    As the output can be seen: +k3d self healing HA nodes

    +

    Finally, to show the power of HA and how K3s manages it, let’s restart the node0 +and see it being re-included into the cluster as if nothing happened:

    +
    docker start k3d-k3s-default-server-0
    +
    +

    Our cluster is stable, and all the nodes are fully operational again as shown below: +k3d restarted node

    +

    Cleaning the resources

    +
    k3d cluster delete
    +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/kubernetes/k3s/k3s-ha-cluster/index.html b/other-tools/kubernetes/k3s/k3s-ha-cluster/index.html new file mode 100644 index 00000000..29c4bb80 --- /dev/null +++ b/other-tools/kubernetes/k3s/k3s-ha-cluster/index.html @@ -0,0 +1,3653 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    K3s with High Availability setup

    +

    First, Kubernetes HA has two possible setups: embedded or external database +(DB). We’ll use the external DB in this HA K3s cluster setup. For which MySQL +is the external DB as shown here: +k3s HA architecture with external database

    +

In the diagram above, both the user running kubectl and each of the two agents +connect to the TCP Load Balancer. The Load Balancer uses a list of private IP +addresses to balance the traffic between the three servers. If one of the +servers crashes, it is removed from the list of IP addresses.

    +

    The servers use the SQL data store to synchronize the cluster’s state.

    +

    Requirements

    +

    i. Managed TCP Load Balancer

    +

    ii. Managed MySQL service

    +

    iii. Three VMs to run as K3s servers

    +

    iv. Two VMs to run as K3s agents

    +

    There are some strongly recommended Kubernetes HA best practices +and also there is Automated HA master deployment doc.

    +

    Managed TCP Load Balancer

    +

Create a load balancer using nginx: +the nginx.conf located at /etc/nginx/nginx.conf contains an upstream block pointing +to the 3 K3s servers on port 6443, as shown below:

    +
    events {}
    +...
    +
    +stream {
    +  upstream k3s_servers {
    +    server <k3s_server1-Internal-IP>:6443;
    +    server <k3s_server2-Internal-IP>:6443;
    +    server <k3s_server3-Internal-IP>:6443;
    +  }
    +
    +  server {
    +    listen 6443;
    +    proxy_pass k3s_servers;
    +  }
    +}
    +
    +
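One possible way to provision that load balancer on an Ubuntu VM is sketched below (it assumes the nginx stream module is available through the distribution packages; package names may differ on other distributions):

+
sudo apt-get install -y nginx libnginx-mod-stream
+# add the stream { ... } block shown above to /etc/nginx/nginx.conf, then:
+sudo nginx -t && sudo systemctl restart nginx
+
+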

    Managed MySQL service

    +

    Create a MySQL database server with a new database and create a new +mysql user and password with granted permission to read/write the new database. +In this example, you can create:

    +

    database name: <YOUR_DB_NAME> +database user: <YOUR_DB_USER_NAME> +database password: <YOUR_DB_USER_PASSWORD>

    +
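On a self-managed MySQL server, those objects could be created roughly like this (a sketch only; the names are the placeholders above, and the '%' host wildcard should be tightened to your internal network in practice):

+
sudo mysql -e "CREATE DATABASE <YOUR_DB_NAME>;"
+sudo mysql -e "CREATE USER '<YOUR_DB_USER_NAME>'@'%' IDENTIFIED BY '<YOUR_DB_USER_PASSWORD>';"
+sudo mysql -e "GRANT ALL PRIVILEGES ON <YOUR_DB_NAME>.* TO '<YOUR_DB_USER_NAME>'@'%'; FLUSH PRIVILEGES;"
+
+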

    Three VMs to run as K3s servers

    +

    Create 3 K3s Master VMs and perform the following steps on each of them: +i. Export the datastore endpoint:

    +
    export K3S_DATASTORE_ENDPOINT='mysql://<YOUR_DB_USER_NAME>:<YOUR_DB_USER_PASSWORD>@tcp(<MySQL-Server-Internal-IP>:3306)/<YOUR_DB_NAME>'
    +
    +

ii. Install K3s with a taint so that no pods are scheduled on this server +(the opposite of affinity) unless they are critical addons, and with --tls-san setting <Loadbalancer-Internal-IP> +as an alternative name for the TLS certificate.

    +
    curl -sfL https://get.k3s.io | sh -s - server \
    +    --node-taint CriticalAddonsOnly=true:NoExecute \
    +    --tls-san <Loadbalancer-Internal-IP_or_Hostname>
    +
    +
      +
    • +

      Verify all master nodes are visible to one another:

      +
      sudo k3s kubectl get node
      +
      +
    • +
    • +

Generate a token from one of the K3s Master VMs: +You need to extract a token from the master that will be used to join the nodes +to the control plane, by running the following command on one of the K3s master nodes:

      +
      sudo cat /var/lib/rancher/k3s/server/node-token
      +
      +

      You will then obtain a token that looks like:

      +
      K1097aace305b0c1077fc854547f34a598d23330ff047ddeed8beb3c428b38a1ca7::server:6cc9fbb6c5c9de96f37fb14b5535c778
      +
      +
    • +
    +

    Two VMs to run as K3s agents

    +

Set K3S_URL to point to the load balancer's internal IP and set K3S_TOKEN +to the token you copied from the master, on both of the agent nodes:

    +
curl -sfL https://get.k3s.io | K3S_URL=https://<Loadbalancer-Internal-IP_or_Hostname>:6443 \
+    K3S_TOKEN=<Token_From_Master> sh -
    +
    +

    Once both Agents are running, if you run the following command on Master Server, +you can see all nodes:

    +
    sudo k3s kubectl get node
    +
    +

    Simulate a failure

    +

    To simulate a failure, stop the K3s service on one or more of the K3s servers manually, +then run the kubectl get nodes command:

    +
    sudo systemctl stop k3s
    +
    +

The remaining server(s) will take over at this point.

    +
      +
    • +

      To restart servers manually:

      +
      sudo systemctl restart k3s
      +
      +
    • +
    +

    On your local development machine to access Kubernetes Cluster Remotely (Optional)

    +
    +

    Important Requirement

    +

    Your local development machine must have installed kubectl.

    +
    +
      +
    • +

      Copy kubernetes config to your local machine: +Copy the kubeconfig file's content located at the K3s master node at /etc/rancher/k3s/k3s.yaml +to your local machine's ~/.kube/config file. Before saving, please change the cluster +server path from 127.0.0.1 to <Loadbalancer-Internal-IP>. This will allow +your local machine to see the cluster nodes:

      +
      kubectl get nodes
      +
      +
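One way to do that copy step from your local machine is sketched below (it assumes SSH access to a master node as the ubuntu user with sudo rights there, and GNU sed; back up any existing ~/.kube/config first):

+
ssh ubuntu@<K3s-Master-Floating-IP> "sudo cat /etc/rancher/k3s/k3s.yaml" > ~/.kube/config
+# point kubectl at the load balancer instead of the node-local address
+sed -i 's/127.0.0.1/<Loadbalancer-Internal-IP>/g' ~/.kube/config
+
+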
    • +
    +

    Kubernetes Dashboard

    +

    The Kubernetes Dashboard +is a GUI tool to help you work more efficiently with K8s cluster. This is only +accessible from within the cluster (at least not without some serious tweaking).

    +

    check releases for the command +to use for Installation:

    +
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
    +
    +
      +
    • +

      Dashboard RBAC Configuration:

      +

      dashboard.admin-user.yml

      +
      apiVersion: v1
      +kind: ServiceAccount
      +metadata:
      +  name: admin-user
      +  namespace: kubernetes-dashboard
      +
      +

      dashboard.admin-user-role.yml

      +
      apiVersion: rbac.authorization.k8s.io/v1
      +kind: ClusterRoleBinding
      +metadata:
      +  name: admin-user
      +roleRef:
      +  apiGroup: rbac.authorization.k8s.io
      +  kind: ClusterRole
      +  name: cluster-admin
      +subjects:
      +- kind: ServiceAccount
      +  name: admin-user
      +  namespace: kubernetes-dashboard
      +
      +
    • +
    • +

      Deploy the admin-user configuration:

      +
      +

      Important Note

      +

If you're doing this from your local development machine, remove sudo k3s +and just use kubectl.

      +
      +
      sudo k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml
      +
      +
    • +
    • +

      Get bearer token

      +
sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token \
+    | grep ^token
      +
      +
    • +
    • +

      Start dashboard locally:

      +
      sudo k3s kubectl proxy
      +
      +

      Then you can sign in at this URL using your token we got in the previous step:

      +
      http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
      +
      +
    • +
    +

    Deploying Nginx using deployment

    +
      +
    • +

      Create a deployment nginx.yaml:

      +
      vi nginx.yaml
      +
      +
    • +
    • +

      Copy and paste the following content in nginx.yaml:

      +
      apiVersion: apps/v1
      +kind: Deployment
      +metadata:
      +  name: mysite
      +  labels:
      +    app: mysite
      +spec:
      +  replicas: 1
      +  selector:
      +    matchLabels:
      +      app: mysite
      +  template:
      +    metadata:
      +      labels:
      +        app : mysite
      +    spec:
      +      containers:
      +        - name : mysite
      +          image: nginx
      +          ports:
      +            - containerPort: 80
      +
      +
      sudo k3s kubectl apply -f nginx.yaml
      +
      +
    • +
    • +

      Verify the nginx pod is in Running state:

      +
      sudo k3s kubectl get pods --all-namespaces
      +
      +

      OR,

      +
      kubectl get pods --all-namespaces --output wide
      +
      +

      OR,

      +
      kubectl get pods -A -o wide
      +
      +
    • +
    • +

      Scale the pods to available agents:

      +
      sudo k3s kubectl scale --replicas=2 deploy/mysite
      +
      +
    • +
    • +

      View all deployment status:

      +
      sudo k3s kubectl get deploy mysite
      +
      +NAME     READY   UP-TO-DATE   AVAILABLE   AGE
      +mysite   2/2     2            2           85s
      +
      +
    • +
    • +

      Delete the nginx deployment and pod:

      +
      sudo k3s kubectl delete -f nginx.yaml
      +
      +

      OR,

      +
      sudo k3s kubectl delete deploy mysite
      +
      +
    • +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/kubernetes/k3s/k3s-using-k3d/index.html b/other-tools/kubernetes/k3s/k3s-using-k3d/index.html new file mode 100644 index 00000000..3986a742 --- /dev/null +++ b/other-tools/kubernetes/k3s/k3s-using-k3d/index.html @@ -0,0 +1,3524 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Setup K3s cluster Using k3d

    +

The second, and one of the most popular, methods of creating a K3s cluster is by using k3d. +As the name itself suggests, k3d (K3s-in-docker) is a wrapper around K3s – Lightweight +Kubernetes – that runs it in Docker. Please refer to this link +to get brief insights into this wonderful tool. It provides a seamless experience +for K3s cluster management with some straightforward commands. k3d is +efficient enough to create and manage K3s single-node as well as K3s High +Availability clusters with just a few commands.

    +
    +

    Note

    +

To use k3d, you must have Docker installed on your system

    +
    +
    +

    Install Docker

    +
      +
    • +

      Install container runtime - docker

      +
      apt-get install docker.io -y
      +
      +
    • +
    • +

      Configure the Docker daemon, in particular to use systemd for the management +of the container’s cgroups

      +
      cat <<EOF | sudo tee /etc/docker/daemon.json
      +{
      +"exec-opts": ["native.cgroupdriver=systemd"]
      +}
      +EOF
      +
      +systemctl enable --now docker
      +usermod -aG docker ubuntu
      +systemctl daemon-reload
      +systemctl restart docker
      +
      +
    • +
    +
    +

    Install kubectl

    +
      +
    • +

      Install kubectl binary

      +

      kubectl: the command line util to talk to your cluster.

      +
      snap install kubectl --classic
      +
      +

      This outputs: kubectl 1.26.1 from Canonical✓ installed

      +
    • +
    • +

      Now verify the kubectl version:

      +
      kubectl version -o yaml
      +
      +
    • +
    +
    +

    Installing k3d

    +
      +
    • +

      k3d Installation:

      +

      The below command will install the k3d, in your system using the installation +script.

      +
      wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
      +
      +

      OR,

      +
      curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
      +
      +

      To verify the installation, please run the following command:

      +
      k3d version
      +
      +k3d version v5.0.0
      +k3s version v1.21.5-k3s1 (default)
      +
      +

      After the successful installation, you are ready to create your cluster using +k3d and run K3s in docker within seconds.

      +
    • +
    • +

      Getting Started:

      +

      Now let's directly jump into creating our K3s cluster using k3d.

      +
        +
      1. +

        Create k3d Cluster:

        +

        k3d cluster create k3d-demo-cluster

        +

        This single command spawns a K3s cluster with two containers: A Kubernetes +control-plane node(server) and a load balancer(serverlb) in front +of it. It puts both of them in a dedicated Docker network and exposes the +Kubernetes API on a randomly chosen free port on the Docker host. It also +creates a named Docker volume in the background as a preparation for image +imports.

        +

        You can also look for advance syntax for cluster creation:

        +

        k3d cluster create mycluster --api-port 127.0.0.1:6445 --servers 3 \ + --agents 2 --volume '/home/me/mycode:/code@agent[*]' --port '8080:80@loadbalancer'

        +

        Here, the above single command spawns a K3s cluster with six containers:

        +
          +
        • +

          load balancer

          +
        • +
        • +

          3 servers (control-plane nodes)

          +
        • +
        • +

          2 agents (formerly worker nodes)

          +
        • +
        +
        +

        With the --api-port 127.0.0.1:6445, you tell k3d to map the Kubernetes +API Port (6443 internally) to 127.0.0.1/localhost’s port 6445. That +means that you will have this connection string in your Kubeconfig: +server: https://127.0.0.1:6445 to connect to this cluster.

        +

        This port will be mapped from the load balancer to your host system. From +there, requests will be proxied to your server nodes, effectively simulating +a production setup, where server nodes also can go down and you would want +to failover to another server.

        +

        The --volume /home/me/mycode:/code@agent[*] bind mounts your local directory +/home/me/mycode to the path /code inside all ([*] of your agent nodes). +Replace * with an index (here: 0 or 1) to only mount it into one of them.

        +

        The specification telling k3d which nodes it should mount the volume to +is called "node filter" and it’s also used for other flags, like the --port +flag for port mappings.

        +

        That said, --port '8080:80@loadbalancer' maps your local host’s port 8080 +to port 80 on the load balancer (serverlb), which can be used to forward +HTTP ingress traffic to your cluster. For example, you can now deploy a +web app into the cluster (Deployment), which is exposed (Service) externally +via an Ingress such as myapp.k3d.localhost.

        +

        Then (provided that everything is set up to resolve that domain to your +localhost IP), you can point your browser to http://myapp.k3d.localhost:8080 +to access your app. Traffic then flows from your host through the Docker +bridge interface to the load balancer. From there, it’s proxied to the +cluster, where it passes via Ingress and Service to your application Pod.

        +
        +

        Note

        +

        You have to have some mechanism set up to route to resolve myapp.k3d.localhost +to your local host IP (127.0.0.1). The most common way is using entries +of the form 127.0.0.1 myapp.k3d.localhost in your /etc/hosts file +(C:\Windows\System32\drivers\etc\hosts on Windows). However, this does +not allow for wildcard entries (*.localhost), so it may become a bit +cumbersome after a while, so you may want to have a look at tools like +dnsmasq (MacOS/UNIX) or Acrylic (Windows) to ease the burden.

        +
        +
      2. +
      3. +

        Getting the cluster’s kubeconfig: +Get the new cluster’s connection details merged into your default kubeconfig +(usually specified using the KUBECONFIG environment variable or the default +path $HOME/.kube/config) and directly switch to the new context:

        +

        k3d kubeconfig merge k3d-demo-cluster --kubeconfig-switch-context

        +

        This outputs:

        +

        /root/.k3d/kubeconfig-k3d-demo-cluster.yaml

        +
      4. +
      5. +

        Checking the nodes running on k3d cluster:

        +

        k3d node list

        +

        k3d nodes list

        +

You can see two nodes here. The (very) smart implementation is that +while the cluster is running on its node k3d-k3d-demo-cluster-server-0, +there is another "node" that acts as the load balancer, i.e. k3d-k3d-demo-cluster-serverlb.

        +
      6. +
      7. +

        Firing Kubectl commands that allows you to run commands against Kubernetes:

        +

        i. The below command will list the nodes available in our cluster:

        +

        kubectl get nodes -o wide

        +

        OR,

        +

        kubectl get nodes --output wide

        +

        The output will look like: +k3d nodes list

        +

        ii. To look at what’s inside the K3s cluster (pods, services, deployments, +etc.):

        +

        kubectl get all --all-namespaces

        +

        The output will look like: +k3d all

        +

        We can see that, in addition to the Kubernetes service, K3s deploys DNS, +metrics and ingress (traefik) services when you use the defaults.

        +

        iii. List the active k3d clusters:

        +

        k3d cluster list

        +

        k3d cluster list

        +

        iv. Check the cluster connectivity:

        +

        kubectl cluster-info

        +

        kubectl cluster-info

        +

        To further debug and diagnose cluster problems, use 'kubectl cluster-info +dump'.

        +
      8. +
      9. +

        Check the active containers:

        +
        docker ps
        +
        +
      10. +
      +

Now, as you can observe, the cluster is up and running and we can play around +with it: you can create and deploy your applications on the cluster, for example +as shown below.

      +
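      As a quick end-to-end check, the following sketch deploys a web app and exposes it through the load balancer, matching the myapp.k3d.localhost example above (the names are illustrative, and kubectl create ingress requires kubectl v1.19+):

      +
      # Deployment + Service + Ingress for a throwaway nginx app
      +kubectl create deployment myapp --image=nginx
      +kubectl expose deployment myapp --port=80
      +kubectl create ingress myapp --rule="myapp.k3d.localhost/*=myapp:80"
      +# then browse to http://myapp.k3d.localhost:8080 (via the 8080:80 port mapping)
      +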
    • +
    • +

      Deleting Cluster:

      +
      k3d cluster delete k3d-demo-cluster
      +
      +INFO[0000] Deleting cluster 'k3d-demo-cluster'
      +INFO[0000] Deleted k3d-k3d-demo-cluster-serverlb
      +INFO[0001] Deleted k3d-k3d-demo-cluster-server-0
      +INFO[0001] Deleting cluster network 'k3d-k3d-demo-cluster'
      +INFO[0001] Deleting image volume 'k3d-k3d-demo-cluster-images'
      +INFO[0001] Removing cluster details from default kubeconfig...
      +INFO[0001] Removing standalone kubeconfig file (if there is one)...
      +INFO[0001] Successfully deleted cluster k3d-demo-cluster!
      +
      +
    • +
    +

    You can also create a k3d High Availability cluster and add as many nodes as you want within seconds.

    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/kubernetes/k3s/k3s-using-k3sup/index.html b/other-tools/kubernetes/k3s/k3s-using-k3sup/index.html new file mode 100644 index 00000000..9a0e0a37 --- /dev/null +++ b/other-tools/kubernetes/k3s/k3s-using-k3sup/index.html @@ -0,0 +1,3418 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    K3s cluster setup using k3sup

    +

    k3sup (pronounced ketchup) is a popular open source tool to install K3s over +SSH.

    +
      +
    • Bootstrap the cluster +k3sup Setup
    • +
    +

    The two most important commands in k3sup are:

    +

    i. install: install K3s to a new server and create a join token for the cluster

    +

    ii. join: fetch the join token from a server, then use it to install K3s to an +agent

    +

    Download k3sup

    +
    curl -sLS https://get.k3sup.dev | sh
    +sudo install k3sup /usr/bin/
    +
    +k3sup --help
    +
    +
      +
    • +

      Other options for install:

      +

      --cluster - start this server in clustering mode using embedded etcd (embedded +HA)

      +

      --skip-install - if you already have k3s installed, you can just run this command +to get the kubeconfig

      +

      --ssh-key - specify a specific path for the SSH key for remote login

      +

      --local-path - default is ./kubeconfig - set the file where you want to save +your cluster's kubeconfig. By default this file will be overwritten.

      +

      --merge - Merge config into existing file instead of overwriting (e.g. to add +config to the default kubectl config, use --local-path ~/.kube/config --merge).

      +

      --context - default is default - set the name of the kubeconfig context.

      +

      --ssh-port - default is 22, but you can specify an alternative port i.e. 2222

      +

      --k3s-extra-args - Optional extra arguments to pass to the k3s installer, wrapped in quotes, i.e. --k3s-extra-args '--no-deploy traefik' or --k3s-extra-args '--docker'. For multiple args, combine them within single quotes, i.e. --k3s-extra-args

      +

      '--no-deploy traefik --docker'.

      +

      --k3s-version - set the specific version of k3s, i.e. v0.9.1

      +

      --ipsec - Enforces the optional extra argument for k3s: --flannel-backend +option: ipsec

      +

      --print-command - Prints out the command, sent over SSH to the remote computer

      +

      --datastore - used to pass a SQL connection-string to the --datastore-endpoint +flag of k3s.

      +

      See even more install options by running k3sup install --help.

      +
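      As an illustration, several of the flags above can be combined in a single call (a hedged sketch; the IP, user, key path, and context name are placeholders for your own values):

      +
      k3sup install --ip 192.168.0.10 --user ubuntu \
      +  --ssh-key ~/.ssh/id_rsa \
      +  --k3s-extra-args '--no-deploy traefik' \
      +  --local-path ~/.kube/config --merge --context k3s-demo
      +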
    • +
    • +

      On Master Node:

      +
      export SERVER_IP=<Master-Internal-IP>
      +export USER=root
      +
      +k3sup install --ip $SERVER_IP --user $USER
      +
      +
    • +
    • +

      On Agent Node: +Next join one or more agents to the cluster:

      +
      export AGENT_IP=<Agent-Internal-IP>
      +
      +export SERVER_IP=<Master-Internal-IP>
      +export USER=root
      +
      +k3sup join --ip $AGENT_IP --server-ip $SERVER_IP --user $USER
      +
      +
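      After the install step, k3sup writes a kubeconfig file into the directory where you ran it; a quick way to confirm that the server and agent are both up (assuming you are still in that directory):

      +
      export KUBECONFIG=$(pwd)/kubeconfig
      +kubectl get nodes -o wide
      +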
    • +
    +

    Create a multi-master (HA) setup with external SQL

    +
    export LB_IP='<Loadbalancer-Internal-IP_or_Hostname>'
    +export DATASTORE='mysql://<YOUR_DB_USER_NAME>:<YOUR_DB_USER_PASSWORD>@tcp(<MySQL-Server-Internal-IP>:3306)/<YOUR_DB_NAME>'
    +export CHANNEL=latest
    +
    +

    Before continuing, check that your environment variables are still populated from +earlier, and if not, trace back and populate them.

    +
    echo $LB_IP
    +echo $DATASTORE
    +echo $CHANNEL
    +
    +
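    The $SERVER1, $SERVER2, $SERVER3, $AGENT1 and $AGENT2 variables used below are not defined earlier on this page; they should hold the internal IPs of your own server and agent nodes, for example (placeholders):

    +
    export SERVER1=<Server1-Internal-IP>
    +export SERVER2=<Server2-Internal-IP>
    +export SERVER3=<Server3-Internal-IP>
    +export AGENT1=<Agent1-Internal-IP>
    +export AGENT2=<Agent2-Internal-IP>
    +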
    k3sup install --user root --ip $SERVER1 \
    +--k3s-channel $CHANNEL \
    +--print-command \
    +--datastore="${DATASTORE}" \
    +--tls-san $LB_IP
    +
    +k3sup install --user root --ip $SERVER2 \
    +--k3s-channel $CHANNEL \
    +--print-command \
    +--datastore="${DATASTORE}" \
    +--tls-san $LB_IP
    +
    +k3sup install --user root --ip $SERVER3 \
    +--k3s-channel $CHANNEL \
    +--print-command \
    +--datastore="${DATASTORE}" \
    +--tls-san $LB_IP
    +
    +k3sup join --user root --server-ip $LB_IP --ip $AGENT1 \
    +--k3s-channel $CHANNEL \
    +--print-command
    +
    +k3sup join --user root --server-ip $LB_IP --ip $AGENT2 \
    +--k3s-channel $CHANNEL \
    +--print-command
    +
    +
    +

    There will be a kubeconfig file created in the current working directory with the +IP address of the LoadBalancer set for kubectl to use.

    +
      +
    • +

      Check the nodes have joined:

      +
      export KUBECONFIG=`pwd`/kubeconfig
      +kubectl get node
      +
      +
    • +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/kubernetes/k3s/k3s/index.html b/other-tools/kubernetes/k3s/k3s/index.html new file mode 100644 index 00000000..c8de285e --- /dev/null +++ b/other-tools/kubernetes/k3s/k3s/index.html @@ -0,0 +1,3949 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    K3s

    +

    Features

    +
      +
    • +

      Lightweight certified K8s distro

      +
    • +
    • +

      Built for production operations

      +
    • +
    • +

      40MB binary, 250MB memory consumption

      +
    • +
    • +

      Single process w/ integrated K8s master, Kubelet, and containerd

      +
    • +
    • +

      Supports not only etcd to hold the cluster state, but also SQLite +(for single-node, simpler setups) or external DBs like MySQL and PostgreSQL

      +
    • +
    • +

      Open source project

      +
    • +
    +

    Components and architecture

    +

    K3s Components and architecture

    +
      +
    • +

      High-Availability K3s Server with an External DB:

      +

      K3s Components and architecture or, K3s Components and architecture

      +

      For this kind of high-availability K3s setup, read this.

      +
    • +
    +

    Pre-requisite

    +

    We will need 1 control-plane (master) node and 2 worker nodes to create a single control-plane Kubernetes cluster using K3s. We are using the following settings for this purpose:

    +
      +
    • +

      1 Linux machine for master, ubuntu-22.04-x86_64 or your choice of Ubuntu OS +image, cpu-su.2 flavor with 2vCPU, 8GB RAM, 20GB storage - also +assign Floating IP +to the master node.

      +
    • +
    • +

      2 Linux machines for worker, ubuntu-22.04-x86_64 or your choice of Ubuntu OS +image, cpu-su.1 flavor with 1vCPU, 4GB RAM, 20GB storage.

      +
    • +
    • +

      ssh access to all machines: Read more here +on how to set up SSH on your remote VMs.

      +
    • +
    +

    Networking

    +

    The K3s server needs port 6443 to be accessible by all nodes.

    +

    The nodes need to be able to reach other nodes over UDP port 8472 when Flannel +VXLAN overlay networking is used. The node should not listen on any other port. K3s +uses reverse tunneling such that the nodes make outbound connections to the server +and all kubelet traffic runs through that tunnel. However, if you do not use Flannel +and provide your own custom CNI, then port 8472 is not needed by K3s.

    +

    If you wish to utilize the metrics server, you will need to open port 10250 +on each node.

    +

    If you plan on achieving high availability with embedded etcd, server nodes +must be accessible to each other on ports 2379 and 2380.

    +
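    If you also manage a host-level firewall on the nodes (in addition to the security group described below), the equivalent rules with ufw would look roughly like this (a sketch; adjust the source range to your internal subnet):

    +
    sudo ufw allow from 192.168.0.0/24 to any port 6443 proto tcp    # K3s API server
    +sudo ufw allow from 192.168.0.0/24 to any port 8472 proto udp   # Flannel VXLAN overlay
    +sudo ufw allow from 192.168.0.0/24 to any port 10250 proto tcp  # kubelet / metrics server
    +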
      +
    • +

      Create 1 security group with appropriate Inbound Rules for K3s Server Nodes +that will be used by all 3 nodes:

      +

      Inbound Rules for K3s Server Nodes

      +
      +

      Important Note

      +

      The VXLAN overlay networking port on nodes should not be exposed to the world +as it opens up your cluster network to be accessed by anyone. Run your nodes +behind a firewall/security group that disables access to port 8472.

      +
      +
    • +
    • +

      Set up a unique hostname on each machine using the following command:

      +
      echo "<node_internal_IP> <host_name>" >> /etc/hosts
      +hostnamectl set-hostname <host_name>
      +
      +

      For example:

      +
      echo "192.168.0.235 k3s-master" >> /etc/hosts
      +hostnamectl set-hostname k3s-master
      +
      +
    • +
    +

    In this step, you will setup the following nodes:

    +
      +
    • +

      k3s-master

      +
    • +
    • +

      k3s-worker1

      +
    • +
    • +

      k3s-worker2

      +
    • +
    +

    The below steps will be performed on all the above mentioned nodes:

    +
      +
    • +

      SSH into all the 3 machines

      +
    • +
    • +

      Switch to the root user: sudo su

      +
    • +
    • +

      Update the repositories and packages:

      +
      apt-get update && apt-get upgrade -y
      +
      +
    • +
    • +

      Install curl and apt-transport-https

      +
      apt-get update && apt-get install -y apt-transport-https curl
      +
      +
    • +
    +
    +

    Install Docker

    +
      +
    • +

      Install container runtime - docker

      +
      apt-get install docker.io -y
      +
      +
    • +
    • +

      Configure the Docker daemon, in particular to use systemd for the management +of the container’s cgroups

      +
      cat <<EOF | sudo tee /etc/docker/daemon.json
      +{
      +"exec-opts": ["native.cgroupdriver=systemd"]
      +}
      +EOF
      +
      +systemctl enable --now docker
      +usermod -aG docker ubuntu
      +systemctl daemon-reload
      +systemctl restart docker
      +
      +
    • +
    +
    +

    Configure K3s to bootstrap the cluster on master node

    +

    Run the below command on the master node, i.e. k3s-master, that you want to set up as the control plane.

    +
      +
    • +

      SSH into k3s-master machine

      +
    • +
    • +

      Switch to root user: sudo su

      +
    • +
    • +

      Execute the below command to initialize the cluster:

      +
      curl -sfL https://get.k3s.io | sh -s - --kubelet-arg 'cgroup-driver=systemd' \
      +--node-taint CriticalAddonsOnly=true:NoExecute --docker
      +
      +

      OR, if you want to set up the K3s cluster without using Docker as the container runtime, just run the installer without supplying the --docker argument.

      +
      curl -sfL https://get.k3s.io | sh -
      +
      +
    • +
    +

    After running this installation:

    +
      +
    • +

      The K3s service will be configured to automatically restart after node reboots +or if the process crashes or is killed

      +
    • +
    • +

      Additional utilities will be installed, including kubectl, crictl, ctr, +k3s-killall.sh, and k3s-uninstall.sh

      +
    • +
    • +

      A kubeconfig file will be written to /etc/rancher/k3s/k3s.yaml and the kubectl +installed by K3s will automatically use it.

      +
    • +
    +

    To check if the service installed successfully, you can use:

    +
    systemctl status k3s
    +
    +

    The output looks like:

    +

    K3s Active Master Status

    +

    OR,

    +
    k3s --version
    +kubectl version
    +
    +
    +

    Note

    +

    If you want to taint the node i.e. not to deploy pods on this node after +installation then run: kubectl taint nodes <master_node_name> k3s-controlplane=true:NoExecute +i.e. kubectl taint nodes k3s-master k3s-controlplane=true:NoExecute

    +
    +

    You can check if the master node is working by:

    +
    k3s kubectl get nodes
    +
    +NAME         STATUS   ROLES                  AGE   VERSION
    +k3s-master   Ready    control-plane,master   37s   v1.21.5+k3s2
    +
    +
    kubectl config get-clusters
    +
    +NAME
    +default
    +
    +
    kubectl cluster-info
    +
    +Kubernetes control plane is running at https://127.0.0.1:6443
    +CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    +Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
    +
    +To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    +
    +
    kubectl get namespaces
    +
    +NAME              STATUS   AGE
    +default           Active   27m
    +kube-system       Active   27m
    +kube-public       Active   27m
    +kube-node-lease   Active   27m
    +
    +
    kubectl get endpoints -n kube-system
    +
    +NAME                    ENDPOINTS                                  AGE
    +kube-dns                10.42.0.4:53,10.42.0.4:53,10.42.0.4:9153   27m
    +metrics-server          10.42.0.3:443                              27m
    +rancher.io-local-path   <none>                                     27m
    +
    +
    kubectl get pods -n kube-system
    +
    +NAME                                      READY   STATUS    RESTARTS   AGE
    +helm-install-traefik-crd-ql7j2            0/1     Pending   0          32m
    +helm-install-traefik-mr65j                0/1     Pending   0          32m
    +coredns-7448499f4d-x57z7                  1/1     Running   0          32m
    +metrics-server-86cbb8457f-cg2fs           1/1     Running   0          32m
    +local-path-provisioner-5ff76fc89d-kdfcl   1/1     Running   0          32m
    +
    +

    You need to extract a token from the master that will be used to join the nodes +to the master.

    +

    On the master node:

    +
    sudo cat /var/lib/rancher/k3s/server/node-token
    +
    +

    You will then obtain a token that looks like:

    +
    K1097aace305b0c1077fc854547f34a598d2::server:6cc9fbb6c5c9de96f37fb14b8
    +
    +
    +

    Configure K3s on worker nodes to join the cluster

    +

    Run the below command on both of the worker nodes i.e. k3s-worker1 and k3s-worker2 +that you want to join the cluster.

    +
      +
    • +

      SSH into the k3s-worker1 and k3s-worker2 machines

      +
    • +
    • +

      Switch to root user: sudo su

      +
    • +
    • +

      Execute the below command to join the cluster using the token obtained from +the master node:

      +
    • +
    +

    To install K3s on worker nodes and add them to the cluster, run the installation + script with the K3S_URL and K3S_TOKEN environment variables. Here is an example + showing how to join a worker node:

    +
    curl -sfL https://get.k3s.io | K3S_URL=https://<Master-Internal-IP>:6443 \
    +K3S_TOKEN=<Join_Token> sh -
    +
    +

    Where <Master-Internal-IP> is the Internal IP of the master node and <Join_Token> + is the token obtained from the master node.

    +

    For example:

    +
    curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.154:6443 \
    +K3S_TOKEN=K1019827f88b77cc5e1dce04d692d445c1015a578dafdc56aca829b2f
    +501df9359a::server:1bf0d61c85c6dac6d5a0081da55f44ba sh -
    +
    +

    You can verify if the k3s-agent on both of the worker nodes is running by:

    +
    systemctl status k3s-agent
    +
    +

    The output looks like: + K3s Active Agent Status

    +
    +

    To verify that our nodes have successfully been added to the cluster, run the +following command on master node:

    +
    k3s kubectl get nodes
    +
    +

    OR,

    +
    k3s kubectl get nodes -o wide
    +
    +

    Your output should look like:

    +
    k3s kubectl get nodes
    +
    +NAME          STATUS   ROLES                  AGE     VERSION
    +k3s-worker1   Ready    <none>                 5m16s   v1.21.5+k3s2
    +k3s-worker2   Ready    <none>                 5m5s    v1.21.5+k3s2
    +k3s-master    Ready    control-plane,master   9m33s   v1.21.5+k3s2
    +
    +

    This shows that we have successfully setup our K3s cluster ready to deploy applications +to it.

    +
    +

    Deploying Nginx using deployment

    +
      +
    • +

      Create a deployment nginx.yaml on master node

      +
      vi nginx.yaml
      +
      +

      The nginx.yaml looks like this:

      +
      apiVersion: apps/v1
      +kind: Deployment
      +metadata:
      +  name: mysite
      +  labels:
      +    app: mysite
      +spec:
      +  replicas: 1
      +  selector:
      +    matchLabels:
      +      app: mysite
      +  template:
      +    metadata:
      +      labels:
      +        app : mysite
      +    spec:
      +      containers:
      +        - name : mysite
      +          image: nginx
      +          ports:
      +            - containerPort: 80
      +
      +
      kubectl apply -f nginx.yaml
      +
      +
    • +
    • +

      Verify the nginx pod is in Running state:

      +
      sudo k3s kubectl get pods --all-namespaces
      +
      +
    • +
    • +

      Scale the pods to available agents:

      +
      sudo k3s kubectl scale --replicas=2 deploy/mysite
      +
      +
    • +
    • +

      View all deployment status:

      +
      sudo k3s kubectl get deploy mysite
      +
      +NAME     READY   UP-TO-DATE   AVAILABLE   AGE
      +mysite   2/2     2            2           85s
      +
      +
    • +
    • +

      Delete the nginx deployment and pod:

      +
      sudo k3s kubectl delete -f nginx.yaml
      +
      +

      OR,

      +
      sudo k3s kubectl delete deploy mysite
      +
      +
      +

      Note

      +

      Instead of manually applying any new deployment YAML, you can just copy the YAML file into the /var/lib/rancher/k3s/server/manifests/ folder, i.e. sudo cp nginx.yaml /var/lib/rancher/k3s/server/manifests/. This will automatically deploy the newly copied deployment on your cluster.

      +
      +
    • +
    +

    Deploy Addons to K3s

    +

    K3s is a lightweight Kubernetes distribution that doesn't come packaged with all the tools, but you can install them separately.

    +
      +
    • +

      Install Helm Commandline tool on K3s:

      +

      i. Download the latest version of Helm commandline tool using wget from +this page.

      +
      wget https://get.helm.sh/helm-v3.7.0-linux-amd64.tar.gz
      +
      +

      ii. Unpack it:

      +
      tar -zxvf helm-v3.7.0-linux-amd64.tar.gz
      +
      +

      iii. Find the helm binary in the unpacked directory, and move it to its desired +destination

      +
      mv linux-amd64/helm /usr/bin/helm
      +chmod +x /usr/bin/helm
      +
      +

      OR,

      +

      Using Snap:

      +
      snap install helm --classic
      +
      +

      OR,

      +

      Using Apt (Debian/Ubuntu):

      +
      curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
      +sudo apt-get install apt-transport-https --yes
      +echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
      +sudo apt-get update
      +sudo apt-get install helm
      +
      +
    • +
    • +

      Verify the Helm installation:

      +
      helm version
      +
      +version.BuildInfo{Version:"v3.7.0", GitCommit:"eeac83883cb4014fe60267ec63735
      +70374ce770b", GitTreeState:"clean", GoVersion:"go1.16.8"}
      +
      +
    • +
    • +

      Add the helm chart repository to allow installation of applications using helm:

      +
      helm repo add stable https://charts.helm.sh/stable
      +helm repo update
      +
      +
    • +
    +
    +

    Deploy A Sample Nginx Application using Helm

    +

    Nginx can be used as a web proxy to expose ingress +web traffic routes in and out of the cluster.

    +
      +
    • +

      You can install "nginx web-proxy" using Helm:

      +
      export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
      +helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
      +helm repo list
      +helm repo update
      +helm install stable ingress-nginx/ingress-nginx --namespace kube-system \
      +    --set defaultBackend.enabled=false --set controller.publishService.enabled=true
      +
      +
    • +
    • +

      We can test if the application has been installed by:

      +
      k3s kubectl get pods -n kube-system -l app=nginx-ingress -o wide
      +
      +NAME   READY STATUS  RESTARTS AGE  IP        NODE    NOMINATED NODE  READINESS GATES
      +nginx.. 1/1  Running 0        19m  10.42.1.5 k3s-worker1   <none>      <none>
      +
      +
    • +
    • +

      We have successfully deployed nginx web-proxy on k3s. Go to browser, visit http://<Master-Floating-IP> +i.e. http://128.31.25.246 to check the nginx default page.

      +
    • +
    +

    Upgrade K3s Using the Installation Script

    +

    To upgrade K3s from an older version you can re-run the installation script using +the same flags, for example:

    +
    curl -sfL https://get.k3s.io | sh -
    +
    +

    This will upgrade to a newer version in the stable channel by default.

    +

    If you want to upgrade to a newer version in a specific channel (such as latest) +you can specify the channel:

    +
    curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=latest sh -
    +
    +

    If you want to upgrade to a specific version you can run the following command:

    +
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z-rc1 sh -
    +
    +

    From a non-root user's terminal, to install the latest version you do not need to pass INSTALL_K3S_VERSION; by default the installer loads the latest version.

    +
    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig-mode 644" \
    +    sh -
    +
    +
    +

    Note

    +

    For more about on "How to use flags and environment variables" read this.

    +
    +

    Restarting K3s

    +

    Restarting K3s is supported by the installation script for systemd and OpenRC.

    +

    Using systemd:

    +

    To restart servers manually:

    +
    sudo systemctl restart k3s
    +
    +

    To restart agents manually:

    +
    sudo systemctl restart k3s-agent
    +
    +

    Using OpenRC:

    +

    To restart servers manually:

    +
    sudo service k3s restart
    +
    +

    To restart agents manually:

    +
    sudo service k3s-agent restart
    +
    +

    Uninstalling

    +

    If you installed K3s with the help of the install.sh script, an uninstall script +is generated during installation. The script is created on your master node at +/usr/bin/k3s-uninstall.sh or as k3s-agent-uninstall.sh on your worker nodes.

    +

    To remove K3s on the worker nodes, execute:

    +
    sudo /usr/bin/k3s-agent-uninstall.sh
    +sudo rm -rf /var/lib/rancher
    +
    +

    To remove k3s on the master node, execute:

    +
    sudo /usr/bin/k3s-uninstall.sh
    +sudo rm -rf /var/lib/rancher
    +
    +
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/kubernetes/kind/index.html b/other-tools/kubernetes/kind/index.html new file mode 100644 index 00000000..a589dd61 --- /dev/null +++ b/other-tools/kubernetes/kind/index.html @@ -0,0 +1,3468 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    + + + + + + + + + +

    Kind

    +

    Pre-requisite

    +

    We will need 1 VM to create a single-node Kubernetes cluster using kind. We are using the following settings for this purpose:

    +
      +
    • +

      1 Linux machine, almalinux-9-x86_64, cpu-su.2 flavor with 2vCPU, 8GB RAM, +20GB storage - also assign Floating IP + to this VM.

      +
    • +
    • +

      Set up a unique hostname on the machine using the following command:

      +
      echo "<node_internal_IP> <host_name>" >> /etc/hosts
      +hostnamectl set-hostname <host_name>
      +
      +

      For example:

      +
      echo "192.168.0.167 kind" >> /etc/hosts
      +hostnamectl set-hostname kind
      +
      +
    • +
    +

    Install docker on AlmaLinux

    +

    Run the below command on the AlmaLinux VM:

    +
      +
    • +

      SSH into kind machine

      +
    • +
    • +

      Switch to root user: sudo su

      +
    • +
    • +

      Execute the below commands to install Docker:

      +

      Please remove the container-tools module, which includes stable versions of podman, buildah, skopeo, runc, conmon, etc., as well as their dependencies; they will be removed along with the module. If this module is not removed, it will conflict with Docker. Red Hat recommends Podman on RHEL 8.

      +
      dnf module remove container-tools
      +
      +dnf update -y
      +
      +dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
      +
      +dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin
      +
      +systemctl start docker
      +systemctl enable --now docker
      +systemctl status docker
      +
      +docker -v
      +
      +
    • +
    +

    Install kubectl on AlmaLinux

    +
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    +sudo install -o root -g root -m 0755 kubectl /usr/bin/kubectl
    +chmod +x /usr/bin/kubectl
    +
    +
      +
    • +

      Test to ensure that the kubectl is installed:

      +
      kubectl version --client
      +
      +
    • +
    +

    Install kind

    +
    curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
    +chmod +x ./kind
    +mv ./kind /usr/bin
    +
    +
    which kind
    +
    +/bin/kind
    +
    +
    kind version
    +
    +kind v0.11.1 go1.16.4 linux/amd64
    +
    +
      +
    • +

      To communicate with the cluster, just give the cluster name as a context in kubectl:

      +
      kind create cluster --name k8s-kind-cluster1
      +
      +Creating cluster "k8s-kind-cluster1" ...
      +✓ Ensuring node image (kindest/node:v1.21.1) 🖼
      +✓ Preparing nodes 📦
      +✓ Writing configuration 📜
      +✓ Starting control-plane 🕹️
      +✓ Installing CNI 🔌
      +✓ Installing StorageClass 💾
      +Set kubectl context to "kind-k8s-kind-cluster1"
      +You can now use your cluster with:
      +
      +kubectl cluster-info --context kind-k8s-kind-cluster1
      +
      +Have a nice day! 👋
      +
      +
    • +
    • +

      Get the cluster details:

      +
      kubectl cluster-info --context kind-k8s-kind-cluster1
      +
      +Kubernetes control plane is running at https://127.0.0.1:38646
      +CoreDNS is running at https://127.0.0.1:38646/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
      +
      +To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
      +
      +
      kubectl get all
      +
      +NAME                TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
      +service/kubernetes  ClusterIP  10.96.0.1   <none>       443/TCP  5m25s
      +
      +
      kubectl get nodes
      +
      +NAME                             STATUS  ROLES                AGE    VERSION
      +k8s-kind-cluster1-control-plane  Ready  control-plane,master  5m26s  v1.21.1
      +
      +
    • +
    +

    Deleting a Cluster

    +

    If you created a cluster with kind create cluster then deleting is equally simple:

    +
    kind delete cluster
    +
    +
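    Since the cluster on this page was created with an explicit name, pass the same name when deleting it (matching the earlier example):

    +
    kind delete cluster --name k8s-kind-cluster1
    +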
    + + + + + + +
    +
    + + +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm/index.html b/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm/index.html new file mode 100644 index 00000000..ad17731c --- /dev/null +++ b/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm/index.html @@ -0,0 +1,4414 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + +
    + + + + +
    + + +
    + +
    + + + + + + +
    +
    + + + +
    +
    +
    + + + + +
    +
    +
    + + + + + + + +
    +
    + + + + + + + + + +

    Highly Available Kubernetes Cluster using kubeadm

    +

    Objectives

    +
      +
    • +

      Install a multi control-plane(master) Kubernetes cluster

      +
    • +
    • +

      Install a Pod network on the cluster so that your Pods can talk to each other

      +
    • +
    • +

      Deploy and test a sample app

      +
    • +
    • +

      Deploy K8s Dashboard to view all cluster's components

      +
    • +
    +

    Components and architecture

    +

    This shows components and architecture of a highly-available, production-grade +Kubernetes cluster.

    +

    Components and architecture

    +

    You can learn about each component from Kubernetes Components.

    +

    Pre-requisite

    +

    You will need 2 control-plane (master) nodes and 2 worker nodes to create a multi-master Kubernetes cluster using kubeadm. You are going to use the following setup for this purpose:

    +
      +
    • +

      2 Linux machines for master, ubuntu-20.04-x86_64 or your choice of Ubuntu OS +image, cpu-su.2 flavor with 2vCPU, 8GB RAM, 20GB storage.

      +
    • +
    • +

      2 Linux machines for worker, ubuntu-20.04-x86_64 or your choice of Ubuntu OS +image, cpu-su.1 flavor with 1vCPU, 4GB RAM, 20GB storage - also +assign Floating IPs +to both of the worker nodes.

      +
    • +
    • +

      1 Linux machine for loadbalancer, ubuntu-20.04-x86_64 or your choice of Ubuntu +OS image, cpu-su.1 flavor with 1vCPU, 4GB RAM, 20GB storage.

      +
    • +
    • +

      ssh access to all machines: Read more here on how to set up SSH on your remote VMs.

      +
    • +
    • +

      Create 2 security groups with appropriate ports and protocols:

      +

      i. To be used by the master nodes: +Control plane ports and protocols

      +

      ii. To be used by the worker nodes: +Worker node ports and protocols

      +
    • +
    • +

      Set up a unique hostname on each machine using the following command:

      +
      echo "<node_internal_IP> <host_name>" >> /etc/hosts
      +hostnamectl set-hostname <host_name>
      +
      +

      For example:

      +
      echo "192.168.0.167 loadbalancer" >> /etc/hosts
      +hostnamectl set-hostname loadbalancer
      +
      +
    • +
    +

    Steps

    +
      +
    1. +

      Prepare the Loadbalancer node to communicate with the two master nodes' +apiservers on their IPs via port 6443.

      +
    2. +
    3. +

      Do following in all the nodes except the Loadbalancer node:

      +
        +
      • Disable swap.
      • +
      • Install kubelet and kubeadm.
      • +
      • Install container runtime - you will be using containerd.
      • +
      +
    4. +
    5. +

      Initiate kubeadm control plane configuration on one of the master nodes.

      +
    6. +
    7. +

      Save the new master and worker node join commands with the token.

      +
    8. +
    9. +

      Join the second master node to the control plane using the join command.

      +
    10. +
    11. +

      Join the worker nodes to the control plane using the join command.

      +
    12. +
    13. +

      Configure kubeconfig($HOME/.kube/config) on loadbalancer node.

      +
    14. +
    15. +

      Install kubectl on Loadbalancer node.

      +
    16. +
    17. +

      Install CNI network plugin i.e. Flannel on Loadbalancer node.

      +
    18. +
    19. +

      Validate all cluster components and nodes are visible on Loadbalancer node.

      +
    20. +
    21. +

      Deploy a sample app and validate the app from Loadbalancer node.

      +
    22. +
    +
    +

    Setting up loadbalancer

    +

    You will use HAProxy as the primary loadbalancer, but you can use any other option as well. This node will not be part of the K8s cluster; it will sit outside the cluster and interact with the cluster using ports.

    +

    You have 2 master nodes, which means the user can connect to either of the 2 apiservers. The loadbalancer will be used to load balance between the 2 apiservers.

    +
      +
    • +

      Login to the loadbalancer node

      +
    • +
    • +

      Switch to the root user - sudo su

      +
    • +
    • +

      Update your repository and your system

      +
      sudo apt-get update && sudo apt-get upgrade -y
      +
      +
    • +
    • +

      Install haproxy

      +
      sudo apt-get install haproxy -y
      +
      +
    • +
    • +

      Edit haproxy configuration

      +
      vi /etc/haproxy/haproxy.cfg
      +
      +

      Add the below lines to create a frontend configuration for loadbalancer -

      +
      frontend fe-apiserver
      +bind 0.0.0.0:6443
      +mode tcp
      +option tcplog
      +default_backend be-apiserver
      +
      +

      Add the below lines to create a backend configuration for master1 and master2 +nodes at port 6443.

      +
      +

      Note

      +

      6443 is the default port of kube-apiserver

      +
      +
      backend be-apiserver
      +mode tcp
      +option tcplog
      +option tcp-check
      +balance roundrobin
      +default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
      +
      +    server master1 10.138.0.15:6443 check
      +    server master2 10.138.0.16:6443 check
      +
      +

      Here - master1 and master2 are the hostnames of the master nodes and +10.138.0.15 and 10.138.0.16 are the corresponding internal IP addresses.

      +
    • +
    • +

      Ensure haproxy config file is correctly formatted:

      +
      haproxy -c -q -V -f /etc/haproxy/haproxy.cfg
      +
      +
    • +
    • +

      Restart and Verify haproxy

      +
      systemctl restart haproxy
      +systemctl status haproxy
      +
      +

      Ensure haproxy is in a running state.

      +

      Run nc command as below:

      +
      nc -v localhost 6443
      +Connection to localhost 6443 port [tcp/*] succeeded!
      +
      +
      +

      Note

      +

      If you see failures for master1 and master2 connectivity, you can ignore them for the time being, as you have not yet installed anything on the servers.

      +
      +
    • +
    +
    +

    Install kubeadm, kubelet and containerd on master and worker nodes

    +

    kubeadm will not install or manage kubelet or kubectl for you, so you will +need to ensure they match the version of the Kubernetes control plane you want kubeadm +to install for you. You will install these packages on all of your machines:

    +

    kubeadm: the command to bootstrap the cluster.

    +

    kubelet: the component that runs on all of the machines in your cluster and +does things like starting pods and containers.

    +

    kubectl: the command line util to talk to your cluster.

    +

    In this step, you will install kubelet and kubeadm on the below nodes

    +
      +
    • +

      master1

      +
    • +
    • +

      master2

      +
    • +
    • +

      worker1

      +
    • +
    • +

      worker2

      +
    • +
    +

    The below steps will be performed on all the above mentioned nodes:

    +
      +
    • +

      SSH into all the 4 machines

      +
    • +
    • +

      Update the repositories and packages:

      +
      sudo apt-get update && sudo apt-get upgrade -y
      +
      +
    • +
    • +

      Turn off swap

      +
      swapoff -a
      +sudo sed -i '/ swap / s/^/#/' /etc/fstab
      +
      +
    • +
    • +

      Install curl and apt-transport-https

      +
      sudo apt-get update && sudo apt-get install -y apt-transport-https curl
      +
      +
    • +
    • +

      Download the Google Cloud public signing key and add key to verify releases

      +
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      +
      +
    • +
    • +

      Add the Kubernetes apt repository

      +
      cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
      +deb https://apt.kubernetes.io/ kubernetes-xenial main
      +EOF
      +
      +
    • +
    • +

      Install kubelet and kubeadm

      +
      sudo apt-get update
      +sudo apt-get install -y kubelet kubeadm
      +
      +
    • +
    • +

      apt-mark hold is used so that these packages will not be updated/removed automatically

      +
      sudo apt-mark hold kubelet kubeadm
      +
      +
    • +
    +
    +

    Install the container runtime i.e. containerd on master and worker nodes

    +

    To run containers in Pods, Kubernetes uses a container runtime.

    +

    By default, Kubernetes uses the Container Runtime Interface (CRI) to interface +with your chosen container runtime.

    +
      +
    • +

      Install container runtime - containerd

      +

      The first thing to do is configure the persistent loading of the necessary containerd modules. Forwarding IPv4 and letting iptables see bridged traffic is done with the following command:

      +
      cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
      +overlay
      +br_netfilter
      +EOF
      +
      +sudo modprobe overlay
      +sudo modprobe br_netfilter
      +
      +
    • +
    • +

      Ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:

      +
      # sysctl params required by setup, params persist across reboots
      +cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
      +net.bridge.bridge-nf-call-iptables  = 1
      +net.bridge.bridge-nf-call-ip6tables = 1
      +net.ipv4.ip_forward                 = 1
      +EOF
      +
      +
    • +
    • +

      Apply sysctl params without reboot:

      +
      sudo sysctl --system
      +
      +
    • +
    • +

      Install the necessary dependencies with:

      +
      sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
      +
      +
    • +
    • +

      The containerd.io packages in DEB and RPM formats are distributed by Docker. +Add the required GPG key with:

      +
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      +sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      +
      +

      It's now time to Install and configure containerd:

      +
      sudo apt update -y
      +sudo apt install -y containerd.io
      +containerd config default | sudo tee /etc/containerd/config.toml
      +
      +# Reload the systemd daemon with
      +sudo systemctl daemon-reload
      +
      +# Start containerd
      +sudo systemctl restart containerd
      +sudo systemctl enable --now containerd
      +
      +

      You can verify containerd is running with the command:

      +
      sudo systemctl status containerd
      +
      +
    • +
    +
    +

    Configure kubeadm to bootstrap the cluster

    +

    You will start off by initializing only one master node. For this purpose, you choose master1 to initialize your first control plane, but you can also do the same on master2.

    +
      +
    • +

      SSH into master1 machine

      +
    • +
    • +

      Switch to root user: sudo su

      +
      +

      Configuring the kubelet cgroup driver

      +

      From 1.22 onwards, if you do not set the cgroupDriver field under KubeletConfiguration, kubeadm will default it to systemd. So you do not need to do anything here by default, but if you want to change it you can refer to this documentation; a minimal example is sketched below.

      +
      +
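      For reference, a minimal snippet that sets the field explicitly would look like this (a sketch; it can be added as an extra document in the configuration file passed to kubeadm init --config, and systemd is already the default from 1.22 onwards):

      +
      # kubeadm-config.yaml (illustrative file name)
      +apiVersion: kubelet.config.k8s.io/v1beta1
      +kind: KubeletConfiguration
      +cgroupDriver: systemd
      +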
    • +
    • +

      Execute the below command to initialize the cluster:

      +
      kubeadm config images pull
      +kubeadm init --control-plane-endpoint \
      +"LOAD_BALANCER_IP_OR_HOSTNAME:LOAD_BALANCER_PORT" --upload-certs --pod-network-cidr=10.244.0.0/16
      +
      +

      Here, you can use either the IP address or the hostname of the loadbalancer in place of LOAD_BALANCER_IP_OR_HOSTNAME. In this setup the hostname of the server, i.e. loadbalancer, has not been made resolvable from the master1 node, so instead of using hostnames that are not resolvable across your network, you will be using the IP address of the loadbalancer server.

      +

      The LOAD_BALANCER_PORT is the frontend configuration port defined in the HAProxy configuration. For this, you have kept the port as 6443, which is the default apiserver port.

      +
      +

      Important Note

      +

      The --pod-network-cidr value depends upon which CNI plugin you are going to use, so you need to be very careful while setting this CIDR value. In our case, you are going to use the Flannel CNI network plugin, so you will use --pod-network-cidr=10.244.0.0/16. If you opt to use the Calico CNI network plugin, you need to use --pod-network-cidr=192.168.0.0/16, and if you opt to use Weave Net, there is no need to pass this parameter.

      +
      +

      For example, our Flannel CNI network plugin based kubeadm init command with +loadbalancer node with internal IP: 192.168.0.167 look like below:

      +
      kubeadm config images pull
      +kubeadm init --control-plane-endpoint "192.168.0.167:6443" --upload-certs --pod-network-cidr=10.244.0.0/16
      +
      +

      Save the output in some secure file for future use. This will show a unique token to join the control plane. The output from kubeadm init should look like below:

      +
      Your Kubernetes control-plane has initialized successfully!
      +
      +To start using your cluster, you need to run the following as a regular user:
      +
      +mkdir -p $HOME/.kube
      +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      +sudo chown $(id -u):$(id -g) $HOME/.kube/config
      +
      +Alternatively, if you are the root user, you can run:
      +
      +export KUBECONFIG=/etc/kubernetes/admin.conf
      +
      +You should now deploy a pod network to the cluster.
      +Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      +https://kubernetes.io/docs/concepts/cluster-administration/addons/
      +
      +You can now join any number of the control-plane node running the following
      +command on each worker nodes as root:
      +
      +kubeadm join 192.168.0.167:6443 --token cnslau.kd5fjt96jeuzymzb \
      +    --discovery-token-ca-cert-hash sha256:871ab3f050bc9790c977daee9e44cf52e15ee3
      +    7ab9834567333b939458a5bfb5 \
      +    --control-plane --certificate-key 824d9a0e173a810416b4bca7038fb33b616108c17abcbc5eaef8651f11e3d146
      +
      +Please note that the certificate-key gives access to cluster sensitive data, keep
      +it secret!
      +As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you
      +can use "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
      +
      +Then you can join any number of worker nodes by running the following on each as
      +root:
      +
      +kubeadm join 192.168.0.167:6443 --token cnslau.kd5fjt96jeuzymzb \
      +    --discovery-token-ca-cert-hash sha256:871ab3f050bc9790c977daee9e44cf52e15ee37ab9834567333b939458a5bfb5
      +
      +

      The output consists of 3 major tasks:

      +

      A. Set up kubeconfig on the current master node: As you are running as the root user, you need to run the following command:

      +
      export KUBECONFIG=/etc/kubernetes/admin.conf
      +
      +

      We need to run the below commands as a normal user to use kubectl from the terminal.

      +
      mkdir -p $HOME/.kube
      +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      +sudo chown $(id -u):$(id -g) $HOME/.kube/config
      +
      +

      Now the machine is initialized as master.

      +
      +

      Warning

      +

      Kubeadm signs the certificate in the admin.conf to have +Subject: O = system:masters, CN = kubernetes-admin. system:masters is a +break-glass, super user group that bypasses the authorization layer +(e.g. RBAC). Do not share the admin.conf file with anyone and instead +grant users custom permissions by generating them a kubeconfig file using +the kubeadm kubeconfig user command.

      +
      +

      B. Set up a new control plane (master), i.e. master2, by running the following command on the master2 node:

      +
      kubeadm join 192.168.0.167:6443 --token cnslau.kd5fjt96jeuzymzb \
      +    --discovery-token-ca-cert-hash sha256:871ab3f050bc9790c977daee9e44cf52e1
      +        5ee37ab9834567333b939458a5bfb5 \
      +    --control-plane --certificate-key 824d9a0e173a810416b4bca7038fb33b616108c17abcbc5eaef8651f11e3d146
      +
      +

      C. Join the worker nodes by running the following command on each individual worker node:

      +
      kubeadm join 192.168.0.167:6443 --token cnslau.kd5fjt96jeuzymzb \
      +    --discovery-token-ca-cert-hash sha256:871ab3f050bc9790c977daee9e44cf52e15ee37ab9834567333b939458a5bfb5
      +
      +
      +

      Important Note

      +

      Your output will be different from what is provided here. While performing the rest of the demo, ensure that you are executing the command provided by your output, and don't copy and paste from here.

      +
      +

      If you do not have the token, you can get it by running the following command on +the control-plane node:

      +
      kubeadm token list
      +
      +

      The output is similar to this:

      +
      TOKEN     TTL  EXPIRES      USAGES           DESCRIPTION            EXTRA GROUPS
      +8ewj1p... 23h  2018-06-12   authentication,  The default bootstrap  system:
      +                            signing          token generated by     bootstrappers:
      +                                            'kubeadm init'.         kubeadm:
      +                                                                    default-node-token
      +
      +

      If you missed the join command, execute the following command +kubeadm token create --print-join-command in the master node to recreate the +token with the join command.

      +
      root@master:~$ kubeadm token create --print-join-command
      +
      +kubeadm join 10.2.0.4:6443 --token xyzeyi.wxer3eg9vj8hcpp2 \
      +--discovery-token-ca-cert-hash sha256:ccfc92b2a31b002c3151cdbab77ff4dc32ef13b213fa3a9876e126831c76f7fa
      +
      +

      By default, tokens expire after 24 hours. If you are joining a node to the cluster +after the current token has expired, you can create a new token by running the +following command on the control-plane node:

      +
      kubeadm token create
      +
      +

      The output is similar to this: +5didvk.d09sbcov8ph2amjw

      +

      We can use this new token to join:

      +
      kubeadm join <master-ip>:<master-port> --token <token> \
      +    --discovery-token-ca-cert-hash sha256:<hash>
      +
      +
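      If you also need the --discovery-token-ca-cert-hash value for an existing cluster, it can be derived from the cluster CA certificate with a standard openssl pipeline (run this on a control-plane node):

      +
      openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
      +openssl rsa -pubin -outform der 2>/dev/null | \
      +openssl dgst -sha256 -hex | sed 's/^.* //'
      +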
    • +
    +
    +
      +
    • +

      SSH into master2

      +
    • +
    • +

      Switch to root user: sudo su

      +
    • +
    • +

      Check the command provided by the output of master1:

      +

      You can now use the below command to add another control-plane node(master) to +the control plane:

      +
      kubeadm join 192.168.0.167:6443 --token cnslau.kd5fjt96jeuzymzb
      +    --discovery-token-ca-cert-hash sha256:871ab3f050bc9790c977daee9e44cf52e15ee3
      +    7ab9834567333b939458a5bfb5 \
      +    --control-plane --certificate-key 824d9a0e173a810416b4bca7038fb33b616108c17abcbc5eaef8651f11e3d146
      +
      +
    • +
    • +

      Execute the kubeadm join command for control plane on master2

      +

      Your output should look like:

      +
      This node has joined the cluster and a new control plane instance was created:
      +
      +* Certificate signing request was sent to apiserver and approval was received.
      +* The Kubelet was informed of the new secure connection details.
      +* Control plane (master) label and taint were applied to the new node.
      +* The Kubernetes control plane instances scaled up.
      +* A new etcd member was added to the local/stacked etcd cluster.
      +
      +
    • +
    +

    Now that you have initialized both masters, you can work on bootstrapping the worker nodes.

    +
      +
    • +

      SSH into worker1 and worker2

      +
    • +
    • +

      Switch to root user on both the machines: sudo su

      +
    • +
    • +

      Check the output given by the init command on master1 to join worker node:

      +
      kubeadm join 192.168.0.167:6443 --token cnslau.kd5fjt96jeuzymzb \
      +    --discovery-token-ca-cert-hash sha256:871ab3f050bc9790c977daee9e44cf52e15ee37ab9834567333b939458a5bfb5
      +
      +
    • +
    • +

      Execute the above command on both the nodes:

      +
    • +
    • +

      Your output should look like:

      +
      This node has joined the cluster:
      +* Certificate signing request was sent to apiserver and a response was received.
      +* The Kubelet was informed of the new secure connection details.
      +
      +
    • +
    +
    +

    Configure kubeconfig on loadbalancer node

    +

    Now that you have configured the master and the worker nodes, it's time to configure kubeconfig (.kube) on the loadbalancer node. It is completely up to you whether you want to use the loadbalancer node to set up kubeconfig; kubeconfig can also be set up externally on a separate machine which has access to the loadbalancer node. For the purpose of this demo you will use the loadbalancer node to host kubeconfig and kubectl.

    +
      +
    • +

      SSH into loadbalancer node

      +
    • +
    • +

      Switch to root user: sudo su

      +
    • +
    • +

      Create a directory: .kube at $HOME of root user

      +
      mkdir -p $HOME/.kube
      +
      +
    • +
    • +

      SCP configuration file from any one master node to loadbalancer node

      +
      scp master1:/etc/kubernetes/admin.conf $HOME/.kube/config
      +
      +
      +

      Important Note

      +

      If you haven't set up an SSH connection between the master node and the loadbalancer, you can manually copy the contents of the file /etc/kubernetes/admin.conf from the master1 node and then paste it into the $HOME/.kube/config file on the loadbalancer node. Ensure that the kubeconfig file path is $HOME/.kube/config on the loadbalancer node.

      +
      +
    • +
    • +

      Provide appropriate ownership to the copied file

      +
      chown $(id -u):$(id -g) $HOME/.kube/config
      +
      +
    • +
    +
    +

    Install kubectl

    +
      +
    • +

      Install kubectl binary

      +

      kubectl: the command line util to talk to your cluster.

      +
      snap install kubectl --classic
      +
      +

      This outputs: kubectl 1.26.1 from Canonical✓ installed

      +
    • +
    • +

      Verify the cluster

      +
      kubectl get nodes
      +
      +NAME      STATUS        ROLES                  AGE     VERSION
      +master1   NotReady      control-plane,master   21m     v1.26.1
      +master2   NotReady      control-plane,master   15m     v1.26.1
      +worker1   Ready         <none>                 9m17s   v1.26.1
      +worker2   Ready         <none>                 9m25s   v1.26.1
      +
      +
    • +
    +
    +

    Install CNI network plugin

    +

    CNI overview

    +

    Managing a network where containers can interoperate efficiently is very +important. Kubernetes has adopted the Container Network Interface(CNI) +specification for managing network resources on a cluster. This relatively +simple specification makes it easy for Kubernetes to interact with a wide range +of CNI-based software solutions. Using this CNI plugin allows Kubernetes pods to +have the same IP address inside the pod as they do on the VPC network. Make sure +the configuration corresponds to the Pod CIDR specified in the kubeadm +configuration file if applicable.

    +

    You must deploy a CNI based Pod network add-on so that your Pods can communicate +with each other. Cluster DNS (CoreDNS) will not start up before a network is +installed. To verify you can run this command: kubectl get po -n kube-system:

    +

    You should see the following output. You will see the two coredns-* pods in a Pending state; this is the expected behavior. Once we install the network plugin, they will be in a Running state.

    +

    Output Example:

    +
    root@loadbalancer:~$ kubectl get po -n kube-system
    + NAME                               READY  STATUS   RESTARTS  AGE
    +coredns-558bd4d5db-5jktc             0/1   Pending   0        10m
    +coredns-558bd4d5db-xdc5x             0/1   Pending   0        10m
    +etcd-master1                         1/1   Running   0        11m
    +kube-apiserver-master1               1/1   Running   0        11m
    +kube-controller-manager-master1      1/1   Running   0        11m
    +kube-proxy-5jfh5                     1/1   Running   0        10m
    +kube-scheduler-master1               1/1   Running   0        11m
    +
    +

    Supported CNI options

    +

    To read more about the currently supported base CNI solutions for Kubernetes +read here +and also read this.

    +

    The below command can be run on the Loadbalancer node to install the CNI plugin:

    +
    kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
    +
    +

    As you passed --pod-network-cidr=10.244.0.0/16 with kubeadm init, this should work for the Flannel CNI.

    +
    +

    Using Other CNI Options

    +

    For Calico CNI plugin to work correctly, you need to pass +--pod-network-cidr=192.168.0.0/16 with kubeadm init and then you can run: +kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

    +
    +

    For Weave Net CNI plugin to work correctly, you don't need to pass +--pod-network-cidr with kubeadm init and then you can run: +kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl +version | base64 | tr -d '\n')"

    +
    +
      +
    • +

      Dual Network: It is highly recommended to follow an internal/external network layout for your cluster, as shown in this diagram: Dual Network Diagram

      +

      To enable this, just give two different names to the internal and external interfaces, according to your distro of choice's naming scheme:

      +
      external_interface: eth0
      +internal_interface: eth1
      +
      +

      You can also decide here what CIDR your cluster should use:

      +
      cluster_cidr: 10.43.0.0/16
      +service_cidr: 10.44.0.0/16
      +
      +

      Once you have successfully installed the Flannel CNI component on your cluster, you can verify your HA cluster by running:

      +
      kubectl get nodes
      +
      +NAME      STATUS   ROLES                    AGE   VERSION
      +master1   Ready    control-plane,master     22m   v1.26.1
      +master2   Ready    control-plane,master     17m   v1.26.1
      +worker1   Ready    <none>                   10m   v1.26.1
      +worker2   Ready    <none>                   10m   v1.26.1
      +
      +
    • +
    +
    +

    Deploy A Sample Nginx Application From one of the master nodes

    +

    Now that we have all the components to make the cluster and applications work, let's deploy a sample Nginx application and see if we can access it over a NodePort, which has a port range of 30000-32767.

    +

    The below commands can be run on one of the master nodes:

    +
    kubectl run nginx --image=nginx --port=80
    +kubectl expose pod nginx --port=80 --type=NodePort
    +
    +

To check which NodePort has been opened for the Nginx service, run:

    +
    kubectl get svc
    +
    +

    The output will show: +Running Services

    +

    Once the deployment is up, you should be able to access the Nginx home page on +the allocated NodePort from either of the worker nodes' Floating IP.

    +

To check which worker node is serving nginx, you can check the NODE column by running the following command:

    +
    kubectl get pods --all-namespaces --output wide
    +
    +

    OR,

    +
    kubectl get pods -A -o wide
    +
    +

The output will look like the following:

    +

    Nginx Pod and Worker

    +

Go to a browser and visit http://<Worker-Floating-IP>:<NodePort>, e.g. http://128.31.25.246:32713, to check the nginx default page. Here <Worker-Floating-IP> corresponds to the Floating IP of the worker node running the nginx pod, i.e. worker2.

    +

For our example:

    +

    nginx default page

    +
    +
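As an alternative to the browser check, a quick command-line test works as well; the placeholders below are the same ones used above, so substitute your own Floating IP and NodePort:

# Expect an HTTP 200 response with the nginx welcome page headers
curl -I http://<Worker-Floating-IP>:<NodePort>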

    Deploy A K8s Dashboard

    +

You are going to set up K8dash/Skooner to view a dashboard that shows all your K8s cluster components.

    +
      +
    • +

      SSH into loadbalancer node

      +
    • +
    • +

      Switch to root user: sudo su

      +
    • +
    • +

      Apply available deployment by running the following command:

      +
      kubectl apply -f https://raw.githubusercontent.com/skooner-k8s/skooner/master/kubernetes-skooner-nodeport.yaml
      +
      +

This will map Skooner port 4654 to a randomly selected NodePort on the running node. The assigned NodePort can be found by running:

      +
      kubectl get svc --namespace=kube-system
      +
      +

      OR,

      +
      kubectl get po,svc -n kube-system
      +
      +

      Skooner Service Port

      +

To check which worker node is serving skooner-*, you can check the NODE column by running the following command:

      +
      kubectl get pods --all-namespaces --output wide
      +
      +

      OR,

      +
      kubectl get pods -A -o wide
      +
      +

The output will look like the following:

      +

      Skooner Pod and Worker

      +

Go to a browser and visit http://<Worker-Floating-IP>:<NodePort>, e.g. http://128.31.25.246:30495, to check the skooner dashboard page. Here <Worker-Floating-IP> corresponds to the Floating IP of the worker node running the skooner-* pod, i.e. worker2.

      +

      Skooner Dashboard

      +
    • +
    +

    Setup the Service Account Token to access the Skooner Dashboard:

    +

    The first (and easiest) option is to create a dedicated service account. Run the +following commands:

    +
      +
    • +

      Create the service account in the current namespace (we assume default)

      +
      kubectl create serviceaccount skooner-sa
      +
      +
    • +
    • +

      Give that service account root on the cluster

      +
      kubectl create clusterrolebinding skooner-sa --clusterrole=cluster-admin --serviceaccount=default:skooner-sa
      +
      +
    • +
    • +

Create a secret to hold the token for the SA:

      +
      kubectl apply -f - <<EOF
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +    name: skooner-sa-token
      +    annotations:
      +        kubernetes.io/service-account.name: skooner-sa
      +type: kubernetes.io/service-account-token
      +EOF
      +
      +
      +

      Information

      +

Since 1.22, this type of Secret is no longer used to mount credentials into Pods, and obtaining tokens via the TokenRequest API is recommended instead of using service account token Secret objects. Tokens obtained from the TokenRequest API are more secure than ones stored in Secret objects, because they have a bounded lifetime and are not readable by other API clients. You can use the kubectl create token command to obtain a token from the TokenRequest API. For example: kubectl create token skooner-sa, where skooner-sa is the service account name.

      +
      +
    • +
    • +

      Find the secret that was created to hold the token for the SA

      +
      kubectl get secrets
      +
      +
    • +
    • +

      Show the contents of the secret to extract the token

      +
      kubectl describe secret skooner-sa-token
      +
      +

      Copy the token value from the secret detail and enter it into the login screen +to access the dashboard.

      +
    • +
    +

    Watch Demo Video showing how to setup the cluster

    +

Here’s a recorded demo video on how to set up an HA K8s cluster using kubeadm as explained above.

    +
    +

    Very Important: Certificates Renewal

    +

    Client certificates generated by kubeadm expire after one year unless the +Kubernetes version is upgraded or the certificates are manually renewed.

    +

    To renew certificates manually, you can use the kubeadm certs renew command with +the appropriate command line options. After running the command, you should +restart the control plane Pods.

    +

    kubeadm certs renew can renew any specific certificate or, with the subcommand +all, it can renew all of them, as shown below:

    +
    kubeadm certs renew all
    +
    +
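Before (or after) renewing, you can also inspect when each certificate expires using the standard kubeadm subcommand:

# Lists expiration dates for all kubeadm-managed certificates
kubeadm certs check-expiration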

Once the certificates are renewed, you must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd so that they can use the new certificates, by running:

    +
    systemctl restart kubelet
    +
    +

    Then, update the new kube config file:

    +
    export KUBECONFIG=/etc/kubernetes/admin.conf
    +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    +
    +
    +

    Don't Forget to Update the older kube config file

    +

    Update wherever you are using the older kube config to connect with the cluster.

    +
    +

    Clean Up

    +
      +
    • +

      To view the Cluster info:

      +
      kubectl cluster-info
      +
      +
    • +
    • +

      To delete your local references to the cluster:

      +
      kubectl config delete-cluster
      +
      +
    • +
    +

    How to Remove the node?

    +

    Talking to the control-plane node with the appropriate credentials, run:

    +
    kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
    +
    +
      +
    • +

      Before removing the node, reset the state installed by kubeadm:

      +
      kubeadm reset
      +
      +

      The reset process does not reset or clean up iptables rules or IPVS tables. If +you wish to reset iptables, you must do so manually:

      +
      iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
      +
      +

      If you want to reset the IPVS tables, you must run the following command:

      +
      ipvsadm -C
      +
      +
    • +
    • +

      Now remove the node:

      +
      kubectl delete node <node name>
      +
      +

      If you wish to start over, run kubeadm init or kubeadm join with the +appropriate arguments.

      +
    • +
    +
diff --git a/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm/index.html b/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm/index.html new file mode 100644 index 00000000..cb53d9b0

    Creating a Single Master cluster with kubeadm

    +

    Objectives

    +
      +
    • +

      Install a single control-plane(master) Kubernetes cluster

      +
    • +
    • +

      Install a Pod network on the cluster so that your Pods can talk to each other

      +
    • +
    • +

      Deploy and test a sample app

      +
    • +
    • +

      Deploy K8s Dashboard to view all cluster's components

      +
    • +
    +

Components and architecture

    +

Components and architecture

    +

You can learn about each component from Kubernetes Components.

    +

    Pre-requisite

    +

We will need 1 control-plane (master) and 2 worker nodes to create a single control-plane Kubernetes cluster using kubeadm. We are using the following settings for this purpose:

    +
      +
    • +

      1 Linux machine for master, ubuntu-20.04-x86_64, cpu-su.2 flavor with 2vCPU, +8GB RAM, 20GB storage.

      +
    • +
    • +

      2 Linux machines for worker, ubuntu-20.04-x86_64, cpu-su.1 flavor with 1vCPU, + 4GB RAM, 20GB storage - also assign Floating IPs + to both of the worker nodes.

      +
    • +
    • +

      ssh access to all machines: Read more here +on how to set up SSH on your remote VMs.

      +
    • +
    • +

      Create 2 security groups with appropriate ports and protocols:

      +

      i. To be used by the master nodes: +Control plane ports and protocols

      +

      ii. To be used by the worker nodes: +Worker node ports and protocols

      +
    • +
    • +

Set up a unique hostname on each machine using the following command:

      +
      echo "<node_internal_IP> <host_name>" >> /etc/hosts
      +hostnamectl set-hostname <host_name>
      +
      +

      For example:

      +
      echo "192.168.0.167 master" >> /etc/hosts
      +hostnamectl set-hostname master
      +
      +
    • +
    +

    Steps

    +
      +
    1. +

      Disable swap on all nodes.

      +
    2. +
    3. +

      Install kubeadm, kubelet, and kubectl on all the nodes.

      +
    4. +
    5. +

Install a container runtime on all nodes; you will be using containerd.

      +
    6. +
    7. +

      Initiate kubeadm control plane configuration on the master node.

      +
    8. +
    9. +

      Save the worker node join command with the token.

      +
    10. +
    11. +

      Install CNI network plugin i.e. Flannel on master node.

      +
    12. +
    13. +

      Join the worker node to the master node (control plane) using the join command.

      +
    14. +
    15. +

      Validate all cluster components and nodes are visible on master node.

      +
    16. +
    17. +

      Deploy a sample app and validate the app from master node.

      +
    18. +
    +

    Install kubeadm, kubelet and containerd on master and worker nodes

    +

    kubeadm will not install or manage kubelet or kubectl for you, so you will +need to ensure they match the version of the Kubernetes control plane you want kubeadm +to install for you. You will install these packages on all of your machines:

    +

    kubeadm: the command to bootstrap the cluster.

    +

    kubelet: the component that runs on all of the machines in your cluster and +does things like starting pods and containers.

    +

    kubectl: the command line util to talk to your cluster.

    +

In this step, you will install kubeadm, kubelet, and kubectl on the nodes below:

    +
      +
    • master
    • +
    • worker1
    • +
    • worker2
    • +
    +

    The below steps will be performed on all the above mentioned nodes:

    +
      +
    • +

      SSH into all the 3 machines

      +
    • +
    • +

      Update the repositories and packages:

      +
      sudo apt-get update && sudo apt-get upgrade -y
      +
      +
    • +
    • +

      Turn off swap

      +
      swapoff -a
      +sudo sed -i '/ swap / s/^/#/' /etc/fstab
      +
      +
    • +
    • +

      Install curl and apt-transport-https

      +
      sudo apt-get update && sudo apt-get install -y apt-transport-https curl
      +
      +
    • +
    • +

      Download the Google Cloud public signing key and add key to verify releases

      +
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      +
      +
    • +
    • +

      add kubernetes apt repo

      +
      cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
      +deb https://apt.kubernetes.io/ kubernetes-xenial main
      +EOF
      +
      +
    • +
    • +

      Install kubelet, kubeadm, and kubectl

      +
      sudo apt-get update
      +sudo apt-get install -y kubelet kubeadm kubectl
      +
      +
    • +
    • +

      apt-mark hold is used so that these packages will not be updated/removed automatically

      +
      sudo apt-mark hold kubelet kubeadm kubectl
      +
      +
    • +
    +
    +
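As an optional sanity check after completing the steps above (these version commands are standard tooling, not part of the original guide), confirm that all three tools report the expected version on each node:

kubeadm version -o short
kubectl version --client
kubelet --version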

    Install the container runtime i.e. containerd on master and worker nodes

    +

    To run containers in Pods, Kubernetes uses a container runtime.

    +

    By default, Kubernetes uses the Container Runtime Interface (CRI) to interface +with your chosen container runtime.

    +
      +
    • +

      Install container runtime - containerd

      +

The first thing to do is configure the persistent loading of the necessary containerd modules. Forwarding IPv4 and letting iptables see bridged traffic is done with the following command:

      +
      cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
      +overlay
      +br_netfilter
      +EOF
      +
      +sudo modprobe overlay
      +sudo modprobe br_netfilter
      +
      +
    • +
    • +

      Ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:

      +
      # sysctl params required by setup, params persist across reboots
      +cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
      +net.bridge.bridge-nf-call-iptables  = 1
      +net.bridge.bridge-nf-call-ip6tables = 1
      +net.ipv4.ip_forward                 = 1
      +EOF
      +
      +
    • +
    • +

      Apply sysctl params without reboot:

      +
      sudo sysctl --system
      +
      +
    • +
    • +

      Install the necessary dependencies with:

      +
      sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
      +
      +
    • +
    • +

      The containerd.io packages in DEB and RPM formats are distributed by Docker. +Add the required GPG key with:

      +
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      +sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      +
      +

It's now time to install and configure containerd:

      +
      sudo apt update -y
      +sudo apt install -y containerd.io
      +containerd config default | sudo tee /etc/containerd/config.toml
      +
      +# Reload the systemd daemon with
      +sudo systemctl daemon-reload
      +
      +# Start containerd
      +sudo systemctl restart containerd
      +sudo systemctl enable --now containerd
      +
      +

      You can verify containerd is running with the command:

      +
      sudo systemctl status containerd
      +
      +
      +

      Configuring the kubelet cgroup driver

      +

From 1.22 onwards, if you do not set the cgroupDriver field under KubeletConfiguration, kubeadm will default it to systemd. So you do not need to do anything here by default, but if you want to change it you can refer to this documentation; a minimal example of such a configuration is sketched after this list.

      +
      +
    • +
    +
    +
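As referenced in the note above, here is a minimal sketch of what an explicit cgroup driver setting could look like if you chose to override the default; the file name kubeadm-config.yaml is just an assumed example that you would pass to kubeadm init --config:

# kubeadm-config.yaml (hypothetical example; only needed if you override the default)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd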

    Configure kubeadm to bootstrap the cluster on master node

    +

Run the below commands on the master node, i.e. master, that you want to set up as the control plane.

    +
      +
    • +

      SSH into master machine

      +
    • +
    • +

      Switch to root user: sudo su

      +
    • +
    • +

      Execute the below command to initialize the cluster:

      +
      export MASTER_IP=<Master-Internal-IP>
      +kubeadm config images pull
      +kubeadm init --apiserver-advertise-address=${MASTER_IP} --pod-network-cidr=10.244.0.0/16
      +
      +
      +

      Important Note

      +

Please make sure you replace <Master-Internal-IP> with the correct Internal IP of the master node. The --pod-network-cidr value depends upon which CNI plugin you are going to use, so be very careful while setting this CIDR value. In our case, you are going to use the Flannel CNI network plugin, so you will use: --pod-network-cidr=10.244.0.0/16. If you opt to use the Calico CNI network plugin, you need to use: --pod-network-cidr=192.168.0.0/16, and if you opt to use Weave Net, there is no need to pass this parameter.

      +
      +

For example, our Flannel CNI network plugin based kubeadm init command for a master node with internal IP 192.168.0.167 looks like below:

      +

      For example:

      +
      export MASTER_IP=192.168.0.167
      +kubeadm config images pull
      +kubeadm init --apiserver-advertise-address=${MASTER_IP} --pod-network-cidr=10.244.0.0/16
      +
      +

Save the output in a secure file for future use. It shows a unique token to join the control plane. The output from kubeadm init should look like below:

      +
      Your Kubernetes control-plane has initialized successfully!
      +
      +To start using your cluster, you need to run the following as a regular user:
      +
      +mkdir -p $HOME/.kube
      +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      +sudo chown $(id -u):$(id -g) $HOME/.kube/config
      +
      +Alternatively, if you are the root user, you can run:
      +
      +export KUBECONFIG=/etc/kubernetes/admin.conf
      +
      +You should now deploy a pod network to the cluster.
      +Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      +https://kubernetes.io/docs/concepts/cluster-administration/addons/
      +
      +You can now join any number of the control-plane node running the following
      +command on each worker nodes as root:
      +
      +kubeadm join 192.168.0.167:6443 --token cnslau.kd5fjt96jeuzymzb \
      +    --discovery-token-ca-cert-hash sha256:871ab3f050bc9790c977daee9e44cf52e15ee3
      +    7ab9834567333b939458a5bfb5 \
      +    --control-plane --certificate-key 824d9a0e173a810416b4bca7038fb33b616108c17abcbc5eaef8651f11e3d146
      +
      +Please note that the certificate-key gives access to cluster sensitive data, keep
      +it secret!
      +As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you
      +can use "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
      +
      +Then you can join any number of worker nodes by running the following on each as
      +root:
      +
      +kubeadm join 192.168.0.167:6443 --token cnslau.kd5fjt96jeuzymzb \
      +    --discovery-token-ca-cert-hash sha256:871ab3f050bc9790c977daee9e44cf52e15ee37ab9834567333b939458a5bfb5
      +
      +

      The output consists of 2 major tasks:

      +

A. Set up kubeconfig on the current master node: As you are running as the root user, you need to run the following command:

      +
      export KUBECONFIG=/etc/kubernetes/admin.conf
      +
      +

To use kubectl from the terminal as a normal user, run the below commands:

      +
      mkdir -p $HOME/.kube
      +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      +sudo chown $(id -u):$(id -g) $HOME/.kube/config
      +
      +

      Now the machine is initialized as master.

      +
      +

      Warning

      +

      Kubeadm signs the certificate in the admin.conf to have +Subject: O = system:masters, CN = kubernetes-admin. system:masters is a +break-glass, super user group that bypasses the authorization layer +(e.g. RBAC). Do not share the admin.conf file with anyone and instead +grant users custom permissions by generating them a kubeconfig file using +the kubeadm kubeconfig user command.

      +
      +

B. Join the worker nodes by running the following command on each individual worker node:

      +
      kubeadm join 192.168.0.167:6443 --token cnslau.kd5fjt96jeuzymzb \
      +    --discovery-token-ca-cert-hash sha256:871ab3f050bc9790c977daee9e44cf52e15ee37ab9834567333b939458a5bfb5
      +
      +
      +

      Important Note

      +

Your output will be different than what is provided here. While performing the rest of the demo, ensure that you are executing the command provided by your output and don't copy and paste from here.

      +
      +

      If you do not have the token, you can get it by running the following command +on the control-plane node:

      +
      kubeadm token list
      +
      +

      The output is similar to this:

      +
      TOKEN     TTL  EXPIRES      USAGES           DESCRIPTION            EXTRA GROUPS
      +8ewj1p... 23h  2018-06-12   authentication,  The default bootstrap  system:
      +                            signing          token generated by     bootstrappers:
      +                                            'kubeadm init'.         kubeadm:
      +                                                                    default-node-token
      +
      +

If you missed the join command, execute kubeadm token create --print-join-command on the master node to recreate the token along with the join command.

      +
      root@master:~$ kubeadm token create --print-join-command
      +
      +kubeadm join 10.2.0.4:6443 --token xyzeyi.wxer3eg9vj8hcpp2 \
      +--discovery-token-ca-cert-hash sha256:ccfc92b2a31b002c3151cdbab77ff4dc32ef13b213fa3a9876e126831c76f7fa
      +
      +

      By default, tokens expire after 24 hours. If you are joining a node to the cluster +after the current token has expired, you can create a new token by running the +following command on the control-plane node:

      +
      kubeadm token create
      +
      +

      The output is similar to this: +5didvk.d09sbcov8ph2amjw

      +

      We can use this new token to join:

      +
      kubeadm join <master-ip>:<master-port> --token <token> \
      +    --discovery-token-ca-cert-hash sha256:<hash>
      +
      +
    • +
    +
    +
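If you also need the --discovery-token-ca-cert-hash value used in the join commands above, the standard approach from the upstream kubeadm documentation is to derive it from the cluster CA on the control-plane node:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'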

Now that you have initialized the master, you can work on bootstrapping the worker nodes.

    +
      +
    • +

      SSH into worker1 and worker2

      +
    • +
    • +

      Switch to root user on both the machines: sudo su

      +
    • +
    • +

      Check the output given by the init command on master to join worker node:

      +
      kubeadm join 192.168.0.167:6443 --token cnslau.kd5fjt96jeuzymzb \
      +    --discovery-token-ca-cert-hash sha256:871ab3f050bc9790c977daee9e44cf52e15ee37ab9834567333b939458a5bfb5
      +
      +
    • +
    • +

      Execute the above command on both the nodes:

      +
    • +
    • +

      Your output should look like:

      +
      This node has joined the cluster:
      +* Certificate signing request was sent to apiserver and a response was received.
      +* The Kubelet was informed of the new secure connection details.
      +
      +
    • +
    +
    +

    Validate all cluster components and nodes are visible on all nodes

    +
      +
    • +

      Verify the cluster

      +
      kubectl get nodes
      +
      +NAME      STATUS        ROLES                  AGE     VERSION
      +master    NotReady      control-plane,master   21m     v1.26.1
      +worker1   Ready         <none>                 9m17s   v1.26.1
      +worker2   Ready         <none>                 9m25s   v1.26.1
      +
      +
    • +
    +
    +

    Install CNI network plugin

    +

    CNI overview

    +

    Managing a network where containers can interoperate efficiently is very +important. Kubernetes has adopted the Container Network Interface(CNI) +specification for managing network resources on a cluster. This relatively +simple specification makes it easy for Kubernetes to interact with a wide range +of CNI-based software solutions. Using this CNI plugin allows Kubernetes pods to +have the same IP address inside the pod as they do on the VPC network. Make sure +the configuration corresponds to the Pod CIDR specified in the kubeadm +configuration file if applicable.

    +

You must deploy a CNI-based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed. To verify this, you can run: kubectl get po -n kube-system:

    +

You should see output like the following. The two coredns-* pods are in a Pending state; this is the expected behavior. Once we install the network plugin, they will move to a Running state.

    +

    Output Example:

    +
    root@master:~$ kubectl get po -n kube-system
    + NAME                               READY  STATUS   RESTARTS  AGE
    +coredns-558bd4d5db-5jktc             0/1   Pending   0        10m
    +coredns-558bd4d5db-xdc5x             0/1   Pending   0        10m
    +etcd-master1                         1/1   Running   0        11m
    +kube-apiserver-master1               1/1   Running   0        11m
    +kube-controller-manager-master1      1/1   Running   0        11m
    +kube-proxy-5jfh5                     1/1   Running   0        10m
    +kube-scheduler-master1               1/1   Running   0        11m
    +
    +

    Supported CNI options

    +

    To read more about the currently supported base CNI solutions for Kubernetes +read here +and also read this.

    +

    The below command can be run on the master node to install the CNI plugin:

    +
    kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
    +
    +

Since you passed --pod-network-cidr=10.244.0.0/16 to kubeadm init, this will work for the Flannel CNI.

    +
    +
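If you prefer a non-interactive check, one way (assuming the standard k8s-app=kube-dns label that the CoreDNS pods carry) is to wait until CoreDNS reports Ready:

# Blocks until the CoreDNS pods are Ready or the timeout expires
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=180s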

    Using Other CNI Options

    +

For the Calico CNI plugin to work correctly, you need to pass --pod-network-cidr=192.168.0.0/16 to kubeadm init and then run: kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

    +
    +

For the Weave Net CNI plugin to work correctly, you don't need to pass --pod-network-cidr to kubeadm init; you can simply run: kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

    +
    +
      +
    • +

      Dual Network:

      +

It is highly recommended to follow an internal/external network layout for your cluster, as shown in this diagram:

      +

      Dual Network Diagram

      +

To enable this, just give two different names to the internal and external interfaces, according to your distro of choice's naming scheme:

      +
      external_interface: eth0
      +internal_interface: eth1
      +
      +

You can also decide here which CIDRs your cluster should use:

      +
      cluster_cidr: 10.43.0.0/16
      +service_cidr: 10.44.0.0/16
      +
      +

Once you have successfully installed the Flannel CNI component in your cluster, you can verify your cluster by running:

      +
      kubectl get nodes
      +
      +NAME      STATUS   ROLES                    AGE   VERSION
      +master    Ready    control-plane,master     22m   v1.26.1
      +worker1   Ready    <none>                   10m   v1.26.1
      +worker2   Ready    <none>                   10m   v1.26.1
      +
      +
    • +
    +

    Watch Recorded Video showing the above steps on setting up the cluster

    +

Here’s a quick recorded demo video up to this point, where we successfully set up a single-master K8s cluster using kubeadm.

    +
    +

    Deploy A Sample Nginx Application From the master node

    +

Now that we have all the components to make the cluster and applications work, let's deploy a sample Nginx application and see if we can access it over a NodePort, which has a port range of 30000-32767.

    +

The below commands can be run from the master node:

    +
    kubectl run nginx --image=nginx --port=80
    +kubectl expose pod nginx --port=80 --type=NodePort
    +
    +

To check which NodePort has been opened for the Nginx service, run:

    +
    kubectl get svc
    +
    +

    The output will show: +Running Services

    +

    Once the deployment is up, you should be able to access the Nginx home page on +the allocated NodePort from either of the worker nodes' Floating IP.

    +

To check which worker node is serving nginx, you can check the NODE column by running the following command:

    +
    kubectl get pods --all-namespaces --output wide
    +
    +

    OR,

    +
    kubectl get pods -A -o wide
    +
    +

The output will look like the following:

    +

    Nginx Pod and Worker

    +

Go to a browser and visit http://<Worker-Floating-IP>:<NodePort>, e.g. http://128.31.25.246:32713, to check the nginx default page. Here <Worker-Floating-IP> corresponds to the Floating IP of the worker node running the nginx pod, i.e. worker2.

    +

For our example:

    +

    nginx default page

    +
    +

    Deploy A K8s Dashboard

    +

You are going to set up K8dash/Skooner to view a dashboard that shows all your K8s cluster components.

    +
      +
    • +

      SSH into master node

      +
    • +
    • +

      Switch to root user: sudo su

      +
    • +
    • +

      Apply available deployment by running the following command:

      +
      kubectl apply -f https://raw.githubusercontent.com/skooner-k8s/skooner/master/kubernetes-skooner-nodeport.yaml
      +
      +

This will map Skooner port 4654 to a randomly selected NodePort on the master node. The assigned NodePort on the master node can be found by running:

      +
      kubectl get svc --namespace=kube-system
      +
      +

      OR,

      +
      kubectl get po,svc -n kube-system
      +
      +

      Skooner Service Port

      +

To check which worker node is serving skooner-*, you can check the NODE column by running the following command:

      +
      kubectl get pods --all-namespaces --output wide
      +
      +

      OR,

      +
      kubectl get pods -A -o wide
      +
      +

The output will look like the following:

      +

      Skooner Pod and Worker

      +

Go to a browser and visit http://<Worker-Floating-IP>:<NodePort>, e.g. http://128.31.25.246:30495, to check the skooner dashboard page. Here <Worker-Floating-IP> corresponds to the Floating IP of the worker node running the skooner-* pod, i.e. worker2.

      +

      Skooner Dashboard

      +
    • +
    +

    Setup the Service Account Token to access the Skooner Dashboard:

    +

    The first (and easiest) option is to create a dedicated service account. Run the +following commands:

    +
      +
    • +

      Create the service account in the current namespace (we assume default)

      +
      kubectl create serviceaccount skooner-sa
      +
      +
    • +
    • +

      Give that service account root on the cluster

      +
      kubectl create clusterrolebinding skooner-sa --clusterrole=cluster-admin --serviceaccount=default:skooner-sa
      +
      +
    • +
    • +

Create a secret to hold the token for the SA:

      +
      kubectl apply -f - <<EOF
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +    name: skooner-sa-token
      +    annotations:
      +        kubernetes.io/service-account.name: skooner-sa
      +type: kubernetes.io/service-account-token
      +EOF
      +
      +
      +

      Information

      +

Since 1.22, this type of Secret is no longer used to mount credentials into Pods, and obtaining tokens via the TokenRequest API is recommended instead of using service account token Secret objects. Tokens obtained from the TokenRequest API are more secure than ones stored in Secret objects, because they have a bounded lifetime and are not readable by other API clients. You can use the kubectl create token command to obtain a token from the TokenRequest API. For example: kubectl create token skooner-sa, where skooner-sa is the service account name.

      +
      +
    • +
    • +

      Find the secret that was created to hold the token for the SA

      +
      kubectl get secrets
      +
      +
    • +
    • +

      Show the contents of the secret to extract the token

      +
      kubectl describe secret skooner-sa-token
      +
      +
    • +
    +

    Copy the token value from the secret detail and enter it into the login screen +to access the dashboard.

    +
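If you prefer to print only the token value (instead of reading it from the describe output), a standard jsonpath query with a base64 decode does the same thing:

# Prints the decoded token stored in the skooner-sa-token secret
kubectl get secret skooner-sa-token -o jsonpath='{.data.token}' | base64 --decode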

    Watch Demo Video showing how to deploy applications

    +

Here’s a recorded demo video on how to deploy applications on top of the single-master K8s cluster set up as explained above.

    +
    +

    Very Important: Certificates Renewal

    +

    Client certificates generated by kubeadm expire after one year unless the +Kubernetes version is upgraded or the certificates are manually renewed.

    +

    To renew certificates manually, you can use the kubeadm certs renew command with +the appropriate command line options. After running the command, you should +restart the control plane Pods.

    +

    kubeadm certs renew can renew any specific certificate or, with the subcommand +all, it can renew all of them, as shown below:

    +
    kubeadm certs renew all
    +
    +

Once the certificates are renewed, you must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd so that they can use the new certificates, by running:

    +
    systemctl restart kubelet
    +
    +

    Then, update the new kube config file:

    +
    export KUBECONFIG=/etc/kubernetes/admin.conf
    +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    +
    +
    +

    Don't Forget to Update the older kube config file

    +

    Update wherever you are using the older kube config to connect with the cluster.

    +
    +

    Clean Up

    +
      +
    • +

      To view the Cluster info:

      +
      kubectl cluster-info
      +
      +
    • +
    • +

      To delete your local references to the cluster:

      +
      kubectl config delete-cluster
      +
      +
    • +
    +

    How to Remove the node?

    +

    Talking to the control-plane node with the appropriate credentials, run:

    +
    kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
    +
    +
      +
    • +

      Before removing the node, reset the state installed by kubeadm:

      +
      kubeadm reset
      +
      +

      The reset process does not reset or clean up iptables rules or IPVS tables. If +you wish to reset iptables, you must do so manually:

      +
      iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
      +
      +

      If you want to reset the IPVS tables, you must run the following command:

      +
      ipvsadm -C
      +
      +
    • +
    • +

      Now remove the node:

      +
      kubectl delete node <node name>
      +
      +

      If you wish to start over, run kubeadm init or kubeadm join with the +appropriate arguments.

      +
    • +
    +
diff --git a/other-tools/kubernetes/kubernetes/index.html b/other-tools/kubernetes/kubernetes/index.html new file mode 100644 index 00000000..cb41f64e

    Kubernetes Overview

    +

Kubernetes, commonly known as K8s, is an open-source container orchestration tool for managing containerized cloud-native workloads and services across compute, networking, and storage infrastructure. K8s can help to deploy and manage containerized applications like platform-as-a-service (PaaS), batch processing workers, and microservices in the cloud at scale. It reduces cloud computing costs while simplifying the operation of resilient and scalable applications. While it is possible to install and manage Kubernetes on infrastructure that you manage yourself, it is a time-consuming and complicated process. To make provisioning and deploying clusters much easier, we have listed a number of popular platforms and tools to set up K8s on your NERC OpenStack Project space.

    +

    Kubernetes Components & Architecture

    +

    A Kubernetes cluster consists of a set of worker machines, called nodes, that +run containerized applications. Every cluster has at least one worker node. The +worker node(s) host the Pods that are the components of the application workload.

    +

    The control plane or master manages the worker nodes and the Pods in the cluster. +In production environments, the control plane usually runs across multiple +computers and a cluster usually runs multiple nodes, providing fault-tolerance, +redundancy, and high availability.

    +

    Here's the diagram of a Kubernetes cluster with all the components tied together. +Kubernetes Components & Architecture

    +

    Kubernetes Basics workflow

    +
      +
    1. +

      Create a Kubernetes cluster +Create a Kubernetes cluster

      +
    2. +
    3. +

      Deploy an app +Deploy an app

      +
    4. +
    5. +

      Explore your app +Explore your app

      +
    6. +
    7. +

      Expose your app publicly +Expose your app publicly

      +
    8. +
    9. +

      Scale up your app +Scale up your app

      +
    10. +
    11. +

      Update your app +Update your app

      +
    12. +
    +

    Development environment

    +
      +
    1. +

Minikube is a local Kubernetes cluster that focuses on making Kubernetes development and learning simple. Kubernetes may be started with just a single command if you have a Docker (or similarly compatible) container or a Virtual Machine environment. For more read this.

      +
    2. +
    3. +

      Kind is a tool for running +local Kubernetes clusters utilizing Docker container "nodes". It was built for +Kubernetes testing, but it may also be used for local development and continuous +integration. For more read this.

      +
    4. +
    5. +

      MicroK8s is the smallest, fastest, and most conformant +Kubernetes that tracks upstream releases and simplifies clustering. MicroK8s is +ideal for prototyping, testing, and offline development. +For more read this.

      +
    6. +
    7. +

K3s is a certified Kubernetes distribution developed by Rancher Labs and now a CNCF sandbox project that fully implements the Kubernetes API in a single binary of less than 40MB. To achieve this, they got rid of a lot of additional drivers that didn't need to be in the core and could easily be replaced with add-ons. For more read this.

      +

      To setup a Multi-master HA K3s cluster using k3sup(pronounced ketchup) +read this.

      +

      To setup a Single-Node K3s Cluster using k3d read this +and if you would like to setup Multi-master K3s cluster setup using k3d +read this.

      +
    8. +
    9. +

      k0s is an all-inclusive Kubernetes distribution, +configured with all of the features needed to build a Kubernetes cluster simply +by copying and running an executable file on each target host. +For more read this.

      +
    10. +
    +

    Production environment

    +

If your Kubernetes cluster has to run critical workloads, it must be configured as a resilient and highly available (HA), production-ready Kubernetes cluster. To set up a production-quality cluster, you can use the following deployment tools.

    +
      +
    1. +

Kubeadm performs the actions necessary to get a minimum viable, secure cluster up and running in a user-friendly way. For bootstrapping a cluster with kubeadm read this, and if you would like to set up a multi-master cluster using kubeadm read this.

      +
    2. +
    3. +

Kubespray helps to install a Kubernetes cluster on NERC OpenStack. Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks. To install Kubernetes with Kubespray, read this.

      +
    4. +
    +

    To choose a tool which best fits your use case, read this comparison.

    +
diff --git a/other-tools/kubernetes/kubespray/index.html b/other-tools/kubernetes/kubespray/index.html new file mode 100644 index 00000000..8bcf995a

    Kubespray

    +

    Pre-requisite

    +

We will need 1 control-plane (master) and 1 worker node to create a single control-plane Kubernetes cluster using Kubespray. We are using the following settings for this purpose:

    +
      +
    • +

      1 Linux machine for Ansible master, ubuntu-22.04-x86_64 or your choice of Ubuntu +OS image, cpu-su.2 flavor with 2vCPU, 8GB RAM, 20GB storage.

      +
    • +
    • +

      1 Linux machine for master, ubuntu-22.04-x86_64 or your choice of Ubuntu +OS image, cpu-su.2 flavor with 2vCPU, 8GB RAM, 20GB storage - +also assign Floating IP +to the master node.

      +
    • +
    • +

      1 Linux machines for worker, ubuntu-22.04-x86_64 or your choice of Ubuntu +OS image, cpu-su.1 flavor with 1vCPU, 4GB RAM, 20GB storage.

      +
    • +
    • +

      ssh access to all machines: Read more here +on how to set up SSH on your remote VMs.

      +
    • +
    • +

To allow SSH from the Ansible master to all other nodes: Read more here. Generate an SSH key for the Ansible master node using:

      +
      ssh-keygen -t rsa
      +
      +Generating public/private rsa key pair.
      +Enter file in which to save the key (/root/.ssh/id_rsa):
      +Enter passphrase (empty for no passphrase):
      +Enter same passphrase again:
      +Your identification has been saved in /root/.ssh/id_rsa
      +Your public key has been saved in /root/.ssh/id_rsa.pub
      +The key fingerprint is:
      +SHA256:OMsKP7EmhT400AJA/KN1smKt6eTaa3QFQUiepmj8dxroot@ansible-master
      +The key's randomart image is:
      ++---[RSA 3072]----+
      +|=o.oo.           |
      +|.o...            |
      +|..=  .           |
      +|=o.= ...         |
      +|o=+.=.o SE       |
      +|.+*o+. o. .      |
      +|.=== +o. .       |
      +|o+=o=..          |
      +|++o=o.           |
      ++----[SHA256]-----+
      +
      +

Copy and append the contents of the SSH public key i.e. ~/.ssh/id_rsa.pub to the other nodes' ~/.ssh/authorized_keys file. Please make sure you are logged in as the root user (by doing sudo su) before you copy this public key to the end of the ~/.ssh/authorized_keys file of the other master and worker nodes. This will allow ssh <other_nodes_internal_ip> from the Ansible master node's terminal.

      +
    • +
    • +

      Create 2 security groups with appropriate ports and protocols:

      +

      i. To be used by the master nodes: +Control plane ports and protocols

      +

      ii. To be used by the worker nodes: +Worker node ports and protocols

      +
    • +
    • +

Set up a unique hostname on each machine using the following command:

      +
      echo "<node_internal_IP> <host_name>" >> /etc/hosts
      +hostnamectl set-hostname <host_name>
      +
      +

      For example:

      +
      echo "192.168.0.224 ansible_master" >> /etc/hosts
      +hostnamectl set-hostname ansible_master
      +
      +
    • +
    +

In this step, you will update packages and disable swap on all 3 nodes:

    +
      +
    • +

      1 Ansible Master Node - ansible_master

      +
    • +
    • +

      1 Kubernetes Master Node - kubspray_master

      +
    • +
    • +

      1 Kubernetes Worker Node - kubspray_worker1

      +
    • +
    +

    The below steps will be performed on all the above mentioned nodes:

    +
      +
    • +

      SSH into all the 3 machines

      +
    • +
    • +

Switch to root user: sudo su

      +
    • +
    • +

      Update the repositories and packages:

      +
      apt-get update && apt-get upgrade -y
      +
      +
    • +
    • +

      Turn off swap

      +
      swapoff -a
      +sed -i '/ swap / s/^/#/' /etc/fstab
      +
      +
    • +
    +
    +

    Configure Kubespray on ansible_master node using Ansible Playbook

    +

Run the below commands on the Ansible master node, i.e. ansible_master.

    +
      +
    • +

      SSH into ansible_master machine

      +
    • +
    • +

      Switch to root user: sudo su

      +
    • +
    • +

Execute the below commands to configure Kubespray and deploy the cluster:

      +
    • +
    • +

      Install Python3 and upgrade pip to pip3:

      +
      apt install python3-pip -y
      +pip3 install --upgrade pip
      +python3 -V && pip3 -V
      +pip -V
      +
      +
    • +
    • +

      Clone the Kubespray git repository:

      +
      git clone https://github.com/kubernetes-sigs/kubespray.git
      +cd kubespray
      +
      +
    • +
    • +

      Install dependencies from requirements.txt:

      +
      pip install -r requirements.txt
      +
      +
    • +
    • +

      Copy inventory/sample as inventory/mycluster

      +
      cp -rfp inventory/sample inventory/mycluster
      +
      +
    • +
    • +

      Update Ansible inventory file with inventory builder:

      +

This step is a little tricky because we need to update hosts.yml with the nodes' IPs.

      +

Now we are going to declare a variable "IPS" for storing the IP addresses of the other K8s nodes, i.e. kubspray_master (192.168.0.130) and kubspray_worker1 (192.168.0.32).

      +
      declare -a IPS=(192.168.0.130 192.168.0.32)
      +CONFIG_FILE=inventory/mycluster/hosts.yml python3 \
      +    contrib/inventory_builder/inventory.py ${IPS[@]}
      +
      +

      This outputs:

      +
      DEBUG: Adding group all
      +DEBUG: Adding group kube_control_plane
      +DEBUG: Adding group kube_node
      +DEBUG: Adding group etcd
      +DEBUG: Adding group k8s_cluster
      +DEBUG: Adding group calico_rr
      +DEBUG: adding host node1 to group all
      +DEBUG: adding host node2 to group all
      +DEBUG: adding host node1 to group etcd
      +DEBUG: adding host node1 to group kube_control_plane
      +DEBUG: adding host node2 to group kube_control_plane
      +DEBUG: adding host node1 to group kube_node
      +DEBUG: adding host node2 to group kube_node
      +
      +
    • +
    • +

After running the above commands, verify the hosts.yml file and its contents:

      +
      cat inventory/mycluster/hosts.yml
      +
      +

The contents of the hosts.yml file should look like:

      +
      all:
      +  hosts:
      +    node1:
      +      ansible_host: 192.168.0.130
      +      ip: 192.168.0.130
      +      access_ip: 192.168.0.130
      +    node2:
      +      ansible_host: 192.168.0.32
      +      ip: 192.168.0.32
      +      access_ip: 192.168.0.32
      +  children:
      +    kube_control_plane:
      +      hosts:
      +        node1:
      +        node2:
      +    kube_node:
      +      hosts:
      +        node1:
      +        node2:
      +    etcd:
      +      hosts:
      +        node1:
      +    k8s_cluster:
      +      children:
      +        kube_control_plane:
      +        kube_node:
      +    calico_rr:
      +      hosts: {}
      +
      +
    • +
    • +

      Review and change parameters under inventory/mycluster/group_vars

      +
      cat inventory/mycluster/group_vars/all/all.yml
      +cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
      +
      +
    • +
    • +

      It can be useful to set the following two variables to true in +inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml: kubeconfig_localhost +(to make a copy of kubeconfig on the host that runs Ansible in +{ inventory_dir }/artifacts) and kubectl_localhost +(to download kubectl onto the host that runs Ansible in { bin_dir }).

      +
      +

      Very Important

      +

As the Ubuntu 20 KVM kernel doesn't have the dummy module, we need to modify the following two variables in inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml: enable_nodelocaldns: false and kube_proxy_mode: iptables, which disable the nodelocal DNS cache and set the kube-proxy proxyMode to iptables, respectively (see the excerpt sketched below).

      +
      +
    • +
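For reference, after applying the changes described above, the relevant lines of inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml would look roughly like this excerpt:

# k8s-cluster.yml (excerpt)
kubeconfig_localhost: true
kubectl_localhost: true
enable_nodelocaldns: false
kube_proxy_mode: iptables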
    • +

Deploy Kubespray with the Ansible Playbook - run the playbook as the root user. The --become option is required, for example for writing SSL keys in /etc/, installing packages, and interacting with various systemd daemons. Without --become the playbook will fail to run!

      +
      ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml
      +
      +
      +

      Note

      +

Running the Ansible playbook takes some time; the duration also depends on the network bandwidth.

      +
      +
    • +
    +
    +

Install kubectl on the Kubernetes master node, i.e. kubspray_master

    +
      +
    • +

      Install kubectl binary

      +
      snap install kubectl --classic
      +
      +

      This outputs: kubectl 1.26.1 from Canonical✓ installed

      +
    • +
    • +

      Now verify the kubectl version:

      +
      kubectl version -o yaml
      +
      +
    • +
    +
    +

    Validate all cluster components and nodes are visible on all nodes

    +
      +
    • +

      Verify the cluster

      +
      kubectl get nodes
      +
      +NAME    STATUS   ROLES                  AGE     VERSION
      +node1   Ready    control-plane,master   6m7s    v1.26.1
      +node2   Ready    control-plane,master   5m32s   v1.26.1
      +
      +
    • +
    +
    +

    Deploy A Hello Minikube Application

    +
      +
    • +

      Use the kubectl create command to create a Deployment that manages a Pod. The Pod +runs a Container based on the provided Docker image.

      +
      kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
      +
      +
      kubectl expose deployment hello-minikube --type=LoadBalancer --port=8080
      +
      +service/hello-minikube exposed
      +
      +
    • +
    • +

      View the deployments information:

      +
      kubectl get deployments
      +
      +NAME             READY   UP-TO-DATE   AVAILABLE   AGE
      +hello-minikube   1/1     1            1           50s
      +
      +
    • +
    • +

      View the port information:

      +
      kubectl get svc hello-minikube
      +
      +NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
      +hello-minikube   LoadBalancer   10.233.35.126   <pending>     8080:30723/TCP   40s
      +
      +
    • +
    • +

      Expose the service locally

      +
      kubectl port-forward svc/hello-minikube 30723:8080
      +
      +Forwarding from [::1]:30723 -> 8080
      +Forwarding from 127.0.0.1:30723 -> 8080
      +Handling connection for 30723
      +Handling connection for 30723
      +
      +

      Go to browser, visit http://<Master-Floating-IP>:8080 +i.e. http://140.247.152.235:8080/ to check the hello minikube default page.

      +
    • +
    +
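Since kubectl port-forward binds to localhost by default, a quick check can also be run directly on the node where the port-forward above is running:

# Should return the echoserver's default response
curl http://127.0.0.1:30723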

    Clean up

    +

    Now you can clean up the app resources you created in your cluster:

    +
    kubectl delete service hello-minikube
    +kubectl delete deployment hello-minikube
    +
    +
diff --git a/other-tools/kubernetes/microk8s/index.html b/other-tools/kubernetes/microk8s/index.html new file mode 100644 index 00000000..63ea3133

    Microk8s

    +

    Pre-requisite

    +

We will need 1 VM to create a single-node Kubernetes cluster using MicroK8s. We are using the following settings for this purpose:

    +
      +
    • +

      1 Linux machine, ubuntu-22.04-x86_64 or your choice of Ubuntu OS image, +cpu-su.2 flavor with 2vCPU, 8GB RAM, 20GB storage - also assign Floating IP +to this VM.

      +
    • +
    • +

Set up a unique hostname on the machine using the following command:

      +
      echo "<node_internal_IP> <host_name>" >> /etc/hosts
      +hostnamectl set-hostname <host_name>
      +
      +

      For example:

      +
      echo "192.168.0.62 microk8s" >> /etc/hosts
      +hostnamectl set-hostname microk8s
      +
      +
    • +
    +

    Install MicroK8s on Ubuntu

    +

    Run the below command on the Ubuntu VM:

    +
      +
    • +

      SSH into microk8s machine

      +
    • +
    • +

      Switch to root user: sudo su

      +
    • +
    • +

      Update the repositories and packages:

      +
      apt-get update && apt-get upgrade -y
      +
      +
    • +
    • +

      Install MicroK8s:

      +
      sudo snap install microk8s --classic
      +
      +
    • +
    • +

      Check the status while Kubernetes starts

      +
      microk8s status --wait-ready
      +
      +
    • +
    • +

      Turn on the services you want:

      +
      microk8s enable dns dashboard
      +
      +

      Try microk8s enable --help for a list of available services and optional features. +microk8s disable <name> turns off a service. For example other useful services +are: microk8s enable registry istio storage

      +
    • +
    • +

      Start using Kubernetes

      +
      microk8s kubectl get all --all-namespaces
      +
      +

If you mainly use MicroK8s, you can make the MicroK8s kubectl the default one on your command line with alias mkctl="microk8s kubectl" (a sketch for persisting this alias follows below). Since it is a standard upstream kubectl, you can also drive other Kubernetes clusters with it by pointing to the respective kubeconfig file via the --kubeconfig argument.

      +
    • +
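As mentioned above, here is a small sketch for making that alias persistent for the current user (an optional convenience, not required by MicroK8s):

# Append the alias to your shell profile and reload it
echo 'alias mkctl="microk8s kubectl"' >> ~/.bashrc
source ~/.bashrc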
    • +

      Access the Kubernetes dashboard +UI:

      +

      Microk8s Dashboard Ports

      +

As we see above, the kubernetes-dashboard service in the kube-system namespace has a ClusterIP of 10.152.183.73 and listens on TCP port 443. The ClusterIP is randomly assigned, so if you follow these steps on your host, make sure you check the IP address you got.

      +
      +

      Note

      +

Another way to retrieve the default token used for dashboard access is:

token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
      +microk8s kubectl -n kube-system describe secret $token
      +

      +
      +
    • +
    • +

Keep the kubernetes-dashboard running behind a proxy to access it via a web browser:

      +
      microk8s dashboard-proxy
      +
      +Checking if Dashboard is running.
      +Dashboard will be available at https://127.0.0.1:10443
      +Use the following token to login:
      +eyJhbGc....
      +
      +
      +

      Important

      +

This tells us the IP address of the Dashboard and the port. The values assigned to your Dashboard will differ. Please note the displayed PORT and the TOKEN that are required to access the kubernetes-dashboard. Make sure the exposed PORT is opened in the Security Groups for the instance, following this guide.

      +
      +

This will show the token to log in to the Dashboard served on the URL with the NodePort.

      +

You'll need to wait a few minutes before the dashboard becomes available. If you open a web browser on the same desktop where you deployed MicroK8s and point it to https://<Floating-IP>:<PORT> (where PORT is the port assigned to the Dashboard, noted while running the above command), you'll need to accept the risk (because the Dashboard uses a self-signed certificate). Then you can enter the previously noted TOKEN to access the kubernetes-dashboard.

      +

      The K8s Dashboard service

      +

      Once you enter the correct TOKEN, the kubernetes-dashboard is accessible and looks like below:

      +

      The K8s Dashboard service interface

      +
      +

      Information

      +
        +
      • Start and stop Kubernetes: Kubernetes is a collection of system services that talk to each other all the time. If you don't need them running in the background, you will save battery by stopping them. microk8s start and microk8s stop will do those tasks for you.
      • +
      • To Reset the infrastructure to a clean state: microk8s reset
      • +
      +
      +
    • +
    +

    Deploy a Container using the Kubernetes-Dashboard

    +

    Click on the + button in the top left corner of the main window. On the resulting +page, click Create from form and then fill out the necessary information as shown +below:

    +

    Deploying a test NGINX container named tns

    +

    You should immediately be directed to a page that lists your new deployment as shown +below:

    +

    The running NGINX container
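    If you prefer the command line over the dashboard form, a roughly equivalent sketch of the same deployment (it assumes the nginx image, the tns name, the kube-system namespace, and port 8080 forwarded to container port 80, as used in this example):

    microk8s kubectl create deployment tns --image=nginx -n kube-system
    microk8s kubectl expose deployment tns --type=LoadBalancer --port=8080 --target-port=80 -n kube-system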

    +

    Go back to the terminal window and issue the command:

    +
    microk8s kubectl get svc tns -n kube-system
    +
    +NAME   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    +tns    LoadBalancer   10.152.183.90   <pending>     8080:30012/TCP   14m
    +
    +

    Go to your browser and visit http://<Floating-IP>:<NodePort>, i.e., http://128.31.26.4:30012/, to check the nginx default page.
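    If you'd rather verify from the terminal first, a quick check against the example address above (your Floating IP and NodePort will differ):

    curl -I http://128.31.26.4:30012/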

    +

    Deploy A Sample Nginx Application

    +
      +
    • +

      Create an alias:

      +
      alias mkctl="microk8s kubectl"
      +
      +
    • +
    • +

      Create a deployment, in this case Nginx:

      +
      mkctl create deployment --image nginx my-nginx
      +
      +
    • +
    • +

      To access the deployment we will need to expose it:

      +
      mkctl expose deployment my-nginx --port=80 --type=NodePort
      +
      +
      mkctl get svc my-nginx
      +
      +NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      +my-nginx   NodePort   10.152.183.41   <none>        80:31225/TCP   35h
      +
      +
    • +
    +

    Go to your browser and visit http://<Floating-IP>:<NodePort>, i.e., http://128.31.26.4:31225/, to check the nginx default page.

    +

    Deploy Another Application

    +

    You can start by creating a microbot deployment with two pods via the kubectl cli:

    +
    mkctl create deployment microbot --image=dontrebootme/microbot:v1
    +mkctl scale deployment microbot --replicas=2
    +
    +

    To expose the deployment to NodePort, you need to create a service:

    +
    mkctl expose deployment microbot --type=NodePort --port=80 --name=microbot-service
    +
    +
      +
    • +

      View the port information:

      +
      mkctl get svc microbot-service
      +
      +NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
      +microbot-service   NodePort   10.152.183.8   <none>        80:31442/TCP   35h
      +
      +
    • +
    +

    Go to your browser and visit http://<Floating-IP>:<NodePort>, i.e., http://128.31.26.4:31442/, to check the microbot default page.

    +

    Microk8s Microbot App

    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/other-tools/kubernetes/minikube/index.html b/other-tools/kubernetes/minikube/index.html new file mode 100644 index 00000000..84eb84b9 --- /dev/null +++ b/other-tools/kubernetes/minikube/index.html @@ -0,0 +1,3840 @@ + + + + + + + + + + + + + + + + + + + + + New England Research Cloud(NERC) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Minikube

    +

    Minimum system requirements for minikube

    +
      +
    • 2 GB RAM or more
    • +
    • 2 CPU / vCPUs or more
    • +
    • 20 GB free hard disk space or more
    • +
    • A container or virtual machine manager, such as Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware.
    • +
    +

    Pre-requisite

    +

    We will need 1 VM to create a single-node Kubernetes cluster using minikube. We are using the following settings for this purpose:

    +
      +
    • +

      1 Linux machine for the master node, ubuntu-22.04-x86_64 or your choice of Ubuntu OS image, cpu-su.2 flavor with 2vCPU, 8GB RAM, 20GB storage - also assign a Floating IP to this VM.

      +
    • +
    • +

      Set up a unique hostname for the machine using the following command:

      +
      echo "<node_internal_IP> <host_name>" >> /etc/hosts
      +hostnamectl set-hostname <host_name>
      +
      +

      For example:

      +
      echo "192.168.0.62 minikube" >> /etc/hosts
      +hostnamectl set-hostname minikube
      +
      +
    • +
    +

    Install Minikube on Ubuntu

    +

    Run the below command on the Ubuntu VM:

    +
    +

    Very Important

    +

    Run the following steps as a non-root user, i.e., ubuntu.

    +
    +
      +
    • +

      SSH into minikube machine

      +
    • +
    • +

      Update the repositories and packages:

      +
      sudo apt-get update && sudo apt-get upgrade -y
      +
      +
    • +
    • +

      Install curl, wget, and apt-transport-https

      +
      sudo apt-get update && sudo apt-get install -y curl wget apt-transport-https
      +
      +
    • +
    +
    +

    Download and install the latest version of Docker CE

    +
      +
    • +

      Download and install Docker CE:

      +
      curl -fsSL https://get.docker.com -o get-docker.sh
      +sudo sh get-docker.sh
      +
      +
    • +
    • +

      Add your user to the docker group so Docker can be used without sudo:

      +
      sudo usermod -aG docker $USER && newgrp docker
      +
      +
    • +
    +
    +

    Install kubectl

    +
      +
    • +

      Install kubectl binary

      +

      kubectl: the command line util to talk to your cluster.

      +
      sudo snap install kubectl --classic
      +
      +

      This outputs: kubectl 1.26.1 from Canonical✓ installed

      +
    • +
    • +

      Now verify the kubectl version:

      +
      sudo kubectl version -o yaml
      +
      +
    • +
    +
    +

    Install the container runtime, i.e., containerd

    +

    To run containers in Pods, Kubernetes uses a container runtime.

    +

    By default, Kubernetes uses the Container Runtime Interface (CRI) to interface +with your chosen container runtime.

    +
      +
    • +

      Install container runtime - containerd

      +

      The first thing to do is configure the persistent loading of the necessary containerd modules. Forwarding IPv4 and letting iptables see bridged traffic is done with the following command:

      +
      cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
      +overlay
      +br_netfilter
      +EOF
      +
      +sudo modprobe overlay
      +sudo modprobe br_netfilter
      +
      +
    • +
    • +

      Ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:

      +
      # sysctl params required by setup, params persist across reboots
      +cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
      +net.bridge.bridge-nf-call-iptables  = 1
      +net.bridge.bridge-nf-call-ip6tables = 1
      +net.ipv4.ip_forward                 = 1
      +EOF
      +
      +
    • +
    • +

      Apply sysctl params without reboot:

      +
      sudo sysctl --system
      +
      +
    • +
    • +

      Install the necessary dependencies with:

      +
      sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
      +
      +
    • +
    • +

      The containerd.io packages in DEB and RPM formats are distributed by Docker. +Add the required GPG key with:

      +
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      +sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      +
      +

      It's now time to install and configure containerd:

      +
      sudo apt update -y
      +sudo apt install -y containerd.io
      +containerd config default | sudo tee /etc/containerd/config.toml
      +
      +# Reload the systemd daemon with
      +sudo systemctl daemon-reload
      +
      +# Start containerd
      +sudo systemctl restart containerd
      +sudo systemctl enable --now containerd
      +
      +

      You can verify containerd is running with the command:

      +
      sudo systemctl status containerd
      +
      +
    • +
    +
    +

    Installing minikube

    +
      +
    • +

      Install minikube

      +
      curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
      +sudo dpkg -i minikube_latest_amd64.deb
      +
      +

      OR, install minikube using wget:

      +
      wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
      +sudo cp minikube-linux-amd64 /usr/bin/minikube
      +sudo chmod +x /usr/bin/minikube
      +
      +
    • +
    • +

      Verify the Minikube installation:

      +
      minikube version
      +
      +minikube version: v1.29.0
      +commit: ddac20b4b34a9c8c857fc602203b6ba2679794d3
      +
      +
    • +
    • +

      Install conntrack: +Kubernetes 1.26.1 requires conntrack to be installed in root's path:

      +
      sudo apt-get install -y conntrack
      +
      +
    • +
    • +

      Start minikube: As stated at the beginning, we will be using Docker as the base for minikube, so start minikube with the docker driver:

      +
      minikube start --driver=docker --container-runtime=containerd
      +
      +
      +

      Note

      +
        +
      • To check the internal IP, run the minikube ip command.
      • +
      • By default, Minikube uses the driver most relevant to the host OS. To use a different driver, set the --driver flag in minikube start. For example, to use another driver or none instead of Docker, run minikube start --driver=none. To persist this configuration in global scope, so that you can run minikube start without explicitly passing the --vm-driver docker flag each time, run: minikube config set vm-driver docker.
      • +
      • Other start options: +minikube start --force --driver=docker --network-plugin=cni --container-runtime=containerd
      • +
      • In case you want to start minikube with customized resources and want the installer to automatically select the driver, you can run the following command: minikube start --addons=ingress --cpus=2 --cni=flannel --install-addons=true --kubernetes-version=stable --memory=6g
      • +
      +
      +

      The output will look like below:

      +

      Minikube successfully started

      +

      Perfect, the above confirms that the minikube cluster has been configured and started successfully.

      +
    • +
    • +

      Run the below minikube command to check the status:

      +
      minikube status
      +
      +minikube
      +type: Control Plane
      +host: Running
      +kubelet: Running
      +apiserver: Running
      +kubeconfig: Configured
      +
      +
    • +
    • +

      Run the following kubectl commands to verify the cluster info and node status:

      +
      kubectl cluster-info
      +
      +Kubernetes control plane is running at https://192.168.0.62:8443
      +CoreDNS is running at https://192.168.0.62:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
      +
      +To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
      +
      +
      kubectl get nodes
      +
      +NAME       STATUS   ROLES                  AGE   VERSION
      +minikube   Ready    control-plane,master   5m    v1.26.1
      +
      +
    • +
    • +

      To see the kubectl configuration, use the command:

      +
      kubectl config view
      +
      +

      The output looks like: +Minikube config view

      +
    • +
    • +

      Get minikube addon details:

      +
      minikube addons list
      +
      +

      The output will display like below: +Minikube addons list

      +

      If you wish to enable any addons, run the below minikube command:

      +
      minikube addons enable <addon-name>
      +
      +
    • +
    • +

      Enable minikube dashboard addon:

      +
      minikube dashboard
      +
      +🔌  Enabling dashboard ...
      +     Using image kubernetesui/metrics-scraper:v1.0.7
      +     Using image kubernetesui/dashboard:v2.3.1
      +🤔  Verifying dashboard health ...
      +🚀  Launching proxy ...
      +🤔  Verifying proxy health ...
      +http://127.0.0.1:40783/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
      +
      +
    • +
    • +

      To view the minikube dashboard URL:

      +
      minikube dashboard --url
      +
      +🤔  Verifying dashboard health ...
      +🚀  Launching proxy ...
      +🤔  Verifying proxy health ...
      +http://127.0.0.1:42669/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
      +
      +
    • +
    • +

      Expose Dashboard on NodePort instead of ClusterIP:

      +

      -- Check the current port for kubernetes-dashboard:

      +
      kubectl get services -n kubernetes-dashboard
      +
      +

      The output looks like below:

      +

      Current ClusterIP for Minikube Dashboard

      +
      kubectl edit service kubernetes-dashboard -n kubernetes-dashboard
      +
      +

      -- Replace type: "ClusterIP" with "NodePort":

      +

      Current Dashboard Type

      +
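      If you prefer a non-interactive alternative to kubectl edit, the same change can be applied with a one-line patch (a sketch, assuming the service name and namespace shown above):

      kubectl patch service kubernetes-dashboard -n kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'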

      -- After saving the file, test again: kubectl get services -n kubernetes-dashboard

      +

      Now the output should look like below: +Current NodePort for Minikube Dashboard

      +

      So, now you can browse the K8s Dashboard: visit http://<Floating-IP>:<NodePort>, i.e., http://140.247.152.235:31881, to view the Dashboard.

      +
    • +
    +

    Deploy A Sample Nginx Application

    +
      +
    • +

      Create a deployment, in this case Nginx:

      +

      A Kubernetes Pod is a group of one or more Containers, tied together for the +purposes of administration and networking. The Pod in this tutorial has only +one Container. A Kubernetes Deployment checks on the health of your Pod and +restarts the Pod's Container if it terminates. Deployments are the recommended +way to manage the creation and scaling of Pods.

      +
    • +
    • +

      Let's check if the Kubernetes cluster is up and running:

      +
      kubectl get all --all-namespaces
      +kubectl get po -A
      +kubectl get nodes
      +
      +
      kubectl create deployment --image nginx my-nginx
      +
      +
    • +
    • +

      To access the deployment we will need to expose it:

      +
      kubectl expose deployment my-nginx --port=80 --type=NodePort
      +
      +

      To check which NodePort is opened and serving Nginx, run:

      +
      kubectl get svc
      +
      +

      The output will show: +Minikube Running Services

      +

      OR,

      +
      minikube service list
      +
      +|----------------------|---------------------------|--------------|-------------|
      +|      NAMESPACE       |           NAME            | TARGET PORT  |       URL   |
      +|----------------------|---------------------------|--------------|-------------|
      +| default              | kubernetes                | No node port |
      +| default              | my-nginx                  |           80 | http:.:31081|
      +| kube-system          | kube-dns                  | No node port |
      +| kubernetes-dashboard | dashboard-metrics-scraper | No node port |
      +| kubernetes-dashboard | kubernetes-dashboard      |           80 | http:.:31929|
      +|----------------------|---------------------------|--------------|-------------|
      +
      +

      OR,

      +
      kubectl get svc my-nginx
      +minikube service my-nginx --url
      +
      +

      Once the deployment is up, you should be able to access the Nginx home page on +the allocated NodePort from the node's Floating IP.

      +

      Go to your browser and visit http://<Floating-IP>:<NodePort>, i.e., http://140.247.152.235:31081/, to check the nginx default page.
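      You can also verify from the terminal before using a browser (a quick check; the address below is the example one above and will differ in your setup):

      curl -s -o /dev/null -w "%{http_code}\n" http://140.247.152.235:31081/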

      +

      In our example:

      +

      nginx default page

      +
    • +
    +
    +

    Deploy A Hello Minikube Application

    +
      +
    • +

      Use the kubectl create command to create a Deployment that manages a Pod. The Pod +runs a Container based on the provided Docker image.

      +
      kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
      +kubectl expose deployment hello-minikube --type=NodePort --port=8080
      +
      +
    • +
    • +

      View the port information:

      +
      kubectl get svc hello-minikube
      +minikube service hello-minikube --url
      +
      +

      Go to your browser and visit http://<Floating-IP>:<NodePort>, i.e., http://140.247.152.235:31293/, to check the hello minikube default page.

      +

      In our example:

      +

      Hello Minikube default page

      +
    • +
    +

    Clean up

    +

    Now you can clean up the app resources you created in your cluster:

    +
    kubectl delete service my-nginx
    +kubectl delete deployment my-nginx
    +
    +kubectl delete service hello-minikube
    +kubectl delete deployment hello-minikube
    +
    +
    +

    Managing Minikube Cluster

    +
      +
    • +

      To stop minikube, run:

      +
      minikube stop
      +
      +
    • +
    • +

      To delete the single node cluster:

      +
      minikube delete
      +
      +
    • +
    • +

      To start minikube again, run:

      +
      minikube start
      +
      +
    • +
    • +

      Remove the Minikube configuration and data directories:

      +
      rm -rf ~/.minikube
      +rm -rf ~/.kube
      +
      +
    • +
    • +

      If you have installed any Minikube related packages, remove them:

      +
      sudo apt remove -y conntrack
      +
      +
    • +
    • +

      In case you want to start minikube with higher resources, like 8 GB RAM and 4 CPUs, execute the following commands one after another:

      +
      minikube config set cpus 4
      +minikube config set memory 8192
      +minikube delete
      +minikube start
      +
      +
    • +
    +
    + + + + + + + + + + + + \ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 00000000..59684019 --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"NERC Technical Documentation NERC welcomes your contributions These pages are hosted from a git repository and contributions are welcome! Fork this repo","title":"Home"},{"location":"#nerc-technical-documentation","text":"NERC welcomes your contributions These pages are hosted from a git repository and contributions are welcome! Fork this repo","title":"NERC Technical Documentation"},{"location":"about/","text":"About NERC We are currently in the pilot phase of the project and are focusing on developing the technology to make it easy for researchers to take advantage of a suite of services ( IaaS, PaaS, SaaS ) that are not readily available today. This includes: The creation of the building blocks needed for production cloud services Begin collaboration with Systems Engineers from other institutions with well established RC groups On-board select proof of concept use cases from institutions within the MGHPCC consortium and other institutions within Massachusetts The longer term objectives will be centered around activities that will focus on: Engaging with various OpenStack communities by sharing best practices and setting standards for deployments Connecting regularly with the Mass Open Cloud (MOC) leadership to understand when new technologies they are developing with RedHat, Inc. \u2013 and as part of the new NSF funded Open Cloud Testbed \u2013 might be ready for adoption into the production NERC environment Broadening the local deployment team of NERC to include partner universities within the MGHPCC consortium. Figure 1: NERC Overview NERC production services ( red ) stand on top of the existing NESE storage services ( blue ) that are built on the strong foundation of MGHPCC ( green ) that provides core facility and network access. The Innovation Hub ( grey ) enables new technologies to be rapidly adopted by the NERC or NESE services. On the far left ( purple ) are the Research and Learning communities which are the primary customers of NERC. As users proceed down the stack of production services from Web-apps, that require more technical skills, the Cloud Facilitators ( orange ) in the middle guide and educate users on how to best use the services. For more information, view NERC's concept document.","title":"About"},{"location":"about/#about-nerc","text":"We are currently in the pilot phase of the project and are focusing on developing the technology to make it easy for researchers to take advantage of a suite of services ( IaaS, PaaS, SaaS ) that are not readily available today. This includes: The creation of the building blocks needed for production cloud services Begin collaboration with Systems Engineers from other institutions with well established RC groups On-board select proof of concept use cases from institutions within the MGHPCC consortium and other institutions within Massachusetts The longer term objectives will be centered around activities that will focus on: Engaging with various OpenStack communities by sharing best practices and setting standards for deployments Connecting regularly with the Mass Open Cloud (MOC) leadership to understand when new technologies they are developing with RedHat, Inc. 
\u2013 and as part of the new NSF funded Open Cloud Testbed \u2013 might be ready for adoption into the production NERC environment Broadening the local deployment team of NERC to include partner universities within the MGHPCC consortium. Figure 1: NERC Overview NERC production services ( red ) stand on top of the existing NESE storage services ( blue ) that are built on the strong foundation of MGHPCC ( green ) that provides core facility and network access. The Innovation Hub ( grey ) enables new technologies to be rapidly adopted by the NERC or NESE services. On the far left ( purple ) are the Research and Learning communities which are the primary customers of NERC. As users proceed down the stack of production services from Web-apps, that require more technical skills, the Cloud Facilitators ( orange ) in the middle guide and educate users on how to best use the services. For more information, view NERC's concept document.","title":"About NERC"},{"location":"get-started/create-a-user-portal-account/","text":"User Account Types NERC offers two types of user accounts: a Principal Investigator (PI) Account and a General User Account . All General Users must be assigned to their project by an active NERC PI or by one of the delegated project manager(s), as described here . Then, those project users can be added to the resource allocation during a new allocation request or at a later time. Principal Investigator Eligibility Information MGHPCC consortium members, whereby they enter into an service agreement with MGHPCC for the NERC services. Non-members of MGHPCC can also be PIs of NERC Services, but must also have an active non-member agreement with MGHPCC. External research focused institutions will be considered on a case-by-case basis and are subject to an external customer cost structure. A PI account can request allocations of NERC resources, grant access to other general users enabling them to log into NERC's computational project space, and delegate its responsibilities to other collaborators from the same institutions or elsewhere as managers using NERC's ColdFront interface , as described here . Getting Started Any faculty, staff, student, and external collaborator must request a user account through the MGHPCC Shared Services (MGHPCC-SS) Account Portal , also known as \"RegApp\" . This is a web-based, single point-of-entry to the NERC system that displays a user welcome page. The welcome page of the account registration site displays instructions on how to register a General User account on NERC, as shown in the image below: There are two options: either register for a new account or manage an existing one. If you are new to NERC and want to register as a new MGHPCC-SS user, click on the \"Register for an Account\" button. This will redirect you to a new web page which shows details about how to register for a new MGHPCC-SS user account. NERC uses CILogon that supports login either using your Institutional Identity Provider (IdP). Clicking the \"Begin MGHPCC-SS Account Creation Process\" button will initiate the account creation process. 
You will be redirected to a site managed by CILogon where you will select your institutional or commercial identity provider, as shown below: Once selected, you will be redirected to your institutional or commercial identity provider, where you will log in, as shown here: After a successful log on, your browser will be redirected back to the MGHPCC-SS Registration Page and ask for a review and confirmation of creating your account with fetched information to complete the account creation process. Very Important If you don't click the \"Create MGHPCC-SS Account\" button, your account will not be created! So, this is a very important step. Review your information carefully and then click on the \"Create MGHPCC-SS Account\" button to save your information. Please review the information, make any corrections that you need and fill in any blank/ missing fields such as \"Research Domain\". Please read the End User Level Agreement (EULA) and accept the terms by checking the checkbox in this form. Once you have reviewed and verified that all your user information in this form is correct, only then click the \"Create MGHPCC-SS Account\" button. This will automatically send an email to your email address with a link to validate and confirm your account information. Once you receive an \"MGHPCC-SS Account Creation Validation\" email, review your user account information to ensure it is correct. Then, click on the provided validation web link and enter the unique account creation Confirmation Code provided in the email as shown below: Once validated, you need to ensure that your user account is created and valid by viewing the following page: Important Note If you have an institutional identity, it's preferable to use that identity to create your MGHPCC-SS account. Institutional identities are vetted by identity management teams and provide a higher level of confidence to resource owners when granting access to resources. You can only link one university account to an MGHPCC-SS account; if you have multiple university accounts, you will only be able to link one of those accounts to your MGHPCC-SS account. If, at a later date, you want to change which account is connected to your MGHPCC-SS identity, you can do so by contacting help@mghpcc.org . How to update and modify your MGHPCC-SS account information? Log in to the RegApp using your MGHPCC-SS account. Click on \"Manage Your MGHPCC-SS Account\" button as shown below: Review your currently saved account information, make any necessary corrections or updates to fields, and then click on the \"Update MGHPCC-SS Account\" button. This will send an email to verify your updated account information, so please check your email address. Confirm and validate the new account details by clicking the provided validation web link and entering the unique Confirmation Code provided in the email as shown below: How to request a Principal Investigator (PI) Account? The process for requesting and obtaining a PI Account is relatively simple. You can fill out this NERC Principal Investigator (PI) Account Request form to initiate the process. Alternatively, users can request a Principal Investigator (PI) user account by submitting a new ticket at the NERC's Support Ticketing System under the \"NERC PI Account Request\" option in the Help Topic dropdown menu, as shown in the image below: Information Once your PI user request is reviewed and approved by the NERC's admin, you will receive an email confirmation from NERC's support system, i.e., help@nerc.mghpcc.org . 
Then, you can access NERC's ColdFront resource allocation management portal using the PI user role, as described here .","title":"How to Create a User Account"},{"location":"get-started/create-a-user-portal-account/#user-account-types","text":"NERC offers two types of user accounts: a Principal Investigator (PI) Account and a General User Account . All General Users must be assigned to their project by an active NERC PI or by one of the delegated project manager(s), as described here . Then, those project users can be added to the resource allocation during a new allocation request or at a later time. Principal Investigator Eligibility Information MGHPCC consortium members, whereby they enter into an service agreement with MGHPCC for the NERC services. Non-members of MGHPCC can also be PIs of NERC Services, but must also have an active non-member agreement with MGHPCC. External research focused institutions will be considered on a case-by-case basis and are subject to an external customer cost structure. A PI account can request allocations of NERC resources, grant access to other general users enabling them to log into NERC's computational project space, and delegate its responsibilities to other collaborators from the same institutions or elsewhere as managers using NERC's ColdFront interface , as described here .","title":"User Account Types"},{"location":"get-started/create-a-user-portal-account/#getting-started","text":"Any faculty, staff, student, and external collaborator must request a user account through the MGHPCC Shared Services (MGHPCC-SS) Account Portal , also known as \"RegApp\" . This is a web-based, single point-of-entry to the NERC system that displays a user welcome page. The welcome page of the account registration site displays instructions on how to register a General User account on NERC, as shown in the image below: There are two options: either register for a new account or manage an existing one. If you are new to NERC and want to register as a new MGHPCC-SS user, click on the \"Register for an Account\" button. This will redirect you to a new web page which shows details about how to register for a new MGHPCC-SS user account. NERC uses CILogon that supports login either using your Institutional Identity Provider (IdP). Clicking the \"Begin MGHPCC-SS Account Creation Process\" button will initiate the account creation process. You will be redirected to a site managed by CILogon where you will select your institutional or commercial identity provider, as shown below: Once selected, you will be redirected to your institutional or commercial identity provider, where you will log in, as shown here: After a successful log on, your browser will be redirected back to the MGHPCC-SS Registration Page and ask for a review and confirmation of creating your account with fetched information to complete the account creation process. Very Important If you don't click the \"Create MGHPCC-SS Account\" button, your account will not be created! So, this is a very important step. Review your information carefully and then click on the \"Create MGHPCC-SS Account\" button to save your information. Please review the information, make any corrections that you need and fill in any blank/ missing fields such as \"Research Domain\". Please read the End User Level Agreement (EULA) and accept the terms by checking the checkbox in this form. Once you have reviewed and verified that all your user information in this form is correct, only then click the \"Create MGHPCC-SS Account\" button. 
This will automatically send an email to your email address with a link to validate and confirm your account information. Once you receive an \"MGHPCC-SS Account Creation Validation\" email, review your user account information to ensure it is correct. Then, click on the provided validation web link and enter the unique account creation Confirmation Code provided in the email as shown below: Once validated, you need to ensure that your user account is created and valid by viewing the following page: Important Note If you have an institutional identity, it's preferable to use that identity to create your MGHPCC-SS account. Institutional identities are vetted by identity management teams and provide a higher level of confidence to resource owners when granting access to resources. You can only link one university account to an MGHPCC-SS account; if you have multiple university accounts, you will only be able to link one of those accounts to your MGHPCC-SS account. If, at a later date, you want to change which account is connected to your MGHPCC-SS identity, you can do so by contacting help@mghpcc.org .","title":"Getting Started"},{"location":"get-started/create-a-user-portal-account/#how-to-update-and-modify-your-mghpcc-ss-account-information","text":"Log in to the RegApp using your MGHPCC-SS account. Click on \"Manage Your MGHPCC-SS Account\" button as shown below: Review your currently saved account information, make any necessary corrections or updates to fields, and then click on the \"Update MGHPCC-SS Account\" button. This will send an email to verify your updated account information, so please check your email address. Confirm and validate the new account details by clicking the provided validation web link and entering the unique Confirmation Code provided in the email as shown below:","title":"How to update and modify your MGHPCC-SS account information?"},{"location":"get-started/create-a-user-portal-account/#how-to-request-a-principal-investigator-pi-account","text":"The process for requesting and obtaining a PI Account is relatively simple. You can fill out this NERC Principal Investigator (PI) Account Request form to initiate the process. Alternatively, users can request a Principal Investigator (PI) user account by submitting a new ticket at the NERC's Support Ticketing System under the \"NERC PI Account Request\" option in the Help Topic dropdown menu, as shown in the image below: Information Once your PI user request is reviewed and approved by the NERC's admin, you will receive an email confirmation from NERC's support system, i.e., help@nerc.mghpcc.org . Then, you can access NERC's ColdFront resource allocation management portal using the PI user role, as described here .","title":"How to request a Principal Investigator (PI) Account?"},{"location":"get-started/user-onboarding-on-NERC/","text":"User Onboarding Process Overview NERC's Research allocations are available to faculty members and researchers, including postdoctoral researchers and students, at a U.S. based institution in New England. In order to get access to resources provided by NERC's computational infrastructure, you must first register and obtain a user account. The overall user flow can be summarized using the following sequence diagram: All users including PI need to register to NERC via: https://regapp.mss.mghpcc.org/ . PI will send a request for a Principal Investigator (PI) user account role by submitting: NERC's PI Request Form . 
Alternatively, users can request a Principal Investigator (PI) user account by submitting a new ticket at the NERC's Support Ticketing System under the \"NERC PI Account Request\" option in the Help Topic dropdown menu, as shown in the image below: Principal Investigator Eligibility Information MGHPCC consortium members, whereby they enter into an service agreement with MGHPCC for the NERC services. Non-members of MGHPCC can also be PIs of NERC Services, but must also have an active non-member agreement with MGHPCC. External research focused institutions will be considered on a case-by-case basis and are subject to an external customer cost structure. Wait until the PI request gets approved by the NERC's admin . Once a PI request is approved , PI can add a new project and also search and add user(s) to the project - Other general user(s) can also see the project(s) once they are added to a project via: https://coldfront.mss.mghpcc.org . PI or project Manager can request resource allocation either NERC (OpenStack) or NERC-OCP (OpenShift) for the newly added project and select which user(s) can use the requested allocation. As a new NERC PI for the first time, am I entitled to any credits? As a new PI using NERC for the first time, you might wonder if you get any credits. Yes, you'll receive up to $1000 for the first month only . But remember, this credit can not be used in the following months . Also, it does not apply to GPU resource usage . Wait until the requested resource allocation gets approved by the NERC's admin . Once approved , PI and the corresponding project users can go to either NERC Openstack horizon web interface: https://stack.nerc.mghpcc.org or NERC OpenShift web console: https://console.apps.shift.nerc.mghpcc.org based on approved Resource Type and they can start using the NERC's resources based on the approved project quotas .","title":"User Onboarding Process"},{"location":"get-started/user-onboarding-on-NERC/#user-onboarding-process-overview","text":"NERC's Research allocations are available to faculty members and researchers, including postdoctoral researchers and students, at a U.S. based institution in New England. In order to get access to resources provided by NERC's computational infrastructure, you must first register and obtain a user account. The overall user flow can be summarized using the following sequence diagram: All users including PI need to register to NERC via: https://regapp.mss.mghpcc.org/ . PI will send a request for a Principal Investigator (PI) user account role by submitting: NERC's PI Request Form . Alternatively, users can request a Principal Investigator (PI) user account by submitting a new ticket at the NERC's Support Ticketing System under the \"NERC PI Account Request\" option in the Help Topic dropdown menu, as shown in the image below: Principal Investigator Eligibility Information MGHPCC consortium members, whereby they enter into an service agreement with MGHPCC for the NERC services. Non-members of MGHPCC can also be PIs of NERC Services, but must also have an active non-member agreement with MGHPCC. External research focused institutions will be considered on a case-by-case basis and are subject to an external customer cost structure. Wait until the PI request gets approved by the NERC's admin . Once a PI request is approved , PI can add a new project and also search and add user(s) to the project - Other general user(s) can also see the project(s) once they are added to a project via: https://coldfront.mss.mghpcc.org . 
PI or project Manager can request resource allocation either NERC (OpenStack) or NERC-OCP (OpenShift) for the newly added project and select which user(s) can use the requested allocation. As a new NERC PI for the first time, am I entitled to any credits? As a new PI using NERC for the first time, you might wonder if you get any credits. Yes, you'll receive up to $1000 for the first month only . But remember, this credit can not be used in the following months . Also, it does not apply to GPU resource usage . Wait until the requested resource allocation gets approved by the NERC's admin . Once approved , PI and the corresponding project users can go to either NERC Openstack horizon web interface: https://stack.nerc.mghpcc.org or NERC OpenShift web console: https://console.apps.shift.nerc.mghpcc.org based on approved Resource Type and they can start using the NERC's resources based on the approved project quotas .","title":"User Onboarding Process Overview"},{"location":"get-started/allocation/adding-a-new-allocation/","text":"Adding a new Resource Allocation to the project If one resource allocation is not sufficient for a project, PI or project managers may request additional allocations by clicking on the \"Request Resource Allocation\" button on the Allocations section of the project details. This will show the page where all existing users for the project will be listed on the bottom of the request form. PIs can select desired user(s) to make the requested resource allocations available on their NERC's OpenStack or OpenShift projects. Here, you can view the Resource Type, information about your Allocated Project, status, End Date of the allocation, and actions button or any pending actions as shown below: Adding a new Resource Allocation to your OpenStack project Important: Requested/Approved Allocated OpenStack Storage Quota & Cost Ensure you choose NERC (OpenStack) in the Resource option and specify your anticipated computing units. Each allocation, whether requested or approved, will be billed based on the pay-as-you-go model. The exception is for Storage quotas , where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool. For NERC (OpenStack) Resource Allocations, the Storage quotas are specified by the \"OpenStack Volume Quota (GiB)\" and \"OpenStack Swift Quota (GiB)\" allocation attributes. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. Keep in mind that you can easily scale and expand your current resource allocations within your project by following this documentation later on. Adding a new Resource Allocation to your OpenShift project Important: Requested/Approved Allocated OpenShift Storage Quota & Cost Ensure you choose NERC-OCP (OpenShift) in the Resource option ( Always Remember: the first option, i.e. NERC (OpenStack) is selected by default!) and specify your anticipated computing units. Each allocation, whether requested or approved, will be billed based on the pay-as-you-go model. The exception is for Storage quotas , where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool. For NERC-OCP (OpenShift) Resource Allocations, storage quotas are specified by the \"OpenShift Request on Storage Quota (GiB)\" and \"OpenShift Limit on Ephemeral Storage Quota (GiB)\" allocation attributes. 
If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. Keep in mind that you can easily scale and expand your current resource allocations within your project by following this documentation later on.","title":"Adding a new Resource Allocation to the project"},{"location":"get-started/allocation/adding-a-new-allocation/#adding-a-new-resource-allocation-to-the-project","text":"If one resource allocation is not sufficient for a project, PI or project managers may request additional allocations by clicking on the \"Request Resource Allocation\" button on the Allocations section of the project details. This will show the page where all existing users for the project will be listed on the bottom of the request form. PIs can select desired user(s) to make the requested resource allocations available on their NERC's OpenStack or OpenShift projects. Here, you can view the Resource Type, information about your Allocated Project, status, End Date of the allocation, and actions button or any pending actions as shown below:","title":"Adding a new Resource Allocation to the project"},{"location":"get-started/allocation/adding-a-new-allocation/#adding-a-new-resource-allocation-to-your-openstack-project","text":"Important: Requested/Approved Allocated OpenStack Storage Quota & Cost Ensure you choose NERC (OpenStack) in the Resource option and specify your anticipated computing units. Each allocation, whether requested or approved, will be billed based on the pay-as-you-go model. The exception is for Storage quotas , where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool. For NERC (OpenStack) Resource Allocations, the Storage quotas are specified by the \"OpenStack Volume Quota (GiB)\" and \"OpenStack Swift Quota (GiB)\" allocation attributes. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. Keep in mind that you can easily scale and expand your current resource allocations within your project by following this documentation later on.","title":"Adding a new Resource Allocation to your OpenStack project"},{"location":"get-started/allocation/adding-a-new-allocation/#adding-a-new-resource-allocation-to-your-openshift-project","text":"Important: Requested/Approved Allocated OpenShift Storage Quota & Cost Ensure you choose NERC-OCP (OpenShift) in the Resource option ( Always Remember: the first option, i.e. NERC (OpenStack) is selected by default!) and specify your anticipated computing units. Each allocation, whether requested or approved, will be billed based on the pay-as-you-go model. The exception is for Storage quotas , where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool. For NERC-OCP (OpenShift) Resource Allocations, storage quotas are specified by the \"OpenShift Request on Storage Quota (GiB)\" and \"OpenShift Limit on Ephemeral Storage Quota (GiB)\" allocation attributes. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. Keep in mind that you can easily scale and expand your current resource allocations within your project by following this documentation later on.","title":"Adding a new Resource Allocation to your OpenShift project"},{"location":"get-started/allocation/adding-a-project/","text":"A New Project Creation Process What PIs need to fill in order to request a Project? 
Once logged in to NERC's ColdFront, PIs can choose Projects sub-menu located under the Project menu. Clicking on the \"Add a project\" button will show the interface below: Very Important: Project Title Length Limitation Please ensure that the project title is both concise and does not exceed a length of 63 characters . PIs need to specify an appropriate title ( less than 63 characters ), description of their research work that will be performed on the NERC (in one or two paragraphs), the field(s) of science or research domain(s), and then click the \"Save\" button. Once saved successfully, PIs effectively become the \"manager\" of the project, and are free to add or remove users and also request resource allocation(s) to any Projects for which they are the PI. PIs are permitted to add users to their group, request new allocations, renew expiring allocations, and provide information such as publications and grant data. PIs can maintain all their research information under one project or, if they require, they can separate the work into multiple projects.","title":"A New Project Creation Process"},{"location":"get-started/allocation/adding-a-project/#a-new-project-creation-process","text":"","title":"A New Project Creation Process"},{"location":"get-started/allocation/adding-a-project/#what-pis-need-to-fill-in-order-to-request-a-project","text":"Once logged in to NERC's ColdFront, PIs can choose Projects sub-menu located under the Project menu. Clicking on the \"Add a project\" button will show the interface below: Very Important: Project Title Length Limitation Please ensure that the project title is both concise and does not exceed a length of 63 characters . PIs need to specify an appropriate title ( less than 63 characters ), description of their research work that will be performed on the NERC (in one or two paragraphs), the field(s) of science or research domain(s), and then click the \"Save\" button. Once saved successfully, PIs effectively become the \"manager\" of the project, and are free to add or remove users and also request resource allocation(s) to any Projects for which they are the PI. PIs are permitted to add users to their group, request new allocations, renew expiring allocations, and provide information such as publications and grant data. PIs can maintain all their research information under one project or, if they require, they can separate the work into multiple projects.","title":"What PIs need to fill in order to request a Project?"},{"location":"get-started/allocation/allocation-change-request/","text":"Request change to Resource Allocation to an existing project If past resource allocation is not sufficient for an existing project, PIs or project managers can request a change by clicking \"Request Change\" button on project resource allocation detail page as show below: Request Change Resource Allocation Attributes for OpenStack Project This will bring up the detailed Quota attributes for that project as shown below: Important: Requested/Approved Allocated OpenStack Storage Quota & Cost For NERC (OpenStack) resource types, the Storage quotas are controlled by the values of the \"OpenStack Volume Quota (GiB)\" and \"OpenStack Swift Quota (GiB)\" quota attributes. The Storage cost is determined by your requested and approved allocation values for these quota attributes. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. 
PI or project managers can provide a new value for the individual quota attributes, and give justification for the requested changes so that the NERC admin can review the change request and approve or deny based on justification and quota change request. Then submitting the change request, this will notify the NERC admin about it. Please wait untill the NERC admin approves/ deny the change request to see the change on your resource allocation for the selected project. Important Information PI or project managers can put the new values on the textboxes for ONLY quota attributes they want to change others they can be left blank so those quotas will not get changed! To use GPU resources on your VM, you need to specify the number of GPUs in the \"OpenStack GPU Quota\" attribute. Additionally, ensure that your other quota attributes, namely \"OpenStack Compute vCPU Quota\" and \"OpenStack Compute RAM Quota (MiB)\" have sufficient resources to meet the vCPU and RAM requirements for one of the GPU tier-based flavors. Refer to the GPU Tier documentation for specific requirements and further details on the flavors available for GPU usage. Allocation Change Requests for OpenStack Project Once the request is processed by the NERC admin, any user can view that request change trails for the project by looking at the \"Allocation Change Requests\" section that looks like below: Any user can click on Action button to view the details about the change request. This will show more details about the change request as shown below: How to Use GPU Resources in your OpenStack Project Comparison Between CPU and GPU To learn more about the key differences between CPUs and GPUs, please read this . A GPU instance is launched in the same way as any other compute instance, with a few considerations to keep in mind: When launching a GPU based instance, be sure to select one of the GPU Tier based flavor. You need to have sufficient resource quota to launch the desired flavor. Always ensure you know which GPU-based flavor you want to use, then submit an allocation change request to adjust your current allocation to fit the flavor's resource requirements. Resource Requirements for Launching a VM with \"NVIDIA A100 SXM4 40GB\" Flavor. Based on the GPU Tier documentation , NERC provides two variations of NVIDIA A100 SXM4 40GB flavors: gpu-su-a100sxm4.1 : Includes 1 NVIDIA A100 GPU gpu-su-a100sxm4.2 : Includes 2 NVIDIA A100 GPUs You should select the flavor that best fits your resource needs and ensure your OpenStack quotas are appropriately configured for the chosen flavor. To use a GPU-based VM flavor, choose the one that best fits your resource needs and make sure your OpenStack quotas meet the required specifications: For the gpu-su-a100sxm4.1 flavor: vCPU : 32 RAM (GiB) : 240 For the gpu-su-a100sxm4.2 flavor: vCPU : 64 RAM (GiB) : 480 Ensure that your OpenStack resource quotas are configured as follows: OpenStack GPU Quota : Meets or exceeds the number of GPUs required by the chosen flavor. OpenStack Compute vCPU Quota : Meets or exceeds the vCPU requirement. OpenStack Compute RAM Quota (MiB) : Meets or exceeds the RAM requirement. Properly configure these quotas to successfully launch a VM with the selected \"gpu-su-a100sxm4\" flavor. We recommend using ubuntu-22.04-x86_64 as the image for your GPU-based instance because we have tested the NVIDIA driver with this image and obtained good results. That said, it is possible to run a variety of other images as well. 
Request Change Resource Allocation Attributes for OpenShift Project Important: Requested/Approved Allocated OpenShift Storage Quota & Cost For NERC-OCP (OpenShift) resource types, the Storage quotas are controlled by the values of the \"OpenShift Request on Storage Quota (GiB)\" and \"OpenShift Limit on Ephemeral Storage Quota (GiB)\" quota attributes. The Storage cost is determined by your requested and approved allocation values for these quota attributes. PI or project managers can provide a new value for the individual quota attributes, and give justification for the requested changes so that the NERC admin can review the change request and approve or deny based on justification and quota change request. Then submitting the change request, this will notify the NERC admin about it. Please wait untill the NERC admin approves/ deny the change request to see the change on your resource allocation for the selected project. Important Information PI or project managers can put the new values on the textboxes for ONLY quota attributes they want to change others they can be left blank so those quotas will not get changed! In order to use GPU resources on your pod, you must specify the number of GPUs you want to use in the \"OpenShift Request on GPU Quota\" attribute. Allocation Change Requests for OpenShift Project Once the request is processed by the NERC admin, any user can view that request change trails for the project by looking at the \"Allocation Change Requests\" section that looks like below: Any user can click on Action button to view the details about the change request. This will show more details about the change request as shown below: How to Use GPU Resources in your OpenShift Project Comparison Between CPU and GPU To learn more about the key differences between CPUs and GPUs, please read this . For OpenShift pods, we can specify different types of GPUs. Since OpenShift is not based on flavors, we can customize the resources as needed at the pod level while still utilizing GPU resources. You can read about how to specify a pod to use a GPU here . Also, you will be able to select a different GPU device for your workload, as explained here .","title":"Request change to Resource Allocation to an existing project"},{"location":"get-started/allocation/allocation-change-request/#request-change-to-resource-allocation-to-an-existing-project","text":"If past resource allocation is not sufficient for an existing project, PIs or project managers can request a change by clicking \"Request Change\" button on project resource allocation detail page as show below:","title":"Request change to Resource Allocation to an existing project"},{"location":"get-started/allocation/allocation-change-request/#request-change-resource-allocation-attributes-for-openstack-project","text":"This will bring up the detailed Quota attributes for that project as shown below: Important: Requested/Approved Allocated OpenStack Storage Quota & Cost For NERC (OpenStack) resource types, the Storage quotas are controlled by the values of the \"OpenStack Volume Quota (GiB)\" and \"OpenStack Swift Quota (GiB)\" quota attributes. The Storage cost is determined by your requested and approved allocation values for these quota attributes. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. 
PI or project managers can provide a new value for the individual quota attributes, and give justification for the requested changes so that the NERC admin can review the change request and approve or deny based on justification and quota change request. Then submitting the change request, this will notify the NERC admin about it. Please wait untill the NERC admin approves/ deny the change request to see the change on your resource allocation for the selected project. Important Information PI or project managers can put the new values on the textboxes for ONLY quota attributes they want to change others they can be left blank so those quotas will not get changed! To use GPU resources on your VM, you need to specify the number of GPUs in the \"OpenStack GPU Quota\" attribute. Additionally, ensure that your other quota attributes, namely \"OpenStack Compute vCPU Quota\" and \"OpenStack Compute RAM Quota (MiB)\" have sufficient resources to meet the vCPU and RAM requirements for one of the GPU tier-based flavors. Refer to the GPU Tier documentation for specific requirements and further details on the flavors available for GPU usage.","title":"Request Change Resource Allocation Attributes for OpenStack Project"},{"location":"get-started/allocation/allocation-change-request/#allocation-change-requests-for-openstack-project","text":"Once the request is processed by the NERC admin, any user can view that request change trails for the project by looking at the \"Allocation Change Requests\" section that looks like below: Any user can click on Action button to view the details about the change request. This will show more details about the change request as shown below:","title":"Allocation Change Requests for OpenStack Project"},{"location":"get-started/allocation/allocation-change-request/#how-to-use-gpu-resources-in-your-openstack-project","text":"Comparison Between CPU and GPU To learn more about the key differences between CPUs and GPUs, please read this . A GPU instance is launched in the same way as any other compute instance, with a few considerations to keep in mind: When launching a GPU based instance, be sure to select one of the GPU Tier based flavor. You need to have sufficient resource quota to launch the desired flavor. Always ensure you know which GPU-based flavor you want to use, then submit an allocation change request to adjust your current allocation to fit the flavor's resource requirements. Resource Requirements for Launching a VM with \"NVIDIA A100 SXM4 40GB\" Flavor. Based on the GPU Tier documentation , NERC provides two variations of NVIDIA A100 SXM4 40GB flavors: gpu-su-a100sxm4.1 : Includes 1 NVIDIA A100 GPU gpu-su-a100sxm4.2 : Includes 2 NVIDIA A100 GPUs You should select the flavor that best fits your resource needs and ensure your OpenStack quotas are appropriately configured for the chosen flavor. To use a GPU-based VM flavor, choose the one that best fits your resource needs and make sure your OpenStack quotas meet the required specifications: For the gpu-su-a100sxm4.1 flavor: vCPU : 32 RAM (GiB) : 240 For the gpu-su-a100sxm4.2 flavor: vCPU : 64 RAM (GiB) : 480 Ensure that your OpenStack resource quotas are configured as follows: OpenStack GPU Quota : Meets or exceeds the number of GPUs required by the chosen flavor. OpenStack Compute vCPU Quota : Meets or exceeds the vCPU requirement. OpenStack Compute RAM Quota (MiB) : Meets or exceeds the RAM requirement. Properly configure these quotas to successfully launch a VM with the selected \"gpu-su-a100sxm4\" flavor. 
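One way to confirm that an approved change request leaves enough headroom for the chosen flavor is to compare the flavor's footprint with your project's current limits using the OpenStack CLI; a minimal sketch, assuming your project's OpenStack RC file is sourced:

```bash
# vCPU, RAM (MiB), and disk footprint of the flavor you plan to use.
openstack flavor show gpu-su-a100sxm4.2 -c vcpus -c ram -c disk

# Current quota limits for the project.
openstack quota show

# Current usage versus absolute limits (cores, RAM, instances, ...).
openstack limits show --absolute
```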
We recommend using ubuntu-22.04-x86_64 as the image for your GPU-based instance because we have tested the NVIDIA driver with this image and obtained good results. That said, it is possible to run a variety of other images as well.","title":"How to Use GPU Resources in your OpenStack Project"},{"location":"get-started/allocation/allocation-change-request/#request-change-resource-allocation-attributes-for-openshift-project","text":"Important: Requested/Approved Allocated OpenShift Storage Quota & Cost For NERC-OCP (OpenShift) resource types, the Storage quotas are controlled by the values of the \"OpenShift Request on Storage Quota (GiB)\" and \"OpenShift Limit on Ephemeral Storage Quota (GiB)\" quota attributes. The Storage cost is determined by your requested and approved allocation values for these quota attributes. PI or project managers can provide a new value for the individual quota attributes, and give justification for the requested changes so that the NERC admin can review the change request and approve or deny based on justification and quota change request. Then submitting the change request, this will notify the NERC admin about it. Please wait untill the NERC admin approves/ deny the change request to see the change on your resource allocation for the selected project. Important Information PI or project managers can put the new values on the textboxes for ONLY quota attributes they want to change others they can be left blank so those quotas will not get changed! In order to use GPU resources on your pod, you must specify the number of GPUs you want to use in the \"OpenShift Request on GPU Quota\" attribute.","title":"Request Change Resource Allocation Attributes for OpenShift Project"},{"location":"get-started/allocation/allocation-change-request/#allocation-change-requests-for-openshift-project","text":"Once the request is processed by the NERC admin, any user can view that request change trails for the project by looking at the \"Allocation Change Requests\" section that looks like below: Any user can click on Action button to view the details about the change request. This will show more details about the change request as shown below:","title":"Allocation Change Requests for OpenShift Project"},{"location":"get-started/allocation/allocation-change-request/#how-to-use-gpu-resources-in-your-openshift-project","text":"Comparison Between CPU and GPU To learn more about the key differences between CPUs and GPUs, please read this . For OpenShift pods, we can specify different types of GPUs. Since OpenShift is not based on flavors, we can customize the resources as needed at the pod level while still utilizing GPU resources. You can read about how to specify a pod to use a GPU here . Also, you will be able to select a different GPU device for your workload, as explained here .","title":"How to Use GPU Resources in your OpenShift Project"},{"location":"get-started/allocation/allocation-details/","text":"Allocation details Access to ColdFront's allocations details is based on user roles . PIs and managers see the same allocation details as users, and can also add project users to the allocation, if they're not already on it, and remove users from an allocation. PI and Manager View PIs and managers can view important details of the project and underlying allocations. It shows all allocations including start and end dates, creation and last modified dates, users on the allocation and public allocation attributes. PIs and managers can add or remove users from allocations. 
PI and Manager Allocation View of OpenStack Resource Allocation PI and Manager Allocation View of OpenShift Resource Allocation General User View General Users who are not PIs or Managers on a project see a read-only view of the allocation details. If a user is on a project but not a particular allocation, they will not be able to see the allocation in the Project view nor will they be able to access the Allocation detail page. General User View of OpenStack Resource Allocation General User View of OpenShift Resource Allocation","title":"Allocation details"},{"location":"get-started/allocation/allocation-details/#allocation-details","text":"Access to ColdFront's allocations details is based on user roles . PIs and managers see the same allocation details as users, and can also add project users to the allocation, if they're not already on it, and remove users from an allocation.","title":"Allocation details"},{"location":"get-started/allocation/allocation-details/#pi-and-manager-view","text":"PIs and managers can view important details of the project and underlying allocations. It shows all allocations including start and end dates, creation and last modified dates, users on the allocation and public allocation attributes. PIs and managers can add or remove users from allocations.","title":"PI and Manager View"},{"location":"get-started/allocation/allocation-details/#pi-and-manager-allocation-view-of-openstack-resource-allocation","text":"","title":"PI and Manager Allocation View of OpenStack Resource Allocation"},{"location":"get-started/allocation/allocation-details/#pi-and-manager-allocation-view-of-openshift-resource-allocation","text":"","title":"PI and Manager Allocation View of OpenShift Resource Allocation"},{"location":"get-started/allocation/allocation-details/#general-user-view","text":"General Users who are not PIs or Managers on a project see a read-only view of the allocation details. If a user is on a project but not a particular allocation, they will not be able to see the allocation in the Project view nor will they be able to access the Allocation detail page.","title":"General User View"},{"location":"get-started/allocation/allocation-details/#general-user-view-of-openstack-resource-allocation","text":"","title":"General User View of OpenStack Resource Allocation"},{"location":"get-started/allocation/allocation-details/#general-user-view-of-openshift-resource-allocation","text":"","title":"General User View of OpenShift Resource Allocation"},{"location":"get-started/allocation/archiving-a-project/","text":"Archiving an Existing Project Only a PI can archive their ColdFront project(s) by accessing NERC's ColdFront interface . Important Note: If you archive a project then this will expire all your allocations on that project, which will disable your group's access to the resources in those allocations. Also, you cannot make any changes to archived projects. Once archived it is no longer visible on your projects list . All archived projects will be listed under your archived projects , which can be viewed by clicking the \"View archived projects\" button as shown below: All your archived projects are displayed here:","title":"Archiving an Existing Project"},{"location":"get-started/allocation/archiving-a-project/#archiving-an-existing-project","text":"Only a PI can archive their ColdFront project(s) by accessing NERC's ColdFront interface . 
Important Note: If you archive a project then this will expire all your allocations on that project, which will disable your group's access to the resources in those allocations. Also, you cannot make any changes to archived projects. Once archived it is no longer visible on your projects list . All archived projects will be listed under your archived projects , which can be viewed by clicking the \"View archived projects\" button as shown below: All your archived projects are displayed here:","title":"Archiving an Existing Project"},{"location":"get-started/allocation/coldfront/","text":"What is NERC's ColdFront? NERC uses NERC's ColdFront interface , an open source resource allocation management system called ColdFront to provide a single point-of-entry for administration, reporting, and measuring scientific impact of NERC resources for PI. Learning ColdFront A collection of animated gifs showcasing common functions in ColdFront is available, providing helpful insights into how these features can be utilized. How to get access to NERC's ColdFront Any users who had registerd their user accounts through the MGHPCC Shared Services (MGHPCC-SS) Account Portal also known as \"RegApp\" can get access to NERC's ColdFront interface . General Users who are not PIs or Managers on a project see a read-only view of the NERC's ColdFront as described here . Whereas, once a PI Account request is granted, the PI will receive an email confirming the request approval and how to connect NERC's ColdFront. PI or project managers can use NERC's ColdFront as a self-service web-portal that can see an administrative view of it as described here and can do the following tasks: Only PI can add a new project and archive any existing project(s) Manage existing projects Request allocations that fall under projects in NERC's resources such as clusters, cloud resources, servers, storage, and software licenses Add/remove user access to/from allocated resources who is a member of the project without requiring system administrator interaction Elevate selected users to 'manager' status, allowing them to handle some of the PI asks such as request new resource allocations, add/remove users to/from resource allocations, add project data such as grants and publications Monitor resource utilization such as storage and cloud usage Receive email notifications for expiring/renewing access to resources as well as notifications when allocations change status - i.e. activated, expired, denied Provide information such as grants, publications, and other reportable data for periodic review by center director to demonstrate need for the resources How to login to NERC's ColdFront? NERC's ColdFront interface provides users with login page as shown here: Please click on \" Log In \" button. Then, it will show the login interface as shown below: You need to click on \" Log in via OpenID Connect \" button. This will redirect you to CILogon welcome page where you can select your appropriate Identity Provider as shown below: Once successful, you will be redirected to the ColdFront's main dashboard as shown below:","title":"What is NERC's ColdFront?"},{"location":"get-started/allocation/coldfront/#what-is-nercs-coldfront","text":"NERC uses NERC's ColdFront interface , an open source resource allocation management system called ColdFront to provide a single point-of-entry for administration, reporting, and measuring scientific impact of NERC resources for PI. 
Learning ColdFront A collection of animated gifs showcasing common functions in ColdFront is available, providing helpful insights into how these features can be utilized.","title":"What is NERC's ColdFront?"},{"location":"get-started/allocation/coldfront/#how-to-get-access-to-nercs-coldfront","text":"Any users who had registerd their user accounts through the MGHPCC Shared Services (MGHPCC-SS) Account Portal also known as \"RegApp\" can get access to NERC's ColdFront interface . General Users who are not PIs or Managers on a project see a read-only view of the NERC's ColdFront as described here . Whereas, once a PI Account request is granted, the PI will receive an email confirming the request approval and how to connect NERC's ColdFront. PI or project managers can use NERC's ColdFront as a self-service web-portal that can see an administrative view of it as described here and can do the following tasks: Only PI can add a new project and archive any existing project(s) Manage existing projects Request allocations that fall under projects in NERC's resources such as clusters, cloud resources, servers, storage, and software licenses Add/remove user access to/from allocated resources who is a member of the project without requiring system administrator interaction Elevate selected users to 'manager' status, allowing them to handle some of the PI asks such as request new resource allocations, add/remove users to/from resource allocations, add project data such as grants and publications Monitor resource utilization such as storage and cloud usage Receive email notifications for expiring/renewing access to resources as well as notifications when allocations change status - i.e. activated, expired, denied Provide information such as grants, publications, and other reportable data for periodic review by center director to demonstrate need for the resources","title":"How to get access to NERC's ColdFront"},{"location":"get-started/allocation/coldfront/#how-to-login-to-nercs-coldfront","text":"NERC's ColdFront interface provides users with login page as shown here: Please click on \" Log In \" button. Then, it will show the login interface as shown below: You need to click on \" Log in via OpenID Connect \" button. This will redirect you to CILogon welcome page where you can select your appropriate Identity Provider as shown below: Once successful, you will be redirected to the ColdFront's main dashboard as shown below:","title":"How to login to NERC's ColdFront?"},{"location":"get-started/allocation/manage-users-to-a-project/","text":"Managing Users in the Project Add/Remove User(s) to/from a Project A user can only view projects they are on. PIs or managers can add or remove users from their respective projects by navigating to the Users section of the project. Once we click on the \"Add Users\" button, it will show us the following search interface: Searching multiple users at once! If you want to simultaneously search for multiple users in the system, you can input multiple usernames separated by space or newline , as shown below: NOTE: This will return a list of all users matching those provided usernames only if they exist. They can search for any users in the system that are not already part of the project by providing exact matched username or partial text of other multiple fields. The search results show details about the user account such as email address, username, first name, last name etc. 
as shown below: Delegating user as 'Manager' When adding a user to your project you can optionally designate them as a \"Manager\" by selecting their role using the drop down next to their email. Read more about user roles here . Thus, found user(s) can be selected and assigned directly to the available resource allocation(s) on the given project using this interface. While adding the users, their Role also can be selected from the dropdown options as either User or Manager. Once confirmed with selection of user(s) their roles and allocations, click on the \"Add Selected Users to Project\" button. Removing Users from the Project is straightforward by just clicking on the \"Remove Users\" button. Then it shows the following interface: PI or project managers can select the user(s) and then click on the \"Remove Selected Users From Project\" button. User Roles Access to ColdFront is role based so users see a read-only view of the allocation details for any allocations they are on. PIs see the same allocation details as general users and can also add project users to the allocation if they're not already on it. Even on the first time, PIs add any user to the project as the User role. Later PI or project managers can delegate users on their project to the 'manager' role. This allows multiple managers on the same project. This provides the user with the same access and abilities as the PI. A \"Manager\" is a user who has the same permissions as the PI to add/remove users, request/renew allocations, add/remove project info such as grants, publications, and research output. Managers may also complete the annual project review. What can a PI do that a manager can't? The only tasks a PI can do that a manager can't is create a new project or archive any existing project(s). All other project-related actions that a PI can perform can also be accomplished by any one of the managers assigned to that project. General User Accounts are not able to create/update projects and request Resource Allocations. Instead, these accounts must be associated with a Project that has Resources. General User accounts that are associated with a Project have access to view their project details and use all the resources associated with the Project on NERC. General Users (not PIs or Managers) can turn off email notifications at the project level. PIs also have the 'manager' status on a project. Managers can't turn off their notifications. This ensures they continue to get allocation expiration notification emails. Delegating User to Manager Role You can also modify a users role of existing project users at any time by clicking on the Edit button next to the user's name. To change a user's role to 'manager' click on the edit icon next to the user's name on the Project Detail page: Then toggle the \"Role\" from User to Manager: Very Important Make sure to click the \"Update\" button to save the change. This delegation of \"Manager\" role can also be done when adding a user to your project. You can optionally designate them as a \"Manager\" by selecting their role using the drop down next to their email as described here . Notifications All users on a project will receive notifications about allocations including reminders of upcoming expiration dates and status changes. Users may uncheck the box next to their username to turn off notifications. 
Managers and PIs on the project are not able to turn off notifications.","title":"Managing Users in the Project"},{"location":"get-started/allocation/manage-users-to-a-project/#managing-users-in-the-project","text":"","title":"Managing Users in the Project"},{"location":"get-started/allocation/manage-users-to-a-project/#addremove-users-tofrom-a-project","text":"A user can only view projects they are on. PIs or managers can add or remove users from their respective projects by navigating to the Users section of the project. Once we click on the \"Add Users\" button, it will show us the following search interface: Searching multiple users at once! If you want to simultaneously search for multiple users in the system, you can input multiple usernames separated by space or newline , as shown below: NOTE: This will return a list of all users matching those provided usernames only if they exist. They can search for any users in the system that are not already part of the project by providing exact matched username or partial text of other multiple fields. The search results show details about the user account such as email address, username, first name, last name etc. as shown below: Delegating user as 'Manager' When adding a user to your project you can optionally designate them as a \"Manager\" by selecting their role using the drop down next to their email. Read more about user roles here . Thus, found user(s) can be selected and assigned directly to the available resource allocation(s) on the given project using this interface. While adding the users, their Role also can be selected from the dropdown options as either User or Manager. Once confirmed with selection of user(s) their roles and allocations, click on the \"Add Selected Users to Project\" button. Removing Users from the Project is straightforward by just clicking on the \"Remove Users\" button. Then it shows the following interface: PI or project managers can select the user(s) and then click on the \"Remove Selected Users From Project\" button.","title":"Add/Remove User(s) to/from a Project"},{"location":"get-started/allocation/manage-users-to-a-project/#user-roles","text":"Access to ColdFront is role based so users see a read-only view of the allocation details for any allocations they are on. PIs see the same allocation details as general users and can also add project users to the allocation if they're not already on it. Even on the first time, PIs add any user to the project as the User role. Later PI or project managers can delegate users on their project to the 'manager' role. This allows multiple managers on the same project. This provides the user with the same access and abilities as the PI. A \"Manager\" is a user who has the same permissions as the PI to add/remove users, request/renew allocations, add/remove project info such as grants, publications, and research output. Managers may also complete the annual project review. What can a PI do that a manager can't? The only tasks a PI can do that a manager can't is create a new project or archive any existing project(s). All other project-related actions that a PI can perform can also be accomplished by any one of the managers assigned to that project. General User Accounts are not able to create/update projects and request Resource Allocations. Instead, these accounts must be associated with a Project that has Resources. 
General User accounts that are associated with a Project have access to view their project details and use all the resources associated with the Project on NERC. General Users (not PIs or Managers) can turn off email notifications at the project level. PIs also have the 'manager' status on a project. Managers can't turn off their notifications. This ensures they continue to get allocation expiration notification emails.","title":"User Roles"},{"location":"get-started/allocation/manage-users-to-a-project/#delegating-user-to-manager-role","text":"You can also modify a users role of existing project users at any time by clicking on the Edit button next to the user's name. To change a user's role to 'manager' click on the edit icon next to the user's name on the Project Detail page: Then toggle the \"Role\" from User to Manager: Very Important Make sure to click the \"Update\" button to save the change. This delegation of \"Manager\" role can also be done when adding a user to your project. You can optionally designate them as a \"Manager\" by selecting their role using the drop down next to their email as described here .","title":"Delegating User to Manager Role"},{"location":"get-started/allocation/manage-users-to-a-project/#notifications","text":"All users on a project will receive notifications about allocations including reminders of upcoming expiration dates and status changes. Users may uncheck the box next to their username to turn off notifications. Managers and PIs on the project are not able to turn off notifications.","title":"Notifications"},{"location":"get-started/allocation/managing-users-to-an-allocation/","text":"Adding and removing project Users to project Resource Allocation Any available users who were not added previously on a given project can be added to resource allocation by clicking on the \"Add Users\" button as shown below: Once Clicked it will show the following interface where PIs can select the available user(s) on the checkboxes and click on the \"Add Selected Users to Allocation\" button. Very Important The desired user must already be on the project to be added to the allocation. Removing Users from the Resource Allocation is straightforward by just clicking on the \"Remove Users\" button. Then it shows the following interface: PI or project managers can select the user(s) on the checkboxes and then click on the \"Remove Selected Users From Project\" button.","title":"Adding and removing project Users to project Resource Allocation"},{"location":"get-started/allocation/managing-users-to-an-allocation/#adding-and-removing-project-users-to-project-resource-allocation","text":"Any available users who were not added previously on a given project can be added to resource allocation by clicking on the \"Add Users\" button as shown below: Once Clicked it will show the following interface where PIs can select the available user(s) on the checkboxes and click on the \"Add Selected Users to Allocation\" button. Very Important The desired user must already be on the project to be added to the allocation. Removing Users from the Resource Allocation is straightforward by just clicking on the \"Remove Users\" button. 
Then it shows the following interface: PI or project managers can select the user(s) on the checkboxes and then click on the \"Remove Selected Users From Project\" button.","title":"Adding and removing project Users to project Resource Allocation"},{"location":"get-started/allocation/project-and-allocation-review/","text":"Project and Individual Allocation Annual Review Process Project Annual Review Process NERC's ColdFront allows annual project reviews for NERC admins by mandating PIs to assess and update their projects. With the Project Review feature activated, each project undergoes a mandatory review every 365 days. During this process, PIs update project details, confirm project members, and input publications, grants, and research outcomes from the preceding year. Required Project Review The PI or any manager(s) of a project must complete the project review once every 365 days. ColdFront does not send notifications to PIs when project reviews are due. Instead, when the PI or Manager(s) of a project views their project they will find the notification that the project review is due. Additionally, when the project review is pending, PIs or Project Manager(s) cannot request new allocations or renew expiring allocations or change request to update the allocated allocation attributes' values. This is to enforce PIs need to review their projects annually. The PI or any managers on the project are able to complete the project review process. Project Reviews by PIs or Project Manager(s) When a PI or any Project Manager(s) of a project logs into NERC's ColdFront web console and their project review is due, they will see a banner next to the project name on the home page: If they try to request a new allocation or renew an expiring allocation or change request to update the allocated allocation attributes' values, they will get an error message: Project Review Steps When they click on the \"Review Project\" link they're presented with the requirements and a description of why we're asking for this update: The links in each step direct them to different parts of their Project Detail page. This review page lists the dates when grants and publications were last updated. If there are no grant or publications or at least one of them hasn't been udpated in the last year, we ask for a reason they're not updating the project information. This helps encourage PIs to provide updates if they have them. If not, they provide a reason and this is displayed for the NERC admins as part of the review process. Once the project review page is completed, the PI is redirected to the project detail page and they see the status change to \"project review pending\". Allocation Renewals When the requested allocation is approved, it must have an expiration date - which is normally 365 days or 1 year from the date it is approved. Automated emails are triggered to all users on an allocation when the expiration date is 60 days away, 30 days, 7 days, and then expired, unless the user turns off notifications on the project. Very Important: Urgent Allocation Renewal is Required Before Expiration If the allocation renewal isn't processed prior to the original allocation expiration date by the PI or Manager, the allocation will expire and the allocation users will get a notification email letting them know the allocation has expired! Currently, a project will continue to be able to utilize expired allocations. So this will continue to incur costs for you. 
Allocation renewals may not require any additions or changes to the allocation attributes from the PI or Manager. By default, if the PI or Manager clicks on the 'Activate' button as shown below: Then it will prompt for confirmation and allow the admin to review and submit the activation request by clicking on 'Submit' button as shown below: Emails are sent to all allocation users letting them know the renewal request has been submitted. Then the allocation status will change to \"Renewal Requested\" as shown below: Once the renewal request is reviewed and approved by NERC admins, it will change into \"Active\" status and the expiration date is set to another 365 days as shown below: Then an automated email notification will be sent to the PI and all users on the allocation that have enabled email notifications. Cost Associated with Expired Allocations Currently, a project will continue to be able to utilize expired allocations. So this will continue to incur costs for you. In the future, we plan to change this behavior so expired allocations will result in its associated VMs/pods not to start and possibly having associated active VMs/pods to cease running.","title":"Project and Individual Allocation Annual Review Process"},{"location":"get-started/allocation/project-and-allocation-review/#project-and-individual-allocation-annual-review-process","text":"","title":"Project and Individual Allocation Annual Review Process"},{"location":"get-started/allocation/project-and-allocation-review/#project-annual-review-process","text":"NERC's ColdFront allows annual project reviews for NERC admins by mandating PIs to assess and update their projects. With the Project Review feature activated, each project undergoes a mandatory review every 365 days. During this process, PIs update project details, confirm project members, and input publications, grants, and research outcomes from the preceding year. Required Project Review The PI or any manager(s) of a project must complete the project review once every 365 days. ColdFront does not send notifications to PIs when project reviews are due. Instead, when the PI or Manager(s) of a project views their project they will find the notification that the project review is due. Additionally, when the project review is pending, PIs or Project Manager(s) cannot request new allocations or renew expiring allocations or change request to update the allocated allocation attributes' values. This is to enforce PIs need to review their projects annually. The PI or any managers on the project are able to complete the project review process.","title":"Project Annual Review Process"},{"location":"get-started/allocation/project-and-allocation-review/#project-reviews-by-pis-or-project-managers","text":"When a PI or any Project Manager(s) of a project logs into NERC's ColdFront web console and their project review is due, they will see a banner next to the project name on the home page: If they try to request a new allocation or renew an expiring allocation or change request to update the allocated allocation attributes' values, they will get an error message:","title":"Project Reviews by PIs or Project Manager(s)"},{"location":"get-started/allocation/project-and-allocation-review/#project-review-steps","text":"When they click on the \"Review Project\" link they're presented with the requirements and a description of why we're asking for this update: The links in each step direct them to different parts of their Project Detail page. 
This review page lists the dates when grants and publications were last updated. If there are no grant or publications or at least one of them hasn't been udpated in the last year, we ask for a reason they're not updating the project information. This helps encourage PIs to provide updates if they have them. If not, they provide a reason and this is displayed for the NERC admins as part of the review process. Once the project review page is completed, the PI is redirected to the project detail page and they see the status change to \"project review pending\".","title":"Project Review Steps"},{"location":"get-started/allocation/project-and-allocation-review/#allocation-renewals","text":"When the requested allocation is approved, it must have an expiration date - which is normally 365 days or 1 year from the date it is approved. Automated emails are triggered to all users on an allocation when the expiration date is 60 days away, 30 days, 7 days, and then expired, unless the user turns off notifications on the project. Very Important: Urgent Allocation Renewal is Required Before Expiration If the allocation renewal isn't processed prior to the original allocation expiration date by the PI or Manager, the allocation will expire and the allocation users will get a notification email letting them know the allocation has expired! Currently, a project will continue to be able to utilize expired allocations. So this will continue to incur costs for you. Allocation renewals may not require any additions or changes to the allocation attributes from the PI or Manager. By default, if the PI or Manager clicks on the 'Activate' button as shown below: Then it will prompt for confirmation and allow the admin to review and submit the activation request by clicking on 'Submit' button as shown below: Emails are sent to all allocation users letting them know the renewal request has been submitted. Then the allocation status will change to \"Renewal Requested\" as shown below: Once the renewal request is reviewed and approved by NERC admins, it will change into \"Active\" status and the expiration date is set to another 365 days as shown below: Then an automated email notification will be sent to the PI and all users on the allocation that have enabled email notifications.","title":"Allocation Renewals"},{"location":"get-started/allocation/project-and-allocation-review/#cost-associated-with-expired-allocations","text":"Currently, a project will continue to be able to utilize expired allocations. So this will continue to incur costs for you. In the future, we plan to change this behavior so expired allocations will result in its associated VMs/pods not to start and possibly having associated active VMs/pods to cease running.","title":"Cost Associated with Expired Allocations"},{"location":"get-started/allocation/requesting-an-allocation/","text":"How to request a new Resource Allocation On the Project Detail page the project PI/manager(s) can request an allocation by clicking the \"Request Resource Allocation\" button as shown below: On the shown page, you will be able to choose either OpenStack Resource Allocation or OpenShift Resource Allocation by specifying either NERC (OpenStack) or NERC-OCP (OpenShift) in the Resource dropdown option. Note: The first option i.e. NERC (OpenStack) , is selected by default. Default GPU Resource Quota for Initial Allocation Requests By default, the GPU resource quota is set to 0 for the initial resource allocation request for both OpenStack and OpenShift Resource Types. 
However, you will be able to change request and adjust the corresponding GPU quotas for both after they are approved for the first time. For NERC's OpenStack, please follow this guide on how to utilize GPU resources in your OpenStack project. For NERC's OpenShift, refer to this reference to learn about how to use GPU resources in pod level. Request A New OpenStack Resource Allocation for an OpenStack Project If users have already been added to the project as described here , the Users selection section will be displayed as shown below: In this section, the project PI/manager(s) can choose user(s) from the project to be included in this allocation before clicking the \"Submit\" button. Read the End User License Agreement Before Submission You should read the shown End User License Agreement (the \"Agreement\"). By clicking the \"Submit\" button, you agree to the Terms and Conditions. Important: Requested/Approved Allocated OpenStack Storage Quota & Cost Ensure you choose NERC (OpenStack) in the Resource option and specify your anticipated computing units. Each allocation, whether requested or approved, will be billed based on the pay-as-you-go model. The exception is for Storage quotas , where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool. For NERC (OpenStack) Resource Allocations, the Storage quotas are specified by the \"OpenStack Volume Quota (GiB)\" and \"OpenStack Swift Quota (GiB)\" allocation attributes. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. Keep in mind that you can easily scale and expand your current resource allocations within your project by following this documentation later on. Resource Allocation Quotas for OpenStack Project The amount of quota to start out a resource allocation after approval, can be specified using an integer field in the resource allocation request form as shown above. The provided unit value is computed as PI or project managers request resource quota. The basic unit of computational resources is defined in terms of integer value that corresponds to multiple OpenStack resource quotas. For example, 1 Unit corresponds to: Resource Name Quota Amount x Unit Instances 1 vCPUs 1 GPU 0 RAM(MiB) 4096 Volumes 2 Volume Storage(GiB) 20 Object Storage(GiB) 1 Information By default, 2 OpenStack Floating IPs , 10 Volume Snapshots and 10 Security Groups are provided to each approved project regardless of units of requested quota units. Request A New OpenShift Resource Allocation for an OpenShift project If users have already been added to the project as described here , the Users selection section will be displayed as shown below: In this section, the project PI/manager(s) can choose user(s) from the project to be included in this allocation before clicking the \"Submit\" button. Read the End User License Agreement Before Submission You should read the shown End User License Agreement (the \"Agreement\"). By clicking the \"Submit\" button, you agree to the Terms and Conditions. Resource Allocation Quotas for OpenShift Project The amount of quota to start out a resource allocation after approval, can be specified using an integer field in the resource allocation request form as shown above. The provided unit value is computed as PI or project managers request resource quota. The basic unit of computational resources is defined in terms of integer value that corresponds to multiple OpenShift resource quotas. 
For example, 1 Unit corresponds to: Resource Name Quota Amount x Unit vCPUs 1 GPU 0 RAM(MiB) 4096 Persistent Volume Claims (PVC) 2 Storage(GiB) 20 Ephemeral Storage(GiB) 5 Important: Requested/Approved Allocated OpenShift Storage Quota & Cost Ensure you choose NERC-OCP (OpenShift) in the Resource option ( Always Remember: the first option, i.e. NERC (OpenStack) is selected by default!) and specify your anticipated computing units. Each allocation, whether requested or approved, will be billed based on the pay-as-you-go model. The exception is for Storage quotas , where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool. For NERC-OCP (OpenShift) Resource Allocations, storage quotas are specified by the \"OpenShift Request on Storage Quota (GiB)\" and \"OpenShift Limit on Ephemeral Storage Quota (GiB)\" allocation attributes. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. Keep in mind that you can easily scale and expand your current resource allocations within your project by following this documentation later on.","title":"How to request a new Resource Allocation"},{"location":"get-started/allocation/requesting-an-allocation/#how-to-request-a-new-resource-allocation","text":"On the Project Detail page the project PI/manager(s) can request an allocation by clicking the \"Request Resource Allocation\" button as shown below: On the shown page, you will be able to choose either OpenStack Resource Allocation or OpenShift Resource Allocation by specifying either NERC (OpenStack) or NERC-OCP (OpenShift) in the Resource dropdown option. Note: The first option i.e. NERC (OpenStack) , is selected by default. Default GPU Resource Quota for Initial Allocation Requests By default, the GPU resource quota is set to 0 for the initial resource allocation request for both OpenStack and OpenShift Resource Types. However, you will be able to change request and adjust the corresponding GPU quotas for both after they are approved for the first time. For NERC's OpenStack, please follow this guide on how to utilize GPU resources in your OpenStack project. For NERC's OpenShift, refer to this reference to learn about how to use GPU resources in pod level.","title":"How to request a new Resource Allocation"},{"location":"get-started/allocation/requesting-an-allocation/#request-a-new-openstack-resource-allocation-for-an-openstack-project","text":"If users have already been added to the project as described here , the Users selection section will be displayed as shown below: In this section, the project PI/manager(s) can choose user(s) from the project to be included in this allocation before clicking the \"Submit\" button. Read the End User License Agreement Before Submission You should read the shown End User License Agreement (the \"Agreement\"). By clicking the \"Submit\" button, you agree to the Terms and Conditions. Important: Requested/Approved Allocated OpenStack Storage Quota & Cost Ensure you choose NERC (OpenStack) in the Resource option and specify your anticipated computing units. Each allocation, whether requested or approved, will be billed based on the pay-as-you-go model. The exception is for Storage quotas , where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool. 
For NERC (OpenStack) Resource Allocations, the Storage quotas are specified by the \"OpenStack Volume Quota (GiB)\" and \"OpenStack Swift Quota (GiB)\" allocation attributes. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. Keep in mind that you can easily scale and expand your current resource allocations within your project by following this documentation later on.","title":"Request A New OpenStack Resource Allocation for an OpenStack Project"},{"location":"get-started/allocation/requesting-an-allocation/#resource-allocation-quotas-for-openstack-project","text":"The amount of quota to start out a resource allocation after approval, can be specified using an integer field in the resource allocation request form as shown above. The provided unit value is computed as PI or project managers request resource quota. The basic unit of computational resources is defined in terms of integer value that corresponds to multiple OpenStack resource quotas. For example, 1 Unit corresponds to: Resource Name Quota Amount x Unit Instances 1 vCPUs 1 GPU 0 RAM(MiB) 4096 Volumes 2 Volume Storage(GiB) 20 Object Storage(GiB) 1 Information By default, 2 OpenStack Floating IPs , 10 Volume Snapshots and 10 Security Groups are provided to each approved project regardless of units of requested quota units.","title":"Resource Allocation Quotas for OpenStack Project"},{"location":"get-started/allocation/requesting-an-allocation/#request-a-new-openshift-resource-allocation-for-an-openshift-project","text":"If users have already been added to the project as described here , the Users selection section will be displayed as shown below: In this section, the project PI/manager(s) can choose user(s) from the project to be included in this allocation before clicking the \"Submit\" button. Read the End User License Agreement Before Submission You should read the shown End User License Agreement (the \"Agreement\"). By clicking the \"Submit\" button, you agree to the Terms and Conditions.","title":"Request A New OpenShift Resource Allocation for an OpenShift project"},{"location":"get-started/allocation/requesting-an-allocation/#resource-allocation-quotas-for-openshift-project","text":"The amount of quota to start out a resource allocation after approval, can be specified using an integer field in the resource allocation request form as shown above. The provided unit value is computed as PI or project managers request resource quota. The basic unit of computational resources is defined in terms of integer value that corresponds to multiple OpenShift resource quotas. For example, 1 Unit corresponds to: Resource Name Quota Amount x Unit vCPUs 1 GPU 0 RAM(MiB) 4096 Persistent Volume Claims (PVC) 2 Storage(GiB) 20 Ephemeral Storage(GiB) 5 Important: Requested/Approved Allocated OpenShift Storage Quota & Cost Ensure you choose NERC-OCP (OpenShift) in the Resource option ( Always Remember: the first option, i.e. NERC (OpenStack) is selected by default!) and specify your anticipated computing units. Each allocation, whether requested or approved, will be billed based on the pay-as-you-go model. The exception is for Storage quotas , where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool. For NERC-OCP (OpenShift) Resource Allocations, storage quotas are specified by the \"OpenShift Request on Storage Quota (GiB)\" and \"OpenShift Limit on Ephemeral Storage Quota (GiB)\" allocation attributes. 
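As a worked example of the unit-to-quota mapping in the tables above, the short shell sketch below multiplies out the per-unit values for a request of 3 units on each resource type; the unit count of 3 is only an illustration.

```bash
N=3   # example number of requested units

# NERC (OpenStack): 1 unit = 1 instance, 1 vCPU, 4096 MiB RAM,
#                   2 volumes, 20 GiB volume storage, 1 GiB object storage.
echo "OpenStack: $((N*1)) instances, $((N*1)) vCPUs, $((N*4096)) MiB RAM," \
     "$((N*2)) volumes, $((N*20)) GiB volume storage, $((N*1)) GiB object storage"

# NERC-OCP (OpenShift): 1 unit = 1 vCPU, 4096 MiB RAM, 2 PVCs,
#                       20 GiB storage, 5 GiB ephemeral storage.
echo "OpenShift: $((N*1)) vCPUs, $((N*4096)) MiB RAM, $((N*2)) PVCs," \
     "$((N*20)) GiB storage, $((N*5)) GiB ephemeral storage"
```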
If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. Keep in mind that you can easily scale and expand your current resource allocations within your project by following this documentation later on.","title":"Resource Allocation Quotas for OpenShift Project"},{"location":"get-started/best-practices/best-practices-for-bu/","text":"Best Practices for Boston University Further References https://www.bu.edu/tech/services/security/cyber-security/sensitive-data/ https://www.bu.edu/tech/support/information-security/ https://www.bu.edu/tech/about/security-resources/bestpractice/","title":"Best Practices for Boston University"},{"location":"get-started/best-practices/best-practices-for-bu/#best-practices-for-boston-university","text":"","title":"Best Practices for Boston University"},{"location":"get-started/best-practices/best-practices-for-bu/#further-references","text":"https://www.bu.edu/tech/services/security/cyber-security/sensitive-data/ https://www.bu.edu/tech/support/information-security/ https://www.bu.edu/tech/about/security-resources/bestpractice/","title":"Further References"},{"location":"get-started/best-practices/best-practices-for-harvard/","text":"Securing Your Public Facing Server Overview This document is aimed to provide you with a few concrete actions you can take to significantly enhance the security of your devices. This advice can be enabled even if your servers are not public facing. However, we strongly recommend implementing these steps if your servers are intended to be accessible to the internet at large. All recommendations and guidance are guided by our policy that has specific requirements, the current policy/requirements for servers at NERC can be found here . Harvard University Security Policy Information Please note that all assets deployed to your NERC project must be compliant with University Security policies. Please familiarize yourself with the Harvard University Information Security Policy and your role in securing data. If you have any questions about how Security should be implemented in the Cloud, please contact your school security officer: \"Havard Security Officer\" . Know Your Data Depending on the data that exists on your servers, you may have to take added or specific steps to safeguard that data. At Harvard, we developed a scale of data classification ranging from 1 to 5 in order of increasing data sensitivity. We have prepared added guidance with examples for both Administrative Data and Research Data . Additionally, if your work involved individuals situated in a European Economic Area, you may be subject to the requirements of the General Data Protection Regulations and more information about your responsibilities can be found here . Host Protection The primary focus of this guide is to provide you with security essentials that we support and that you can implement with little effort. Endpoint Protection Harvard University uses the endpoint protection service: Crowdstrike , which actively checks a machine for indication of malicious activity and will act to both block the activity and remediate the issue. This service is offered free to our community members and requires the installation of an agent on the server that runs transparently. This software enables the Harvard security team to review security events and act as needed. 
Crowdstrike can be downloaded from our repository at agents.itsec.harvard.edu . This software is required for all devices owned by Harvard staff/faculty and is available for all operating systems. Please note To access this repository you need to be on the Harvard Campus Network . Patch/Update Regularly It is common for vendors/developers to announce that they have discovered a new vulnerability in the software you may be using. Many of these vulnerabilities are addressed by new releases that the developer issues. Keeping your software and server operating system up to date with current versions ensures that you are using a version of the software that does not have any known/published vulnerabilities. Vulnerability Management Various software versions have historically been found to be vulnerable to specific attacks and exploits. The risk of running older versions of software is that you may be exposing your machine to a possible known method of attack. To assess which attacks you might be vulnerable to and to receive specific remediation guidance, we recommend enrolling your servers with our Tenable service, which periodically scans the software on your server and correlates the software information with a database of published vulnerabilities. This service will enable you to prioritize which components you need to upgrade or otherwise determine which vulnerabilities you may be exposed to. The Tenable agent runs transparently and can be enabled to work according to the parameters set for your school; the agent can be downloaded here and configuration support can be found by filing a support request via the HUIT support ticketing system: ServiceNow . Safer Applications/ Development Every application has its own unique operational constraints/requirements, and the advice below cannot be comprehensive; however, we can offer a few general recommendations. Secure Credential Management Credentials should not be kept on the server, nor should they be included directly in your programming logic. Attackers often review running code on the server to see if they can obtain any sensitive credentials that may have been included in each script. To better manage your credentials, we recommend using either: \u25cf 1password Credential Manager \u25cf AWS Secrets Not Running the Application as the Root/Superuser Frequently an application needs special permissions and access, and it is often easiest to run the application under the root/superuser account. This is a dangerous practice because the application, when compromised, gives attackers an account with full administrative privileges. Instead, configuring the application to run with an account that has only the permissions it needs is a way to minimize the impact of a given compromise. Safer Networking The goal in safer networking is to minimize the areas that an attacker can target. Minimize Publicly Exposed Services Every port/service open to the internet will be scanned in attempts to access your servers. We recommend that any service/port that does not need to be accessed by the public be placed behind the campus firewall. This will significantly reduce the number of attempts by attackers to compromise your servers. In practice this usually means that you only expose ports 80/443, which enables you to serve websites, while you keep all other services such as SSH, WordPress logins, etc. behind the campus firewall. Strengthen SSH Logins Where possible, and if needed, logins to a Harvard service should be placed behind HarvardKey. 
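On NERC OpenStack, one practical way to follow the "Minimize Publicly Exposed Services" guidance above is with a security group that only admits web traffic from the internet. The sketch below uses the OpenStack CLI; the group and server names are placeholders, and the CIDR used to restrict SSH is an assumption you should replace with your institution's campus or VPN range.

```bash
# Sketch: a security group that exposes only HTTP/HTTPS to the internet.
# "web_only" and "my-server" are placeholder names.
openstack security group create web_only --description "Allow only 80/443 from anywhere"

openstack security group rule create --protocol tcp --dst-port 80  --remote-ip 0.0.0.0/0 web_only
openstack security group rule create --protocol tcp --dst-port 443 --remote-ip 0.0.0.0/0 web_only

# If SSH must stay reachable, restrict it to a trusted range rather than the
# whole internet (10.0.0.0/8 is a placeholder for your campus/VPN range).
openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 10.0.0.0/8 web_only

# Attach the group to an existing instance.
openstack server add security group my-server web_only
```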
For researchers however, the preferred login method is usually SSH and we recommend the following ways to strengthen your SSH accounts \u25cf Disable password only logins In file /etc/ssh/sshd_config change PasswordAuthentication to no to disable tunneled clear text passwords i.e. PasswordAuthentication no . Uncomment the permit empty passwords option in the second line, and, if needed, change yes to no i.e. PermitEmptyPasswords no Then run service ssh restart . \u25cf Use SSH keys with passwords enabled on them \u25cf If possible, enroll the SSH service with a Two-factor authentication provider such as DUO or YubiKey. Attack Detection Despite the best protection, a sophisticated attacker may still find a way to compromise your servers and in those scenarios, we want to enhance your ability to detect activity that may be suspicious. Install Crowdstrike As stated above, Crowdstrike is both an endpoint protection service and also an endpoint detection service. This software understands activities that might be benign in isolation but coupled with other actions on the device may be indicative of a compromise. It also enables the quickest security response. Crowdstrike can be downloaded from our repository at: agents.itsec.harvard.edu this software is needed for all devices owned by Harvard staff/faculty and available for all operating systems. Safeguard your System Logs System logs are logs that check and track activity on your servers, including logins, installed applications, errors and more. Sophisticated attackers will try to delete these logs to frustrate investigations and prevent discovery of their attacks. To ensure that your logs are still accessible and available for review, we recommend that you configure your logs to be sent to a system separate from your servers. This can be either sending logs to an external file storage repository. Or configuring a separate logging system using Splunk . For help setting up logging please file a support request via our support ticketing system: ServiceNow . Escalating an Issue There are several ways you can report a security issue and they are all documented on HUIT Internet Security and Data Privacy group site . In the event you suspect a security issue has occurred or wanted someone to supply a security assessment, please feel free to reach out to the HUIT Internet Security and Data Privacy group, specifically the Operations & Engineering team. \u25cf Email Harvard ITSEC-OPS \u25cf Service Queue \u25cf Harvard HUIT Slack Channel: #isdp-public Further References https://policy.security.harvard.edu/all-servers https://enterprisearchitecture.harvard.edu/security-minimal-viable-product-requirements-huit-hostedmanaged-server-instances https://policy.security.harvard.edu/security-requirements","title":"Best Practices for Harvard University"},{"location":"get-started/best-practices/best-practices-for-harvard/#securing-your-public-facing-server","text":"","title":"Securing Your Public Facing Server"},{"location":"get-started/best-practices/best-practices-for-harvard/#overview","text":"This document is aimed to provide you with a few concrete actions you can take to significantly enhance the security of your devices. This advice can be enabled even if your servers are not public facing. However, we strongly recommend implementing these steps if your servers are intended to be accessible to the internet at large. 
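The SSH hardening edits described above (disabling password-only logins and empty passwords in /etc/ssh/sshd_config, then restarting the service) can also be applied non-interactively. This is a minimal sketch for an Ubuntu-style server; the sed patterns assume the stock commented-out defaults, so review the resulting file, and keep an existing session open until you have confirmed that key-based login still works.

```bash
# Disable tunneled clear-text passwords and empty passwords.
sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitEmptyPasswords .*/PermitEmptyPasswords no/' /etc/ssh/sshd_config

# Validate the configuration before restarting the daemon.
sudo sshd -t

# Restart the SSH service, as in the guide above.
sudo service ssh restart
```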
All recommendations and guidance are guided by our policy, which has specific requirements; the current policy/requirements for servers at NERC can be found here . Harvard University Security Policy Information Please note that all assets deployed to your NERC project must be compliant with University Security policies. Please familiarize yourself with the Harvard University Information Security Policy and your role in securing data. If you have any questions about how Security should be implemented in the Cloud, please contact your school security officer: \"Harvard Security Officer\" .","title":"Overview"},{"location":"get-started/best-practices/best-practices-for-harvard/#know-your-data","text":"Depending on the data that exists on your servers, you may have to take added or specific steps to safeguard that data. At Harvard, we developed a scale of data classification ranging from 1 to 5 in order of increasing data sensitivity. We have prepared added guidance with examples for both Administrative Data and Research Data . Additionally, if your work involves individuals situated in the European Economic Area, you may be subject to the requirements of the General Data Protection Regulations , and more information about your responsibilities can be found here .","title":"Know Your Data"},{"location":"get-started/best-practices/best-practices-for-harvard/#host-protection","text":"The primary focus of this guide is to provide you with security essentials that we support and that you can implement with little effort.","title":"Host Protection"},{"location":"get-started/best-practices/best-practices-for-harvard/#endpoint-protection","text":"Harvard University uses the endpoint protection service: Crowdstrike , which actively checks a machine for indications of malicious activity and will act to both block the activity and remediate the issue. This service is offered free to our community members and requires the installation of an agent on the server that runs transparently. This software enables the Harvard security team to review security events and act as needed. Crowdstrike can be downloaded from our repository at agents.itsec.harvard.edu. This software is required for all devices owned by Harvard staff/faculty and is available for all operating systems. Please note: To access this repository you need to be on the Harvard Campus Network .","title":"Endpoint Protection"},{"location":"get-started/best-practices/best-practices-for-harvard/#patchupdate-regularly","text":"It is common that vendors/developers will announce that they have discovered a new vulnerability in the software you may be using. Many of these vulnerabilities are addressed by new releases that the developer issues. Keeping your software and server operating system up to date with current versions ensures that you are using a version of the software that does not have any known/published vulnerabilities.","title":"Patch/Update Regularly"},{"location":"get-started/best-practices/best-practices-for-harvard/#vulnerability-management","text":"Various software versions have historically been found to be vulnerable to specific attacks and exploits. The risk of running older versions of software is that you may be exposing your machine to a possible known method of attack.
To assess which attacks you might be vulnerable to and be provided with specific remediation guidance, we recommend enrolling your servers with our Tenable service, which periodically scans the software on your server and correlates the software information with a database of published vulnerabilities. This service will enable you to prioritize which component you need to upgrade or otherwise define which vulnerabilities you may be exposed to. The Tenable agent runs transparently and can be enabled to work according to the parameters set for your school; the agent can be downloaded here and configuration support can be found by filing a support request via the HUIT support ticketing system: ServiceNow .","title":"Vulnerability Management"},{"location":"get-started/best-practices/best-practices-for-harvard/#safer-applications-development","text":"Every application has its own unique operational constraints/requirements, and the advice below cannot be comprehensive; however, we can offer a few general recommendations.","title":"Safer Applications/ Development"},{"location":"get-started/best-practices/best-practices-for-harvard/#secure-credential-management","text":"Credentials should not be kept on the server, nor should they be included directly in your programming logic. Attackers often review running code on the server to see if they can obtain any sensitive credentials that may have been included in each script. To better manage your credentials, we recommend either using: \u25cf 1password Credential Manager \u25cf AWS Secrets","title":"Secure Credential Management"},{"location":"get-started/best-practices/best-practices-for-harvard/#not-running-the-application-as-the-rootsuperuser","text":"Frequently, an application needs special permissions and access, and often it is easiest to run an application in the root/superuser account. This is a dangerous practice since the application, when compromised, gives attackers an account with full administrative privileges. Instead, configuring the application to run with an account with only the permissions it needs to run is a way to minimize the impact of a given compromise.","title":"Not Running the Application as the Root/Superuser"},{"location":"get-started/best-practices/best-practices-for-harvard/#safer-networking","text":"The goal in safer networking is to minimize the areas that an attacker can target.","title":"Safer Networking"},{"location":"get-started/best-practices/best-practices-for-harvard/#minimize-publicly-exposed-services","text":"Every port/service open to the internet will be scanned by attackers attempting to access your servers. We recommend that any service/port that does not need to be accessed by the public be placed behind the campus firewall. This will significantly reduce the number of attempts by attackers to compromise your servers. In practice, this usually means that you only expose ports 80/443, which enables you to serve websites, while you keep all other services such as SSH, WordPress logins, etc. behind the campus firewall.","title":"Minimize Publicly Exposed Services"},{"location":"get-started/best-practices/best-practices-for-harvard/#strengthen-ssh-logins","text":"Where possible, and if needed, logins to a Harvard service should be placed behind HarvardKey. For researchers, however, the preferred login method is usually SSH, and we recommend the following ways to strengthen your SSH accounts: \u25cf Disable password-only logins: In the file /etc/ssh/sshd_config , change PasswordAuthentication to no to disable tunneled clear-text passwords, i.e.
PasswordAuthentication no . Uncomment the permit-empty-passwords option in the second line and, if needed, change yes to no , i.e. PermitEmptyPasswords no . Then run service ssh restart . \u25cf Use SSH keys with passphrases enabled on them \u25cf If possible, enroll the SSH service with a Two-factor authentication provider such as DUO or YubiKey.","title":"Strengthen SSH Logins"},{"location":"get-started/best-practices/best-practices-for-harvard/#attack-detection","text":"Despite the best protection, a sophisticated attacker may still find a way to compromise your servers, and in those scenarios, we want to enhance your ability to detect activity that may be suspicious.","title":"Attack Detection"},{"location":"get-started/best-practices/best-practices-for-harvard/#install-crowdstrike","text":"As stated above, Crowdstrike is both an endpoint protection service and an endpoint detection service. This software recognizes activities that might be benign in isolation but, coupled with other actions on the device, may be indicative of a compromise. It also enables the quickest security response. Crowdstrike can be downloaded from our repository at agents.itsec.harvard.edu. This software is needed for all devices owned by Harvard staff/faculty and is available for all operating systems.","title":"Install Crowdstrike"},{"location":"get-started/best-practices/best-practices-for-harvard/#safeguard-your-system-logs","text":"System logs record and track activity on your servers, including logins, installed applications, errors and more. Sophisticated attackers will try to delete these logs to frustrate investigations and prevent discovery of their attacks. To ensure that your logs are still accessible and available for review, we recommend that you configure your logs to be sent to a system separate from your servers. This can be done either by sending logs to an external file storage repository or by configuring a separate logging system using Splunk . For help setting up logging, please file a support request via our support ticketing system: ServiceNow .","title":"Safeguard your System Logs"},{"location":"get-started/best-practices/best-practices-for-harvard/#escalating-an-issue","text":"There are several ways you can report a security issue, and they are all documented on the HUIT Internet Security and Data Privacy group site . In the event you suspect a security issue has occurred or want someone to provide a security assessment, please feel free to reach out to the HUIT Internet Security and Data Privacy group, specifically the Operations & Engineering team. \u25cf Email Harvard ITSEC-OPS \u25cf Service Queue \u25cf Harvard HUIT Slack Channel: #isdp-public","title":"Escalating an Issue"},{"location":"get-started/best-practices/best-practices-for-harvard/#further-references","text":"https://policy.security.harvard.edu/all-servers https://enterprisearchitecture.harvard.edu/security-minimal-viable-product-requirements-huit-hostedmanaged-server-instances https://policy.security.harvard.edu/security-requirements","title":"Further References"},{"location":"get-started/best-practices/best-practices-for-my-institution/","text":"Best Practices for My Institution Institutions with the Best Practices outlines The following institutions using our services have already provided guidelines for best practices: Harvard University Boston University Upcoming Best Practices for other institutions We are in the process of obtaining Best Practices for institutions not listed above.
If your institution has already outlined Best Practices guidelines with your internal IT department, please contact us to list them here by emailing us at help@nerc.mghpcc.org or by submitting a new ticket at the NERC's Support Ticketing System .","title":"Best Practices for My Institution"},{"location":"get-started/best-practices/best-practices-for-my-institution/#best-practices-for-my-institution","text":"","title":"Best Practices for My Institution"},{"location":"get-started/best-practices/best-practices-for-my-institution/#institutions-with-the-best-practices-outlines","text":"The following institutions using our services have already provided guidelines for best practices: Harvard University Boston University Upcoming Best Practices for other institutions We are in the process of obtaining Best Practices for institutions not listed above. If your institution has already outlined Best Practices guidelines with your internal IT department, please contact us to list them here by emailing us at help@nerc.mghpcc.org or by submitting a new ticket at the NERC's Support Ticketing System .","title":"Institutions with the Best Practices outlines"},{"location":"get-started/best-practices/best-practices/","text":"Best Practices for the NERC Users By 2025, according to Gartner's forecast , the responsibility for approximately 99% of cloud security failures will likely lie with customers. These failures can be attributed to the difficulties in gauging and overseeing risks associated with on-prem cloud security. The MGHPCC will enter into a lightweight Memorandum of Understanding (MOU) with each institutional customer that consumes NERC services and that will also clearly explain the security risks and some of the shared responsibilities for the customers while using the NERC. This ensures roles and responsibilities are distinctly understood by each party. NERC Principal Investigators (PIs): PIs are ultimately responsible for their end-users and the security of the systems and applications that are deployed as part of their project(s) on NERC. This includes being responsible for the security of their data hosted on the NERC as well as users, accounts and access management. Every individual user needs to comply with your Institution\u2019s Security and Privacy policies to protect their Data, Endpoints, Accounts and Access management . They must ensure any data created on or uploaded to the NERC is adequately secured. Each customer has complete control over their systems, networks and assets. It is essential to restrict access to the NERC-provided user environment only to authorized users by using secure identity and access management. Furthermore, users have authority over various credential-related aspects, including secure login mechanisms, single sign-on (SSO), and multifactor authentication. Under this model, we are responsible for the operation of the physical infrastructure, which includes responsibility for protecting, patching and maintaining the underlying virtualization layer, servers, disks, storage, network gear, and other hardware and software. NERC users, on the other hand, are responsible for the security of the guest operating system (OS) and software stack, i.e. databases used to run their applications and data. They are also entrusted with safeguarding middleware, containers, workloads, and any code or data generated by the platform. All NERC users are responsible for their use of NERC services, which include: Following the best practices for security on NERC services.
Please review your institutional guidelines next . Complying with security policies regarding VMs and containers. NERC admins are not responsible for maintaining or deploying VMs or containers created by PIs for their projects. See Harvard University and Boston University policies here . We will be adding more institutions under this page soon. Without prior notice, NERC reserves the right to shut down any VM or container that is causing internal or external problems or violating these policies. Adhering to institutional restrictions and compliance policies around the data they upload and provide access to/from NERC. At NERC, we only allow users to store internal data, i.e. information that is chosen to be kept confidential but whose disclosure would not cause material harm to you, your users and your institution. Your institution may have already classified and categorized data and implemented security policies and guidance for each category. If your project includes sensitive data and information, then you might need to contact NERC's admin as soon as possible to discuss other potential options. Backups and/or snapshots are the user's responsibility for volumes/data, configurations, objects, and their state, which are useful in case users accidentally delete/lose their data. NERC admins cannot recover lost data. In addition, while NERC stores data with high redundancy to deal with computer or disk failures, PIs should ensure they have off-site backups for disaster recovery, e.g., to deal with occasional disruptions and outages due to the natural disasters that impact the MGHPCC data center.","title":"Quick Guide and Best Practices"},{"location":"get-started/best-practices/best-practices/#best-practices-for-the-nerc-users","text":"By 2025, according to Gartner's forecast , the responsibility for approximately 99% of cloud security failures will likely lie with customers. These failures can be attributed to the difficulties in gauging and overseeing risks associated with on-prem cloud security. The MGHPCC will enter into a lightweight Memorandum of Understanding (MOU) with each institutional customer that consumes NERC services and that will also clearly explain the security risks and some of the shared responsibilities for the customers while using the NERC. This ensures roles and responsibilities are distinctly understood by each party. NERC Principal Investigators (PIs): PIs are ultimately responsible for their end-users and the security of the systems and applications that are deployed as part of their project(s) on NERC. This includes being responsible for the security of their data hosted on the NERC as well as users, accounts and access management. Every individual user needs to comply with your Institution\u2019s Security and Privacy policies to protect their Data, Endpoints, Accounts and Access management . They must ensure any data created on or uploaded to the NERC is adequately secured. Each customer has complete control over their systems, networks and assets. It is essential to restrict access to the NERC-provided user environment only to authorized users by using secure identity and access management. Furthermore, users have authority over various credential-related aspects, including secure login mechanisms, single sign-on (SSO), and multifactor authentication.
Under this model, we are responsible for the operation of the physical infrastructure, which includes responsibility for protecting, patching and maintaining the underlying virtualization layer, servers, disks, storage, network gear, and other hardware and software. NERC users, on the other hand, are responsible for the security of the guest operating system (OS) and software stack, i.e. databases used to run their applications and data. They are also entrusted with safeguarding middleware, containers, workloads, and any code or data generated by the platform. All NERC users are responsible for their use of NERC services, which include: Following the best practices for security on NERC services. Please review your institutional guidelines next . Complying with security policies regarding VMs and containers. NERC admins are not responsible for maintaining or deploying VMs or containers created by PIs for their projects. See Harvard University and Boston University policies here . We will be adding more institutions under this page soon. Without prior notice, NERC reserves the right to shut down any VM or container that is causing internal or external problems or violating these policies. Adhering to institutional restrictions and compliance policies around the data they upload and provide access to/from NERC. At NERC, we only allow users to store internal data, i.e. information that is chosen to be kept confidential but whose disclosure would not cause material harm to you, your users and your institution. Your institution may have already classified and categorized data and implemented security policies and guidance for each category. If your project includes sensitive data and information, then you might need to contact NERC's admin as soon as possible to discuss other potential options. Backups and/or snapshots are the user's responsibility for volumes/data, configurations, objects, and their state, which are useful in case users accidentally delete/lose their data. NERC admins cannot recover lost data. In addition, while NERC stores data with high redundancy to deal with computer or disk failures, PIs should ensure they have off-site backups for disaster recovery, e.g., to deal with occasional disruptions and outages due to the natural disasters that impact the MGHPCC data center.","title":"Best Practices for the NERC Users"},{"location":"get-started/cost-billing/billing-faqs/","text":"Billing Frequently Asked Questions (FAQs) Our primary focus is to deliver outstanding on-prem cloud services, prioritizing reliability, security, and cutting-edge solutions to meet your research and teaching requirements. To achieve this, we have implemented a cost-effective pricing model that enables us to maintain, enhance, and sustain the quality of our services. By adopting consistent cost structures across all institutions, we can make strategic investments in infrastructure, expand our service portfolio, and enhance our support capabilities for a seamless user experience. Most of the institutions using our services have an MOU (Memorandum Of Understanding) with us to be better aligned to a number of research regulations, policies and requirements, but if your institution does not have an MOU with us, please have someone from your faculty or administration contact us to discuss it soon by emailing us at help@nerc.mghpcc.org or by submitting a new ticket at the NERC's Support Ticketing System . Questions & Answers 1. As a new NERC PI for the first time, am I entitled to any credits?
Yes, you will receive up to $1000 of credit for the first month only . This credit is not transferable to subsequent months . This does not apply to the usage of GPU resources . 2. How often will I be billed? You or your institution will be billed monthly within the first week of each month. 3. If I have an issue with my bill, who do I contact? Please send your requests by emailing us at help@nerc.mghpcc.org or, by submitting a new ticket at the NERC's Support Ticketing System . 4. How do I control costs? Upon creating a project, you will set these resource limits (quotas) for OpenStack (VMs), OpenShift (containers), and storage through ColdFront . This is the maximum amount of resources you can consume at one time. 5. Are we invoicing for CPUs/GPUs only when the VM or Pod is active? Yes. You will only be billed based on your utilization (cores, memory, GPU) when VMs exist ( even if they are Stopped! ) or when pods are running. Utilization will be translated into billable Service Units (SUs) . Persistent storage related to an OpenStack VM or OpenShift Pod will continue to be billed even when the VM is stopped or the Pod is not running . 6. Am I going to incur costs for expired allocations? Currently, a project will continue to be able to utilize expired allocations. So this will continue to incur costs for you. 7. Are VMs invoiced even when shut down? Yes, as long as VMs are using resources they are invoiced. In order not to be billed for a VM you must delete the Instance/VM. It is a good idea to create a snapshot of your VM prior to deleting it. 8. Will OpenStack & OpenShift show on a single invoice? Yes. In the near future customers of NERC will be able to view per project service utilization via the XDMoD tool. 9. What happens when a Flavor is expanded during the month? a. Flavors cannot be expanded. b. You can create a snapshot of an existing VM/Instance and, with that snapshot, deploy a new flavor of VM/Instance. 10. Is storage charged separately? Yes, but on the same invoice. To learn more, see our page on Storage . 11. Will I be charged for storage attached to shut-off instances? Yes. 12. Are we Invoicing Storage using ColdFront Requests or resource usage? a. Storage is invoiced based on Coldfront Requests . b. When you request additional storage through Coldfront, invoicing on that additional storage will occur when your request is fulfilled. When you request a decrease in storage through Request change using ColdFront , your invoicing will adjust accordingly when your request is made. In both cases 'invoicing' means 'accumulate hours for whatever storage quantity was added or removed'. For example: I request an increase in storage, the request is approved and processed. At this point we start Invoicing. I request a decrease in storage. The invoicing for that storage stops immediately. 13. For OpenShift, what values are we using to track CPU & Memory? a. For invoicing we utilize requests.cpu for tracking CPU utilization & requests.memory for tracking memory utilization. b. Utilization will be capped based on the limits you set in ColdFront for your resource allocations. 14. If a single Pod exceeds the resources for a GPU SU, how is it invoiced? It will be invoiced as 2 or more GPU SU's depending on how many multiples of the resources it exceeds. 15. How often will we change the pricing? a. Our current plan is no more than once a year for existing offerings. b. Additional offerings may be added throughout the year (i.e. new types of hardware or storage). 16. 
Is there any NERC Pricing Calculator? Yes. Start your estimate with no commitment based on your resource needs by using this online tool . For more information about how to use this tool, see How to use the NERC Pricing Calculator .","title":"Billing FAQs"},{"location":"get-started/cost-billing/billing-faqs/#billing-frequently-asked-questions-faqs","text":"Our primary focus is to deliver outstanding on-prem cloud services, prioritizing reliability, security, and cutting-edge solutions to meet your research and teaching requirements. To achieve this, we have implemented a cost-effective pricing model that enables us to maintain, enhance, and sustain the quality of our services. By adopting consistent cost structures across all institutions, we can make strategic investments in infrastructure, expand our service portfolio, and enhance our support capabilities for a seamless user experience. Most of the institutions using our services have an MOU (Memorandum Of Understanding) with us to be better aligned to a number of research regulations, policies and requirements but if your institution does not have an MOU with us, please have someone from your faculty or administration contact us to discuss it soon by emailing us at help@nerc.mghpcc.org or, by submitting a new ticket at the NERC's Support Ticketing System .","title":"Billing Frequently Asked Questions (FAQs)"},{"location":"get-started/cost-billing/billing-faqs/#questions-answers","text":"1. As a new NERC PI for the first time, am I entitled to any credits? Yes, you will receive up to $1000 of credit for the first month only . This credit is not transferable to subsequent months . This does not apply to the usage of GPU resources . 2. How often will I be billed? You or your institution will be billed monthly within the first week of each month. 3. If I have an issue with my bill, who do I contact? Please send your requests by emailing us at help@nerc.mghpcc.org or, by submitting a new ticket at the NERC's Support Ticketing System . 4. How do I control costs? Upon creating a project, you will set these resource limits (quotas) for OpenStack (VMs), OpenShift (containers), and storage through ColdFront . This is the maximum amount of resources you can consume at one time. 5. Are we invoicing for CPUs/GPUs only when the VM or Pod is active? Yes. You will only be billed based on your utilization (cores, memory, GPU) when VMs exist ( even if they are Stopped! ) or when pods are running. Utilization will be translated into billable Service Units (SUs) . Persistent storage related to an OpenStack VM or OpenShift Pod will continue to be billed even when the VM is stopped or the Pod is not running . 6. Am I going to incur costs for expired allocations? Currently, a project will continue to be able to utilize expired allocations. So this will continue to incur costs for you. 7. Are VMs invoiced even when shut down? Yes, as long as VMs are using resources they are invoiced. In order not to be billed for a VM you must delete the Instance/VM. It is a good idea to create a snapshot of your VM prior to deleting it. 8. Will OpenStack & OpenShift show on a single invoice? Yes. In the near future customers of NERC will be able to view per project service utilization via the XDMoD tool. 9. What happens when a Flavor is expanded during the month? a. Flavors cannot be expanded. b. You can create a snapshot of an existing VM/Instance and, with that snapshot, deploy a new flavor of VM/Instance. 10. Is storage charged separately? Yes, but on the same invoice. 
To learn more, see our page on Storage . 11. Will I be charged for storage attached to shut-off instances? Yes. 12. Are we Invoicing Storage using ColdFront Requests or resource usage? a. Storage is invoiced based on Coldfront Requests . b. When you request additional storage through Coldfront, invoicing on that additional storage will occur when your request is fulfilled. When you request a decrease in storage through Request change using ColdFront , your invoicing will adjust accordingly when your request is made. In both cases 'invoicing' means 'accumulate hours for whatever storage quantity was added or removed'. For example: I request an increase in storage, the request is approved and processed. At this point we start Invoicing. I request a decrease in storage. The invoicing for that storage stops immediately. 13. For OpenShift, what values are we using to track CPU & Memory? a. For invoicing we utilize requests.cpu for tracking CPU utilization & requests.memory for tracking memory utilization. b. Utilization will be capped based on the limits you set in ColdFront for your resource allocations. 14. If a single Pod exceeds the resources for a GPU SU, how is it invoiced? It will be invoiced as 2 or more GPU SUs depending on how many multiples of the resources it exceeds. 15. How often will we change the pricing? a. Our current plan is no more than once a year for existing offerings. b. Additional offerings may be added throughout the year (i.e. new types of hardware or storage). 16. Is there any NERC Pricing Calculator? Yes. Start your estimate with no commitment based on your resource needs by using this online tool . For more information about how to use this tool, see How to use the NERC Pricing Calculator .","title":"Questions & Answers"},{"location":"get-started/cost-billing/billing-process-for-bu/","text":"Billing Process for Boston University Boston University has elected to receive a centralized invoice for its university investigators and their designated user\u2019s use of NERC services. IS&T will then internally recover the cost from investigators. The process for cost recovery is currently being implemented, and we will reach out to investigators once the process is complete to obtain internal funding information to process your monthly bill. Subsidization of Boston University\u2019s Use of NERC Boston University will subsidize a portion of NERC usage by its investigators. The University will subsidize $100 per month of an investigator\u2019s total usage on NERC, regardless of the number of NERC projects an investigator has established. Monthly subsidies cannot be carried over to subsequent months. The subsidized amount and method are subject to change, and any adjustments will be conveyed directly to investigators and updated on this page. Please direct any questions about BU\u2019s billing process to us by emailing help@nerc.mghpcc.org or by submitting a new ticket to the NERC's Support Ticketing System . Questions about a specific invoice that you have received can be sent to IST-ISR-NERC@bu.edu .","title":"Billing Process for Boston University"},{"location":"get-started/cost-billing/billing-process-for-bu/#billing-process-for-boston-university","text":"Boston University has elected to receive a centralized invoice for its university investigators and their designated user\u2019s use of NERC services. IS&T will then internally recover the cost from investigators.
The process for cost recovery is currently being implemented, and we will reach out to investigators once the process is complete to obtain internal funding information to process your monthly bill.","title":"Billing Process for Boston University"},{"location":"get-started/cost-billing/billing-process-for-bu/#subsidization-of-boston-universitys-use-of-nerc","text":"Boston University will subsidize a portion of NERC usage by its investigators. The University will subsidize $100 per month of an investigator\u2019s total usage on NERC, regardless of the number of NERC projects an investigator has established. Monthly subsidies cannot be carried over to subsequent months. The subsidized amount and method are subject to change, and any adjustments will be conveyed directly to investigators and updated on this page. Please direct any questions about BU\u2019s billing process to us by emailing help@nerc.mghpcc.org or by submitting a new ticket to the NERC's Support Ticketing System . Questions about a specific invoice that you have received can be sent to IST-ISR-NERC@bu.edu .","title":"Subsidization of Boston University\u2019s Use of NERC"},{"location":"get-started/cost-billing/billing-process-for-harvard/","text":"Billing Process for Harvard University Direct Billing for NERC is a convenience service for Harvard Faculty and Departments. HUIT will pay the monthly invoices and then allocate the monthly usage costs on the Harvard University General Ledger. This follows a similar pattern to how other Public Cloud Provider (AWS, Azure, GCP) accounts are billed and leverages the HUIT Central Billing Portal . Your HUIT Customer Code will be matched to your NERC Project Allocation Name as a Billing Asset. In this process you will be asked for your GL billing code, which you can change as needed per project. Please be cognizant that only a single billing code is allowed per billing asset. Therefore, if you have multiple projects with different funds and you are able to, please create a separate project for each fund. Otherwise, you will need to take care of this with internal journals inside of your department or lab. During each monthly billing cycle, the NERC team will upload the billing Comma-separated values (CSV) files to the AWS Object Storage (S3) bucket accessible to the HUIT Central Billing system. The HUIT Central Billing system ingests billing data files provided by NERC, maps the usage costs to HUIT Billing customers (and GL Codes) and then includes those amounts in HUIT Monthly Billing of all customers. This is an automated process. Please follow these two steps to ensure proper billing setup: Each Harvard PI must have a HUIT billing account linked to their NetID (abc123), and NERC requires a HUIT \" Customer Code \" for billing purposes. To create a HUIT billing account, sign up here with your HarvardKey. The PI's submission of the corresponding HUIT \" Customer Code \" is now seamlessly integrated into the PI user account role submission process. This means that PIs can provide the corresponding HUIT \" Customer Code \" either while submitting NERC's PI Request Form or by submitting a new ticket at NERC's Support Ticketing System under the \"NERC PI Account Request\" option in the Help Topic dropdown menu. What if you already have an existing Customer Code? Please note that if you already have an existing active NERC account, you need to provide your HUIT Customer Code to NERC.
If you think your department may already have a HUIT account but you don\u2019t know the corresponding Customer Code, you can contact HUIT Billing to get the required Customer Code. During the Resource Allocation review and approval process, we will utilize the HUIT \"Customer Code\" provided by the PI in step #1 to align it with the approved allocation. Before confirming the mapping of the Customer Code to the Resource Allocation, we will send an email to the PI to confirm its accuracy and then approve the requested allocation. Subsequently, after the allocation is approved, we will request the PI to initiate a change request to input the correct \"Customer Code\" into the allocation's \"Institution-Specific Code\" attribute's value. Very Important Note We recommend keeping your \" Institution-Specific Code \" updated at all times, ensuring it accurately reflects your current and valid Customer Code . The PI or project manager(s) have the authority to request changes for updating the \"Institution-Specific Code\" attribute for each resource allocation. They can do so by submitting a Change Request as outlined here . How to view Project Name, Project ID & Institution-Specific Code? By clicking on the Allocation detail page through ColdFront, you can access information about the allocation of each resource, including OpenStack and OpenShift as described here . You can review and verify Allocated Project Name , Allocated Project ID and Institution-Specific Code attributes, which are located under the \"Allocation Attributes\" section on the detail page as described here . Once we confirm the six-digit HUIT Customer Code for the PI and the correct resource allocation, the NERC admin team will initiate the creation of a new ServiceNow ticket. This will be done by reaching out to HUIT Billing or directly emailing HUIT Billing at huit-billing@harvard.edu for the approved and active allocation request. In this email, the NERC admin needs to specify the Allocated Project ID , Allocated Project Name , Customer Code , and PI's Email address . Then, the HUIT billing team will generate a unique Asset ID to be utilized by the Customer's HUIT billing portal. Important Information regarding HUIT Billing SLA Please note that we will require the PI or Manager(s) to repeat step #2 for any new resource allocation(s) as well as renewed allocation(s). Additionally, the HUIT Billing SLA for new Cloud Billing assets is 2 business days , although most requests are typically completed within 8 hours. Harvard University Security Policy Information Please note that all assets deployed to your NERC project must be compliant with University Security policies as described here . Please familiarize yourself with the Harvard University Information Security Policy and your role in securing data. If you have any questions about how Security should be implemented in the Cloud, please contact your school security officer: \"Harvard Security Officer\" .","title":"Billing Process for Harvard University"},{"location":"get-started/cost-billing/billing-process-for-harvard/#billing-process-for-harvard-university","text":"Direct Billing for NERC is a convenience service for Harvard Faculty and Departments. HUIT will pay the monthly invoices and then allocate the monthly usage costs on the Harvard University General Ledger. This follows a similar pattern to how other Public Cloud Provider (AWS, Azure, GCP) accounts are billed and leverages the HUIT Central Billing Portal .
Your HUIT Customer Code will be matched to your NERC Project Allocation Name as a Billing Asset. In this process you will be asked for your GL billing code, which you can change as needed per project. Please be cognizant that only a single billing code is allowed per billing asset. Therefore, if you have multiple projects with different funds, if you are able, please create a separate project for each fund. Otherwise, you will need to take care of this with internal journals inside of your department or lab. During each monthly billing cycle, the NERC team will upload the billing Comma-separated values (CSV) files to the HUIT Central Billing system accessible AWS Object Storage (S3) bucket. The HUIT Central Billing system ingests billing data files provided by NERC, maps the usage costs to HUIT Billing customers (and GL Codes) and then includes those amounts in HUIT Monthly Billing of all customers. This is an automated process. Please follow these two steps to ensure proper billing setup: Each Harvard PI must have a HUIT billing account linked to their NetID (abc123), and NERC requires a HUIT \" Customer Code \" for billing purposes. To create a HUIT billing account, sign up here with your HarvardKey. The PI's submission of the corresponding HUIT \" Customer Code \" is now seamlessly integrated into the PI user account role submission process. This means that PIs can provide the corresponding HUIT \" Customer Code \" either while submitting NERC's PI Request Form or by submitting a new ticket at NERC's Support Ticketing System under the \"NERC PI Account Request\" option in the Help Topic dropdown menu. What if you already have an existing Customer Code? Please note that if you already have an existing active NERC account, you need to provide your HUIT Customer Code to NERC. If you think your department may already have a HUIT account but you don\u2019t know the corresponding Customer Code then you can contact HUIT Billing to get the required Customer Code. During the Resource Allocation review and approval process, we will utilize the HUIT \"Customer Code\" provided by the PI in step #1 to align it with the approved allocation. Before confirming the mapping of the Customer Code to the Resource Allocation, we will send an email to the PI to confirm its accuracy and then approve the requested allocation. Subsequently, after the allocation is approved, we will request the PI to initiate a change request to input the correct \"Customer Code\" into the allocation's \"Institution-Specific Code\" attribute's value. Very Important Note We recommend keeping your \" Institution-Specific Code \" updated at all times, ensuring it accurately reflects your current and valid Customer Code . The PI or project manager(s) have the authority to request changes for updating the \"Institution-Specific Code\" attribute for each resource allocation. They can do so by submitting a Change Request as outlined here . How to view Project Name, Project ID & Institution-Specific Code? By clicking on the Allocation detail page through ColdFront, you can access information about the allocation of each resource, including OpenStack and OpenShift as described here . You can review and verify Allocated Project Name , Allocated Project ID and Institution-Specific Code attributes, which are located under the \"Allocation Attributes\" section on the detail page as described here . 
Once we confirm the six-digit HUIT Customer Code for the PI and the correct resource allocation, the NERC admin team will initiate the creation of a new ServiceNow ticket. This will be done by reaching out to HUIT Billing or directly emailing HUIT Billing at huit-billing@harvard.edu for the approved and active allocation request. In this email, the NERC admin needs to specify the Allocated Project ID , Allocated Project Name , Customer Code , and PI's Email address . Then, the HUIT billing team will generate a unique Asset ID to be utilized by the Customer's HUIT billing portal. Important Information regarding HUIT Billing SLA Please note that we will require the PI or Manager(s) to repeat step #2 for any new resource allocation(s) as well as renewed allocation(s). Additionally, the HUIT Billing SLA for new Cloud Billing assets is 2 business days , although most requests are typically completed within 8 hours. Harvard University Security Policy Information Please note that all assets deployed to your NERC project must be compliant with University Security policies as described here . Please familiarize yourself with the Harvard University Information Security Policy and your role in securing data. If you have any questions about how Security should be implemented in the Cloud, please contact your school security officer: \"Harvard Security Officer\" .","title":"Billing Process for Harvard University"},{"location":"get-started/cost-billing/billing-process-for-my-institution/","text":"Billing Process for My Institution Memorandum of Understanding (MOU) The New England Research Cloud (NERC) is a shared service offered through the Massachusetts Green High Performance Computing Center (MGHPCC). The MGHPCC will enter into a lightweight Memorandum of Understanding (MOU) with each institutional customer that consumes NERC services. The MOU is intended to ensure the institution maintains access to valuable and relevant cloud services provided by the MGHPCC via the NERC to be better aligned to a number of research regulations, policies, and requirements and also ensure NERC remains sustainable over time. Institutions with established MOUs and Billing Processes For cost recovery purposes, institutional customers may elect to receive one invoice for the usage of NERC services by their PIs and recover the cost internally. Every month, the NERC team will export, back up, and securely store the billing data for all PIs in the form of comma-separated values (CSV) files and provide it to the MGHPCC for billing purposes. The following institutions using our services have established MOUs as well as billing processes with us: Harvard University Boston University Upcoming MOU with other institutions We are in the process of establishing MOUs for institutions not listed above. PIs from other institutions not listed above can still utilize NERC services with the understanding that they are directly accountable for managing their usage and ensuring all service charges are paid promptly. If you have any common questions or need further information, see our Billing FAQs for comprehensive answers.
If your institution does not have an MOU with us, please have someone from your faculty or administration contact us to discuss it soon by emailing us at help@nerc.mghpcc.org or by submitting a new ticket at the NERC's Support Ticketing System .","title":"Billing Process for My Institution"},{"location":"get-started/cost-billing/billing-process-for-my-institution/#billing-process-for-my-institution","text":"","title":"Billing Process for My Institution"},{"location":"get-started/cost-billing/billing-process-for-my-institution/#memorandum-of-understanding-mou","text":"The New England Research Cloud (NERC) is a shared service offered through the Massachusetts Green High Performance Computing Center (MGHPCC). The MGHPCC will enter into a lightweight Memorandum of Understanding (MOU) with each institutional customer that consumes NERC services. The MOU is intended to ensure the institution maintains access to valuable and relevant cloud services provided by the MGHPCC via the NERC to be better aligned to a number of research regulations, policies, and requirements and also ensure NERC remains sustainable over time.","title":"Memorandum of Understanding (MOU)"},{"location":"get-started/cost-billing/billing-process-for-my-institution/#institutions-with-established-mous-and-billing-processes","text":"For cost recovery purposes, institutional customers may elect to receive one invoice for the usage of NERC services by their PIs and recover the cost internally. Every month, the NERC team will export, back up, and securely store the billing data for all PIs in the form of comma-separated values (CSV) files and provide it to the MGHPCC for billing purposes. The following institutions using our services have established MOUs as well as billing processes with us: Harvard University Boston University Upcoming MOU with other institutions We are in the process of establishing MOUs for institutions not listed above. PIs from other institutions not listed above can still utilize NERC services with the understanding that they are directly accountable for managing their usage and ensuring all service charges are paid promptly. If you have any common questions or need further information, see our Billing FAQs for comprehensive answers. If your institution does not have an MOU with us, please have someone from your faculty or administration contact us to discuss it soon by emailing us at help@nerc.mghpcc.org or by submitting a new ticket at the NERC's Support Ticketing System .","title":"Institutions with established MOUs and Billing Processes"},{"location":"get-started/cost-billing/how-pricing-works/","text":"How does NERC pricing work? As a new PI using NERC for the first time, am I entitled to any credits? As a new PI using NERC for the first time, you might wonder if you get any credits. Yes, you'll receive up to $1000 for the first month only . But remember, this credit cannot be used in the following months . Also, it does not apply to GPU resource usage . NERC offers you a pay-as-you-go approach for pricing for our cloud infrastructure offerings (Tiers of Service), including Infrastructure-as-a-Service (IaaS) \u2013 Red Hat OpenStack and Platform-as-a-Service (PaaS) \u2013 Red Hat OpenShift. The exception is the Storage quotas in NERC Storage Tiers, where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool.
For NERC (OpenStack) Resource Allocations, storage quotas are specified by the \"OpenStack Volume Quota (GiB)\" and \"OpenStack Swift Quota (GiB)\" allocation attributes. Whereas for NERC-OCP (OpenShift) Resource Allocations, storage quotas are specified by the \"OpenShift Request on Storage Quota (GiB)\" and \"OpenShift Limit on Ephemeral Storage Quota (GiB)\" allocation attributes. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. NERC offers a flexible cost model where an institution (with a per-project breakdown) is billed solely for the duration of the specific services required. Access is based on project-approved resource quotas, eliminating runaway usage and charges. There are no obligations of long-term contracts or complicated licensing agreements. Each institution will enter a lightweight MOU with MGHPCC that defines the services and billing model. Calculations Service Units (SUs) Name vGPU vCPU RAM (GiB) Current Price CPU 0 1 4 $0.013 A100 GPU 1 24 74 $1.803 A100sxm4 GPU 1 32 240 $2.078 V100 GPU 1 48 192 $1.214 K80 GPU 1 6 28.5 $0.463 Breakdown CPU/GPU SUs Service Units (SUs) can only be purchased as a whole unit. We will charge for Pods (summed up by Project) and VMs on a per-hour basis for any portion of an hour they are used, and any VM \"flavor\"/Pod reservation is charged as a multiplier of the base SU for the maximum resource they reserve. GPU SU Example: A Project or VM with: 1 A100 GPU, 24 vCPUs, 95MiB RAM, 199.2hrs Will be charged: 1 A100 GPU SUs x 200hrs (199.2 rounded up) x $1.803 $360.60 OpenStack CPU SU Example: A Project or VM with: 3 vCPU, 20 GiB RAM, 720hrs (24hr x 30days) Will be charged: 5 CPU SUs due to the extra RAM (20GiB vs. 12GiB(3 x 4GiB)) x 720hrs x $0.013 $46.80 Are VMs invoiced even when shut down? Yes, VMs are invoiced as long as they are utilizing resources. In order not to be billed for a VM, you must delete your Instance/VM. It is advisable to create a snapshot of your VM prior to deleting it, ensuring you have a backup of your data and configurations. By proactively managing your VMs and resources, you can optimize your usage and minimize unnecessary costs. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. OpenShift CPU SU Example: Project with 3 Pods with: i. 1 vCPU, 3 GiB RAM, 720hrs (24hr*30days) ii. 0.1 vCPU, 8 GiB RAM, 720hrs (24hr*30days) iii. 2 vCPU, 4 GiB RAM, 720hrs (24hr*30days) Project Will be charged: RoundUP(Sum( 1 CPU SUs due to first pod * 720hrs * $0.013 2 CPU SUs due to extra RAM (8GiB vs 0.4GiB(0.1*4GiB)) * 720hrs * $0.013 2 CPU SUs due to more CPU (2vCPU vs 1vCPU(4GiB/4)) * 720hrs * $0.013 )) =RoundUP(Sum(720(1+2+2)))*0.013 $46.80 How to calculate cost for all running OpenShift pods? If you prefer a function for the OpenShift pods here it is: Project SU HR count = RoundUP(SUM(Pod1 SU hour count + Pod2 SU hr count + ...)) OpenShift Pods are summed up to the project level so that fractions of CPU/RAM that some pods use will not get overcharged. There will be a split between CPU and GPU pods, as GPU pods cannot currently share resources with CPU pods. Storage Storage is charged separately at a rate of $0.009 TiB/hr or $9.00E-6 GiB/hr . OpenStack volumes remain provisioned until they are deleted. VM's reserve volumes, and you can also create extra volumes yourself. In OpenShift pods, storage is only provisioned while it is active, and in persistent volumes, storage remains provisioned until it is deleted. 
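To make the SU arithmetic above concrete, here is a minimal Python sketch that reproduces the worked OpenStack CPU and GPU examples on this page. The per-SU vCPU/RAM sizes and hourly rates are taken from the Service Units table above, and the rounding behaviour (hours and SU counts rounded up to whole units) follows the billing rules described in this section. It is an illustration only, not an official NERC billing tool.

```python
# Minimal sketch of the SU charging rules described above (illustration only).
# Rates and per-SU vCPU/RAM sizes come from the Service Units table on this page.
import math

SU_TYPES = {
    # name: (vCPU per SU, RAM GiB per SU, price per SU-hour)
    "CPU":  (1, 4, 0.013),
    "A100": (24, 74, 1.803),
}

def su_count(su_type, vcpus, ram_gib, gpus=0):
    """SUs reserved = largest multiple of the base SU across resources, rounded up."""
    su_vcpu, su_ram, _ = SU_TYPES[su_type]
    multiples = [vcpus / su_vcpu, ram_gib / su_ram]
    if gpus:
        multiples.append(gpus)  # one GPU per GPU SU
    return math.ceil(max(multiples))

def charge(su_type, vcpus, ram_gib, hours, gpus=0):
    """Any portion of an hour used is billed as a whole hour (rounded up)."""
    _, _, rate = SU_TYPES[su_type]
    return su_count(su_type, vcpus, ram_gib, gpus) * math.ceil(hours) * rate

# OpenStack CPU SU example: 3 vCPU, 20 GiB RAM for 720 hrs -> 5 SUs -> $46.80
print(round(charge("CPU", vcpus=3, ram_gib=20, hours=720), 2))
# GPU SU example: 1 A100, 24 vCPU, small RAM footprint, 199.2 hrs -> $360.60
print(round(charge("A100", vcpus=24, ram_gib=0.095, hours=199.2, gpus=1), 2))
```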
Very Important: Requested/Approved Allocated Storage Quota and Cost The Storage cost is determined by your requested and approved allocation values . Once approved, these Storage quotas will need to be reserved from the total NESE storage pool for both NERC (OpenStack) and NERC-OCP (OpenShift) resources. For NERC (OpenStack) Resource Allocations, storage quotas are specified by the \"OpenStack Volume Quota (GiB)\" and \"OpenStack Swift Quota (GiB)\" allocation attributes. Whereas for NERC-OCP (OpenShift) Resource Allocations, storage quotas are specified by the \"OpenShift Request on Storage Quota (GiB)\" and \"OpenShift Limit on Ephemeral Storage Quota (GiB)\" allocation attributes. Even if you have deleted all volumes, snapshots, and object storage buckets and objects in your OpenStack and OpenShift projects, it is essential to adjust the approved values for your NERC (OpenStack) and NERC-OCP (OpenShift) resource allocations to zero (0); otherwise you will still be incurring a charge for the approved storage as explained in Billing FAQs . Keep in mind that you can easily scale and expand your current resource allocations within your project. Follow this guide on how to use NERC's ColdFront to reduce your Storage quotas for NERC (OpenStack) allocations and this guide for NERC-OCP (OpenShift) allocations. Storage Example 1: Volume or VM with: 500GiB for 699.2hrs Will be charged: .5 Storage TiB SU (.5 TiB x 700hrs) x $0.009 TiB/hr $3.15 Storage Example 2: Volume or VM with: 10TiB for 720hrs (24hr x 30days) Will be charged: 10 Storage TiB SU (10TiB x 720 hrs) x $0.009 TiB/hr $64.80 Storage includes all types of storage: Object, Block, Ephemeral & Image. High-Level Function To provide a more practical way to calculate your usage, here is a function of how the calculation works for OpenShift and OpenStack. OpenStack = (Resource (vCPU/RAM/vGPU) assigned to VM flavor converted to number of equivalent SUs) * (time VM has been running), rounded up to a whole hour + Extra storage. NERC's OpenStack Flavor List You can find the most up-to-date information on the current NERC's OpenStack flavors with corresponding SUs by referring to this page . OpenShift = (Resource (vCPU/RAM) requested by Pod converted to the number of SU) * (time Pod was running), summed up to the project level and rounded up to the whole hour. How to Pay? To ensure a comprehensive understanding of the billing process and payment options for NERC offerings, we advise PIs/Managers to visit individual pages designated for each institution . These pages provide detailed information specific to each organization's policies and procedures regarding their billing. By exploring these dedicated pages, you can gain insights into the preferred payment methods, invoicing cycles, breakdowns of cost components, and any available discounts or offers. Understanding the institution's unique approach to billing ensures accurate planning, effective financial management, and a transparent relationship with us. If you have any common questions or need further information, see our Billing FAQs for comprehensive answers.","title":"How does NERC pricing work?"},{"location":"get-started/cost-billing/how-pricing-works/#how-does-nerc-pricing-work","text":"As a new PI using NERC for the first time, am I entitled to any credits? As a new PI using NERC for the first time, you might wonder if you get any credits. Yes, you'll receive up to $1000 for the first month only . But remember, this credit cannot be used in the following months .
Also, it does not apply to GPU resource usage . NERC offers you a pay-as-you-go approach for pricing for our cloud infrastructure offerings (Tiers of Service), including Infrastructure-as-a-Service (IaaS) \u2013 Red Hat OpenStack and Platform-as-a-Service (PaaS) \u2013 Red Hat OpenShift. The exception is the Storage quotas in NERC Storage Tiers, where the cost is determined by your requested and approved allocation values to reserve storage from the total NESE storage pool. For NERC (OpenStack) Resource Allocations, storage quotas are specified by the \"OpenStack Volume Quota (GiB)\" and \"OpenStack Swift Quota (GiB)\" allocation attributes. Whereas for NERC-OCP (OpenShift) Resource Allocations, storage quotas are specified by the \"OpenShift Request on Storage Quota (GiB)\" and \"OpenShift Limit on Ephemeral Storage Quota (GiB)\" allocation attributes. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. NERC offers a flexible cost model where an institution (with a per-project breakdown) is billed solely for the duration of the specific services required. Access is based on project-approved resource quotas, eliminating runaway usage and charges. There are no obligations of long-term contracts or complicated licensing agreements. Each institution will enter a lightweight MOU with MGHPCC that defines the services and billing model.","title":"How does NERC pricing work?"},{"location":"get-started/cost-billing/how-pricing-works/#calculations","text":"","title":"Calculations"},{"location":"get-started/cost-billing/how-pricing-works/#service-units-sus","text":"Name vGPU vCPU RAM (GiB) Current Price CPU 0 1 4 $0.013 A100 GPU 1 24 74 $1.803 A100sxm4 GPU 1 32 240 $2.078 V100 GPU 1 48 192 $1.214 K80 GPU 1 6 28.5 $0.463","title":"Service Units (SUs)"},{"location":"get-started/cost-billing/how-pricing-works/#breakdown","text":"","title":"Breakdown"},{"location":"get-started/cost-billing/how-pricing-works/#cpugpu-sus","text":"Service Units (SUs) can only be purchased as a whole unit. We will charge for Pods (summed up by Project) and VMs on a per-hour basis for any portion of an hour they are used, and any VM \"flavor\"/Pod reservation is charged as a multiplier of the base SU for the maximum resource they reserve. GPU SU Example: A Project or VM with: 1 A100 GPU, 24 vCPUs, 95MiB RAM, 199.2hrs Will be charged: 1 A100 GPU SUs x 200hrs (199.2 rounded up) x $1.803 $360.60 OpenStack CPU SU Example: A Project or VM with: 3 vCPU, 20 GiB RAM, 720hrs (24hr x 30days) Will be charged: 5 CPU SUs due to the extra RAM (20GiB vs. 12GiB(3 x 4GiB)) x 720hrs x $0.013 $46.80 Are VMs invoiced even when shut down? Yes, VMs are invoiced as long as they are utilizing resources. In order not to be billed for a VM, you must delete your Instance/VM. It is advisable to create a snapshot of your VM prior to deleting it, ensuring you have a backup of your data and configurations. By proactively managing your VMs and resources, you can optimize your usage and minimize unnecessary costs. If you have common questions or need more information, refer to our Billing FAQs for comprehensive answers. OpenShift CPU SU Example: Project with 3 Pods with: i. 1 vCPU, 3 GiB RAM, 720hrs (24hr*30days) ii. 0.1 vCPU, 8 GiB RAM, 720hrs (24hr*30days) iii. 
2 vCPU, 4 GiB RAM, 720hrs (24hr*30days) Project Will be charged: RoundUP(Sum( 1 CPU SUs due to first pod * 720hrs * $0.013 2 CPU SUs due to extra RAM (8GiB vs 0.4GiB(0.1*4GiB)) * 720hrs * $0.013 2 CPU SUs due to more CPU (2vCPU vs 1vCPU(4GiB/4)) * 720hrs * $0.013 )) =RoundUP(Sum(720(1+2+2)))*0.013 $46.80 How to calculate cost for all running OpenShift pods? If you prefer a function for the OpenShift pods, here it is: Project SU HR count = RoundUP(SUM(Pod1 SU hour count + Pod2 SU hr count + ...)) OpenShift Pods are summed up to the project level so that fractions of CPU/RAM that some pods use will not get overcharged. There will be a split between CPU and GPU pods, as GPU pods cannot currently share resources with CPU pods.","title":"CPU/GPU SUs"},{"location":"get-started/cost-billing/how-pricing-works/#storage","text":"Storage is charged separately at a rate of $0.009 TiB/hr or $9.00E-6 GiB/hr . OpenStack volumes remain provisioned until they are deleted. VMs reserve volumes, and you can also create extra volumes yourself. In OpenShift pods, storage is only provisioned while it is active, and in persistent volumes, storage remains provisioned until it is deleted. Very Important: Requested/Approved Allocated Storage Quota and Cost The Storage cost is determined by your requested and approved allocation values . Once approved, these Storage quotas will need to be reserved from the total NESE storage pool for both NERC (OpenStack) and NERC-OCP (OpenShift) resources. For NERC (OpenStack) Resource Allocations, storage quotas are specified by the \"OpenStack Volume Quota (GiB)\" and \"OpenStack Swift Quota (GiB)\" allocation attributes. For NERC-OCP (OpenShift) Resource Allocations, storage quotas are specified by the \"OpenShift Request on Storage Quota (GiB)\" and \"OpenShift Limit on Ephemeral Storage Quota (GiB)\" allocation attributes. Even if you have deleted all volumes, snapshots, and object storage buckets and objects in your OpenStack and OpenShift projects, it is essential to adjust the approved values for your NERC (OpenStack) and NERC-OCP (OpenShift) resource allocations to zero (0); otherwise, you will still incur a charge for the approved storage as explained in Billing FAQs . Keep in mind that you can easily scale and expand your current resource allocations within your project. Follow this guide on how to use NERC's ColdFront to reduce your Storage quotas for NERC (OpenStack) allocations and this guide for NERC-OCP (OpenShift) allocations. Storage Example 1: Volume or VM with: 500GiB for 699.2hrs Will be charged: 0.5 Storage TiB SU (0.5 TiB x 700hrs) x $0.009 TiB/hr $3.15 Storage Example 2: Volume or VM with: 10TiB for 720hrs (24hr x 30days) Will be charged: 10 Storage TiB SU (10TiB x 720 hrs) x $0.009 TiB/hr $64.80 Storage includes all types of storage: Object, Block, Ephemeral & Image.","title":"Storage"},{"location":"get-started/cost-billing/how-pricing-works/#high-level-function","text":"To provide a more practical way to calculate your usage, here is how the calculation works for OpenShift and OpenStack. OpenStack = (Resource (vCPU/RAM/vGPU) assigned to VM flavor converted to number of equivalent SUs) * (time VM has been running), rounded up to a whole hour + Extra storage. NERC's OpenStack Flavor List You can find the most up-to-date information on the current NERC's OpenStack flavors with corresponding SUs by referring to this page .
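As an unofficial sanity check on the OpenStack formula and the storage examples above, the short shell sketch below redoes two of the worked numbers (the 5-CPU-SU example at $0.013 per SU-hour and Storage Example 2 at $0.009 TiB/hr) using bc. The variable names are made up for this sketch and the rates are simply the ones quoted in the Service Units table; treat it as an illustration, not NERC tooling.

```bash
#!/usr/bin/env bash
# Rough cost check for an OpenStack VM, using the published rates above.
# All variable names here are illustrative, not part of any NERC tool.

cpu_su_rate=0.013      # $ per CPU SU per hour
storage_rate=0.009     # $ per TiB per hour
hours=720              # 24hr x 30days

# OpenStack CPU SU example: 3 vCPU / 20 GiB RAM counts as 5 CPU SUs (RAM-bound).
cpu_sus=5
echo "Compute: \$$(echo "$cpu_sus * $hours * $cpu_su_rate" | bc -l)"   # 46.80

# Storage Example 2: 10 TiB provisioned for 720 hrs.
storage_tib=10
echo "Storage: \$$(echo "$storage_tib * $hours * $storage_rate" | bc -l)"  # 64.80
```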
OpenShift = (Resource (vCPU/RAM) requested by Pod converted to the number of SU) * (time Pod was running), summed up to project level rounded up to the whole hour.","title":"High-Level Function"},{"location":"get-started/cost-billing/how-pricing-works/#how-to-pay","text":"To ensure a comprehensive understanding of the billing process and payment options for NERC offerings, we advise PIs/Managers to visit individual pages designated for each institution . These pages provide detailed information specific to each organization's policies and procedures regarding their billing. By exploring these dedicated pages, you can gain insights into the preferred payment methods, invoicing cycles, breakdowns of cost components, and any available discounts or offers. Understanding the institution's unique approach to billing ensures accurate planning, effective financial management, and a transparent relationship with us. If you have any common questions or need further information, see our Billing FAQs for comprehensive answers.","title":"How to Pay?"},{"location":"get-started/cost-billing/nerc-pricing-calculator/","text":"NERC Pricing Calculator The NERC Pricing Calculator is a Google spreadsheet-based tool for estimating the cost of utilizing various NERC resources in different NERC service offerings. It offers a user-friendly interface, allowing users to input their requirements and customize configurations to generate accurate and tailored cost estimates for optimal budgeting and resource allocation. Start your estimate with no commitment, and explore NERC services and pricing for your research needs by using this online tool . How to use the NERC Pricing Calculator? Please note: you need to make a copy of this tool before estimating the cost. Once copied, you can easily update the corresponding resource type columns' values on your own working sheet, which will reflect your potential Service Units (SU), Rate, and cost per Hour, Month, and Year. This tool has 4 sheets at the bottom as shown here: If you would rather calculate your cost estimates based on the available NERC OpenStack flavors (which define the compute, memory, and storage capacity for your dedicated instances), you can select and use the second sheet, titled \" OpenStack Flavor \". To estimate the cost of NERC OpenShift resources, you can use the first sheet, titled \" Calculate SU \", and input pod-specific resource requests in each row. If you are scaling a pod to more than one replica, you need to enter a new row or entry for each scaled pod. For Storage cost, you need to use the third sheet, titled \" Calculate Storage \". The total cost will then be reflected in the last sheet, titled \" Total Cost \". For more information about how NERC pricing works, see How does NERC pricing work and to learn more about the billing process for your own institution, see Billing Process for My Institution .","title":"NERC Pricing Calculator"},{"location":"get-started/cost-billing/nerc-pricing-calculator/#nerc-pricing-calculator","text":"The NERC Pricing Calculator is a Google spreadsheet-based tool for estimating the cost of utilizing various NERC resources in different NERC service offerings. It offers a user-friendly interface, allowing users to input their requirements and customize configurations to generate accurate and tailored cost estimates for optimal budgeting and resource allocation. Start your estimate with no commitment, and explore NERC services and pricing for your research needs by using this online tool .
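Before plugging numbers into the calculator, you can also sanity-check the OpenShift project-level rounding by hand. The following is a minimal shell sketch (an illustration only, with made-up variable names) of the RoundUP(Sum(...)) function described above, reproducing the three-pod example that comes to $46.80.

```bash
#!/usr/bin/env bash
# Project SU-hour count = RoundUP(sum of per-pod SU-hour counts), then * CPU SU rate.
# The per-pod counts below come from the worked example: 1, 2, and 2 SUs, each for 720 hrs.

rate=0.013
pod_su_hours=(720 1440 1440)   # 1*720, 2*720, 2*720

total=0
for h in "${pod_su_hours[@]}"; do
    total=$(( total + h ))
done

# The project total is rounded up to a whole SU-hour (already whole in this example).
echo "Project SU-hours: $total"                             # 3600
echo "Monthly charge: \$$(echo "$total * $rate" | bc -l)"   # 46.80
```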
How to use the NERC Pricing Calculator? Please Note, you need to make a copy of this tool before estimating the cost and once copied you can easily update corresponding resource type columns' values on your own working sheet that will reflect your potential Service Units (SU), Rate, and cost per Hour, Month and Year. This tool has 4 sheets at the bottom as shown here: If you are more interested to calculate your cost estimates based on the available NERC OpenStack flavors (which define the compute, memory, and storage capacity for your dedicated instances), you can select and use the second sheet titled \" OpenStack Flavor \". For cost estimating the NERC OpenShift resources, you can use the first sheet titled \" Calculate SU \" and input pod specific resource requests in each row. If you are scaling the pods more than one then you need to enter a new row or entry for each scaled pods. For Storage cost, you need to use the third sheet titled \" Calculate Storage \". And then the total cost will be reflected at the last sheet titled \" Total Cost \". For more information about how NERC pricing works, see How does NERC pricing work and to know more about billing process for your own institution, see Billing Process for My Institution .","title":"NERC Pricing Calculator"},{"location":"migration-moc-to-nerc/Step1/","text":"Creating NERC Project and Networks This process includes some waiting for emails and approvals. It is advised to start this process and then move to step 2 and continue with these steps once you recieve approval. Account Creation & Quota Request Register for your new NERC account here . Wait for an approval email. Register to be a PI for a NERC account here . Wait for an approval email. Request the quota necessary for all of your MOC Projects to be added to NERC here (link also in PI approval email). Log in with your institution login by clicking on Log in via OpenID Connect (highlighted in yellow above). Under Projects>> Click on the name of your project (highlighted in yellow above). Scroll down until you see Request Resource Allocation (highlighted in yellow above) and click on it. Fill out the Justification (highlighted in purple above) for the quota allocation. Using your \u201cMOC Instance information\u201d table you gathered from your MOC project calculate the total number of Instances, VCPUs, RAM and use your \u201cMOC Volume Information\u201d table to calculate Disk space you will need. Using the up and down arrows (highlighted in yellow above) or by entering the number manually select the multiple of 1 Instance, 2 vCPUs, 0 GPUs, 4GB RAM, 2 Volumes and 100GB Disk and 1GB Object Storage that you will need. For example if I need 2 instances 2 vCPUs, 3GB RAM, 3 Volumes and 30GB of storage I would type in 2 or click the up arrow once to select 2 units. Click Submit (highlighted in green above). Wait for your allocation approval email. Setup Login to the Dashboard Log into the NERC OpenStack Dashboard using your OpenID Connect password. Click Connect . Select your institution from the drop down (highlighted in yellow above). Click Log On (highlighted in purple). Follow your institution's log on instructions. Setup NERC Network You are then brought to the Project>Compute>Overview location of the Dashboard. This will look very familiar as the MOC and NERC Dashboard are quite similar. Follow the instructions here to set up your network/s (you may also use the default_network if you wish). The networks don't have to exactly match the MOC. 
You only need the networks for creating your new instances (and accessing them once we complete the migration). Follow the instructions here to set up your router/s (you may also use the default_router if you wish). Follow the instructions here to set up your Security Group/s. This is where you can use your \u201cMOC Security Group Information\u201d table to create similar Security Groups to the ones you had in the MOC. Follow the instructions here to set up your SSH Key-pair/s.","title":"Creating NERC Project and Networks"},{"location":"migration-moc-to-nerc/Step1/#creating-nerc-project-and-networks","text":"This process includes some waiting for emails and approvals. It is advised to start this process and then move to step 2 and continue with these steps once you recieve approval.","title":"Creating NERC Project and Networks"},{"location":"migration-moc-to-nerc/Step1/#account-creation-quota-request","text":"Register for your new NERC account here . Wait for an approval email. Register to be a PI for a NERC account here . Wait for an approval email. Request the quota necessary for all of your MOC Projects to be added to NERC here (link also in PI approval email). Log in with your institution login by clicking on Log in via OpenID Connect (highlighted in yellow above). Under Projects>> Click on the name of your project (highlighted in yellow above). Scroll down until you see Request Resource Allocation (highlighted in yellow above) and click on it. Fill out the Justification (highlighted in purple above) for the quota allocation. Using your \u201cMOC Instance information\u201d table you gathered from your MOC project calculate the total number of Instances, VCPUs, RAM and use your \u201cMOC Volume Information\u201d table to calculate Disk space you will need. Using the up and down arrows (highlighted in yellow above) or by entering the number manually select the multiple of 1 Instance, 2 vCPUs, 0 GPUs, 4GB RAM, 2 Volumes and 100GB Disk and 1GB Object Storage that you will need. For example if I need 2 instances 2 vCPUs, 3GB RAM, 3 Volumes and 30GB of storage I would type in 2 or click the up arrow once to select 2 units. Click Submit (highlighted in green above). Wait for your allocation approval email.","title":"Account Creation & Quota Request"},{"location":"migration-moc-to-nerc/Step1/#setup","text":"","title":"Setup"},{"location":"migration-moc-to-nerc/Step1/#login-to-the-dashboard","text":"Log into the NERC OpenStack Dashboard using your OpenID Connect password. Click Connect . Select your institution from the drop down (highlighted in yellow above). Click Log On (highlighted in purple). Follow your institution's log on instructions.","title":"Login to the Dashboard"},{"location":"migration-moc-to-nerc/Step1/#setup-nerc-network","text":"You are then brought to the Project>Compute>Overview location of the Dashboard. This will look very familiar as the MOC and NERC Dashboard are quite similar. Follow the instructions here to set up your network/s (you may also use the default_network if you wish). The networks don't have to exactly match the MOC. You only need the networks for creating your new instances (and accessing them once we complete the migration). Follow the instructions here to set up your router/s (you may also use the default_router if you wish). Follow the instructions here to set up your Security Group/s. This is where you can use your \u201cMOC Security Group Information\u201d table to create similar Security Groups to the ones you had in the MOC. 
Follow the instructions here to set up your SSH Key-pair/s.","title":"Setup NERC Network"},{"location":"migration-moc-to-nerc/Step2/","text":"Identify Volumes, Instances & Security Groups on the MOC that need to be Migrated to the NERC Please read the instructions in their entirety before proceeding. Allow yourself enough time to complete them. Volume Snapshots will not be migrated. If you have a Snapshot you wish to backup please \u201cCreate Volume\u201d from it first. Confirm Access and Login to MOC Dashboard Go to the MOC Dashboard . SSO / Google Login If you have SSO through your Institution or google select Institution Account from the dropdown. Click Connect . Click on University Logins (highlighted in yellow below) if you are using SSO with your Institution. Follow your Institution's login steps after that, and skip to Gathering MOC information for the Migration . Click Google (highlighted in purple above) if your SSO is through Google. Follow standard Google login steps to get in this way, and skip to Gathering MOC information for the Migration . Keystone Credentials If you have a standard login and password leave the dropdown as Keystone Credentials. Enter your User Name. Enter your Password. Click Connect. Don't know your login? If you do not know your login information please create a Password Reset ticket . Click Open a New Ticket (highlighted in yellow above). Click the dropdown and select Forgot Pass & SSO Account Link (highlighted in blue above). In the text field (highlighted in purple above) provide the Institution email, project you are working on and the email address you used to create the account. Click Create Ticket (highlighted in yellow above) and wait for the pinwheel. You will receive an email to let you know that the MOC support staff will get back to you. Gathering MOC information for the Migration You are then brought to the Project>Compute>Overview location of the Dashboard. Create Tables to hold your information Create 3 tables of all of your Instances, your Volumes and Security Groups, for example, if you have 2 instances, 3 volumes and 2 Security Groups like the samples below your lists might look like this: MOC Instance Information Table Instance Name MOC VCPUs MOC Disk MOC RAM MOC UUID Fedora_test 1 10GB 1GB 16a1bfc2-8c90-4361-8c13-64ab40bb6207 Ubuntu_Test 1 10GB 2GB 6a40079a-59f7-407c-9e66-23bc5b749a95 total 2 20GB 3GB MOC Volume Information Table MOC Volume Name MOC Disk MOC Attached To Bootable MOC UUID NERC Volume Name Fedora 10GiB Fedora_test Yes ea45c20b-434a-4c41-8bc6-f48256fc76a8 9c73295d-fdfa-4544-b8b8-a876cc0a1e86 10GiB Ubuntu_Test Yes 9c73295d-fdfa-4544-b8b8-a876cc0a1e86 Snapshot of Fed_Test 10GiB Fedora_test No ea45c20b-434a-4c41-8bc6-f48256fc76a8 total 30GiB MOC Security Group Information Table Security Group Name Direction Ether Type IP Protocol Port Range Remote IP Prefix ssh_only_test Ingress IPv4 TCP 22 0.0.0.0/0 ping_only_test Ingress IPv4 ICMP Any 0.0.0.0/0 Gather the Instance Information Gather the Instance UUIDs (of only the instances that you need to migrate to the NERC). Click Instances (highlighted in pink in image above) Click the Instance Name (highlighted in Yellow above) of the first instance you would like to gather data on. Locate the ID row (highlighted in green above) and copy and save the ID (highlighted in purple above). This is the UUID of your first Instance. Locate the RAM, VCPUs & Disk rows (highlighted in yellow) and copy and save the associated values (highlighted in pink). 
Repeat this section for each Instance you have. Gather the Volume Information Gather the Volume UUIDs (of only the volumes that you need to migrate to the NERC). Click Volumes dropdown. Select Volumes (highlighted in purple above). Click the Volume Name (highlighted in yellow above) of the first volume you would like to gather data on. The name might be the same as the ID (highlighted in blue above). Locate the ID row (highlighted in green above) and copy and save the ID (highlighted in purple above). This is the UUID of your first Volume. Locate the Size row (highlighted in yellow above) and copy and save the Volume size (highlighted in pink above). Locate the Bootable row (highlighted in gray above) and copy and save the Volume size (highlighted in red above). Locate the Attached To row (highlighted in blue above) and copy and save the Instance this Volume is attached to (highlighted in orange above). If the volume is not attached to an image it will state \u201cNot attached\u201d. Repeat this section for each Volume you have. Gather your Security Group Information If you already have all of your Security Group information outside of the OpenStack Dashboard skip to the section. Gather the Security Group information (of only the security groups that you need to migrate to the NERC). Click Network dropdown Click Security Groups (highlighted in yellow above). Click Manage Rules (highlighted in yellow above) of the first Security Group you would like to gather data on. Ignore the first 2 lines (highlighted in yellow above). Write down the important information for all lines after (highlighted in blue above). Direction, Ether Type, IP Protocol, Port Range, Remote IP Prefix, Remote Security Group. Repeat this section for each security group you have.","title":"Identify Volumes, Instances & Security Groups on the MOC that need to be Migrated to the NERC"},{"location":"migration-moc-to-nerc/Step2/#identify-volumes-instances-security-groups-on-the-moc-that-need-to-be-migrated-to-the-nerc","text":"Please read the instructions in their entirety before proceeding. Allow yourself enough time to complete them. Volume Snapshots will not be migrated. If you have a Snapshot you wish to backup please \u201cCreate Volume\u201d from it first.","title":"Identify Volumes, Instances & Security Groups on the MOC that need to be Migrated to the NERC"},{"location":"migration-moc-to-nerc/Step2/#confirm-access-and-login-to-moc-dashboard","text":"Go to the MOC Dashboard .","title":"Confirm Access and Login to MOC Dashboard"},{"location":"migration-moc-to-nerc/Step2/#sso-google-login","text":"If you have SSO through your Institution or google select Institution Account from the dropdown. Click Connect . Click on University Logins (highlighted in yellow below) if you are using SSO with your Institution. Follow your Institution's login steps after that, and skip to Gathering MOC information for the Migration . Click Google (highlighted in purple above) if your SSO is through Google. Follow standard Google login steps to get in this way, and skip to Gathering MOC information for the Migration .","title":"SSO / Google Login"},{"location":"migration-moc-to-nerc/Step2/#keystone-credentials","text":"If you have a standard login and password leave the dropdown as Keystone Credentials. Enter your User Name. Enter your Password. Click Connect.","title":"Keystone Credentials"},{"location":"migration-moc-to-nerc/Step2/#dont-know-your-login","text":"If you do not know your login information please create a Password Reset ticket . 
Click Open a New Ticket (highlighted in yellow above). Click the dropdown and select Forgot Pass & SSO Account Link (highlighted in blue above). In the text field (highlighted in purple above) provide the Institution email, project you are working on and the email address you used to create the account. Click Create Ticket (highlighted in yellow above) and wait for the pinwheel. You will receive an email to let you know that the MOC support staff will get back to you.","title":"Don't know your login?"},{"location":"migration-moc-to-nerc/Step2/#gathering-moc-information-for-the-migration","text":"You are then brought to the Project>Compute>Overview location of the Dashboard.","title":"Gathering MOC information for the Migration"},{"location":"migration-moc-to-nerc/Step2/#create-tables-to-hold-your-information","text":"Create 3 tables of all of your Instances, your Volumes and Security Groups, for example, if you have 2 instances, 3 volumes and 2 Security Groups like the samples below your lists might look like this:","title":"Create Tables to hold your information"},{"location":"migration-moc-to-nerc/Step2/#moc-instance-information-table","text":"Instance Name MOC VCPUs MOC Disk MOC RAM MOC UUID Fedora_test 1 10GB 1GB 16a1bfc2-8c90-4361-8c13-64ab40bb6207 Ubuntu_Test 1 10GB 2GB 6a40079a-59f7-407c-9e66-23bc5b749a95 total 2 20GB 3GB","title":"MOC Instance Information Table"},{"location":"migration-moc-to-nerc/Step2/#moc-volume-information-table","text":"MOC Volume Name MOC Disk MOC Attached To Bootable MOC UUID NERC Volume Name Fedora 10GiB Fedora_test Yes ea45c20b-434a-4c41-8bc6-f48256fc76a8 9c73295d-fdfa-4544-b8b8-a876cc0a1e86 10GiB Ubuntu_Test Yes 9c73295d-fdfa-4544-b8b8-a876cc0a1e86 Snapshot of Fed_Test 10GiB Fedora_test No ea45c20b-434a-4c41-8bc6-f48256fc76a8 total 30GiB","title":"MOC Volume Information Table"},{"location":"migration-moc-to-nerc/Step2/#moc-security-group-information-table","text":"Security Group Name Direction Ether Type IP Protocol Port Range Remote IP Prefix ssh_only_test Ingress IPv4 TCP 22 0.0.0.0/0 ping_only_test Ingress IPv4 ICMP Any 0.0.0.0/0","title":"MOC Security Group Information Table"},{"location":"migration-moc-to-nerc/Step2/#gather-the-instance-information","text":"Gather the Instance UUIDs (of only the instances that you need to migrate to the NERC). Click Instances (highlighted in pink in image above) Click the Instance Name (highlighted in Yellow above) of the first instance you would like to gather data on. Locate the ID row (highlighted in green above) and copy and save the ID (highlighted in purple above). This is the UUID of your first Instance. Locate the RAM, VCPUs & Disk rows (highlighted in yellow) and copy and save the associated values (highlighted in pink). Repeat this section for each Instance you have.","title":"Gather the Instance Information"},{"location":"migration-moc-to-nerc/Step2/#gather-the-volume-information","text":"Gather the Volume UUIDs (of only the volumes that you need to migrate to the NERC). Click Volumes dropdown. Select Volumes (highlighted in purple above). Click the Volume Name (highlighted in yellow above) of the first volume you would like to gather data on. The name might be the same as the ID (highlighted in blue above). Locate the ID row (highlighted in green above) and copy and save the ID (highlighted in purple above). This is the UUID of your first Volume. Locate the Size row (highlighted in yellow above) and copy and save the Volume size (highlighted in pink above). 
Locate the Bootable row (highlighted in gray above) and copy and save the Volume size (highlighted in red above). Locate the Attached To row (highlighted in blue above) and copy and save the Instance this Volume is attached to (highlighted in orange above). If the volume is not attached to an image it will state \u201cNot attached\u201d. Repeat this section for each Volume you have.","title":"Gather the Volume Information"},{"location":"migration-moc-to-nerc/Step2/#gather-your-security-group-information","text":"If you already have all of your Security Group information outside of the OpenStack Dashboard skip to the section. Gather the Security Group information (of only the security groups that you need to migrate to the NERC). Click Network dropdown Click Security Groups (highlighted in yellow above). Click Manage Rules (highlighted in yellow above) of the first Security Group you would like to gather data on. Ignore the first 2 lines (highlighted in yellow above). Write down the important information for all lines after (highlighted in blue above). Direction, Ether Type, IP Protocol, Port Range, Remote IP Prefix, Remote Security Group. Repeat this section for each security group you have.","title":"Gather your Security Group Information"},{"location":"migration-moc-to-nerc/Step3/","text":"Steps to Migrate Volumes from MOC to NERC Create a spreadsheet to track the values you will need The values you will want to keep track of are. Label Value MOCAccess MOCSecret NERCAccess NERCSecret MOCEndPoint https://kzn-swift.massopen.cloud NERCEndPoint https://stack.nerc.mghpcc.org:13808 MinIOVolume MOCVolumeBackupID ContainerName NERCVolumeBackupID NERCVolumeName It is also helpful to have a text editor open so that you can insert the values from the spreadsheet into the commands that need to be run. Create a New MOC Mirror to NERC Instance Follow the instructions here to set up your instance. When selecting the Image please select moc-nerc-migration (highlighted in yellow above). Once the Instance is Running move onto the next step Name your new instance something you will remember, MirrorMOC2NERC for example. Assign a Floating IP to your new instance. If you need assistance please review the Floating IP steps here . Your floating IPs will not be the same as the ones you had in the MOC. Please claim new floating IPs to use. SSH into the MirrorMOC2NERC Instance. The user to use for login is centos . If you have any trouble please review the SSH steps here . Setup Application Credentials Gather MOC Application Credentials Follow the instructions here to create your Application Credentials. Make sure to save the clouds.yaml as clouds_MOC.yaml . Gathering NERC Application Credentials Follow the instructions under the header Command Line setup here to create your Application Credentials. Make sure to save the clouds.yaml as clouds_NERC.yaml . Combine the two clouds.yaml files Make a copy of clouds_MOC.yaml and save as clouds.yaml Open clouds.yaml in a text editor of your choice. Change the openstack (highlighted in yellow above) value to moc (highlighted in yellow two images below). Open clouds_NERC.yaml in a text editor of your choice. Change the openstack (highlighted in yellow above) value to nerc (highlighted in green below). Highlight and copy everything from nerc to the end of the line that starts with auth_type Paste the copied text into clouds.yaml below the line that starts with auth_type. Your new clouds.yaml will look similar to the image above. 
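Because the screenshots referenced above are not reproduced here, the following is a minimal sketch of what the combined ~/.config/openstack/clouds.yaml might end up looking like, written as a shell heredoc. Every <...> value is a placeholder; keep the auth_url, region, and application credential values exactly as they appear in your downloaded clouds_MOC.yaml and clouds_NERC.yaml files.

```bash
# Illustrative only: merge the two downloaded files under the names "moc" and "nerc".
# Every <...> value below is a placeholder taken from your own clouds_MOC.yaml /
# clouds_NERC.yaml; do not copy these literally.
mkdir -p ~/.config/openstack
cat > ~/.config/openstack/clouds.yaml <<'EOF'
clouds:
  moc:
    auth:
      auth_url: <MOC_AUTH_URL>
      application_credential_id: <MOC_APP_CRED_ID>
      application_credential_secret: <MOC_APP_CRED_SECRET>
    region_name: <MOC_REGION>
    interface: public
    identity_api_version: 3
    auth_type: v3applicationcredential
  nerc:
    auth:
      auth_url: <NERC_AUTH_URL>
      application_credential_id: <NERC_APP_CRED_ID>
      application_credential_secret: <NERC_APP_CRED_SECRET>
    region_name: <NERC_REGION>
    interface: public
    identity_api_version: 3
    auth_type: v3applicationcredential
EOF
```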
For further instructions on clouds.yaml files go Here . Moving Application Credentials to VM SSH into the VM created at the top of this page for example MirrorMOC2NERC . Create the openstack config folder and empty clouds.yaml file. mkdir -p ~/.config/openstack cd ~/.config/openstack touch clouds.yaml Open the clouds.yaml file in your favorite text editor. (vi is preinstalled). Copy the entire text inside the clouds.yaml file on your local computer. Paste the contents of the local clouds.yaml file into the clouds.yaml on the VM. Save and exit your VM text editor. Confirm the Instances are Shut Down Confirm the instances are Shut Down. This is a very important step because we will be using the force modifier when we make our backup. The volume can become corrupted if the Instance is not in a Shut Down state. Log into the Instance page of the MOC Dashboard Check the Power State of all of the instances you plan to migrate volumes from are set to Shut Down (highlighted in yellow in image above). If they are not please do so from the Actions Column. Click the drop down arrow under actions. Select Shut Off Instance (blue arrow pointing to it in image above). Backup and Move Volume Data from MOC to NERC SSH into the VM created at the top of this page. For steps on how to do this please see instructions here . Create EC2 credentials in MOC & NERC Generate credentials for Kaizen with the command below. openstack --os-cloud moc ec2 credentials create Copy the access (circled in red above) and secret (circled in blue above) values into your table as and . Generate credentials for the NERC with the command below. openstack --os-cloud nerc ec2 credentials create Copy the access (circled in red above) and secret (circled in blue above) values into your table as as and . Find Object Store Endpoints Look up information on the object-store service in MOC with the command below. openstack --os-cloud moc catalog show object-store -c endpoints If the value is different than https://kzn-swift.massopen.cloud copy the base URL for this service (circled in red above). Look up information on the object-store service in NERC with the command below. openstack --os-cloud nerc catalog show object-store -c endpoints If the value is different than https://stack.nerc.mghpcc.org:13808 copy the base URL for this service (circled in red above). Configure minio client aliases Create a MinIO alias for MOC using the base URL of the \"public\" interface of the object-store service and the EC2 access key (ex. ) & secret key (ex. ) from your table. $ mc alias set moc https://kzn-swift.massopen.cloud mc: Configuration written to `/home/centos/.mc/config.json`. Please update your access credentials. mc: Successfully created `/home/centos/.mc/share`. mc: Initialized share uploads `/home/centos/.mc/share/uploads.json` file. mc: Initialized share downloads `/home/centos/.mc/share/downloads.json` file. Added `moc` successfully. Create a MinIO alias for NERC using the base URL of the \"public\" interface of the object-store service and the EC2 access key (ex. ) & secret key (ex. ) from your table. $ mc alias set nerc https://stack.nerc.mghpcc.org:13808 Added `nerc` successfully. Backup MOC Volumes Locate the desired Volume UUID from the table you created in Step 2 Gathering MOC Information . Add the first Volume ID from your table to the code below in the field and create a Container Name to replace the field. Container Name should be easy to remember as well as unique so include your name. Maybe something like thomasa-backups . 
openstack --os-cloud moc volume backup create --force --container +-------+---------------------+ | Field | Value | +-------+---------------------+ | id | | | name | None | Copy down your to your table. Wait for the backup to become available. You can run the command below to check on the status. If your volume is 25 or larger this might be a good time to go get a warm beverage or lunch. openstack --os-cloud moc volume backup list +---------------------+------+-------------+-----------+------+ | ID | Name | Description | Status | Size | +---------------------+------+-------------+-----------+------+ | | None | None | creating | 10 | ... openstack --os-cloud moc volume backup list +---------------------+------+-------------+-----------+------+ | ID | Name | Description | Status | Size | +---------------------+------+-------------+-----------+------+ | | None | None | available | 10 | Gather MinIO Volume data Get the volume information for future commands. Use the same from when you created the volume backup. It is worth noting that this value shares the ID number with the VolumeID. $ mc ls moc/ [2022-04-29 09:35:16 EDT] 0B / Create a Container on NERC Create the NERC container that we will send the volume to. Use the same from when you created the volume backup. $ mc mb nerc/ Bucket created successfully `nerc/`. Mirror the Volume from MOC to NERC Using the volume label from MinIO and the for the command below you will kick off the move of your volume. This takes around 30 sec per GB of data in your volume. $ mc mirror moc// nerc// ...123a30e_sha256file: 2.61GB / 2.61GB [=========...=========] 42.15Mib/s 1m3s Copy the Backup Record from MOC to NERC Now that we've copied the backup data into the NERC environment, we need to register the backup with the NERC backup service. We do this by copying metadata from MOC. You will need the original you used to create the original Backup. openstack --os-cloud moc volume backup record export -f value > record.txt Next we will import the record into NERC. openstack --os-cloud nerc volume backup record import -f value $(cat record.txt) None Copy value into your table. Create an Empty Volume on NERC to Receive the Backup Create a volume in the NERC environment to receive the backup. This must be the same size or larger than the original volume which can be changed by modifying the field. Remove the \"--bootable\" flag if you are not creating a bootable volume. The field can be any name you want, I would suggest something that will help you keep track of what instance you want to attach it to. Make sure to fill in the table you created in Step 2 with the value in the NERC Volume Name column. openstack --os-cloud nerc volume create --bootable --size +---------------------+----------------+ | Field | Value | +---------------------+----------------+ | attachments | [] | | availability_zone | nova | ... | id | | ... | size | | +---------------------+----------------+ Restore the Backup Restore the Backup to the Volume you just created. openstack --os-cloud nerc volume backup restore Wait for the volume to shift from restoring-backup to available . 
openstack --os-cloud nerc volume list +----------------+------------+------------------+------+-------------+ | ID | Name | Status | Size | Attached to | +----------------+------------+------------------+------+-------------+ | | MOC Volume | restoring-backup | 3 | Migration | openstack --os-cloud nerc volume list +----------------+------------+-----------+------+-------------+ | ID | Name | Status | Size | Attached to | +----------------+------------+-----------+------+-------------+ | | MOC Volume | available | 3 | Migration | Repeat these Backup and Move Volume Data steps for each volume you need to migrate. Create NERC Instances Using MOC Volumes If you have volumes that need to be attached to an instance please follow the next steps. Follow the instructions here to set up your instance/s. Instead of using an Image for your Boot Source you will use a Volume (orange arrow in image below). Select the you created in step Create an Empty Volume on NERC to Recieve the Backup The Flavor will be important as this decides how much vCPUs, RAM, and Disk this instance will consume of your total. If for some reason the earlier approved resource quota is not sufficient you can request further quota by following these steps . Repeat this section for each instance you need to create.","title":"Steps to Migrate Volumes from MOC to NERC"},{"location":"migration-moc-to-nerc/Step3/#steps-to-migrate-volumes-from-moc-to-nerc","text":"","title":"Steps to Migrate Volumes from MOC to NERC"},{"location":"migration-moc-to-nerc/Step3/#create-a-spreadsheet-to-track-the-values-you-will-need","text":"The values you will want to keep track of are. Label Value MOCAccess MOCSecret NERCAccess NERCSecret MOCEndPoint https://kzn-swift.massopen.cloud NERCEndPoint https://stack.nerc.mghpcc.org:13808 MinIOVolume MOCVolumeBackupID ContainerName NERCVolumeBackupID NERCVolumeName It is also helpful to have a text editor open so that you can insert the values from the spreadsheet into the commands that need to be run.","title":"Create a spreadsheet to track the values you will need"},{"location":"migration-moc-to-nerc/Step3/#create-a-new-moc-mirror-to-nerc-instance","text":"Follow the instructions here to set up your instance. When selecting the Image please select moc-nerc-migration (highlighted in yellow above). Once the Instance is Running move onto the next step Name your new instance something you will remember, MirrorMOC2NERC for example. Assign a Floating IP to your new instance. If you need assistance please review the Floating IP steps here . Your floating IPs will not be the same as the ones you had in the MOC. Please claim new floating IPs to use. SSH into the MirrorMOC2NERC Instance. The user to use for login is centos . If you have any trouble please review the SSH steps here .","title":"Create a New MOC Mirror to NERC Instance"},{"location":"migration-moc-to-nerc/Step3/#setup-application-credentials","text":"","title":"Setup Application Credentials"},{"location":"migration-moc-to-nerc/Step3/#gather-moc-application-credentials","text":"Follow the instructions here to create your Application Credentials. Make sure to save the clouds.yaml as clouds_MOC.yaml .","title":"Gather MOC Application Credentials"},{"location":"migration-moc-to-nerc/Step3/#gathering-nerc-application-credentials","text":"Follow the instructions under the header Command Line setup here to create your Application Credentials. 
Make sure to save the clouds.yaml as clouds_NERC.yaml .","title":"Gathering NERC Application Credentials"},{"location":"migration-moc-to-nerc/Step3/#combine-the-two-cloudsyaml-files","text":"Make a copy of clouds_MOC.yaml and save as clouds.yaml Open clouds.yaml in a text editor of your choice. Change the openstack (highlighted in yellow above) value to moc (highlighted in yellow two images below). Open clouds_NERC.yaml in a text editor of your choice. Change the openstack (highlighted in yellow above) value to nerc (highlighted in green below). Highlight and copy everything from nerc to the end of the line that starts with auth_type Paste the copied text into clouds.yaml below the line that starts with auth_type. Your new clouds.yaml will look similar to the image above. For further instructions on clouds.yaml files go Here .","title":"Combine the two clouds.yaml files"},{"location":"migration-moc-to-nerc/Step3/#moving-application-credentials-to-vm","text":"SSH into the VM created at the top of this page for example MirrorMOC2NERC . Create the openstack config folder and empty clouds.yaml file. mkdir -p ~/.config/openstack cd ~/.config/openstack touch clouds.yaml Open the clouds.yaml file in your favorite text editor. (vi is preinstalled). Copy the entire text inside the clouds.yaml file on your local computer. Paste the contents of the local clouds.yaml file into the clouds.yaml on the VM. Save and exit your VM text editor.","title":"Moving Application Credentials to VM"},{"location":"migration-moc-to-nerc/Step3/#confirm-the-instances-are-shut-down","text":"Confirm the instances are Shut Down. This is a very important step because we will be using the force modifier when we make our backup. The volume can become corrupted if the Instance is not in a Shut Down state. Log into the Instance page of the MOC Dashboard Check the Power State of all of the instances you plan to migrate volumes from are set to Shut Down (highlighted in yellow in image above). If they are not please do so from the Actions Column. Click the drop down arrow under actions. Select Shut Off Instance (blue arrow pointing to it in image above).","title":"Confirm the Instances are Shut Down"},{"location":"migration-moc-to-nerc/Step3/#backup-and-move-volume-data-from-moc-to-nerc","text":"SSH into the VM created at the top of this page. For steps on how to do this please see instructions here .","title":"Backup and Move Volume Data from MOC to NERC"},{"location":"migration-moc-to-nerc/Step3/#create-ec2-credentials-in-moc-nerc","text":"Generate credentials for Kaizen with the command below. openstack --os-cloud moc ec2 credentials create Copy the access (circled in red above) and secret (circled in blue above) values into your table as and . Generate credentials for the NERC with the command below. openstack --os-cloud nerc ec2 credentials create Copy the access (circled in red above) and secret (circled in blue above) values into your table as as and .","title":"Create EC2 credentials in MOC & NERC"},{"location":"migration-moc-to-nerc/Step3/#find-object-store-endpoints","text":"Look up information on the object-store service in MOC with the command below. openstack --os-cloud moc catalog show object-store -c endpoints If the value is different than https://kzn-swift.massopen.cloud copy the base URL for this service (circled in red above). Look up information on the object-store service in NERC with the command below. 
openstack --os-cloud nerc catalog show object-store -c endpoints If the value is different than https://stack.nerc.mghpcc.org:13808 copy the base URL for this service (circled in red above).","title":"Find Object Store Endpoints"},{"location":"migration-moc-to-nerc/Step3/#configure-minio-client-aliases","text":"Create a MinIO alias for MOC using the base URL of the \"public\" interface of the object-store service and the EC2 access key (ex. ) & secret key (ex. ) from your table. $ mc alias set moc https://kzn-swift.massopen.cloud mc: Configuration written to `/home/centos/.mc/config.json`. Please update your access credentials. mc: Successfully created `/home/centos/.mc/share`. mc: Initialized share uploads `/home/centos/.mc/share/uploads.json` file. mc: Initialized share downloads `/home/centos/.mc/share/downloads.json` file. Added `moc` successfully. Create a MinIO alias for NERC using the base URL of the \"public\" interface of the object-store service and the EC2 access key (ex. ) & secret key (ex. ) from your table. $ mc alias set nerc https://stack.nerc.mghpcc.org:13808 Added `nerc` successfully.","title":"Configure minio client aliases"},{"location":"migration-moc-to-nerc/Step3/#backup-moc-volumes","text":"Locate the desired Volume UUID from the table you created in Step 2 Gathering MOC Information . Add the first Volume ID from your table to the code below in the field and create a Container Name to replace the field. Container Name should be easy to remember as well as unique so include your name. Maybe something like thomasa-backups . openstack --os-cloud moc volume backup create --force --container +-------+---------------------+ | Field | Value | +-------+---------------------+ | id | | | name | None | Copy down your to your table. Wait for the backup to become available. You can run the command below to check on the status. If your volume is 25 or larger this might be a good time to go get a warm beverage or lunch. openstack --os-cloud moc volume backup list +---------------------+------+-------------+-----------+------+ | ID | Name | Description | Status | Size | +---------------------+------+-------------+-----------+------+ | | None | None | creating | 10 | ... openstack --os-cloud moc volume backup list +---------------------+------+-------------+-----------+------+ | ID | Name | Description | Status | Size | +---------------------+------+-------------+-----------+------+ | | None | None | available | 10 |","title":"Backup MOC Volumes"},{"location":"migration-moc-to-nerc/Step3/#gather-minio-volume-data","text":"Get the volume information for future commands. Use the same from when you created the volume backup. It is worth noting that this value shares the ID number with the VolumeID. $ mc ls moc/ [2022-04-29 09:35:16 EDT] 0B /","title":"Gather MinIO Volume data"},{"location":"migration-moc-to-nerc/Step3/#create-a-container-on-nerc","text":"Create the NERC container that we will send the volume to. Use the same from when you created the volume backup. $ mc mb nerc/ Bucket created successfully `nerc/`.","title":"Create a Container on NERC"},{"location":"migration-moc-to-nerc/Step3/#mirror-the-volume-from-moc-to-nerc","text":"Using the volume label from MinIO and the for the command below you will kick off the move of your volume. This takes around 30 sec per GB of data in your volume. 
$ mc mirror moc// nerc// ...123a30e_sha256file: 2.61GB / 2.61GB [=========...=========] 42.15Mib/s 1m3s","title":"Mirror the Volume from MOC to NERC"},{"location":"migration-moc-to-nerc/Step3/#copy-the-backup-record-from-moc-to-nerc","text":"Now that we've copied the backup data into the NERC environment, we need to register the backup with the NERC backup service. We do this by copying metadata from MOC. You will need the original you used to create the original Backup. openstack --os-cloud moc volume backup record export -f value > record.txt Next we will import the record into NERC. openstack --os-cloud nerc volume backup record import -f value $(cat record.txt) None Copy value into your table.","title":"Copy the Backup Record from MOC to NERC"},{"location":"migration-moc-to-nerc/Step3/#create-an-empty-volume-on-nerc-to-receive-the-backup","text":"Create a volume in the NERC environment to receive the backup. This must be the same size or larger than the original volume which can be changed by modifying the field. Remove the \"--bootable\" flag if you are not creating a bootable volume. The field can be any name you want, I would suggest something that will help you keep track of what instance you want to attach it to. Make sure to fill in the table you created in Step 2 with the value in the NERC Volume Name column. openstack --os-cloud nerc volume create --bootable --size +---------------------+----------------+ | Field | Value | +---------------------+----------------+ | attachments | [] | | availability_zone | nova | ... | id | | ... | size | | +---------------------+----------------+","title":"Create an Empty Volume on NERC to Receive the Backup"},{"location":"migration-moc-to-nerc/Step3/#restore-the-backup","text":"Restore the Backup to the Volume you just created. openstack --os-cloud nerc volume backup restore Wait for the volume to shift from restoring-backup to available . openstack --os-cloud nerc volume list +----------------+------------+------------------+------+-------------+ | ID | Name | Status | Size | Attached to | +----------------+------------+------------------+------+-------------+ | | MOC Volume | restoring-backup | 3 | Migration | openstack --os-cloud nerc volume list +----------------+------------+-----------+------+-------------+ | ID | Name | Status | Size | Attached to | +----------------+------------+-----------+------+-------------+ | | MOC Volume | available | 3 | Migration | Repeat these Backup and Move Volume Data steps for each volume you need to migrate.","title":"Restore the Backup"},{"location":"migration-moc-to-nerc/Step3/#create-nerc-instances-using-moc-volumes","text":"If you have volumes that need to be attached to an instance please follow the next steps. Follow the instructions here to set up your instance/s. Instead of using an Image for your Boot Source you will use a Volume (orange arrow in image below). Select the you created in step Create an Empty Volume on NERC to Recieve the Backup The Flavor will be important as this decides how much vCPUs, RAM, and Disk this instance will consume of your total. If for some reason the earlier approved resource quota is not sufficient you can request further quota by following these steps . Repeat this section for each instance you need to create.","title":"Create NERC Instances Using MOC Volumes"},{"location":"migration-moc-to-nerc/Step4/","text":"Remove Volume Backups to Conserve Storage If you find yourself low on Volume Storage please follow the steps below to remove your old Volume Backups. 
If you are very low on space, you can do this every time you finish copying a new volume to the NERC. If, on the other hand, you have plenty of remaining space, feel free to leave all of your Volume Backups as they are. SSH into the MirrorMOC2NERC Instance . The user to use for login is centos . If you have any trouble please review the SSH steps here . Check Remaining MOC Volume Storage Log into the MOC Dashboard and go to Project > Compute > Overview. Look at the Volume Storage meter (highlighted in yellow in image above). Delete MOC Volume Backups Gather a list of current MOC Volume Backups with the command below. openstack --os-cloud moc volume backup list +---------------------+------+-------------+-----------+------+ | ID | Name | Description | Status | Size | +---------------------+------+-------------+-----------+------+ | | None | None | available | 10 | Only remove Volume Backups you are sure have been moved to the NERC. With the command below you can delete Volume Backups. openstack --os-cloud moc volume backup delete Repeat the MOC Volume Backup section for all MOC Volume Backups you wish to remove. Delete MOC Container Remove the Container, i.e. the one created on the MOC side with a unique name during migration. Replace the field with your own container name created during the migration process: openstack --os-cloud moc container delete --recursive Verify the container is removed from MOC: openstack --os-cloud moc container list Check Remaining NERC Volume Storage Log into the NERC Dashboard and go to Project > Compute > Overview. Look at the Volume Storage meter (highlighted in yellow in image above). Delete NERC Volume Backups Gather a list of current NERC Volume Backups with the command below. openstack --os-cloud nerc volume backup list +---------------------+------+-------------+-----------+------+ | ID | Name | Description | Status | Size | +---------------------+------+-------------+-----------+------+ | | None | None | available | 3 | Only remove Volume Backups you are sure have been migrated to NERC Volumes. Keep in mind that you might not have named the volume the same as on the MOC, so check your table from Step 2 to confirm. You can confirm what Volumes you have in NERC with the following command. openstack --os-cloud nerc volume list +----------------+------------------+--------+------+----------------------------------+ | ID | Name | Status | Size | Attached to | +----------------+------------------+--------+------+----------------------------------+ | | | in-use | 3 | Attached to MOC2NERC on /dev/vda | To remove volume backups please use the command below. openstack --os-cloud nerc volume backup delete Repeat the NERC Volume Backup section for all NERC Volume Backups you wish to remove. Delete NERC Container Remove the Container, i.e. the one created on the NERC side with a unique name during migration to mirror the Volume from MOC to NERC. Replace the field with your own container name created during the migration process: openstack --os-cloud nerc container delete --recursive Verify the container is removed from NERC: openstack --os-cloud nerc container list","title":"Remove Volume Backups to Conserve Storage"},{"location":"migration-moc-to-nerc/Step4/#remove-volume-backups-to-conserve-storage","text":"If you find yourself low on Volume Storage please follow the steps below to remove your old Volume Backups. If you are very low on space, you can do this every time you finish copying a new volume to the NERC. If, on the other hand, you have plenty of remaining space, feel free to leave all of your Volume Backups as they are.
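If you have several backups to clean up, the individual commands above can be strung together in a small shell loop. This is only a sketch that assumes the moc entry from the clouds.yaml you set up in Step 3; the backup IDs and container name are placeholders from your tracking table, and you should list only backups you have already confirmed are restored on NERC.

```bash
#!/usr/bin/env bash
# Sketch: bulk-delete MOC volume backups that have already been restored on NERC,
# then remove the migration container. All values below are placeholders.

# Review what is there first.
openstack --os-cloud moc volume backup list

# Delete only backups you have verified on the NERC side.
backup_ids=("MOCVolumeBackupID1" "MOCVolumeBackupID2")
for backup_id in "${backup_ids[@]}"; do
    openstack --os-cloud moc volume backup delete "$backup_id"
done

# Remove the container used for the migration and confirm it is gone.
openstack --os-cloud moc container delete --recursive "ContainerName"
openstack --os-cloud moc container list
```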
SSH into the MirrorMOC2NERC Instance . The user to use for login is centos . If you have any trouble please review the SSH steps here .","title":"Remove Volume Backups to Conserve Storage"},{"location":"migration-moc-to-nerc/Step4/#check-remaining-moc-volume-storage","text":"Log into the MOC Dashboard and go to Project > Compute > Overview. Look at the Volume Storage meter (highlighted in yellow in image above).","title":"Check Remaining MOC Volume Storage"},{"location":"migration-moc-to-nerc/Step4/#delete-moc-volume-backups","text":"Gather a list of current MOC Volume Backups with the command below. openstack --os-cloud moc volume backup list +---------------------+------+-------------+-----------+------+ | ID | Name | Description | Status | Size | +---------------------+------+-------------+-----------+------+ | | None | None | available | 10 | Only remove Volume Backups you are sure have been moved to the NERC. with the command below you can delete Volume Backups. openstack --os-cloud moc volume backup delete Repeat the MOC Volume Backup section for all MOC Volume Backups you wish to remove.","title":"Delete MOC Volume Backups"},{"location":"migration-moc-to-nerc/Step4/#delete-moc-container-containername","text":"Remove the Container created i.e. on MOC side with a unique name during migration. Replace the field with your own container name created during migration process: openstack --os-cloud moc container delete --recursive Verify the is removed from MOC: openstack --os-cloud moc container list","title":"Delete MOC Container <ContainerName>"},{"location":"migration-moc-to-nerc/Step4/#check-remaining-nerc-volume-storage","text":"Log into the NERC Dashboard and go to Project > Compute > Overview. Look at the Volume Storage meter (highlighted in yellow in image above).","title":"Check Remaining NERC Volume Storage"},{"location":"migration-moc-to-nerc/Step4/#delete-nerc-volume-backups","text":"Gather a list of current NERC Volume Backups with the command below. openstack --os-cloud nerc volume backup list +---------------------+------+-------------+-----------+------+ | ID | Name | Description | Status | Size | +---------------------+------+-------------+-----------+------+ | | None | None | available | 3 | Only remove Volume Backups you are sure have been migrated to NERC Volumes. Keep in mind that you might not have named the volume the same as on the MOC so check your table from Step 2 to confirm.You can confirm what Volumes you have in NERC with the following command. openstack --os-cloud nerc volume list +----------------+------------------+--------+------+----------------------------------+ | ID | Name | Status | Size | Attached to | +----------------+------------------+--------+------+----------------------------------+ | | | in-use | 3 | Attached to MOC2NERC on /dev/vda | To remove volume backups please use the command below. openstack --os-cloud nerc volume backup delete Repeat the NERC Volume Backup section for all NERC Volume Backups you wish to remove.","title":"Delete NERC Volume Backups"},{"location":"migration-moc-to-nerc/Step4/#delete-nerc-container-containername","text":"Remove the Container created i.e. on NERC side with a unique name during migration to mirror the Volume from MOC to NERC. 
Replace the field with your own container name created during migration process: openstack --os-cloud nerc container delete --recursive Verify the is removed from NERC: openstack --os-cloud nerc container list","title":"Delete NERC Container <ContainerName>"},{"location":"openshift/","text":"OpenShift Tutorial Index If you're just starting out, we recommend starting from OpenShift Overview and going through the tutorial in order. If you just need to review a specific step, you can find the page you need in the list below. OpenShift Getting Started OpenShift Overview <<-- Start Here OpenShift Web Console Access the NERC's OpenShift Web Console Web Console Overview OpenShift command-line interface (CLI) Tools OpenShift CLI Tools Overview How to Setup the OpenShift CLI Tools Creating Your First Application on OpenShift Creating A Sample Application Creating Your Own Developer Catalog Service Editing Applications Editing your applications Scaling and Performance Guide Storage Storage Overview Deleting Applications Deleting your applications Decommission OpenShift Resources Decommission OpenShift Resources","title":"OpenShift"},{"location":"openshift/#openshift-tutorial-index","text":"If you're just starting out, we recommend starting from OpenShift Overview and going through the tutorial in order. If you just need to review a specific step, you can find the page you need in the list below.","title":"OpenShift Tutorial Index"},{"location":"openshift/#openshift-getting-started","text":"OpenShift Overview <<-- Start Here","title":"OpenShift Getting Started"},{"location":"openshift/#openshift-web-console","text":"Access the NERC's OpenShift Web Console Web Console Overview","title":"OpenShift Web Console"},{"location":"openshift/#openshift-command-line-interface-cli-tools","text":"OpenShift CLI Tools Overview How to Setup the OpenShift CLI Tools","title":"OpenShift command-line interface (CLI) Tools"},{"location":"openshift/#creating-your-first-application-on-openshift","text":"Creating A Sample Application Creating Your Own Developer Catalog Service","title":"Creating Your First Application on OpenShift"},{"location":"openshift/#editing-applications","text":"Editing your applications Scaling and Performance Guide","title":"Editing Applications"},{"location":"openshift/#storage","text":"Storage Overview","title":"Storage"},{"location":"openshift/#deleting-applications","text":"Deleting your applications","title":"Deleting Applications"},{"location":"openshift/#decommission-openshift-resources","text":"Decommission OpenShift Resources","title":"Decommission OpenShift Resources"},{"location":"openshift/applications/creating-a-sample-application/","text":"Creating A Sample Application NERC's OpenShift service is a platform that provides a cloud-native environment for developing and deploying applications. Here, we walk through the process of creating a simple web application, deploying it. This example uses the Node.js programming language, but the process with other programming languages will be similar. Instructions provided show the tasks using both the web console and the command-line tool. Using the Developer perspective on NERC's OpenShift Web Console Go to the NERC's OpenShift Web Console . Click on the Perspective Switcher drop-down menu and select Developer . In the Navigation Menu , click +Add . Creating applications using samples : Use existing code samples to get started with creating applications on the OpenShift Container Platform. 
Find the Create applications using samples section, click on \" View all samples \", and then select the type of application you want to create (e.g. Node.js, Python, Ruby, etc.). The sample application will be loaded from its Git Repo URL; review or modify the application Name for your application. Alternatively , if you want to create an application from your own source code located in a Git repository, select Import from Git . In the Git Repo URL text box, enter your Git repo URL. For example: https://github.com/myuser/mypublicrepo.git . You may see a warning stating \" URL is valid but cannot be reached \". You can ignore this warning! Click \"Create\" to create your application. Once your application has been created, you can view the details by clicking on the application name in the Project Overview page. On the Topology View menu, click on your application, or the application circle if you are in graphical topology view. In the details panel that displays, scroll to the Routes section on the Resources tab and click on the link to go to the sample application. This will open your application in a new browser window. The link will look similar to http://-.apps.shift.nerc.mghpcc.org . Example: Deploying a Python application For a quick example on how to use the \"Import from Git\" option to deploy a sample Python application, please refer to this guide . Additional resources For more options and customization please read this . Using the CLI (oc command) on your local terminal Alternatively, you can create an application on the NERC's OpenShift cluster by using the oc new-app command from the command line terminal. i. Make sure you have the oc CLI tool installed and configured on your local machine following these steps . Information Some users may have access to multiple projects. Run the following command to switch to a specific project space: oc project . ii. To create an application, you will need to specify the language and runtime for your application. You can do this by using the oc new-app command and specifying a language and runtime. For example, to create a Node.js application, you can run the following command: oc new-app nodejs iii. If you want to create an application from an existing Git repository, you can use the --code flag to specify the URL of the repository. For example: oc new-app --code https://github.com/myuser/mypublicrepo . If you want to use a different name, you can add the --name= argument to the oc new-app command. For example: oc new-app --name=mytestapp https://github.com/myuser/mypublicrepo . The platform will try to automatically detect the programming language of the application code and select the latest version of the base language image available. If oc new-app can't find any suitable Source-To-Image (S2I) builder images based on your source code in your Git repository, is unable to detect the programming language, or detects the wrong one, you can always specify the image you want to use as part of the new-app argument, with oc new-app ~ . If we were using a test application based on Node.js, we could use the same command as before but add nodejs~ before the URL of the Git repository. For example: oc new-app nodejs~https://github.com/myuser/mypublicrepo . Important Note If you are using a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your BuildConfig to access the repository.
For example: oc new-app https://github.com/myuser/yourprivaterepo --source-secret=yoursecret . iv. Once your application has been created, you can run oc status to see if your application was successfully built and deployed. Builds and deployments can sometimes take several minutes to complete, so you may run this several times. You can view the details by running the oc get pods command. This will show you a list of all the pods running in your project, including the pod for your new application. v. When using the oc command-line tool to create an application, a route is not automatically set up to make your application web accessible. Run the following to make the test application web accessible: oc create route edge --service=mytestapp --insecure-policy=Redirect . Once the application is deployed and the route is set up, it can be accessed at a web URL similar to http://mytestapp-.apps.shift.nerc.mghpcc.org . For more additional resources For more options and customization please read this . Using the Developer Catalog on NERC's OpenShift Web Console The Developer Catalog offers a streamlined process for deploying applications and services supported by Operator-backed services like CI/CD, Databases, Builder Images, and Helm Charts. It comprises a diverse array of application components, services, event sources, and source-to-image builders ready for integration into your project. About Quick Start Templates By default, the templates build using a public source repository on GitHub that contains the necessary application code. For more options and customization please read this . Steps Go to the NERC's OpenShift Web Console . Click on the Perspective Switcher drop-down menu and select Developer . In the Navigation Menu , click +Add . You need to find the Developer Catalog section and then select the All services option as shown below: Then, you will be able to search for any available services from the Developer Catalog templates on the catalog and choose the desired type of service or component that you wish to include in your project. For this example, select Databases to list all the database services and then click MariaDB to see the details for the service. To Create Your Own Developer Catalog Service You also have the option to create and integrate custom services into the Developer Catalog using a template, as described here . Once selected by clicking the template, you will see the Instantiate Template web interface as shown below: Clicking \"Instantiate Template\" will display an automatically populated template containing details for the MariaDB service. Click \"Create\" to begin the creation process and enter any custom information required. View the MariaDB service in the Topology view as shown below: For Additional resources For more options and customization please read this .","title":"Creating A Sample Application"},{"location":"openshift/applications/creating-a-sample-application/#creating-a-sample-application","text":"NERC's OpenShift service is a platform that provides a cloud-native environment for developing and deploying applications. Here, we walk through the process of creating a simple web application and deploying it. This example uses the Node.js programming language, but the process with other programming languages will be similar.
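Putting the CLI steps above together, here is a hedged end-to-end sketch; the repository URL and the application name mytestapp are the illustrative values used in the examples above, not real resources, and <project-name> is a placeholder for your own project.

```bash
# Switch to your project space (replace <project-name> with your own project).
oc project <project-name>

# Create the application, forcing the Node.js S2I builder and an explicit name.
oc new-app nodejs~https://github.com/myuser/mypublicrepo --name=mytestapp

# Check build and deployment progress; rerun until the pods are Running.
oc status
oc get pods

# Expose the service with an edge-terminated route that redirects HTTP to HTTPS.
oc create route edge --service=mytestapp --insecure-policy=Redirect

# Show the resulting route and its URL.
oc get route mytestapp
```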
The instructions provided show the tasks using both the web console and the command-line tool.","title":"Creating A Sample Application"},{"location":"openshift/applications/creating-a-sample-application/#using-the-developer-perspective-on-nercs-openshift-web-console","text":"Go to the NERC's OpenShift Web Console . Click on the Perspective Switcher drop-down menu and select Developer . In the Navigation Menu , click +Add . Creating applications using samples : Use existing code samples to get started with creating applications on the OpenShift Container Platform. Find the Create applications using samples section, click on \" View all samples \", and then select the type of application you want to create (e.g. Node.js, Python, Ruby, etc.). The sample application will be loaded from its Git Repo URL; review or modify the application Name for your application. Alternatively , if you want to create an application from your own source code located in a Git repository, select Import from Git . In the Git Repo URL text box, enter your Git repo URL. For example: https://github.com/myuser/mypublicrepo.git . You may see a warning stating \" URL is valid but cannot be reached \". You can ignore this warning! Click \"Create\" to create your application. Once your application has been created, you can view the details by clicking on the application name in the Project Overview page. On the Topology View menu, click on your application, or the application circle if you are in graphical topology view. In the details panel that displays, scroll to the Routes section on the Resources tab and click on the link to go to the sample application. This will open your application in a new browser window. The link will look similar to http://-.apps.shift.nerc.mghpcc.org . Example: Deploying a Python application For a quick example on how to use the \"Import from Git\" option to deploy a sample Python application, please refer to this guide .","title":"Using the Developer perspective on NERC's OpenShift Web Console"},{"location":"openshift/applications/creating-a-sample-application/#additional-resources","text":"For more options and customization please read this .","title":"Additional resources"},{"location":"openshift/applications/creating-a-sample-application/#using-the-cli-oc-command-on-your-local-terminal","text":"Alternatively, you can create an application on the NERC's OpenShift cluster by using the oc new-app command from the command line terminal. i. Make sure you have the oc CLI tool installed and configured on your local machine following these steps . Information Some users may have access to multiple projects. Run the following command to switch to a specific project space: oc project . ii. To create an application, you will need to specify the language and runtime for your application. You can do this by using the oc new-app command and specifying a language and runtime. For example, to create a Node.js application, you can run the following command: oc new-app nodejs iii. If you want to create an application from an existing Git repository, you can use the --code flag to specify the URL of the repository. For example: oc new-app --code https://github.com/myuser/mypublicrepo . If you want to use a different name, you can add the --name= argument to the oc new-app command. For example: oc new-app --name=mytestapp https://github.com/myuser/mypublicrepo .
The platform will try to automatically detect the programming language of the application code and select the latest version of the base language image available. If oc new-app can't find any suitable Source-To-Image (S2I) builder images based on your source code in your Git repository, is unable to detect the programming language, or detects the wrong one, you can always specify the image you want to use as part of the new-app argument, with oc new-app ~ . If we were using a test application based on Node.js, we could use the same command as before but add nodejs~ before the URL of the Git repository. For example: oc new-app nodejs~https://github.com/myuser/mypublicrepo . Important Note If you are using a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your BuildConfig to access the repository. For example: oc new-app https://github.com/myuser/yourprivaterepo --source-secret=yoursecret . iv. Once your application has been created, you can run oc status to see if your application was successfully built and deployed. Builds and deployments can sometimes take several minutes to complete, so you may run this several times. You can view the details by running the oc get pods command. This will show you a list of all the pods running in your project, including the pod for your new application. v. When using the oc command-line tool to create an application, a route is not automatically set up to make your application web accessible. Run the following to make the test application web accessible: oc create route edge --service=mytestapp --insecure-policy=Redirect . Once the application is deployed and the route is set up, it can be accessed at a web URL similar to http://mytestapp-.apps.shift.nerc.mghpcc.org .","title":"Using the CLI (oc command) on your local terminal"},{"location":"openshift/applications/creating-a-sample-application/#for-more-additional-resources","text":"For more options and customization please read this .","title":"For more additional resources"},{"location":"openshift/applications/creating-a-sample-application/#using-the-developer-catalog-on-nercs-openshift-web-console","text":"The Developer Catalog offers a streamlined process for deploying applications and services supported by Operator-backed services like CI/CD, Databases, Builder Images, and Helm Charts. It comprises a diverse array of application components, services, event sources, and source-to-image builders ready for integration into your project. About Quick Start Templates By default, the templates build using a public source repository on GitHub that contains the necessary application code. For more options and customization please read this .","title":"Using the Developer Catalog on NERC's OpenShift Web Console"},{"location":"openshift/applications/creating-a-sample-application/#steps","text":"Go to the NERC's OpenShift Web Console . Click on the Perspective Switcher drop-down menu and select Developer . In the Navigation Menu , click +Add . You need to find the Developer Catalog section and then select the All services option as shown below: Then, you will be able to search for any available services from the Developer Catalog templates on the catalog and choose the desired type of service or component that you wish to include in your project. For this example, select Databases to list all the database services and then click MariaDB to see the details for the service.
To Create Your Own Developer Catalog Service You also have the option to create and integrate custom services into the Developer Catalog using a template, as described here . Once selected by clicking the template, you will see the Instantiate Template web interface as shown below: Clicking \"Instantiate Template\" will display an automatically populated template containing details for the MariaDB service. Click \"Create\" to begin the creation process and enter any custom information required. View the MariaDB service in the Topology view as shown below:","title":"Steps"},{"location":"openshift/applications/creating-a-sample-application/#for-additional-resources","text":"For more options and customization please read this .","title":"For Additional resources"},{"location":"openshift/applications/creating-your-own-developer-catalog-service/","text":"Creating Your Own Developer Catalog Service Here, we walk through the process of creating a simple RStudio web server template that bundles all resources required to run the server, i.e. ConfigMap, Pod, Route, Service, etc., and then initiate and deploy an application from that template. This example template file is readily accessible from the Git Repository . More about Writing Templates For more options and customization please read this . Find the From Local Machine section and click on Import YAML as shown below: In the opened YAML editor, paste the contents of the template copied from the rstudio-server-template.yaml file located at the provided Git Repo . You need to find the Developer Catalog section and then select the All services option as shown below: Then, you will be able to use the created Developer Catalog template by searching for \"RStudio\" on the catalog as shown below: Once selected by clicking the template, you will see the Instantiate Template web interface as shown below: Based on our template definition, we request that users input a preferred password for the RStudio server, so the following interface will prompt for your password, which will be used during login to the RStudio server. Once successfully initiated, you can either open the application URL using the Open URL icon as shown below or you can navigate to the Routes section and click on the Location path as shown below: To get the Username to be used for login on the RStudio server, you need to click on the running pod, i.e. rstudio-server as shown below: Then select the YAML section to find out the attribute value for runAsUser that is used as the Username when signing in to the RStudio server as shown below: Finally, you will be able to see the RStudio web interface! Modifying uploaded templates You can edit a template that has already been uploaded to your project: oc edit template
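For readers who prefer the CLI to the Import YAML editor, the following is a hedged sketch of uploading and instantiating the RStudio template; the template name rstudio-server and the PASSWORD parameter are assumptions inferred from the walkthrough above and should be checked against the actual template definition in the Git repository.

```bash
# Upload the template object into the current project
# (rstudio-server-template.yaml is the file referenced in the walkthrough above).
oc create -f rstudio-server-template.yaml

# Confirm the template is now available in the project.
oc get templates

# Instantiate it; the template name and PASSWORD parameter are assumptions and
# may differ in the actual template definition.
oc new-app --template=rstudio-server -p PASSWORD=<YourPreferredPassword>

# Edit a template that has already been uploaded, if it needs changes.
oc edit template <template-name>
```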