P.S. It's been a year since I started up this new blog and, while my regular posting has been down, this is definitely better than I've ever been about communicating what I know and do. The fact that a year in I'm now a contributor to 2 different PnP repositories and an author on the tech community blog feels good. Now, on to the post.
Ever get annoyed with the Page Properties web part put out by Microsoft? If you've got some OCD issues (like me) then it may not take very long. At ThreeWill, we help clients with their digital workplaces, improving the way their users can obtain information and make sense of it all. Oftentimes the Page Properties web part can be useful here, as we very often add valuable metadata to pages in a digital workplace, which we often tie to page templates as well. News might roll up based on these page properties, which can assist in finding information in many ways. But it's often handy to display this metadata in a clean way on a page as well.

The standard Page Properties web part seeks to do just that. And, for the most part, it does a fine job with it. But it has a few deficiencies. The most annoying thing to me when setting up digital workplaces was that it only supports a white background. But there are other small things, like limitations with pretty standard field types. I like the idea of taking advantage of metadata columns for pages, but being able to use them visually is equally important.

I finally decided to do something about it and build a new version of this web part. So with this in mind, let's lay out our goals for this new web part. We will call it the Advanced Page Properties web part.
For a part like this, it's all about getting the property pane figured out first. We want this to feel familiar too, and not stray too much from the original design, unless it helps.
Let's start by recognizing the chief property that the web part needs: selectedProperties. This array will hold the internal names of the fields that a user has selected for display in our web part. We intend to pass this property down to our React component. Here's a look at our property object:
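A minimal sketch of that (the interface name is my guess, not necessarily what shipped):

```typescript
export interface IAdvancedPagePropertiesWebPartProps {
  // Internal names of the fields the user has selected for display
  selectedProperties: string[];
}
```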
We are using the PnP JS library for gathering the fields in the Site Pages library. Figuring out the right types of filters to gather was a bit of trial-and-error. We are excluding anything that's inherited from a base type or is hidden in any way. We are also excluding 3 standard types so far: boolean, note and user. Note doesn't make sense to display. Boolean can definitely work, but needs a good display convention. User was the only tricky object, which is the reason it isn't done yet.
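A rough sketch of that gathering method, PnP JS v2 style (the method name and exact filter string are my assumptions, pieced together from the exclusions above):

```typescript
import { sp } from "@pnp/sp";
import "@pnp/sp/webs";
import "@pnp/sp/lists";
import "@pnp/sp/fields";
import { IFieldInfo } from "@pnp/sp/fields";

// Gather user-facing fields from Site Pages, skipping hidden and inherited
// fields plus the three types we don't support yet
private async getPageFields(): Promise<IFieldInfo[]> {
  return sp.web.lists.getByTitle("Site Pages").fields.filter(
    "Hidden eq false and FromBaseType eq false and TypeAsString ne 'Boolean' and TypeAsString ne 'Note' and TypeAsString ne 'User'"
  )();
}
```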
We call the above method prior to loading up the property pane.
Our React component needs to properly react to the list of selected properties changing. It also needs to react to our theme changing. I leveraged this awesome post from Hugo Bernier for the theming, so I will not cover that in-depth, although you will see how it's being leveraged in the code snippets below. Here are the properties we plan to start with and respond to:
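Something along these lines (names assumed; themeVariant follows Hugo's pattern):

```typescript
import { WebPartContext } from "@microsoft/sp-webpart-base";
import { IReadonlyTheme } from "@microsoft/sp-component-base";

export interface IAdvancedPagePropertiesProps {
  context: WebPartContext;
  selectedProperties: string[];
  themeVariant: IReadonlyTheme | undefined;
}
```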
We will track the state of our selected properties and their values with hooks. We want to trigger off of changes to our properties, so we will set up a reference to their current state. We will also establish our themeVariant and context at the start of our component.
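Sketched out with hooks (everything beyond the pagePropValues state is my assumption):

```typescript
// Main state object for the life of this component - pagePropValues
const [pagePropValues, setPagePropValues] = React.useState<PageProperty[]>([]);

// Ref to the incoming props, so effects always read their current state
const propsRef = React.useRef(props);
propsRef.current = props;

// Theme and context, established at the start of the component
const { themeVariant, context } = props;
```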
So we are tracking the state of pagePropValues, which is an array of type PageProperty. What is PageProperty?
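A guess at its shape, based on how it gets used below:

```typescript
import { IFieldInfo } from "@pnp/sp/fields";

export interface PageProperty {
  info: IFieldInfo; // field metadata: type, display name, etc.
  values: any[];    // the page's value(s) for this field
}
```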
Our effect watches for changes to those properties, then performs our core logic to refresh the properties and values.
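Assembled, it's probably close to this (the dependency list is my assumption):

```typescript
/**
 * @description Effects to fire whenever the properties change
 */
React.useEffect(() => {
  refreshProperties();
  // No cleanup at this moment
}, [props.selectedProperties, props.themeVariant]);
```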
The core method is refreshProperties. Whenever the selected properties change, it has 2 main calls to make: establish any known metadata for each property that will assist in display, and obtain all actual values for that property for the specific page id we are viewing.
```typescript
/**
 * @description Gets the actual values for any selected properties, along with critical field metadata and ultimately re-sets the pagePropValues state
 */
```
As we loop through all of the properties that have been selected, we make calls with PnP JS to get all of the metadata per field and all of the values per field. The call to get all of the values can return any number of data types, so we need to be prepared for that. This is why it is of type any to start. But this is also why we have a switch statement for certain outlier situations, where the line to set the array of any needs to be done a little differently than the default. Our 3 known cases needing different handling are TaxonomyFieldTypeMulti, MultiChoice and Thumbnail.
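A sketch of that switch (variable names are mine; item is the page's list item and field is the field metadata):

```typescript
const raw: any = item[field.InternalName];
let values: any[] = [];

switch (field.TypeAsString) {
  case "TaxonomyFieldTypeMulti":
    // Multi-value taxonomy arrives as an array of term objects
    values = raw ? raw : [];
    break;
  case "MultiChoice":
    // Multi-choice arrives as a plain string array
    values = raw ? raw : [];
    break;
  case "Thumbnail":
    // Thumbnail arrives as a JSON string that needs parsing first
    values = raw ? [JSON.parse(raw)] : [];
    break;
  default:
    // Everything else is a single value, wrapped so display stays uniform
    values = raw !== undefined && raw !== null ? [raw] : [];
}
```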
This method then calls our final display method, RenderPagePropValue, which performs our 2nd layer of array display, mapping all of the values and providing the correct display based on the field type of the selected property. This is the heart of the display, where various type conversions and logic are done in real time as we display the values, including trying to achieve a slightly more modern SharePoint look using capsules for array labels.
```typescript
/**
 * @description Focuses on the 3rd and final row layer, which is the actual values tied to any property displayed for the page
 */
```
So that's all of the necessary code. Here's what the finished product looks like, compared to the original page properties web part.
This web part is now officially a part of the PnP Web Parts repository and can be found here. I would love to hear about improvements you'd like to see, and obviously you are more than welcome to contribute. I already have a bit of a list of things I'd love to see it do.
Hopefully I've gotten you excited about Page Properties again and you've learned a little along the way about how the current Page Properties part might be doing what it does under the hood. Please consider contributing and feel free to reach out to me anytime. Thanks for your time!
Starting up this little cheat sheet and will be posting the link to it on the top page. As I start to catalog various findings in the minutiae of Fluent styles, as applied to SharePoint sites and parts, I'm jotting them down in this Markdown file.
Modal windows and pop-ups are a staple of applications. Ever just needed to quickly make one in your PowerApp?
Here is a quick and easy step-by-step for making a pop-up modal window on a screen in your PowerApp.
Let's assume we have an app screen that's collecting some survey data. Like so:
It would be nice if we could provide a little more information to explain the Account Number that is needed.
Insert a new Icon onto the screen and give it the Information image.
Select your screen and add the following to the OnVisible event:
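The formula itself isn't shown here, but at minimum it needs to make sure the modal starts hidden (ShowModal is the variable the Modal group binds to later):

```
UpdateContext({ ShowModal: false })
```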
Next, select your new information icon and add the following to the OnSelect event:
Insert a label onto the screen and adjust its width and height to cover the entire screen. Name this label ModalBackground and remove its Text.
Next, set the Fill property of this background label to RGBA(169,169,169,.5). It should look like this:
Add a second label and name it ModalBox. Center its horizontal alignment and set its vertical alignment to the middle. Additionally, give it a Fill of White, center-align its text and give it some extra padding. It should look something like this when you're done.
Set the Text property of the 'ModalBox' label to ModalText.
Add a Cancel Icon to the screen and place it in the top right corner of the ModalBox label. Name it ModalClose. Now you should have something like this:
Put the following code in the OnSelect event of ModalClose:
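Another one-liner; it just hides the modal again:

```
UpdateContext({ ShowModal: false })
```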
Group the 3 new controls you have added and name the group Modal.
Set the Visible property of the Modal group to ShowModal.
If you've done everything right, your modal should look and work like so:
This is a quick and painless process to have modals at your disposal in your next PowerApp. Happy low-coding!
I learned a valuable lesson in the use of Disconnect-PnPOnline : ALWAYS use it!
I had a provisioning script that first connected to the tenant admin to create a new site, then switched contexts to provision files, including a newly laid out home page, onto the newly created site.
I had a connection issue midway through one particular run, where it successfully connected to the tenant admin but didn't successfully connect to the new site, but the script kept running successfully because, hey, it still had a context - to the tenant admin site.
Essentially, I was able to change the home page of the SharePoint Admin Center. Thankfully, I was also able to use Set-PnPHomePage to set it back to _layouts/15/online/AdminHome.aspx#/home. But my heart was skipping a few beats there for a bit.
If I had just used Disconnect-PnPOnline in between switching contexts then everything would have just stopped. So you've now been warned.
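Here's the shape of the fix (site URLs and the template path are placeholders):

```powershell
# Connect to the tenant admin site and create the new site
Connect-PnPOnline -Url https://contoso-admin.sharepoint.com
New-PnPSite -Type CommunicationSite -Title "New Site" -Url https://contoso.sharepoint.com/sites/newsite

# Drop the admin context on purpose - if the next connection fails,
# the script dies here instead of running against the admin site
Disconnect-PnPOnline

# Connect to the new site and provision files, including the home page
Connect-PnPOnline -Url https://contoso.sharepoint.com/sites/newsite
Apply-PnPProvisioningTemplate -Path .\homepage-template.xml
```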
Microsoft Graph Connectors are now in preview mode and appear to be showing up for most folks who have turned on targeted releases in their Dev tenants. So, naturally, I had to see what we could break... er, do.
One of the first things I wanted to try out was the SQL Server connector. This is a big win for SharePoint and 365, being able to include third-party structured data without any custom development. So it felt like a great candidate to try out first. But here was problem 1: I had no "on-prem" SQL database anymore. And SQL is expensive. Or is it? Pete Skelly turned my eyes toward serverless Azure SQL - a consumption-based model for SQL. Seems pretty new and cool to me. Let's throw it in!
We want to mimic an "on-prem" experience, so that means trying to use the Power BI gateway to connect to our SQL database. So let's make sure we have a VM to install the gateway onto that will sign in with a dev tenant login and can then connect to the SQL server. Things are looking good. Anything else?
What if we see how much of this we can build with ARM templates? Yeah, that sounds good. The ARM Tools extension for VSCode is pretty solid. Channel 9's DevOpsLab just started a video series on Demystifying ARM Templates, which shows the power of the MS team's JSON schema they've built. So yeah, why not throw that in. Plus, I've been wanting an opportunity to use the ARM Template Viewer extension in VSCode too, so now I have a reason and a way to visually represent almost everything we're creating for this little Graph Connector.
The AdventureWorks database loaded up with Product data
Windows VM with a Power BI gateway to connect to the Azure SQL db
A new graph connector, which uses the gateway to crawl Products, to display in a new vertical in MS Search results
A custom adaptive card display (we gotta see if we can pipe in product pictures, right?)
Here's a high-level overview of the data flow, from SQL-to-VM-to-365:
The VM's gateway will be controlled by the 365 search service and graph connector API. Every 15 minutes, 365 will attempt an incremental crawl by reaching out to the gateway on the VM, which will receive a query to execute against the SQL DB on Azure. So let's get this party started.
Don't feel like making all of this? No problem! I've already set it all up and exported it to an ARM Template for you! I even went ahead and took the time to set it up a little better for automation. I've provided 2 files for you to use to get all of this provisioned automagically in your own tenant/subscription:
runit.azcli - this is an Azure CLI script that is built for Bash.
the ARM template JSON - this describes every resource to be deployed and holds the preset variables covered below.
Simply put the 2 files next to each other from wherever you plan on running them. If you want to run it from the cloud shell, like I did, you'll just have to upload them first, like so.
The ARM Template JSON has a number of preset variables you should know about. If you want it to run successfully, you'll need to set them to be something unique for you. These variables are:
rg - The name of the resource group you plan on putting everything in. That also gets set in the runit.azcli. This name just gets used in some of the more generic naming of things like the NICs and NSGs that I'm hoping you don't have to even think about.
vmname - The name of the VM that will get made for you
location - This also gets set in runit.azcli so you don't necessarily have to set it here.
servers_serverlessserverpoc_name - The name of the database server that will be created
adventureworks_dbname - The name of the database (defaults to HomolWorks)
uniquedomainprefix - Should probably be the same as the name you pick for the VM - using this will make RDP'ing easier
my_ip - if you set this, then your IP will be automatically added to the firewall rules for the Database server
sqladmin_account - the SQL admin user name. Defaults to POCAdmin
vmadmin_name - the VM admin user name. Defaults to captainawesome
The script only has 3 variables it tries to preset before deploying the ARM template. Naturally you could edit this all you want and feed in more parameters rather than setting them up in the .json file. Only 2 variables really need to be set: rg and location. This ARM template is scoped to a resource group, so the script creates that Resource Group first, then deploys the ARM template to the group. Note that you can optionally set the IP you're developing from in the my_ip variable.
Once you've set rg and location, run the runit.azcli script from a terminal using Bash or Zsh. I ran mine from Azure Cloud Shell.
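If you crack open runit.azcli, the heart of it presumably boils down to those two steps (the template filename is a stand-in):

```bash
#!/usr/bin/env bash
# Set these to something unique for you
rg="graphconnector-poc-rg"
location="eastus"

# The template is scoped to a resource group, so create that first...
az group create --name "$rg" --location "$location"

# ...then deploy the ARM template into it
az deployment group create --resource-group "$rg" --template-file ./template.json
```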
Not so bad so far, right? If you're feeling really adventurous, be sure to look over the ARM Template in full to see everything we've got going on. Lots of small stuff has been squirreled away so you don't have to care.
NOTE that I did actually get an error running the template, which was during the setup of one of the advisors for the SQL database, where I was told the resources were busy. But everything had actually executed fine, at least for the purposes of this Proof-of-Concept. So if you get that error, just make sure that the AdventureWorks database was provisioned in the new resource group and you should be good to go.
Let's review everything we have now that the script and ARM template have finished. First let's just take a look at the finished Resource Group. Quite a few resources have been generated and most of them will cost next to nothing. And here's the best part: you can just delete the resource group whenever and be done with this.
Click on the VM and notice the top right portion of the Overview tab. We have a static IP and a DNS. This will make for easy RDP, plus it has been beneficial to setting up our DB firewall.
Let's see where that IP is used by SQL. Go back to the resource group and click on the SQL server. Then select the Firewalls and Virtual Networks option. Note in the IP Address rules that we already have an item added - the same IP as our VM. If you hadn't set up your IP, now would be a good time to add that and Save.
Remember that DNS? Well now's our chance to use it. Connect with your favorite RDP client, using the domain that was created and the VM admin account/password that you setup in the ARM template parameters.
Okay, we're almost there. Time to set up our SQL graph connector. First, let's confirm that you can even do this. As we mentioned earlier, this is still in preview mode at the time of this writing, so if you want to have a shot at seeing the connectors, you need to set your organization settings to receive targeted releases. See the animation below for accessing that setting from the 365 admin center:
Here's where you go to access connectors. In the 365 admin center, select Settings->Microsoft Search and confirm that you have the Connectors tab.
To connect to our SQL Server, click Add, then Microsoft SQL Server. Then supply a Name, a Connection ID (just a unique identifier of your choosing) and a description. Finally accept the terms to proceed.
After we click Next, it's time for us to setup our database connection. Finally! To get your connection string, head back over to Azure and select your database. Then select the Connection Strings option along the left. The first tab holds the information you'll need over in your 365 dev tenant.
Note that in the Database Settings step for the graph connector, the On-premises gateway is selectable. You should see the name you provided for your gateway in there. Select it. Fill out the parts needed for the connection and click Test Connection.
What's happening under the hood is that 365 is reaching out to the gateway agent running on your VM and then making the connection to the database, which has allowed your VM to connect through its firewall rules. Pretty neat, huh?
Okay, so the last few steps of the Graph Connector are about setting up the data to be crawled. So what exactly do we have here? We're going to focus on Products. Here's a quick snapshot of the tables though from the Azure Data Studio:
We intend to target Product and we will also include the Product Model through a join.
The next step in the Graph Connector setup is to provide our full crawl query. Note that we are expected to provide a date field that we will base the query off of, so that only items added after the previous crawl will be pulled to make things more efficient. This is the watermark field (@watermark). We have chosen CreateDate as that field.
Hold up. There is no CreateDate field. Well done, young padawan. Oddly enough, the MS folks didn't think to include one. So we will need to add it for them. Go back to Azure and select your AdventureWorks database. Click the Query Editor (preview) on the left-hand side and log in with the database admin account you provisioned. Run this query:
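I haven't preserved the exact statement here, but presumably it bolts a CreateDate column onto the Product table, something like:

```sql
-- A guess at the fix: add CreateDate, backfilled for existing rows via the default
ALTER TABLE [SalesLT].[Product]
ADD CreateDate datetime NOT NULL DEFAULT GETDATE();
```

With that column in place, we can hand the wizard its full crawl query. Here's a reconstruction (the ProductModel join is straight from the original; the column list is my assumption, with ListPrice deliberately left out for the reason below):

```sql
SELECT p.ProductID,
       p.Name,
       p.ProductNumber,
       p.Color,
       p.Size,
       pm.Name AS ProductModel,
       p.ThumbnailPhotoFileName,
       p.CreateDate AS ProductCreated
FROM [SalesLT].[Product] p
INNER JOIN [SalesLT].[ProductModel] pm ON p.ProductModelID = pm.ProductModelID
WHERE p.CreateDate > @watermark
ORDER BY p.CreateDate ASC
```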
Choose DateTime as the watermark data type and press the Validate Query button. Select the watermark field, which we aliased as ProductCreated, then select the unique identifier field, which is ProductId. Notice we have a print out of the first 10 rows of data as well. Interesting side note here: currently money and float fields appear to not be supported by the graph connector. That's why ListPrice was left out of the query.
Next we set up the incremental crawl. This will pick up anything that has been modified since the last crawl run. This step is optional, but I recommend it. The query looks very similar to the full crawl, except the @watermark is based on ModifiedDate instead. Here's the query:
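Reconstructed the same way as the full crawl (column list assumed):

```sql
SELECT p.ProductID,
       p.Name,
       p.ProductNumber,
       p.Color,
       p.Size,
       pm.Name AS ProductModel,
       p.ThumbnailPhotoFileName,
       p.ModifiedDate AS ProductModified
FROM [SalesLT].[Product] p
INNER JOIN [SalesLT].[ProductModel] pm ON p.ProductModelID = pm.ProductModelID
WHERE p.ModifiedDate > @watermark
ORDER BY p.ModifiedDate ASC
```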
Similar to the full crawl, choose DateTime as the watermark data type and press the Validate Query button. Select the watermark field, which we aliased as ProductModified, then select the unique identifier field, which is ProductId. I skipped the soft delete section for now. I also skipped leveraging any type of row-level security, which is actually supported and documented in Microsoft's writeup on the SQL Graph Connectors. Essentially, you would need to include ACL columns in the full and incremental crawls, named AllowedUsers, AllowedGroups, DeniedUsers, and DeniedGroups. Each column is expected to be comma- or semicolon-delimited and can include UPNs, AAD IDs or security IDs. I just wanted to see if we could get this data coming back and looking good!
The last big step is our Manage Schema step. We define what can be queried, searched and retrieved. If an item is found in search, you can only show it in the adaptive card layout if it's been marked as Retrievable, thus added to the schema. So select what you want. I went with anything text-based to be searchable and pulled almost all fields into the schema by marking them as Retrievable.
The last few steps are a bit of a breeze, especially since I chose not to do ACL columns and row level security.
Our final step is to create our vertical and design the result set. After creating the connector, you'll see some callouts on your new connector in the Required Actions column. You can either click there to set things up or you can access Results Types and Verticals in the Customizations tab. As of the time of this writing, the best order of events is to setup the Result Type first then the Vertical. If you do it the other way, you will have 1 extra step to Enable the Vertical after you finish setting up your result type.
Let's start with the Result Type. Here are the basic steps, which the below GIF flies through:
Set a name for the result type
Tie it to a Content Source, which will be the new Graph Connector you made
Apply any display rules (I skipped this)
Design the layout
Review and Finish
The big thing here is our layout design. We are provided a link on this step to the Search Layout Designer. Select a design to start with and click the Get Started button. This takes you to the designer, where you can layer in the fields you want in place of the template's placeholders. We want to make some substantial changes, so let's click the Edit Layout button. This layout designer leverages the Adaptive Card schema to do its magic. Also, any field we set as retrievable and added to the schema is now a field we can display in the layout. Here's what I built:
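I don't have the exact card JSON to share here, but the shape of it in the Adaptive Card schema looks something like this (bindings reference the properties we marked retrievable; ThumbnailUrl is a made-up property pointing at the GitHub-hosted images described below):

```json
{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "type": "AdaptiveCard",
  "version": "1.3",
  "body": [
    {
      "type": "ColumnSet",
      "columns": [
        {
          "type": "Column",
          "width": "auto",
          "items": [ { "type": "Image", "url": "{ThumbnailUrl}", "size": "Medium" } ]
        },
        {
          "type": "Column",
          "width": "stretch",
          "items": [
            { "type": "TextBlock", "text": "{Name}", "weight": "Bolder", "size": "Medium" },
            { "type": "TextBlock", "text": "{ProductModel}", "isSubtle": true, "spacing": "None" },
            { "type": "TextBlock", "text": "Color: {Color} / Size: {Size}", "wrap": true }
          ]
        }
      ]
    }
  ]
}
```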
Here's the last "special" thing we wanted to add to the heap of "All the Things". In the AdventureWorks Products table, there are Thumbnail binaries and Thumbnail names. Well, of course, I wanted to see these thumbnails come through in the results. Varbinary fields aren't supported by the crawler, so I had 1 of 2 options: either make an endpoint that would pull the item from the database for the Product ID on any call and return the byte array as the response or pull all of the binaries out of the database once and save them to files elsewhere. I chose the latter. Here's the source for it if you want to do something similar yourself. So now I had the files I needed in github, named by the ThumbnailPhotoFileName field value. So that's how I'm able to include that in my layout.
Here's a quick rundown of setting up the Result Type:
Last, but not least, we make our Vertical. It's even simpler.
Provide a name
Select the Content Source
Provide a KQL query (optional; I skipped it)
Review and Finish
Click the button to Enable Vertical
Here's a quick rundown of setting up the Vertical:
So what exactly have we cooked up here? Let's head over to a search box somewhere and type something in that we know is a word in the Products or Models. How about mountain?
So there we have it! It was a lot to toss into the pot, but I think we brought it all together nicely. Hope you've learned a little something and that it gets you thinking about what you want to do next with the new Graph Connectors and what other structured data you may want to start piping into Microsoft Search for your customers and employees. Don't forget to delete this resource group when you're done messing around. Enjoy!
Like a good nerd, I'm a sucker for gamification and achievements in all of their forms. So I've taken to MS Learn badges and trophies like a duck to water. On the plus side, it's already single-handedly gotten me an Azure Developer Associate certification. So I'm giving back by doing a monthly series where I highlight some Learn badges that I felt were exceptional and deserve to be viewed.
Microsoft Ignite was all digital this year but was still pretty fun. It was like a giant Channel 9 event. At that event, John Papa did a great session unveiling Azure Static Web Apps. True to form, almost immediately after that unveiling, this little number showed up on MS Learn. It's a great first look at a preview product, where Azure is throwing its hat into the ring with other JAMstack hosting services, like Render, GitHub Pages, or GitLab. I definitely recommend this walkthrough - it's quick, easy and free to try, and you get a good glimpse of everything that's supported.
I have to imagine that Azure API Management can get pretty pricey, so I haven't actually done any of this in a production sense, but the lessons surrounding Azure API Management are pretty neat and put a great bow on your API structure. This particular lesson, which shows you how to pull disparate Azure Functions into a single cohesive API (both structurally, for you or your organization, and for consumers of said API), is really strong and shows you the true power of Azure API Management.
I have high hopes for Blazor in the long run. The route of building WebAssembly is some pretty interesting stuff and this is a great introduction to the technology and nicely self-contained. Hopefully it will leave you wanting more.
The state of sharing in the development world, particularly thanks to the gains of open source over the years, has never been stronger. We can push code to GitHub almost instantaneously. We have lots of ways of describing our code to others using markdown files, typically at the root of our solution. Maybe we've even gone the distance and built out a set of GitHub Pages, with real writeups on specific features and example snippets of code showing how to use our product.
Have you ever felt like there's almost too much to learn and to unpack? Developers are expected to move faster than ever and know more than ever. How do you move quickly when you're jumping into a new technology or project without feeling overwhelmed? Is there more that we can add to our tool belt to assist with knowledge sharing, documentation or investigation? Allow me to throw at you 2 new technologies to assist in this endeavor: Jupyter Labs and CodeTour.
Jupyter Labs comes from the Python world of data science. But it has moved far beyond just that. Essentially, it leverages Python to allow different languages to have a runnable kernel behind a Jupyter notebook. What this gives you is something quite powerful and cool: a web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.
I once described it to my team as this: imagine having a wiki with runnable code snippets directly in the wiki. One of my teammates, upon seeing a demo for the first time, described it in a somewhat opposite way: it's like code with way better commenting. Either way you look at it, it's certainly more powerful than just code or just documentation.
Jupyter's site has instructions to install here. But just doing that isn't going to be as powerful, especially if you're a Microsoft developer. Jupyter's base install gives you a Python notebook that can run in the kernel, but we want to deal in things like C# and PowerShell. So let's add those to the kernel using .NET Interactive. Personally, I think Scott Hanselman's instructions here may be your best bet, especially if you're on Windows. This means you'll need Anaconda installed first (remember, all this is based on Python).
As you've probably already caught on, this is Python and cross-platform. This means we Microsoft folks need to stick with all things Core. .NET Interactive's notebooks give us C#, F# and PowerShell Core though, so we have some fun things we can do. This does mean that the PnP PoSH folks are on the outside looking in until it supports PowerShell Core. But hopefully that's coming very soon. So check out what we can do using PowerShell Core in the examples below, and hopefully that will get your mind spinning about other things you could do, including when PnP gets added to PS Core.
So, keeping in mind that we are sticking to PowerShell Core, I whipped up a few examples of utilizing other CLIs with PowerShell to do some computing. Let's start with the Azure CLI. Below is something simple: I just copied the MS documentation for getting started with the CLI into a notebook.
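The cells end up looking something like this, each one runnable right in the notebook:

```powershell
# Sign in, then poke around - every cell runs live against your subscription
az login
az account show --output table
az group list --output table
```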
It's a totally different way to imagine documentation. Allow readers to instantly see the results, in the context of their own data!
Let's look at another aspect of using these notebooks: helping your team get something done. In this example, I've crafted some instructions to give to someone to create a site with the same theme that I made inside my tenant. Check it out.
Let's end with a bang. I have absolutely fallen in love with this next one: CodeTour. It's a pretty new extension for VS Code that allows for providing a guided tour of your solution. As someone who has a passion for learning and teaching, I can't think of a better way to handle the onboarding experience for coders than a guided tour. And there are many other applications too. Recently, the PnP team used CodeTour to assist with the SPFx Project Upgrade. I'm sure once you play around with it, you will also think of new applications for it.
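Tours are just JSON files that live beside your code, so they version right along with it. A minimal sketch of a .tour file (paths and text here are made up):

```json
{
  "$schema": "https://aka.ms/codetour-schema",
  "title": "Onboarding - start here",
  "steps": [
    {
      "file": "src/webparts/myWebPart/MyWebPart.ts",
      "line": 42,
      "description": "Entry point for the web part. The property pane gets wired up here."
    }
  ]
}
```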
Hopefully I've provided you some new thought-starters for better ways to share information. These technologies and others should become part of our best practices tool bag. They allow for easier explanation of code, faster results in collaboration, simpler paths to onboarding and so much more. Please take the time to consider how you might use these solutions on your next project.
# Adventures in PnP PowerShell provisioning templates
Has this ever happened to you? I had built a custom list with a custom view. To be more precise, I had basically lifted Chris Kent's sample here for a custom FAQ and dropped this on the page and the client was thrilled with it just as it is. Thanks Chris! Here's what the FAQ page looked like on the template site:
Not a bad looking FAQ list, right?
But this is just the beginning! This was my template. I need lots of sites to have this same FAQ page as a starting point, and it needs to look this good too.
So, onto provisioning with Powershell and PnP! At first I was running this:
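The straightforward pull-then-apply pattern (URLs and filenames are placeholders):

```powershell
# Pull everything from the template site
Connect-PnPOnline -Url https://contoso.sharepoint.com/sites/template-site
Get-PnPProvisioningTemplate -Out template.xml

# Apply it all to the target site
Connect-PnPOnline -Url https://contoso.sharepoint.com/sites/new-site
Apply-PnPProvisioningTemplate -Path template.xml
```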
So here was my hunch: perhaps the pages were getting deployed before the custom list and custom view, so when the page got made, there was no view to select, which is why it looked like the above. I acted on this hunch by doing the following:
I split out just the FAQ list portion from the full Get-PnPProvisioningTemplate - essentially doing 2 Gets: one for the list only and one for everything else. Here's what that looked like:
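Something like this (list title and filenames are placeholders):

```powershell
# Get #1: just the FAQ list
Get-PnPProvisioningTemplate -Out faq-list.xml -Handlers Lists -ListsToExtract "FAQ"

# Get #2: everything else
Get-PnPProvisioningTemplate -Out everything-else.xml
```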
Now you have 2 files. But there's 1 trick to this, if you want it to work in your favor. You need to open up the XML file for everything, and delete just the ListInstance node for the list (in my case, FAQ) from the XML file. So you can't easily do this all in one full script. You'd have to keep your pulls separate from your applies because of this manual intervention.
Then I applied my 2 files separately as well, starting with the lists first:
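Lists first, so the view exists by the time the pages land:

```powershell
Apply-PnPProvisioningTemplate -Path faq-list.xml
Apply-PnPProvisioningTemplate -Path everything-else.xml
```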
# Adventures in getting your kids interested in coding
This is a series of posts I hope to do cataloging some of my progress with spreading the gospel of code to my kids. Recently I made some progress with my son, Nate, and hope to share what we build together in this series. 😊
For many years, I've struggled in fits and spurts to get my kids to embrace coding. I took to it like a duck to water and was coding on this when I was 9 years old:
Behold the Radioshack TRS-80
I've had a few glimmers of hope, like when they were in robotics and were helping to write the programs, but eventually that died out. Then I had a brief period where my sons were doing lessons on freecodecamp.org. That site is fantastic, by the way. But so far, nothing lasting. But I press on.
About a month ago, I challenged my sons to come up with an app idea. My thinking was that they might be more inclined to learn something if it would result in something that originated from their imagination. My son Nate started doing the Swift Playground lessons and things were looking up. But as usual, things sputtered out.
2 weekends ago, we were playing our favorite collector card game, Marvel Legendary. I have amassed pretty much the entire collection and we spent a fair bit of the weekend battling supervillains. There have been some apps made by the community already - namely one called Assemble. It randomizes matchups, which is really nice for a game that keeps growing with 3 or 4 expansions a year. But this app was starting to slow down in its updating. Additionally, my son voiced his frustration with not being able to search up cards by the text on them or by their features. Suddenly, I saw the light go on and he turned to me and said, "Maybe this is the app we can build."
Eureka! We had a problem that we wanted to solve and it was surrounding a topic that we both enjoyed. It was the perfect storm.
So the basics for the idea are now there. We want a mobile app that will allow for in-depth searching of thousands of cards and that will also randomize setups of games.
Technically, here's the early plan: a database of cards built in Azure Storage, accessible from Azure Functions, via a mobile app built in React Native. We will open source the whole thing, so feel free to contribute or give feedback, especially if you happen to also play the game.
Our first order of business was getting the data figured out. My first real teachable moment. We examined various cards and started making notes, first in bullet lists of the various types of cards and the properties we were seeing on them. Here are some examples.
As you can see, there are many varied qualities, and this is just a small sample. In fact, just at the top-most level, there are 10 types of cards: Wound, Token, Officer, Sidekick, Bystander, Scheme, Enemy Leader, Enemy Support, Enemy Group, Playable. Pretty quickly in, we switched to a JSON document and started going over samples of each card type, appending properties to the same single JSON as we discovered new things on a card. This actually sunk in with him. Seeing the JSON get updated live as we were reviewing information on a card was like seeing the objects and properties come to life in real time on the screen. Here's what we ended up with:
"Color":"Yellow | Blue | Green | Red | Silver | Grey",
"Fight or Fail",
"X Uru-Enchanted Weapon"
"Expansion":"Core | Villains | Civil War | X-Men | S.H.I.E.L.D.",
Seeing as we will have to start simple, we will begin by manually creating JSON docs for cards, until we get a UI going. So to try and make it easier on us, especially Nate, I set out to make a schema for this sample doc. This route would give us intellisense and document validation in a good editor, like VS Code.
This was my first foray into JSON Schema. I started at the source and read through this walkthrough: Getting Started Step-by-step. That gave me most of the tools I needed to understand what was going on. Essentially, you're setting up type definitions and other rules as you identify them.
That said, it was still a pretty big doc for me to try and render out the schema manually, so I started with a schema generator - found here - and applied changes from there, namely titles and descriptions, as well as identifying things we missed as I was validating fields. The generator helps a lot with following along, as you can compare your node to the schema generated. The only big thing I had to learn how to add was the enum property. This is a great option for string properties. It allows you to provide an array of options for editors to leverage. Here's an example:
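Here's an example from our schema (the description text is mine; the values come from the sample card doc above):

```json
"Color": {
  "title": "Color",
  "description": "The printed color of a playable card.",
  "type": "string",
  "enum": ["Yellow", "Blue", "Green", "Red", "Silver", "Grey"]
}
```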
So far, the schema is all we have and we're starting to create the data for Azure Storage. I'll share more as we make progress. Hopefully he stays interested and we actually produce a working product in the App Store. We shall see.