Create the User
adduser username
Set User Password
passwd username
Give User sudo Privileges
usermod -aG wheel username
Log in as User
su - username
Generate SSH key
ssh-keygen -t rsa -b 4096 -C "me@myemail.com"
Every UI Document component references a UI Document asset (.uxml
file) that defines the UI and a Panel Settings asset that renders it. You can connect more than one UI Document asset to a single Panel Settings asset.
Right-click in the Project window, then select Create > UI Toolkit > UI Document
Any UXML file in the project is available as a library object in the UI Builder.
Library→Project→Component (RMB)→Open UI Builder
Close the current context and open the component in the UI Builder.
Hierarchy→Component (RMB)→Open Instance in Isolation
Edit the component in the UI Builder, maintaining a breadcrumb to the previous context.
Hierarchy→Component (RMB)→Open Instance in Context
Edit the component while keeping the view on the current component. Maintains a breadcrumb to the previous context.
Typically a controller for a UI Element is added to the UI Document as a component. Alternately, the UI Document can be set as a field on a MonoBehaviour instance.
Extending the MonoBehaviour class is the most expedient way to control a UI component when it has only one instance. Public accessors permit access from other controllers, and should be used sparingly.
Select the UI Document; in the Inspector choose Add Component→Scripts.
using UnityEngine;
using UnityEngine.UIElements;
public class GameListController : MonoBehaviour {
public string GameListID = "GameList";
private VisualElement Root {
get { return GetComponent<UIDocument>().rootVisualElement; }
}
private ListView GameList{
get { return Root.Q<ListView>(GameListID); }
}
}
Add event handlers to the MonoBehaviour Start method.
public void Start() {
// JoinButtonID names the button element, and ClickJoinButton is the handler method; both are defined elsewhere in the controller.
Root.Q<Button>(JoinButtonID).RegisterCallback<ClickEvent>(ClickJoinButton);
}
public UIDocument uiDocument1;
[SerializeField] private UIDocument uiDocument2;
var uiDocument = GetComponent<UIDocument>();
public VisualElement Root {
get { return GetComponent<UIDocument>().rootVisualElement; }
}
Create a C# class derived from the VisualElement class or one of its subclasses.
You can initialize the control in the constructor, or when it’s added to the UI.
https://docs.unity3d.com/Manual/UIE-Events.html
You can register an event handler on an existing class to handle events such as a mouse click.
ScrollView scrollView = Root.Q<ScrollView>(GameListID);
scrollView.contentContainer.Add(new Label("I done got clicked"));
Override VisualElement.ContainsPoint()
to assign custom intersection logic. [source]
You can also create USS custom properties to style a custom control. [source]
https://docs.unity3d.com/2022.2/Documentation/Manual/UIToolkits.html
https://docs.unity3d.com/2022.2/Documentation/Manual/UIE-get-started-with-runtime-ui.html
https://docs.unity3d.com/ScriptReference/UIElements.VisualElement.html
https://docs.unity3d.com/Manual/UIE-USS.html
https://docs.unity3d.com/Manual/UIE-create-tabbed-menu-for-runtime.html
A Socket is one end of a two-way communication link. The major protocols for socket communication are TCP and UDP. The primary difference is that TCP guarantees data delivery and the order of data packets; UDP makes no such guarantees, but as a consequence it is faster. Developers typically default to TCP.
The C# socket library is found in the System.Net and System.Net.Sockets namespaces.
using System.Net;
using System.Net.Sockets;
An IPEndPoint is the pairing of an IPAddress and a port number. You can use a DNS lookup to obtain an IP address.
IPHostEntry ipHostInfo = await Dns.GetHostEntryAsync("google.com");
IPAddress ipAddress = ipHostInfo.AddressList[0];
IPEndPoint ipEndPoint = new(ipAddress, 7000);
When creating the server endpoint, you can specify ‘any’ for the ip address.
IPEndPoint ipEndPoint = new(IPAddress.Any, 7000);
Shutdown disables sends and receives on a Socket.
Close terminates the Socket connection and releases all associated resources.
socket.Shutdown(SocketShutdown.Both);
socket.Close();
A server must first listen for and accept connections. Then, when a connection is made, listen for data on a separate Socket.
Bind associates a socket with an endpoint.
Listen causes a connection-oriented (server) Socket to listen for incoming connection attempts.
Socket socket = new Socket(ipEndPoint.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
socket.Bind(ipEndPoint);
socket.Listen(100); // the argument is the backlog of pending connections, not the port number
Accept blocks until an incoming connection attempt is queued, then extracts the first pending request from the queue. It then creates and returns a new Socket. You can inspect the RemoteEndPoint property of the returned Socket to identify the remote host’s network address and port number.
See AcceptAsync for the asynchronous accept call.
Socket handler = this.listener.Accept();
You create the client side socket in the same manner as the server. The difference being, instead of Bind and Listen, the client uses Connect.
IPAddress ipAdd = IPAddress.Parse("127.0.0.1");
IPEndPoint ipEndPt = new IPEndPoint(ipAdd, port);
Socket socket = new Socket(ipAdd.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
socket.Connect(ipEndPt);
Reading and writing data on a socket is the same for both the server and client side socket. Once the actual connection is made, the difference between the two is arbitrary. Both the read and write operations require a byte array to act as a buffer, so your data will need to be converted to and from an array of bytes.
The Send method is used to write to a socket. While there are a number of different method flavours, the most common is to send the entire contents of a byte array.
byte[] msg = Encoding.ASCII.GetBytes(aString);
this.socket.Send(msg);
If you are using a connection-oriented protocol, Send will block until the requested number of bytes are sent, unless a time-out was set by using Socket.SendTimeout. If the time-out value was exceeded, the Send call will throw a SocketException.
If the receiving socket (server) has not been started, a SocketException will be thrown. If the server has been started but Accept has not been called, reading and writing will hang. This can be remedied by setting a timeout (in ms).
socket.ReceiveTimeout = 1000;
socket.SendTimeout = 1000;
If the remote host shuts down the Socket connection with the Shutdown method, and all available data has been received, the Receive method will complete immediately and return zero bytes. This allows you to detect a clean shutdown.
There are many flavours of the Receive method, but we will only concern ourselves with two of them. The first reads all available bytes from a socket and returns the number of bytes read. You will need to provide your own EOF indicator, or wait until 0 bytes are read, which means the socket has finished writing and closed. This is useful if you are only connecting the sockets for a single read-write operation.
byte[] bytes = new byte[BUFFER_SIZE];
socket.Receive(bytes);
The second, and the one we will be using, is to read a specific number of bytes from the socket.
int count = socket.Receive(bytes, nextReadSize, SocketFlags.None);
We will use this to first read the size of the data, then read the body of the data.
// read size
byte[] sizeBytes = new byte[INT_BUFFER_SIZE];
socket.Receive(sizeBytes, INT_BUFFER_SIZE, SocketFlags.None);
int size = BitConverter.ToInt32(sizeBytes, 0);
// read message
byte[] bytes = new byte[size];
int count = socket.Receive(bytes, size, SocketFlags.None);
string data = Encoding.ASCII.GetString(bytes, 0, count);
The json encoding and decoding will be handled by the Newtonsoft Json.Net library.
dotnet add package Newtonsoft.Json --version 13.0.1
The include statements for this library are.
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
There are 3 source files in the project:
Most of the details for this class are provided in the background section above. The main detail here is the way new connections are managed. I have put the accept loop inside an IEnumerable method, which should be iterated to hand off new connection objects. The following example does this in a new thread.
Thread thread = new Thread(new ThreadStart(()=>{
foreach(Connection connection in server.Connections()){
connection.WriteString("ack");
connection.Close();
}
}));
thread.Start();
The connection object reads and writes JSON objects or strings in two parts. First it writes four bytes representing the integer size of the data. Next it writes the data itself.
Client client = new Client().Connect("127.0.0.1", 7000);
client.socket.ReceiveTimeout = 3000;
Console.WriteLine("> " + client.ReadString());
server.Stop();
For a recent project I needed to update a single shared Google calendar. I decided to go with a Google service account to manage this. I’m going to post here how I did it.
A service account is used by an application as opposed to a real living person. It allows your application to make authorized API calls directly. A service account is identified by its email address, which is unique to the account.
At the very top choose the project drop-down. This will open the project window, where you can select an ongoing project or create a new one. Each service account is located in a project; after you create a service account, you cannot move it to a different project.
In the Google Developers console (link), under the leftmost APIs & Services menu, you will find the Credentials option. At the top of the screen click the Create Credentials option. You will have a choice as to which type of credential you would like to create; select service account. Most, though not all, APIs should work with service accounts.
There are a number of ways to authenticate an application using a service account (link). If you are deployed on Google Cloud you can use an attached service account. You can use Workload Identity with Kubernetes pods. There is also Workload Identity Federation, which works with other service providers. Here we will be using a service account key, which allows us to deploy on our own cloud provider.
Click on your service account name to bring up the management menu. There are a number of menus near the top of the screen; select Keys. Press the Add Key drop-down menu and select Create new key. Use JSON unless you have a reason to do otherwise. Remember not to add this key to your git repository or put it anywhere the public can see it.
Create a calendar in Google Calendar and look at its settings (the three little dots next to the name). Find “share the calendar with specific people” and add the email address of your service account. Give your service account the role “make changes to events”. Since the service account isn’t a real user, you don’t get a confirmation email, and the calendar won’t immediately show up using the Google Calendar API list method.
Find the calendar id under the integrate calendar heading on the settings page. It should look like an email address. Save this; we will be using it in a bit.
Install the google api package.
npm install googleapis
View the Google APIs documentation on github.
https://github.com/googleapis/google-api-nodejs-client
In the Google console, under “APIs and Services > Enabled APIs and Services”, add the Calendar API to your project.
The following are three preliminary steps we need to perform before accessing the calendar API. After this we can start accessing the API methods. I will implement them in a class structure just to keep things clean.
import { google } from "googleapis";
constructor() {
const auth = new google.auth.GoogleAuth({
keyFilename: GoogleCalendar.KEY_FILENAME,
scopes: GoogleCalendar.SCOPES,
});
this.calendar = google.calendar({
version: "v3",
auth: auth,
});
}
Now that the environment is set up, we will go through a few of the available API calls. For a full list, see the Calendar API documentation.
The list method allows you to view available calendars. It is found in the calendar.calendarList implementation. This is also where you find the create, get, and delete calendar methods. This is one of the simpler API calls.
list() {
return new Promise((resolve, reject) => {
this.calendar.calendarList.list((err, res) => {
if (err) reject(err);
if (res) resolve(res.data);
});
});
}
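The callback-to-Promise wrapping used in list() is a general Node pattern. Here is a minimal self-contained sketch of it; the fakeList function below is a hypothetical stand-in for a Node-style callback API such as calendarList.list:

```javascript
// Hypothetical stand-in for a Node-style callback API: it invokes
// callback(err, res) rather than returning a Promise.
function fakeList(callback) {
  callback(null, { data: ["calendar-a", "calendar-b"] });
}

// The same wrapping pattern used in list(): resolve with res.data,
// reject with the error.
function list() {
  return new Promise((resolve, reject) => {
    fakeList((err, res) => {
      if (err) reject(err);
      if (res) resolve(res.data);
    });
  });
}

list().then((data) => console.log(data)); // logs: [ 'calendar-a', 'calendar-b' ]
```

The real API calls work the same way; only the underlying callback function differs.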
Newly added calendars won’t show up in the list until you have inserted them. We use the calendar identifier we saved above. In this case we also pass in an options object, containing a resource object, which in turn has the calendar id we want to add.
insert(id) {
return new Promise((resolve, reject) => {
const options = {
resource: {
id: id,
},
};
this.calendar.calendarList.insert(options, (err, res) => {
if (err) reject(err);
if (res) resolve(res.data);
});
});
}
When you look at the API documentation for delete, you will notice that the calendar id is in the URL. This implies that we use a calendarId field in our options object. The resource field, as used above, goes in the body. This is something to watch out for when translating HTTP calls into NodeJS API calls.
remove(id) {
return new Promise((resolve, reject) => {
const options = {
calendarId: id,
};
this.calendar.calendarList.delete(options, (err, res) => {
if (err) reject(err);
if (res) resolve(res.data);
});
});
}
To add an event we access the events property of the calendar API. In this case we include both the calendar id (which goes in the URL) and resource properties (which go in the body).
addEvent(id, start, end, summary){
return new Promise((resolve, reject) => {
const options = {
calendarId: id,
resource: {
start: {
date: start, // an all-day date string such as "2022-05-21"
},
end: {
date: end, // an all-day date string such as "2022-05-23"
},
summary: summary,
},
};
this.calendar.events.insert(options, (err, res) => {
if (err) reject(err);
if (res) resolve(res.data);
});
});
}
$ nc -l 8080
hello world
$ ncat 127.0.0.1 8080
$ hello world
In WSL the nc -l 8080
command listens on port 8080 for any incoming data, then prints it to the screen. In PowerShell, ncat 127.0.0.1 8080
sends everything you type to port 8080. You should see what you type in both the PowerShell terminal and the WSL terminal. Alternately, you can open your browser and enter http://127.0.0.1:8080/
into your URL bar, and the WSL terminal will print out the HTTP request.
Install the Windows version of ncat [direct link] from https://nmap.org/.
$ ncat -l 8080
hello world
$ cat /etc/resolv.conf
nameserver 172.59.192.1
$ nc 172.59.192.1 8080
$ hello world
If you are installing NodeJS on a machine that you have admin privileges on, you can install NodeJS by downloading the binaries directly. This allows you to bypass any special setup that a version manager may require. Browse to https://nodejs.org/en/download/ and download the Linux binaries. Alternatively, use wget.
cd /opt/node
sudo wget https://nodejs.org/dist/v17.0.1/node-v17.0.1-linux-x64.tar.xz
sudo tar -xvf node-v17.0.1-linux-x64.tar.xz
sudo mv node-v17.0.1-linux-x64 17.0.1
You would of course replace the version numbers with the version you are interested in. Also, the move (mv) command is not required; I just prefer to keep the version numbers simple. If this is your first NodeJS install, you will first need to create the directory with sudo mkdir /opt/node.
After you have installed the binary files into the /opt directory, you will want to create links in a directory on your PATH (for example, /usr/local/bin) so that you can execute them from anywhere.
sudo ln -s /opt/node/17.0.1/bin/node node
sudo ln -s /opt/node/17.0.1/bin/npm npm
sudo ln -s /opt/node/17.0.1/bin/npx npx
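Assuming the links were placed in a directory on your PATH, a quick sanity check confirms the install (the version reported will vary with the release you chose):

```shell
# Confirm the node command resolves and reports a version string such as v17.0.1.
node --version
```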
We will examine what is going on in this snippet of code. But first, what is the goal? The files variable is an array of objects. Each object has two fields: ‘name’ and ‘fullpath’. We want to return a dictionary with ‘name’ as the key and ‘fullpath’ as the value. We will look at this snippet of code piecemeal.
const fileMap = Object.assign({}, ...files.map(x=>({[x.name] : x.fullpath})));
The JS spread syntax allows iterable expressions, such as an array, to be expanded into an argument list during a function call. A simplified example is shown below.
const A = [1, 3, 5];
function sum(a, b, c){
return a + b + c;
}
console.log(sum(...A));
> 9
In the case of the Object.assign
function, the first parameter is the target object to write to; all remaining arguments are source objects. Using the spread syntax on the map call (which returns an array of objects) passes each object into ‘assign’, treating each as an individual source.
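A minimal illustration of Object.assign combined with the spread syntax:

```javascript
const sources = [{ a: 1 }, { b: 2 }, { c: 3 }];

// Spread expands the array into individual source arguments, equivalent to:
// Object.assign({}, { a: 1 }, { b: 2 }, { c: 3 })
const merged = Object.assign({}, ...sources);

console.log(merged); // { a: 1, b: 2, c: 3 }
```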
The map function of Array accepts a callback function and returns a new array. It fills each element of the new array by passing the corresponding source value into the callback function. You can achieve the same thing by iterating through the array.
const A = [1, 1, 3, 5, 8];
const B = [];
for (let i = 0; i < A.length; i++){
B[i] = A[i] + i;
}
Equivalent operation using map:
const A = [1, 1, 3, 5, 8];
const B = A.map((x, i)=>x + i);
The snippet x=>({[x.name] : x.fullpath})
seems a little odd at first. Why are there parentheses around the curly braces? To understand this we need to look at the two ways JS lambda functions can return a value.
Explicit return statement:
x => {return x + 1}
Implicit return statement:
x => x + 1
An explicit return statement has curly braces, while the implicit form does not. Because an object literal uses the same braces as a function body, if you were to write x => {"name" : "apple"}
JS would think you are writing an explicit-return function body. So we need to encapsulate the object declaration with parentheses to turn the statement into an implicit-return lambda function: x => ({"name" : "apple"})
.
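The difference between the two forms is easy to demonstrate:

```javascript
// Explicit return: the braces form a function body, so return is required.
const explicit = x => { return { name: x }; };

// Implicit return: the parentheses make the braces an object literal.
const implicit = x => ({ name: x });

console.log(explicit("apple")); // { name: 'apple' }
console.log(implicit("apple")); // { name: 'apple' }
```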
The second oddity of the statement x=>({[x.name] : x.fullpath})
is why [x.name]
is in square brackets. This computed-property syntax was introduced in ES2015 as shorthand to tell JS to use the value held in the variable as the key, rather than treating the name as a literal key. This is a bit more obvious if we look at it outside the context of a lambda function.
Given an object with two fields, you want to use one of the fields for the key of a new object. The vanilla JS way of things:
const key = "name";
const obj2 = {};
obj2[key] = "apple";
console.log(obj2);
> { name: 'apple' }
The shorthand way of doing this, which fits nicely into a compact lambda function, is as follows:
const key = "name";
const obj2 = {[key] : "apple"};
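Putting all three pieces together with sample data shows how the original snippet builds the dictionary:

```javascript
const files = [
  { name: "a.txt", fullpath: "/tmp/a.txt" },
  { name: "b.txt", fullpath: "/tmp/b.txt" },
];

// Map each file to a single-entry object, then spread them into assign.
const fileMap = Object.assign({}, ...files.map(x => ({ [x.name]: x.fullpath })));

console.log(fileMap); // { 'a.txt': '/tmp/a.txt', 'b.txt': '/tmp/b.txt' }
```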
Delete a branch from a remote.
git push -d [remote] [branch]
Create a local branch pointing at a specific commit.
git branch [branch] [sha]
Use with caution: it can delete or overwrite existing commits.
git push -f
This creates an upstream tracking branch related to your local branch.
git push -u origin [branch]
Push all branches.
git push --all
Push all tags.
git push --tags
This code snippet recursively examines a directory, and all its subdirectories, to identify files. It returns an array of objects containing the fullpath and the filename as strings.
import FS from 'fs';
import Path from 'path';
/**
* Recursively retrieve a list of files from the specified directory.
* @param {String} directory
* @returns An array of {fullpath, name} objects.
*/
function getFiles(directory = "."){
const dirEntries = FS.readdirSync(directory, { withFileTypes: true });
const files = dirEntries.map((dirEntry) => {
const resolved = Path.resolve(directory, dirEntry.name);
return dirEntry.isDirectory() ? getFiles(resolved) : {fullpath : resolved, name : dirEntry.name};
});
return files.flat();
}
export default getFiles;
The filesystem library for JS provides the synchronous readdirSync function. Setting withFileTypes to true in the options directs readdirSync to return directory entry objects instead of plain names.
The array’s map method passes each directory entry object into the provided callback function. The return value of this function is inserted into a new array.
The ternary operator in the callback works as follows: if the dirEntry object is not a directory, add its resolved fullpath and name to the array; if it is a directory, recursively call the getFiles function and add the result to the array. Because the recursive call nests arrays inside arrays, we finish by calling the array’s flat method, which creates a single one-dimensional array.
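A quick self-contained check of the function against a throwaway directory tree (the paths under the OS temp directory are created only for this demo):

```javascript
import FS from "fs";
import Path from "path";
import OS from "os";

// The getFiles function described above, repeated so the demo is self-contained.
function getFiles(directory = ".") {
  const dirEntries = FS.readdirSync(directory, { withFileTypes: true });
  const files = dirEntries.map((dirEntry) => {
    const resolved = Path.resolve(directory, dirEntry.name);
    return dirEntry.isDirectory() ? getFiles(resolved) : { fullpath: resolved, name: dirEntry.name };
  });
  return files.flat();
}

// Build a small throwaway tree: root/a.txt and root/sub/b.txt.
const root = FS.mkdtempSync(Path.join(OS.tmpdir(), "getfiles-"));
FS.mkdirSync(Path.join(root, "sub"));
FS.writeFileSync(Path.join(root, "a.txt"), "");
FS.writeFileSync(Path.join(root, "sub", "b.txt"), "");

// The nested result from the subdirectory is flattened into one array.
console.log(getFiles(root).map((f) => f.name).sort()); // [ 'a.txt', 'b.txt' ]
```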