UnrealEngine gives you two options when creating a project, and you can choose one of them: BLUEPRINT or C++, as you can see in the screenshot above. Selecting the left one means "I will develop my project using only Blueprints", while selecting the right one means "I want to use both Blueprints and C++ in my project".
By the way, what is the difference between them? And how can we convert a BP-only project into a BP+CPP project? Let us go over it.
Comparison
After creation, you can see the directory above if you selected BP-only. In this project, BPOnly, you can only run the UnrealEngine editor and write Blueprints. Even if you create source code files and place them in the appropriate location, the project will not compile them. Let us find out why by comparing a BP-only project with a BP+CPP project.
Here is the directory after creation with the C++ option. The BPCPP project supports both Blueprints and C++, as its name suggests. You can see the difference in the number of files: BPOnly has 6 while BPCPP has 10. The files that exist only in BPCPP are listed here.
Name of file/folder
Description
.vs
Contains VisualStudio-related files, mostly cached data for optimization.
Binaries
Contains the output files of this project. Currently, it holds this project's UnrealEditor library.
Source
Contains some simple source code files. The BuildRule and TargetRule files also live in this folder.
<ProjectName>.sln
Just like the uproject file, it defines the required version of VisualStudio, the dependencies of the project, and so on.
Of these, only the Source folder is not generated. The Binaries folder is generated when you build the project for a certain target such as WindowsClient, WindowsServer, or Editor. The VisualStudio-related files are generated when you generate project files. Also, UnrealEngine refers to the Source folder while generating project files.
So, is that all? No, there is actually one more difference. Check the uproject files and you can spot it. Their contents look similar, but BPCPP's file has a Modules property. The name of the module is the same as the project name, BPCPP.
In summary, ignoring generated files, the differences between a BP-only project and a BP+CPP project are:
Existence of the Source folder
The Modules property in the uproject file
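For reference, the Modules property in a BP+CPP uproject file looks roughly like the sketch below. (The EngineAssociation value and the other fields are assumptions here; they depend on your engine version and project settings.)

```json
{
	"FileVersion": 3,
	"EngineAssociation": "4.27",
	"Category": "",
	"Description": "",
	"Modules": [
		{
			"Name": "BPCPP",
			"Type": "Runtime",
			"LoadingPhase": "Default"
		}
	]
}
```

A BP-only uproject is essentially the same file without the Modules array.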
Where do these differences come from?
Template
We have learned about the templates used by UnrealEngine in a previous post. What we found was that making a new project from a template amounts to copying the template project and replacing placeholders. Right, so it should be similar here. Let us find the template projects behind the BP-only and BP+CPP options.
```cpp
TMap<FName, TArray<TSharedPtr<FTemplateItem>>> SProjectDialog::FindTemplateProjects()
{
	// Clear the list out first - or we could end up with duplicates
	TMap<FName, TArray<TSharedPtr<FTemplateItem>>> Templates;

	// Now discover and all data driven templates
	TArray<FString> TemplateRootFolders;

	// Add the Enterprise templates
	TemplateRootFolders.Add(FPaths::EnterpriseDir() + TEXT("Templates"));

	// Allow plugins to define templates
	TArray<TSharedRef<IPlugin>> Plugins = IPluginManager::Get().GetEnabledPlugins();
	for (const TSharedRef<IPlugin>& Plugin : Plugins)
	{
		FString PluginDirectory = Plugin->GetBaseDir();
		if (!PluginDirectory.IsEmpty())
		{
			const FString PluginTemplatesDirectory = FPaths::Combine(*PluginDirectory, TEXT("Templates"));
			if (IFileManager::Get().DirectoryExists(*PluginTemplatesDirectory))
			{
				TemplateRootFolders.Add(PluginTemplatesDirectory);
			}
		}
	}
	...
```
As you can see, UnrealEngine looks for template files under the path Root/Templates/.
There is a folder for each template, and here we find the two we are after: TP_Blank and TP_BlankBP. Each template contains a uproject file, which is used to create the new project's uproject file when a project is created from the template.
The BPOnly.uproject was created based on TP_BlankBP.uproject. You can confirm that in the FProjectDescriptor::Write() function.
Why is the EngineAssociation property not filled in? That property is filled later, in the FDesktopPlatformBase::SetEngineIdentifierForProject() function.
Of course, BPCPP.uproject was created based on TP_Blank.uproject. In this case, the whole content of the file is copied, and the EngineAssociation is overwritten afterwards.
```cpp
// Retarget any files that were chosen to have parts of their names replaced here
FString DestBaseFilename = FPaths::GetBaseFilename(SrcFileSubpath);
const FString FileExtension = FPaths::GetExtension(SrcFileSubpath);
for (const FTemplateReplacement& Replacement : TemplateDefs->FilenameReplacements)
{
	if (Replacement.Extensions.Contains(FileExtension))
	{
		// This file matched a filename replacement extension, apply it now
		FString LastDestBaseFilename = DestBaseFilename;
		DestBaseFilename = DestBaseFilename.Replace(*Replacement.From, *Replacement.To, Replacement.bCaseSensitive ? ESearchCase::CaseSensitive : ESearchCase::IgnoreCase);

		if (LastDestBaseFilename != DestBaseFilename)
		{
			UE_LOG(LogGameProjectGeneration, Verbose, TEXT("'%s': Renaming to '%s/%s' as it matched file rename ('%s'->'%s')"), *SrcFilename, *DestFileSubpathWithoutFilename, *DestBaseFilename, *Replacement.From, *Replacement.To);
		}
	}
}
...
// Open all files with the specified extensions and replace text
for (const FString& FileToFix : FilesThatNeedContentsReplaced)
{
	InnerSlowTask.EnterProgressFrame();

	if (!bSuccessfullyProcessed)
	{
		FFormatNamedArguments Args;
		Args.Add(TEXT("FileToFix"), FText::FromString(FileToFix));
		OutFailReason = FText::Format(LOCTEXT("FailedToFixUpFile", "Failed to process file \"{FileToFix}\"."), Args);
		return TOptional<FGuid>();
	}
}
```
The names of folders and the contents of files are replaced by the code above: in our case, from TP_Blank into BPCPP (or from TP_BlankBP into BPOnly).
Module
We have confirmed that the difference between BPOnly and BPCPP comes down to the module system: the Modules property in the uproject file and the Source folder containing code files. Thus, it should be possible to convert a Blueprint-only project into a Blueprint+C++ project by making those changes ourselves. In other words, we need to create a new module.
#1. Prepare a project created with TP_BlankBP. In this post, I use the BPOnly project.
#2. Create a folder named Source in the project directory, and create a folder <ModuleName> inside the Source directory.
Name the module whatever you want, but it is recommended to use the project name, because this module is the first module of the project. Just to show that any name works, I set the module name to Robb, which differs from the project name.
#3. Copy some files from the template TP_Blank. Replace their names and contents.
I copied all of the files in the Source folder of the TP_Blank template. To use the template files in this project, I replaced their filenames and contents. (In this case, I needed to replace the text TP_Blank with Robb.)
#4. Generate the VisualStudio project files and open the VisualStudio solution.
#5. Build the project and open the UnrealEngine editor. Profit!
Wrap-Up
Creating a project with the Blueprint-only option is not the common case, but we are able to convert a Blueprint-only project into a Blueprint+C++ project. We have checked what happens while creating a project from a template, what is different between TP_Blank and TP_BlankBP, and how to add a C++ module to a Blueprint-only project. As we saw earlier in this post, the conversion we performed is the same work that UnrealEngine does.
When we make the initial module, its name does not have to match the project name. But it is recommended to use the project name, both by convention and for practical reasons. For example, I made a module Robb in the project BPOnly. When I tried to package the project, I got the result below: some files are named Robb, while others are named BPOnly. This kind of naming disharmony can become a problem when accessing files by name.
About 4 months ago, I moved to a new house, rented for 2 years. The building was built in 2018, so I expected fairly simple and modern facilities, including home network infrastructure. But on moving day, the previous tenant told me, "Only one LAN port works; the others do not." At the time, I took this as a misunderstanding on his part, because most people cannot easily diagnose or solve a network issue. Well... as you can tell from the fact that I am writing this post, he was right.
The previous tenant had mostly used the internet via wireless network, a.k.a. Wi-Fi. It seemed he was not into computer things; he had even connected the other rooms with exposed LAN cables. (I did not take a picture, but it was similar to the picture below.)
Floor Plan
I made a floor plan of the new house. The green markers indicate LAN ports. The LAN port with the blue check mark was the only one working properly; the others were not. The orange marker indicates the terminal box. When I first opened the terminal box, it looked like the picture below.
Very weird. The red cable was presumably the inbound line, but the other cables were connected in disorder. In this situation I could not guess which cable led to which LAN port, so I followed the steps below to find out.
Check whether the red cable is inbound. The result was yes.
Connect the inbound cable to each of the other cables in turn, and check which LAN port becomes live.
Repeat the second step until all of the unknown ports are identified.
After a while, I had a complete mapping of the LAN ports. Let us see the picture below.
My Goal
Now the preparation was done, and it was time to execute my plan. My plan was:
Reuse my gear as much as possible. At that time, I had a wireless router and some switch hubs.
Enable 2 ports: port #3 and port #4.
Activate wireless network at appropriate position.
For this purpose, I planned to put a switch hub inside the terminal box. The wireless router, then, should sit near port #3, because the Wi-Fi signal would be weak if the router were inside the terminal box. However, while trying this plan, I found a very critical problem.
First Try
There is no power socket in the terminal box. In other words, there is no way to power a switch hub inside the terminal box; a switch hub consumes power, even if only a small amount. What a panic!
I searched for a way around the issue. Fortunately, there is one that fits my case: PoE, Power over Ethernet. PoE is commonly used by devices such as CCTV cameras, network routers, and VoIP phones. These devices are often installed in restricted environments:
There may be no power socket or power source due to small space.
There may be only a LAN cable due to the building's internal structure.
Yes, that is a perfect fit for this situation:
I could use only LAN cables in the terminal box.
I did not want to install a new power socket, as the house is rented.
So I decided to get an injector and a splitter, which are needed to build the PoE setup. The injector injects signal and power into the LAN cable; the splitter splits them back into signal and power at the destination. The floor plan can therefore be redrawn as below.
Though it looks like a bit of a mess, it worked. First, the injector provides power to the splitter via Ethernet, so the splitter can supply power to the switch hub. Second, the inbound signal gets distributed by the switch hub in the terminal box. Finally, the home network is completed by the router on port #3.
I reused my gears. The wireless router and switch hubs.
Now I can access internet via port #3 and port #4.
I activated the wireless network in the middle of the house, the living room.
Great. I was satisfied with the result…for a while.
Second Try
Connecting multiple devices through the router was totally fine, naturally, because the router is extremely close. But a problem appeared when I added several devices on the port #4 side. With one device on port #4, the device could join the network without issue. With two devices on port #4, however, one of them could not reach the network. The situation can be drawn as below.
The devices were quite far from the router: there were two switch hubs between the router and the devices, which could be prone to network conflicts. I decided to change my home network configuration while keeping the goals mentioned before.
A solution to this problem is simple: place a router in the terminal box. Then my home network would look like the picture above. But one thing is left behind: the wireless network. I cannot expect Wi-Fi to function well if the wireless router is in the terminal box, because the terminal box must stay closed.
So I had to buy a new router to put in the terminal box. Placing the new router in the terminal box and leaving the old router in its current position would be fine; in that case, the old router must be used like a switch hub, not as a router. My home network now looks like the picture above, and I am able to connect the devices on port #4.
Third Try
Hmm... everything worked well and nothing malfunctioned, but a Wi-Fi SSID issue was annoying me. Both the new router and the old router had their wireless networks enabled, and each broadcast its own SSID. Yes, right: there were TWO separate SSIDs, even though they were on the same network. I wanted to merge them into one.
I searched again. Maybe EasyMesh was the solution for me. EasyMesh is a Wi-Fi technology that merges multiple access points. EasyMesh does not even care about the frequency band of each access point: access points on 2.4GHz and 5.0GHz can all be merged into a single logical access point. That was exactly what I was looking for!
My routers were made by EFM Networks, the company famous for the ipTIME brand. EFM Networks provides a utility program that controls the EasyMesh configuration, like the picture above. (Almost every vendor seems to implement the EasyMesh specification, so check out other companies too.)
Setting up EasyMesh was remarkably easy, since the structure of my home network already fit its requirements. Now I can access Wi-Fi via a single SSID. Furthermore, the Wi-Fi coverage is larger than before thanks to merging the two access points. How convenient :)
Epilogue
With this series of efforts, I set up my home network properly without great expense. It was lucky that there was no need to call in professionals. Maybe I will tell the next tenant this story and give some advice; I do not want anyone else to suffer from these problems. ;)
Oh, I almost forgot: if you want to set up EasyMesh, you should check the router before buying it. EasyMesh requires two roles of router, a MeshController and a MeshAgent. Check whether the item you are going to buy is a MeshController or a MeshAgent.
As Perforce is a kind of CVCS*, it is recommended to use a dedicated server. The dedicated server should run around the clock and should have a fixed IP address. Thus, you had better choose a cloud server if you do not have a machine for this purpose. In this post, I chose DigitalOcean as the cloud provider. DigitalOcean provides instances at a lower cost than alternatives such as AWS; after all, you may not need a high-end instance for a small project. ※ CVCS: Centralized Version Control System. You can find more information here.
Sign up and sign in to DigitalOcean. Click the Create button and choose Droplets.
On the Create Droplets page, you will be asked to choose options for the instance. In this post, we choose the options below:
Choose an image
Ubuntu 20.04 (LTS) x64
Choose a plan
SHARED CPU
Basic
CPU options
Regular Intel with SSD
$5/month (= $0.007/hour)
1 core CPU, 1 GB RAM, 25 GB SSD, 1000 GB transfer
Add block storage
$5/month (= $0.007/hour)
50 GB SSD
Choose a datacenter region
(Select a datacenter that is closest to your location)
(In my case, it is Singapore)
Select additional options
Monitoring
Authentication
Password
(Choose a password for entering the instance)
Any options I did not mention are left at their default selection. Let us create our Droplet by clicking the Create Droplet button.
Now you can see the new Droplet. Connect to the instance via SSH; on Windows 10 you can do this with PowerShell or WSL. The X.X.X.X must be replaced with the IP address of the instance.
> ssh root@X.X.X.X
When you attempt to connect to the instance via SSH, you are asked to enter a password.
root@X.X.X.X's password:
Enter the password that you typed into the Authentication text block. If the right password is entered, you will see logs like below:
```
System information as of Sat Sep 11 16:12:19 UTC 2021

  System load:  0.0               Users logged in:        0
  Usage of /:   9.3% of 24.06GB   IPv4 address for eth0:  X.X.X.X
  Memory usage: 24%               IPv4 address for eth0:  X.X.X.X
  Swap usage:   0%                IPv4 address for eth1:  X.X.X.X
  Processes:    113

66 updates can be applied immediately.
1 of these updates is a standard security update.
To see these additional updates run: apt list --upgradable

*** System restart required ***
Last login: Sat Sep 11 11:35:09 2021 from X.X.X.X
root@ubuntu-test:~#
```
On your instance's page, you can check the volume settings. Click Config Instructions.
If you selected the option Automatically Format & Mount in the Add block storage section, the volume is already attached without you doing anything; in other words, the Mount the volume step is already done. Let us check whether the volume is properly mounted. The volume_X must be replaced with yours.
```
> cd /mnt/volume_X
> ls
lost+found
```
You have successfully set up an Ubuntu instance. Good to go!
Install & Setup P4D
We need to register a public key to access the Perforce packages. For this, download the public key from https://package.perforce.com/perforce.pubkey. The download can be done with the command below:
This command saves the public key into a file named perforce.pubkey.
```
> curl https://package.perforce.com/perforce.pubkey > perforce.pubkey
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1707  100  1707    0     0   2024      0 --:--:-- --:--:-- --:--:--  2022
> ls
perforce.pubkey  snap
```
```
> sudo apt-get install helix-p4d
Reading package lists... Done
Building dependency tree
Reading state information... Done
...
Started 0 services.
No services configured.
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for systemd (245.4-4ubuntu3.6) ...
```
Now there is one last step: launching the Perforce service! Execute the configuration script for it.
```
For a list of other options, type Ctrl-C to exit, and then run:
$ sudo /opt/perforce/sbin/configure-helix-p4d.sh --help

You have entered interactive configuration for p4d. This script will ask a
series of questions, and use your answers to configure p4d for first time use.
Options passed in from the command line or automatically discovered in the
environment are presented as defaults. You may press enter to accept them,
or enter an alternative.

Please provide the following details about your desired Perforce environment:

Perforce Service name [master]:
```
You will be asked to enter some configuration values such as the service name, directory, case sensitivity, and so on. Set them up appropriately.
```
Perforce Service name [master]: Test
Service Test not found. Creating...
Perforce Server root (P4ROOT) [/opt/perforce/servers/Test]:
```
You should select a proper directory. It would be better to select the attached volume if your project will grow beyond 25 GB.
```
Perforce Service name [master]: Test
Service Test not found. Creating...
Perforce Server root (P4ROOT) [/opt/perforce/servers/Test]:
Create directory? (y/n) [y]: y
Perforce Server unicode-mode (y/n) [n]: y
Perforce Server case-sensitive (y/n) [y]:
Perforce Server address (P4PORT) [ssl:1666]:
Perforce super-user login [super]:
Perforce super-user password:
Re-enter password.
Perforce super-user password:
```
Configuring p4d service 'Test' with the information you specified...
```
Perforce db files in '/opt/perforce/servers/Test/root' will be created if missing...
...
::
:: - For help with creating Perforce Helix user accounts, populating
::   the depot with files, and making other customizations for your
::   site, see the Helix Versioning Engine Administrator Guide:
::
::   https://www.perforce.com/perforce/doc.current/manuals/p4sag/index.html
::
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
```
Now you can access the Perforce service via P4V on a client. Before that, take care of the typemap. "Typemap" is an abbreviation of Type Mapping; with it you define how Perforce handles certain types of files. If your project uses UnrealEngine, Epic Games recommends the following settings for the UnrealEngine-related types:
```
TypeMap:
	binary+w //depot/....exe
	binary+w //depot/....dll
	binary+w //depot/....lib
	binary+w //depot/....app
	binary+w //depot/....dylib
	binary+w //depot/....stub
	binary+w //depot/....ipa
	binary //depot/....bmp
	text //depot/....ini
	text //depot/....config
	text //depot/....cpp
	text //depot/....h
	text //depot/....c
	text //depot/....cs
	text //depot/....m
	text //depot/....mm
	text //depot/....py
	binary+l //depot/....uasset
	binary+l //depot/....umap
	binary+l //depot/....upk
	binary+l //depot/....udk
	binary+l //depot/....ubulk
```
You can edit the instance's typemap by executing the command p4 typemap. The command opens the typemap file with the vi editor.
Add Epic Games's mappings to your mapping file*. Now all the server-side preparation is complete. ※ FYI, the // string does not mean "this is a comment" in the P4 typemap system. You should copy all of the text.
Enter ssl:X.X.X.X:1666 in the Server field. The X.X.X.X must be replaced with the IP address of the instance. Enter super in the User field; the user super is the administrator account we set up. Now click OK. If you encounter a dialog like below, check Trust this fingerprint and click Connect:
Enter the password you set while configuring the Perforce service on the instance.
You will see this display when you have successfully signed in.
The admin tool can be accessed at Tools/Administration.
In the tool, you can add or delete users directly.
Let us prepare a Depot and a Stream. Click Depots, right-click any depot, and select New Depot....
Type the name of the new Depot.
Select stream in the Depot type section and click OK.
Close the admin tool and return to P4V*. Now you can find the new Depot in the Depot view. ※ Actually, the admin tool is a separate program, P4Admin, which is different from P4V. Perforce simply supports launching P4Admin from P4V.
Restart P4V to pick up the changes made in P4Admin. After the restart, find File/New/Stream... and click it.
Let us make a Stream named mainline. The Stream will be placed in the new Depot. Click OK.
Click New Workspace... in the workspace view.
Name the new workspace and click the Browse button in the Stream row. You can find the Stream mainline in the dialog; select it.
Finally, we have prepared a workspace in a totally empty new Perforce service! But you should configure p4ignore before getting to work. Open any terminal and execute the command below:
> p4 set P4IGNORE=.p4ignore
This command makes Perforce refer to the file named .p4ignore. It is the kind of configuration that lets you use Perforce like git, which provides .gitignore. To apply this change, restart P4V. Then create .p4ignore in your workspace directory. Let us test whether .p4ignore works. Fill the contents of .p4ignore like below:
*.sln
Select Mark for Add... for .p4ignore and submit it.
Next, create an empty file named Test.sln and try to add it!
When you try to check out the file, a dialog pops up instead. Great, your p4ignore works.
If your project uses UnrealEngine, you should look for a good ignore list; I recommend the p4ignore mentioned in the references. Okay then... all the preparation is done. You are good to go :)!
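As a starting point, a minimal .p4ignore for an UnrealEngine project might look like the sketch below (this is my own rough suggestion, not Epic's official list; adjust it to your project's layout):

```
# Build outputs; restored by recompiling
Binaries/
Intermediate/
DerivedDataCache/
Saved/

# VisualStudio files; regenerated from the uproject
.vs/
*.sln
```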
UCLASS(config=Game, BlueprintType, Blueprintable, meta=(ShortTooltip="A Player Controller is an actor responsible for controlling a Pawn used by the player.")) class ENGINE_API APlayerController : public AController
The macro UCLASS() may be the most famous of the unreal macros. First of all, let us find out how it is expanded. Our goal is to expand the UCLASS() of the class APlayerController, which can be found at line #222 of PlayerController.h.
In UnrealEngine, most of the core macro definitions live in the ObjectMacros.h file. We can see the definition of UCLASS there; in the usual case, the second definition is the one used. Then, what is the macro BODY_MACRO_COMBINE?
```cpp
// This pair of macros is used to help implement GENERATED_BODY() and GENERATED_USTRUCT_BODY()
#define BODY_MACRO_COMBINE_INNER(A,B,C,D) A##B##C##D
#define BODY_MACRO_COMBINE(A,B,C,D) BODY_MACRO_COMBINE_INNER(A,B,C,D)
```
The macro is defined via BODY_MACRO_COMBINE_INNER, which concatenates its parameters into a single token. Thus, for our example, the macro UCLASS produces the token CURRENT_FILE_ID_222_PROLOG.
It can be tested with simple code; check it out in the screenshot below. __LINE__ is a predefined macro, so it turns into 222, the line where UCLASS is written.
Here is the test code and its result. Check the name of the integer variable.
Actually, the macro CURRENT_FILE_ID is defined in the header files generated by the Unreal Header Tool. You can find the definition in generated headers such as PlayerController.generated.h. The generated header files are created when you build your project.
And Engine_Source_Runtime_Engine_Classes_GameFramework_PlayerController_h_222_PROLOG is also defined in the generated header file for PlayerController.h.
Okay, we have just peeled off one layer toward the truth. But is that all? We should take care of something more. Most of the time, the macro UCLASS is not used alone; various keywords and specifiers come with it (ex: config=Game, BlueprintType, meta=(ShortTooltip=..., ...)). So, how are they handled? And how is the generated header file created in the first place?
The UHT writes the header files containing auto-generated code in the code above. PreloadedFiles holds the absolute paths of the generated header files, for instance D:/Git/UnrealEngine/Engine/.../Inc/Engine/PlayerController.generated.h.
You can track what UHT writes into a generated header file via the variable GeneratedHeaderText, because its contents replace the old generated header file whenever the old and new contents differ.
That is how the CURRENT_FILE_ID and ..._EVENT_PARMS macros come to be defined. Further related code can be found in FNativeClassHeaderGenerator::FNativeClassHeaderGenerator(const UPackage*, const TSet<FUnrealSourceFile*>&, FClasses&, bool).
So, we have found the relationship between the unreal macros and the generated header files. But there is one thing left: the metadata.
UCLASS(config=Game, BlueprintType, Blueprintable, meta=(ShortTooltip="A Player Controller is an actor responsible for controlling a Pawn used by the player.")) class ENGINE_API APlayerController : public AController
Back at the start, there is metadata inside the macro UCLASS, such as config=Game, BlueprintType, and meta=.... We are going to check how it is handled by UnrealEngine.
```cpp
// These are used for syntax highlighting and to allow autocomplete hints

namespace UC
{
	// valid keywords for the UCLASS macro
	enum
	{
		/// This keyword is used to set the actor group that the class is show in, in the editor.
		classGroup,

		/// Declares that instances of this class should always have an outer of the specified class. This is inherited by subclasses unless overridden.
		Within, /* =OuterClassName */

		/// Exposes this class as a type that can be used for variables in blueprints
		BlueprintType,

		/// Prevents this class from being used for variables in blueprints
		NotBlueprintType,

		/// Exposes this class as an acceptable base class for creating blueprints. The default is NotBlueprintable, unless inherited otherwise. This is inherited by subclasses.
		Blueprintable,

		/// Specifies that this class is *NOT* an acceptable base class for creating blueprints. The default is NotBlueprintable, unless inherited otherwise. This is inherited by subclasses.
		NotBlueprintable,
		...
```
You can find some enum definitions that seem related to the metadata. But they exist only to support autocomplete hints for tools such as Intellisense and VisualAssistX. The code that actually handles metadata is elsewhere.
```csharp
static void SetupUObjectModuleHeader(UHTModuleInfo ModuleInfo, FileItem HeaderFile, SourceFileMetadataCache MetadataCache)
{
	// Check to see if we know anything about this file. If we have up-to-date cached information about whether it has
	// UObjects or not, we can skip doing a test here.
	if (MetadataCache.ContainsReflectionMarkup(HeaderFile))
	{
		lock (ModuleInfo)
		{
			bool bFoundHeaderLocation = false;
			foreach (DirectoryReference ModuleDirectory in ModuleInfo.ModuleDirectories)
			{
				if (HeaderFile.Location.IsUnderDirectory(DirectoryReference.Combine(ModuleDirectory, "Classes")))
				{
					ModuleInfo.PublicUObjectClassesHeaders.Add(HeaderFile);
					bFoundHeaderLocation = true;
				}
				else if (HeaderFile.Location.IsUnderDirectory(DirectoryReference.Combine(ModuleDirectory, "Public")))
				{
					ModuleInfo.PublicUObjectHeaders.Add(HeaderFile);
					bFoundHeaderLocation = true;
				}
			}
			if (!bFoundHeaderLocation)
			{
				ModuleInfo.PrivateUObjectHeaders.Add(HeaderFile);
			}
		}
	}
}
```
For every module, header files in the Classes folder are stored in PublicUObjectClassesHeaders, and header files in the Public folder are stored in PublicUObjectHeaders. Even if you have placed a header file in some other folder, the Unreal Build Tool collects it into PrivateUObjectHeaders.
A screenshot on debugging UBT.
Back in FBaseParser::ReadSpecifierSetInsideMacro(), let us trace the keyword BlueprintType. How is the BlueprintType keyword parsed? The UHT parses your header file into tokens. Suppose the input is UCLASS(config=Game, BlueprintType, Blueprintable, meta=(...)).
```cpp
if (Token.Matches(TEXT("UCLASS"), ESearchCase::CaseSensitive))
{
	bHaveSeenUClass = true;
	bEncounteredNewStyleClass_UnmatchedBrackets = true;
	UClass* Class = CompileClassDeclaration(AllClasses);
	GStructToSourceLine.Add(Class, MakeTuple(GetCurrentSourceFile()->AsShared(), Token.StartLine));
	return true;
}
```
Due to this code, the remaining input would be (config=Game, BlueprintType, Blueprintable, meta=(...)). The subsequent tokenizing proceeds as below, based on FBaseParser::ReadSpecifierSetInsideMacro().
```
C/CPP (pure) macro:
  C/CPP code with macro
    --(preprocessor)--> C/CPP code with evaluated code from macro
    --(rest of job)--> ...

unreal macro:
  C/CPP code with unreal macro
    --(UHT and UBT)--> C/CPP code with generated code(+macro) from UHT and UBT
    --(preprocessor)--> C/CPP code with evaluated code from macro
    --(rest of job)--> ...
```
There is a lot of hidden code implementing the unreal macros, and the macros have a complicated relationship with other engine code. Most of the final code produced from the macros cannot even be evaluated before some preprocessing and compilation. In this perspective, an unreal macro such as UCLASS is not a pure C/CPP macro, because it only functions fully after UHT and UBT have preprocessed it.
There is no doubt that no C/CPP compiler by itself can recognize an unreal macro such as UCLASS. Epic Games did not modify the compilers, and did not have to: they simply set up a build pipeline that satisfies their needs. The program managing this custom build pipeline is the Unreal Build Tool, UBT. Most build jobs are done by UBT and UHT. The official document for UHT introduces this background knowledge:
UnrealHeaderTool (UHT) is a custom parsing and code-generation tool that supports the UObject system. Code compilation happens in two phases:
1. UHT is invoked, which parses the C++ headers for Unreal-related class metadata and generates custom code to implement the various UObject-related features.
2. The normal C++ compiler is invoked to compile the results.
When compiling, it is possible for either tool to emit errors, so be sure to look carefully.
As the document says, the compilation order is the opposite of the order of this post's sections: Expanding UCLASS, Generated Header File, and Metadata Parser. The work in Expanding UCLASS is done by the C/CPP compilers (and preprocessors), while the work in Generated Header File and Metadata Parser is done by UHT. Additionally, the Metadata Parser actions happen earlier than the Generated Header File ones.
main function for parsing metadata       -> FHeaderParser::ParseHeaders()
main function for generating header file -> FHeaderParser::ExportNativeHeaders()
Let us make a conclusion.
The result of an unreal macro is hard to evaluate before processing by UBT (and UHT).
Some features of UnrealEngine are implemented by auto-generated code.
You should look into the build pipeline of UnrealEngine if you need to modify anything related to the unreal macros.
Nowadays, the STL is an essential component of almost every CPP project. That is why several questions about the STL come up in technical interviews; std::vector is an especially popular subject. In this post, we will check the code related to std::vector's growth, which is a hot topic in the STL. The term "growth" for std::vector means an event that increases the size of an instance by some action. The action can be inserting an element (e.g. push_back()) or tuning the size (e.g. resize()). Some of us say "When the growth happens, the size becomes twice as large.", while others say "No, it is exactly 3/2 times.". Well…neither statement is wrong. Let us find out why.
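Before diving into the sources, we can observe the growth empirically. The capacity_history helper below is made up for this post, not part of any STL; it records every distinct capacity a vector passes through, and the ratio between consecutive values is the growth factor (3/2 on MSVC, 2 on GCC):

```cpp
#include <cstddef>
#include <vector>

// Record the distinct capacities a vector passes through while
// push_back()-ing n elements.
std::vector<std::size_t> capacity_history(std::size_t n) {
    std::vector<int> v;
    std::vector<std::size_t> history;
    for (std::size_t i = 0; i < n; ++i) {
        v.push_back(static_cast<int>(i));
        // capacity only changes on reallocation, so only record changes
        if (history.empty() || history.back() != v.capacity())
            history.push_back(v.capacity());
    }
    return history;
}
```

Printing the history on different compilers shows the different factors directly.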
As you can see, there is a branch on the return of the function. When _Mylast != _My_data._Myend is true, the growth does not happen, because the logic in _Emplace_back_with_unused_capacity() does not reallocate memory but reuses unused memory. FYI, the values around _Mypair have the relationship below:
_Compressed_pair<_Alty, _Scary_val> _Mypair;
// https://github.com/microsoft/STL/blob/c12089e489c7b6a3896f5043ed545ac8d1870590/stl/inc/xmemory
template <class _Ty1, class _Ty2, bool = is_empty_v<_Ty1> && !is_final_v<_Ty1>>
class _Compressed_pair final : private _Ty1 { // store a pair of values, deriving from empty first
public:
    _Ty2 _Myval2;
// CLASS TEMPLATE vector
template <class _Ty, class _Alloc = allocator<_Ty>>
class vector { // varying size array of values
private:
    template <class>
    friend class _Vb_val;
    friend _Tidy_guard<vector>;

    using _Alty = _Rebind_alloc_t<_Alloc, _Ty>;
    ...
    using _Scary_val = _Vector_val<conditional_t<_Is_simple_alloc_v<_Alty>,
        _Simple_types<_Ty>,
        _Vec_iter_types<_Ty, size_type, difference_type, pointer, const_pointer, _Ty&, const _Ty&>>>;

    pointer _Myfirst; // pointer to beginning of array
    pointer _Mylast;  // pointer to current end of sequence
    pointer _Myend;   // pointer to end of array
};
Since the elements are placed at sequential memory addresses, _Mylast - _Myfirst means the "currently used size".
As a result, _Mylast != _My_data._Myend is true when _Mylast < _Myend is true. That is why reallocation does not happen. Get back to the emplace_back() code. For the reasons above, we now need to focus on the _Emplace_reallocate() function.
As you can see in the code, deallocation and reallocation happen. The variable _Newcapacity determines the size of the memory that will be reallocated. Let us check the function _Calculate_growth().
if (_Geometric < _Newsize) { return _Newsize; // geometric growth would be insufficient }
return _Geometric; // geometric growth is sufficient }
There are three return statements in the function.
First, when the current capacity is bigger than 2/3 of the maximum size.
For instance, the maximum value of the int type is +2,147,483,647 and 2/3 of that value is +1,431,655,764.666... ≒ +1,431,655,765. Let us put them in the expression: if (1431655765 > 2147483647 - 1431655765 / 2) will be false, but what if _Oldcapacity = +1,431,655,766 ? if (1431655766 > 2147483647 - 1431655766 / 2) will be true. In this case, the new size will be forced to the maximum size.
Second, when the current capacity is less than 2.
For instance, when _Oldcapacity is in {0, 1}, the expression const size_type _Geometric = _Oldcapacity + _Oldcapacity / 2; evaluates to _Oldcapacity itself. In this case, the new size will be forced to _Newsize, which is passed as _Oldsize + 1 in _Emplace_reallocate().
_Oldcapacity | Calculation
0            | 0 + 0 / 2 = 0
1            | 1 + 1 / 2 = 1
Third, all other cases of the current capacity.
_Geometric will be 3/2 times _Oldcapacity. That is why the 3/2-times growth happens in MSVC. And now you understand why the new size has to be set to the maximum value when _Oldcapacity is bigger than 2/3 of the maximum size.
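Putting the three return statements together, the logic can be sketched in plain C++. calculate_growth below is a simplified stand-in for MSVC's _Calculate_growth, written for this post, not the real implementation:

```cpp
#include <cstddef>

// Simplified sketch of MSVC's growth policy:
// newsize = desired size, oldcapacity = current capacity, max = max_size().
std::size_t calculate_growth(std::size_t newsize, std::size_t oldcapacity, std::size_t max) {
    if (oldcapacity > max - oldcapacity / 2)
        return max;            // geometric growth would overflow: force max
    const std::size_t geometric = oldcapacity + oldcapacity / 2;
    if (geometric < newsize)
        return newsize;        // geometric growth would be insufficient
    return geometric;          // geometric growth is sufficient: 3/2 times
}
```

Feeding in the boundary values from the paragraphs above reproduces each of the three cases.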
The resize() has a similar flow. Let us find out.
_CONSTEXPR20_CONTAINER void resize(_CRT_GUARDOVERFLOW const size_type _Newsize) {
    // trim or append value-initialized elements, provide strong guarantee
    _Resize(_Newsize, _Value_init_tag{});
}
// if _Newsize == _Oldsize, do nothing; avoid invalidating iterators }
The resize() can trim or append elements. Trimming happens when you call resize() with a value smaller than the current size. Appending happens when you call resize() with a value greater than the current size. Let us go to _Resize_reallocate().
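Either way, the trim/append behavior is easy to observe through std::vector's public API. The observe_resize helper and ResizeResult struct below are made up for illustration:

```cpp
#include <cstddef>
#include <vector>

// Made-up helper: captures what a shrinking and a growing resize() do.
struct ResizeResult {
    std::size_t trimmed_size;  // size() after shrinking resize
    bool capacity_kept;        // shrinking resize never reallocates
    int appended_value;        // growing resize value-initializes new ints
};

ResizeResult observe_resize() {
    std::vector<int> v(10);                  // size 10
    const std::size_t cap = v.capacity();
    v.resize(4);                             // trim: size shrinks
    ResizeResult r{};
    r.trimmed_size = v.size();
    r.capacity_kept = (v.capacity() == cap);
    v.resize(12);                            // append value-initialized ints
    r.appended_value = v[11];                // value-initialized -> 0
    return r;
}
```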
// [23.2.4.3] modifiers
/**
 *  @brief  Add data to the end of the %vector.
 *  @param  __x  Data to be added.
 *
 *  This is a typical stack operation.  The function creates an
 *  element at the end of the %vector and assigns the given data
 *  to it.  Due to the nature of a %vector this operation can be
 *  done in constant time if the %vector has preallocated space
 *  available.
 */
void
push_back(const value_type& __x)
{
  if (this->_M_impl._M_finish != this->_M_impl._M_end_of_storage)
    {
      _GLIBCXX_ASAN_ANNOTATE_GROW(1);
      _Alloc_traits::construct(this->_M_impl, this->_M_impl._M_finish, __x);
      ++this->_M_impl._M_finish;
      _GLIBCXX_ASAN_ANNOTATE_GREW(1);
    }
  else
    _M_realloc_insert(end(), __x);
}
There are two cases here. First, when unused capacity exists; second, otherwise.
void
_M_swap_data(_Vector_impl_data& __x) _GLIBCXX_NOEXCEPT
{
  // Do not use std::swap(_M_start, __x._M_start), etc as it loses
  // information used by TBAA.
  _Vector_impl_data __tmp;
  __tmp._M_copy_data(*this);
  _M_copy_data(__x);
  __x._M_copy_data(__tmp);
}
};
...
_Vector_impl _M_impl;
std::vector in GCC has internal indicators just like std::vector in MSVC. So we should focus on the _M_realloc_insert() function.
#if __cplusplus >= 201103L
  template<typename _Tp, typename _Alloc>
    template<typename... _Args>
      void
      vector<_Tp, _Alloc>::
      _M_realloc_insert(iterator __position, _Args&&... __args)
#else
  template<typename _Tp, typename _Alloc>
    void
    vector<_Tp, _Alloc>::
    _M_realloc_insert(iterator __position, const _Tp& __x)
#endif
    {
      const size_type __len =
        _M_check_len(size_type(1), "vector::_M_realloc_insert");
      pointer __old_start = this->_M_impl._M_start;
      pointer __old_finish = this->_M_impl._M_finish;
      const size_type __elems_before = __position - begin();
      pointer __new_start(this->_M_allocate(__len));
      pointer __new_finish(__new_start);
      __try
        {
          // The order of the three operations is dictated by the C++11
          // case, where the moves could alter a new element belonging
          // to the existing vector.  This is an issue only for callers
          // taking the element by lvalue ref (see last bullet of C++11
          // [res.on.arguments]).
          _Alloc_traits::construct(this->_M_impl,
                                   __new_start + __elems_before,
#if __cplusplus >= 201103L
                                   std::forward<_Args>(__args)...);
#else
                                   __x);
#endif
          __new_finish = pointer();
Hoo, it is too long. We do not have to look into the whole code, only the variable __len. The variable is used for the reallocation, and it is set by _M_check_len().
// Called by _M_fill_insert, _M_insert_aux etc.
size_type
_M_check_len(size_type __n, const char* __s) const
{
  if (max_size() - size() < __n)
    __throw_length_error(__N(__s));
The code throws an error when the current size equals the maximum size, because the function was called as _M_check_len(size_type(1), ...). Otherwise, the new size will be 2 times the current size, except for when the current size is 0.
Current size | Calculation
0            | 1 = 0 + max(0, 1)
1            | 2 = 1 + max(1, 1)
2            | 4 = 2 + max(2, 1)
It returns the maximum size when underflow or overflow happens. Otherwise, it returns the new size calculated as 2 times the current size.
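The same logic can be sketched in plain C++. check_len below is a simplified stand-in for libstdc++'s _M_check_len, written for this post, not the real implementation:

```cpp
#include <algorithm>
#include <cstddef>
#include <stdexcept>

// Simplified sketch of libstdc++'s growth policy:
// new length = size + max(size, n), clamped/checked against max_size.
std::size_t check_len(std::size_t size, std::size_t max_size, std::size_t n) {
    if (max_size - size < n)
        throw std::length_error("vector too long");  // no room at all
    const std::size_t len = size + std::max(size, n);
    // on overflow or past max_size, fall back to max_size
    return (len < size || len > max_size) ? max_size : len;
}
```

With n = 1 (one push_back), this reproduces the table above: 0 -> 1, 1 -> 2, 2 -> 4.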
Next, check the resize() in GCC.
/**
 *  @brief  Resizes the %vector to the specified number of elements.
 *  @param  __new_size  Number of elements the %vector should contain.
 *
 *  This function will %resize the %vector to the specified
 *  number of elements.  If the number is smaller than the
 *  %vector's current size the %vector is truncated, otherwise
 *  default constructed elements are appended.
 */
void
resize(size_type __new_size)
{
  if (__new_size > size())
    _M_default_append(__new_size - size());
  else if (__new_size < size())
    _M_erase_at_end(this->_M_impl._M_start + __new_size);
}
We can see that resize() in GCC also does trimming and appending. (Interestingly, nothing happens when __new_size is equal to the current size.) So, we should focus on the _M_default_append() function.
It is a long one, too. What is __navail ? It seems to mean the Number of AVAILable elements, not "not available". So we can see that the memory is reused when if (__navail >= __n) is true. Otherwise, reallocation happens. Oh, hi, we meet _M_check_len() again. So the new size will be 2 times the current size.
Wrap-up
Common
Try to recycle memory as much as possible. (e.g. reuse unused capacity in the push_back() logic.)
Care about underflow and overflow.
Have internal indicators for {First, Current, End}
Currently allocated size = End - First
Currently used size = Current - First
Currently available size = End - Current
MSVC
Growth happens by a factor of 3/2. (in the normal case)
GCC
Growth happens by a factor of 2. (in the normal case)
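The {First, Current, End} bookkeeping in the wrap-up can also be expressed through std::vector's public API. check_indicators is a made-up helper for illustration:

```cpp
#include <cstddef>
#include <vector>

// size() == Current - First, capacity() == End - First,
// and the spare room is End - Current == capacity() - size().
bool check_indicators() {
    std::vector<int> v;
    v.reserve(8);    // End - First becomes at least 8
    v.push_back(1);
    v.push_back(2);  // Current - First becomes 2
    return v.size() == 2
        && v.capacity() >= 8
        && (v.capacity() - v.size()) >= 6;
}
```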
UnrealEngine provides its own module system, which is completely different from C++20 modules. The class UnrealBuildTool.ModuleRules is for the module system, and it is written in [ModuleName].Build.cs. You can decide what to include in the output files (= DLLs). For example, the ShaderCompileWorker project has the module rules below:
There are several libraries such as Core, Projects, RenderCore and so on. We can also find them in the Binaries folder like below. (FYI, ShaderCompileWorker.exe is created from ShaderCompileWorker.Target.cs, not ShaderCompileWorker.Build.cs.)
In other words, output files are created with names containing the module when you add the corresponding libraries to the module rules (ex: [TargetName]-[ModuleName]-[ConfigurationName].dll). Additionally, UnrealEngine reuses them where possible. Suppose your project needs some libraries already included on the engine side. In this situation, UnrealEngine does not create output files for the duplicated libraries included in your project. Instead, UnrealEngine leaves a meta file that describes what the project included.
// Copyright Epic Games, Inc. All Rights Reserved.
This is a generated module rules file based on the third person template. The module rules contain the CoreUObject library, but we cannot find it in the Binaries folder. Let us see the ThirdPerson_4_25Editor.target file.
For more details about the module system, visit reference #1.
UnrealBuildTool.TargetRules
UnrealEngine provides its own target system, which lets you create an executable. There are various target configurations such as Editor, Client and Server. The class UnrealBuildTool.TargetRules is for the target system, and it is written in [TargetName].Target.cs. You can decide which modules to include for a certain target. For example, the UE4 project has the target rules for the editor below:
// Copyright Epic Games, Inc. All Rights Reserved.
using UnrealBuildTool;
using System.Collections.Generic;
The target rules specify that the UE4Game module is included, which is located in Engine/Source/Runtime/UE4Game. So, the output files for the editor consist of the modules in UE4Game.
...
///<summary>
/// TargetRules is a data structure that contains the rules for defining a target (application/executable)
///</summary>
public abstract partial class TargetRules
{
    ...
    ///<summary>
    /// The type of target.
    ///</summary>
    public global::UnrealBuildTool.TargetType Type = global::UnrealBuildTool.TargetType.Game;
    ...
The field is used for branching on target-specific features such as the build configuration. For example, UnrealEngine manages target configurations as an enum.
...
///<summary>
/// The type of configuration a target can be built for
///</summary>
public enum UnrealTargetConfiguration
{
    ///<summary>
    /// Unknown
    ///</summary>
    Unknown,
...
///<summary>
/// Gets a list of configurations that this target supports
///</summary>
///<returns>Array of configurations that the target supports</returns>
internal UnrealTargetConfiguration[] GetSupportedConfigurations()
{
    // Otherwise take the SupportedConfigurationsAttribute from the first type in the inheritance chain that supports it
    for (Type CurrentType = GetType(); CurrentType != null; CurrentType = CurrentType.BaseType)
    {
        object[] Attributes = CurrentType.GetCustomAttributes(typeof(SupportedConfigurationsAttribute), false);
        if (Attributes.Length > 0)
        {
            return Attributes.OfType<SupportedConfigurationsAttribute>().SelectMany(x => x.Configurations).Distinct().ToArray();
        }
    }

    // Otherwise, get the default for the target type
    if (Type == TargetType.Editor)
    {
        return new[] { UnrealTargetConfiguration.Debug, UnrealTargetConfiguration.DebugGame, UnrealTargetConfiguration.Development };
    }
    else
    {
        return ((UnrealTargetConfiguration[])Enum.GetValues(typeof(UnrealTargetConfiguration))).Where(x => x != UnrealTargetConfiguration.Unknown).ToArray();
    }
}
...
Each module has its own Build.cs file. For example, a [ProjectName].Build.cs is generated when you create a new project with cpp enabled, because UnrealEngine makes a default module that has the same name as the project. (To be exact, the Build.cs and Target.cs files are copied from a template in the general case.)
// Discover and copy all files in the src folder to the destination, excluding a few files and folders
TArray<FString> FilesToCopy;
TArray<FString> FilesThatNeedContentsReplaced;
TMap<FString, FString> ClassRenames;
IFileManager::Get().FindFilesRecursive(FilesToCopy, *SrcFolder, TEXT("*"), /*Files=*/true, /*Directories=*/false);

if ( ReplacementsInFilesExtensions.Contains(FileExtension) )
{
    FilesThatNeedContentsReplaced.Add(DestFilename);
}

// Allow project template to extract class renames from this file copy
if (FPaths::GetBaseFilename(SrcFilename) != FPaths::GetBaseFilename(DestFilename)
    && TemplateDefs->IsClassRename(DestFilename, SrcFilename, FileExtension))
{
    // Looks like a UObject file!
    ClassRenames.Add(FPaths::GetBaseFilename(SrcFilename), FPaths::GetBaseFilename(DestFilename));
}
...
WHEN YOU CREATE A PROJECT NAMED ThirdPerson_4_25 FROM THE THIRD PERSON TEMPLATE
To say it again, [ModuleName].Build.cs defines the dependencies for building its module. So every module must have its own [ModuleName].Build.cs file, and every module with its own [ModuleName].Build.cs generates a DLL when you build the project.
While every module must have a Build.cs file, not every module has a Target.cs file. Some modules have only a Build.cs file, which means those modules are meant to be used as libraries, not standalone executables. The AIModule is a good example: it has only a Build.cs file, as it is written to provide support for making AI.
...
if (Target.bBuildEditor == true)
{
    PublicDependencyModuleNames.AddRange(
        new string[] {
            "UnrealEd",
            "Kismet"
        }
    ); // @todo api: Only public because of WITH_EDITOR and UNREALED_API
...
Sometimes, some modules should not be included in certain target configurations. For instance, editor-only features should not be included in the client or server target configuration. In that case, we can simply branch like below:
...
// Editor builds include SessionServices to populate the remote target drop-down for remote widget snapshots
if (Target.Type == TargetType.Editor)
{
    PublicDefinitions.Add("SLATE_REFLECTOR_HAS_SESSION_SERVICES=1");
We can use the Widget Reflector only in the editor target configuration. As we see, non-editor targets will not contain the reflector feature. One step further, we can even block project generation by throwing an exception like this.
According to reference #1, the name of the Natvis framework means visualization of native types. It can help your debugging with much richer visibility, and it even supports multiple environments where the same data type has different sizes. Suppose you make a string class storing its text as UTF-32 or some custom format; it will not be displayed well, because it is not a kind of ASCII. Still, do you want to see the data (in this case, the text) in the Watch or Locals window ? Then you should write your own XML for custom Natvis visualization.
SCREENSHOT WITHOUT CUSTOM NATVIS
SCREENSHOT WITH CUSTOM NATVIS
Basic syntax and usage
The custom visualizer has XML syntax, which is more comfortable than a full programming language. Just create a file with any name and the .natvis extension, and place it in the directory Documents/Visual Studio 2019/Visualizers. The Visual Studio IDE will find all natvis files there and parse them. Right after you create the file, you need the tag AutoVisualizer.
<AutoVisualizer>
</AutoVisualizer>
But you may meet an error like below with our current visualizer. (You should turn on the option for showing errors related to Natvis. Find the option at Tools/Options/Debugging/Output Window/General Output Settings/Natvis diagnostic messages. I recommend you set the level to Error.)
Natvis: ...\Documents\Visual Studio 2019\Visualizers\Example.natvis(1,2): Fatal error: Expected element with namespace 'http://schemas.microsoft.com/vstudio/debugger/natvis/2010'.
The AutoVisualizer tag can have a child tag such as Type. The Type tag must have a Name attribute, which is set to the name of the type. For example, you should put SomeClass in the attribute when you have created a type SomeClass.
The Type tag can have child tags such as DisplayString and Expand. The DisplayString tag can be used for displaying a string in the debugging window like below.
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <Type Name="SomeClass">
    <DisplayString> This is my class </DisplayString>
  </Type>
</AutoVisualizer>
You can get the value of a member variable by wrapping the member variable in {}.
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <Type Name="SomeClass">
    <DisplayString> My ID is {ID} </DisplayString>
  </Type>
</AutoVisualizer>
With the Expand tag, you can customize the expanded view; the Item tags make up its list. When you customize the expanded view with the Expand tag, a Raw View item is created automatically, which shows the original expanded view. You can decorate each line of the list in the expanded view. The specifiers sb and x respectively mean "display the string without quotation marks" and "display the integer in hexadecimal format". For more details, visit reference #2.
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <Type Name="SomeClass">
    <DisplayString> My ID is {ID} </DisplayString>
    <Expand>
      <Item Name="Description"> "Natvis is awesome", sb </Item>
      <Item Name="ID"> ID, x </Item>
    </Expand>
  </Type>
</AutoVisualizer>
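For reference, the natvis snippets above assume a C++ type like this hypothetical sketch (SomeClass and its ID member are made up to match the examples):

```cpp
// Hypothetical type matching the natvis examples: the {ID} expression in
// DisplayString and the <Item Name="ID"> entry read this member directly.
class SomeClass {
public:
    int ID = 0x2A;  // the x specifier shows this value in hexadecimal
};
```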
Example in UnrealEngine
Let us look at an example in UnrealEngine. You can find UE4.natvis if you have installed UnrealEngine on your local system; it is usually located at Engine/Extras/VisualStudioDebugging/UE4.natvis.
First of all, the FString welcomes us. Let us see why the FString visualizer has been made like this. ( The &lt; and &gt; things are escaped characters in XML. For more details, visit reference #3. )
Put a breakpoint where the FString is initialized. Before initialization, we can see Invalid in the debugging window.
Expand the items. We can see that ArrayNum has a negative value. The condition Data.ArrayNum < 0 is satisfied, so Invalid is shown.
After initialization, we can see the string very well. In this case, the condition Data.ArrayMax >= Data.ArrayNum is satisfied and L"ABC" is shown. Why does the string look like L"..." ? Because of the format specifier su. Check reference #2 again.
template <> struct TBitsToSizeType<8>  { using Type = int8; };
template <> struct TBitsToSizeType<16> { using Type = int16; };
template <> struct TBitsToSizeType<32> { using Type = int32; };
template <> struct TBitsToSizeType<64> { using Type = int64; };
/** The indirect allocation policy always allocates the elements indirectly. */
template <int IndexSize>
class TSizedHeapAllocator
{
public:
    using SizeType = typename TBitsToSizeType<IndexSize>::Type;
    ...
};
Following this flow, you can find that the type of ArrayNum and ArrayMax is int32.
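The TBitsToSizeType idea can be sketched in portable C++ (BitsToSizeType below is a made-up stand-in using standard fixed-width types, not the engine's template):

```cpp
#include <cstdint>

// Map a bit count to a signed integer type at compile time, in the same
// spirit as Unreal's TBitsToSizeType.
template <int Bits> struct BitsToSizeType;
template <> struct BitsToSizeType<8>  { using Type = std::int8_t;  };
template <> struct BitsToSizeType<16> { using Type = std::int16_t; };
template <> struct BitsToSizeType<32> { using Type = std::int32_t; };
template <> struct BitsToSizeType<64> { using Type = std::int64_t; };

// With IndexSize = 32 this yields a 4-byte signed type, matching what the
// natvis walk-through found for ArrayNum and ArrayMax.
static_assert(sizeof(BitsToSizeType<32>::Type) == 4, "32 bits -> int32");
```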
Sometimes, UE4.natvis gives us a hint for understanding complicated engine code. Someday you may even need to customize UE4.natvis for a special case while supporting various platforms. It is also good to learn Natvis if you mostly use the Visual Studio IDE. Read the premade ones and write your own. :)
Exactly one year has passed since I wrote the retrospection of 2019. (https://baemincheon.github.io/2019/12/24/retrospection-2019/) I feel that time goes so fast as I look back on the many things behind me. From the goddamn COVID-19 to remote work during quarantine, we have so many things to talk about. Still, the old year (2019) remains the hardest year for me. This year would be the second, as I believe I have taken plenty of breaks this year.
Though the main topics of any retrospection this year relate to the worldwide crisis, I will try not to talk about it as much as possible. It has become a routine issue by now, and I want to focus on personal events. Besides, other people are already saying and sharing plenty about it. I just hope this crisis ends as soon as possible and we get back to our old routines.
play
This year, I spent most of my playing time on the Nintendo Switch. Especially, Splatoon 2 and Ring Fit Adventure were the ones I liked. Splatoon 2 is not such a fresh game (released 2017/07/21), so it was hard to enjoy the multi-play content due to other players being much more skillful than me. But its single-play content was quite good and well structured, so I also purchased the DLC for more single-play content. I played Splatoon 2 for about 100 hours and am pleased with the game's quality. The game is not a "GOD GAME" thing, but I am sure it is worth playing at least once. For me, an enthusiast of FPS games, the unusual concepts of this game look nice.
I really like Ring Fit Adventure. It has always been difficult for me to have fun exercising, so I used to stop exercising several times; you know, even a strong will sustains itself for less than a few weeks or months. But I have kept exercising for more than 6 months with this game. Thanks to its gamification, I seldom lost interest in exercising and sometimes even got motivated. I played Ring Fit Adventure for about 150 hours and am pleased with the game's quality. I can surely recommend this game if you have a Nintendo Switch. And it may be the only option these days, as fitness clubs are closed. Home training is not optional but required.
Oh, I almost forgot: I played the game 3000th Duel, too. It is a game released on PC (Steam) and Nintendo Switch. The game is considered one of the Metroidvania things because of its content and mood. It was not an easy one, but there was a balance between feeling fulfilled and bearing hardship. I played 3000th Duel for about 40 hours including the DLC part. Though it does not have an easy mode option such as a "Just Enjoy The Story" difficulty, it can go easy or hard depending on how much time you spend farming. It is up to you. Though there were several times I screamed (lol) in anger, it helped me get used to manipulating the Nintendo Switch controller.
activity
Some older people may know about Touhou Project. It is one of the vertically scrolled shooting games, like Strikers 1945. The game was popular in the 90s and 00s for its unique worldview, which led to numerous derivative works created by users. Some works are still being created these days, although it has become less popular than before. But almost every one of them targets the English or Japanese version, not the Korean one.
Fortunately, Touhou Spell Bubble has recently started to support a Korean version. It was first released on 2020/02/06 and has supported Korean since 2020/10/15. Despite many concerns, I was happy to see the game support Korean. (As you might know, the Korean market is not actually attractive.) So I was willing to visit the cafe where the game was promoted. We ordered every menu item once and took pictures of them.
I remember I mentioned the laboratory in the previous retrospection. (Maybe because I did my best in the lab.) The professor of the lab suggested I give a lecture about the C language. I accepted the suggestion without hesitation and wrote some slideshows. The lecture is in the first-year curriculum; however, I wanted to deal with real applications of the C language. Why ? Every student in 2020 may have doubts about studying the C language, because there are already many programming languages that look awesome and easy, such as Python or Javascript.
So I focused on "Why We Study C Language" and "How C Language Is Used" to resolve those doubts. I prepared contents like explaining "Why C Language Can Manipulate Memory" with assembly code, and showing "How C Language Is Used In Real Projects" with Linux kernel code. People rarely talk about these. Times have changed, and we no longer use the C language for all purposes. I think we should now focus on what only C can do when we teach the C language to students.
work
A year has already passed since I joined PUBG. Exactly 1.5 years ? I am now familiar with my work, and even got an additional sub role. We had an anniversary cake with the people who joined PUBG at the same time, too. Work is not easygoing, but we try to boost each other and overcome it. I wish I can still go along with as many of these people as possible even after another year passes. It is sad that we cannot get together offline now due to the crisis. We used to get together once every 1~2 months and have dinner. I already miss those times.
After starting remote work, I found some pros and cons. Remote work does not seem to be a silver bullet in every situation. Of course, there are common topics on remote work regardless of the job, but some topics are unique to the game developer job. The cons I see are below:
Common Topics
chores that did not exist when you commuted to the office (ex: cooking, managing the workplace)
difficulty reading the work mood (ex: are they excited ? are they angry ?)
irregular work time due to personal circumstances (ex: family with kids)
Extra Topics (for game developer job)
heavy traffic due to the massive volume of the program (it is painful to upload or download programs on a home network compared with the office network)
laggy remote screen sharing (a game developer usually has to run the client program, while other developers are okay with only a console prompt)
poor response time on input or output (when the game is a kind of real-time game…)
PlayStation 5 was released recently. Sony planned to celebrate the event with partner companies and collected pictures of members. I sent the picture above, and you can find it on the site https://sie.offbaseproductions.com/ too. It is something monumental and memorable. Sony did a good job. I was glad to develop on PS5. :)
thoughts
This year, I am left with some regrets that I should have done more things. I should have met more people and read more books. But…the lethargy from the crisis, everyone may have felt it. I cannot assert it did not affect me. People around me sometimes seem sad and depressed, too. What is worse, we are ending this year in a bad situation with a high number of patients. It is hard to believe next year will be better.
Somebody said, "The World Will Never Be The Same". At the start of this year, I did not agree with those words, because I could not imagine the new world. But…we are going to the new world anyway, and it seems we must adapt. Even though this year is said to be the most terrible one, I think we should remember it, to look back on it and stop this tragedy. We have to worry about how to adapt to the new world and how to live next year, based on new rules.
we can find some pre-defined non-axis/axis keys as FName in GenericApplication.cpp. they are for mapping various input messages to generic input messages. "various input messages" means there are many types of gamepads in the world. the button/stick layouts differ between the Xbox One controller, the Playstation 4 controller and so on. the gamepads below are Xbox One, Playstation 4, Stadia and Switch in order.
look at the Xbox One one and the Playstation 4 one. they have many differences, such as the position of the sticks and the existence of a touch pad. even comparing the Xbox One one and the Switch one, the number of buttons differs. in this situation, it is not easy for every individual developer to support every type of gamepad, so the need for a generic mapping of gamepad input arises. let us find out about the generic mapping with Xbox One controller examples.
the tables are from reference #1. you can find more details for each button/stick at the URL. though there are many items in the table, some of them are not counted as user input in common situations. so, we only gotta consider the items below:
Index
Item Name
Unreal Mapping
Input Type
1
Left Stick
(Move Horizontally) Gamepad_LeftX
Key, Axis
(Move Vertically) Gamepad_LeftY
Key, Axis
(Move Left Side More Than Deadzone) Gamepad_LeftStick_Left
Key
(Move Up Side More Than Deadzone) Gamepad_LeftStick_Up
Key
(Move Right Side More Than Deadzone) Gamepad_LeftStick_Right
Key
(Move Down Side More Than Deadzone) Gamepad_LeftStick_Down
Key
(Click) Gamepad_LeftThumbstick
Key
2
Left Bumper
Gamepad_LeftShoulder
Key
3
View Button
Gamepad_Special_Left
Key
6
Menu Button
Gamepad_Special_Right
Key
7
Right Bumper
Gamepad_RightShoulder
Key
8
Directional Pad
(Left) Gamepad_DPad_Left
Key
(Up) Gamepad_DPad_Up
Key
(Right) Gamepad_DPad_Right
Key
(Down) Gamepad_DPad_Down
Key
10
Right Stick
(Move Horizontally) Gamepad_RightX
Key, Axis
(Move Vertically) Gamepad_RightY
Key, Axis
(Move Left Side More Than Deadzone) Gamepad_RightStick_Left
Key
(Move Up Side More Than Deadzone) Gamepad_RightStick_Up
Key
(Move Right Side More Than Deadzone) Gamepad_RightStick_Right
Key
(Move Down Side More Than Deadzone) Gamepad_RightStick_Down
Key
(Click) Gamepad_RightThumbstick
Key
11
Right Trigger
Gamepad_RightTriggerAxis
Key, Axis
(Press More Than Deadzone) Gamepad_RightTrigger
Key
14
Left Trigger
Gamepad_LeftTriggerAxis
Key, Axis
(Press More Than Deadzone) Gamepad_LeftTrigger
Key
X
X Button
Gamepad_FaceButton_Left
Key
Y
Y Button
Gamepad_FaceButton_Up
Key
A
A Button
Gamepad_FaceButton_Bottom
Key
B
B Button
Gamepad_FaceButton_Right
Key
some of them are handled not only as Key but also as Axis.
the Gamepad_LeftY is one of those cases.
Gamepad Non-Axis Input Handling Process
focus on the function XInputInterface::SendControllerEvents(). there is logic to filter the hardware input state.
in this case, OnControllerAnalog() is called whenever even a tiny change of the input value exists, because the code compares with OldAxisValue != NewAxisValue. the function will not be called only when there is no change in the input value.
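the "more than deadzone" keys in the table can be sketched like this. the StickKey enum, the map_axis_to_key function and the 0.2 threshold are made up for illustration, not the engine's actual code; the idea is that an analog axis is turned into Gamepad_LeftStick_Left / Gamepad_LeftStick_Right style key events only when it moves past a deadzone:

```cpp
// Hypothetical mapping of an analog stick axis (-1.0 .. +1.0) to a
// digital "key" event, in the spirit of Gamepad_LeftStick_Left/Right.
enum class StickKey { None, Left, Right };

StickKey map_axis_to_key(float axis, float deadzone = 0.2f) {
    if (axis <= -deadzone) return StickKey::Left;   // pushed left past deadzone
    if (axis >=  deadzone) return StickKey::Right;  // pushed right past deadzone
    return StickKey::None;  // inside the deadzone: no key event
}
```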
as unreal engine handles objects inheriting UObject, we can use unreal engine easily. there are several benefits when an object inherits UObject.
garbage collection
reference update
reflection
serialization
etc.
then you might have one question: "how can we handle objects that do not inherit UObject ? should we use raw pointers for those objects ?" well…there are 2 ways for this.
using std::unique_ptr of cpp std library in memory.h
using TUniquePtr of unreal API in UniquePtr.h
you can use std::unique_ptr in an unreal project, but unreal engine implements its own smart pointer library. and it is common to use TUniquePtr in an unreal project unless you really need the cpp std library.
as the purpose and functionality are the same, TUniquePtr is similar to std::unique_ptr. TUniquePtr also provides unique ownership and other features. let us check out what it is and how it is used.
class SANDBOXFILE_API FSandboxPlatformFile : public IPlatformFile
{
    ....
};
the class FSandboxPlatformFile does not inherit UObject, so it can be pointed to with TUniquePtr. ( conventionally, the prefix U is attached when a class inherits UObject )
UnrealEngine/Engine/Source/Editor/UnrealEd/Classes/CookOnTheSide/CookOnTheFlyServer.h

UCLASS()
class UNREALED_API UCookOnTheFlyServer : public UObject, public FTickableEditorObject, public FExec
{
    ....
validation of a TUniquePtr is the same as for a raw pointer. if the value is zero, the TUniquePtr is pointing to nullptr. you can use check for ensuring that the TUniquePtr is valid.
FAssetRegistryGenerator::SaveManifests takes FSandboxPlatformFile* as one of its parameters. the type of the parameter is not TUniquePtr, so we should convert TUniquePtr<T> into T*.
MakeUnique returns a TUniquePtr object and calls the constructor of the template class. in this case, MakeUnique<FSandboxPlatformFile> takes one boolean value.
in this code, CreateUserInformation is called in BeginPlay and ReleaseUserInformation is called in EndPlay. UserInformation is a member variable of type TUniquePtr<UserInfo>. another TUniquePtr<UserInfo> exists in CreateUserInformation, which takes the raw UserInfo*.
what happens when CreateUserInformation ends ? AnotherPtr disappears and its destructor is called. when the destructor of a TUniquePtr is called, it releases the memory the TUniquePtr has pointed to. as a result, the variable UserInformation becomes a dangling pointer.
UserInformation has abnormal values.
because the memory is already released, an exception will be thrown when we execute ReleaseUserInformation. this is why you have to use TUniquePtr only for an object that exactly one owner should hold, and care about moving the ownership. moving the ownership via a raw pointer is dangerous, as we have seen.
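the safe alternative can be sketched with std::unique_ptr, since the ownership rules are the same as TUniquePtr. the UserInfo struct and transfer_ownership function below are made up to mirror the bug described above:

```cpp
#include <memory>
#include <utility>

struct UserInfo { int Id = 7; };

// Returns true when the ownership transfer behaves as expected.
bool transfer_ownership() {
    std::unique_ptr<UserInfo> UserInformation = std::make_unique<UserInfo>();
    // BAD:  std::unique_ptr<UserInfo> AnotherPtr(UserInformation.get());
    //       -> both smart pointers delete the same memory, leaving the
    //          first one dangling, exactly like the post's example.
    // GOOD: move the ownership explicitly.
    std::unique_ptr<UserInfo> AnotherPtr = std::move(UserInformation);
    return UserInformation == nullptr  // the source no longer owns it
        && AnotherPtr->Id == 7;        // the destination does
}
```

after the move, only the destruction of AnotherPtr releases the memory, so there is no double delete.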